What History Might Tell Us About AI

[IMAGE: GETTY IMAGES]

We now live in the age of Big Data, an age in which we have the capacity to collect vast amounts of information. The application of Artificial Intelligence to this data has already proven fruitful in several industries.

The dream of thinking machines goes back centuries, at least to Gottfried Wilhelm Leibniz in the 17th century. Leibniz helped invent mechanical calculators, developed the integral calculus independently of Isaac Newton, and had a lifelong fascination with reducing thinking to calculation. His Mathesis Universalis was a vision of a universal science made possible by a mathematical language more precise than natural languages like English. The Mathesis was never finished, but as a posthumous consolation it helped usher in modern symbolic logic in the later work of George Boole and others.

In the 18th century the Enlightenment philosopher and proto-psychologist Étienne Bonnot de Condillac imagined a statue that outwardly appeared like a man and possessed what he called “the inward organization.” In an exercise of supreme armchair speculation, Condillac imagined pouring facts, bits of knowledge, into its head, wondering when intelligence would emerge. Condillac’s musings drew inspiration from the early mechanical philosophy of Thomas Hobbes, who had famously declared that thinking was nothing but ratiocination, that is, calculation. Yet precise ideas of computation, along with the technology to realize them, were not yet available.

In the 19th century Charles Babbage took the first real steps toward Artificial Intelligence as technology, as an engineering project aimed at building a thinking machine. Babbage was a world-famous scientist, a recognized genius, and a polymath. His blueprint for the Analytical Engine was the first design for a general-purpose computer, incorporating components that are now part of modern computers: an arithmetic logic unit, program control flow in the form of loops and branches, and integrated memory. The design extended work on his earlier Difference Engine, a device for automating classes of mathematical calculations, and though the completion of decades of work was at hand, the British Association for the Advancement of Science refused additional funding for the project. The project languished, and Babbage himself is now a mostly forgotten chapter in the history of computing and Artificial Intelligence.

The idea of a general calculating machine was finally realized in the 20th century with the work of the British mathematician and code-breaker Alan Turing. Turing, more than anyone else, also launched what we now call AI, though the term Artificial Intelligence would not officially be coined for another five years.

John McCarthy coined the phrase Artificial Intelligence in 1955, and by 1956 Artificial Intelligence, or AI, was officially launched as an independent research field at the now-famous Dartmouth Conference, a meeting of minds that included notable scientists such as Allen Newell, Herbert Simon, Marvin Minsky, and McCarthy himself.

Herbert Simon declared in 1957 that AI had arrived, with machines that, as he put it, can think. Even at this early stage, the field was already showing marked signs of the bluster that would come to dominate and embarrass later researchers and research efforts. Part of the hype was simple over-confidence. The mathematician and linguist Yehoshua Bar-Hillel dubbed this the “fallacy of the successful first step,” pointing out that early progress does not guarantee that subsequent steps of the same kind will lead to an eventual solution. It is always possible that the full problem requires methods of an entirely different kind than those used to solve the initial, relatively easier parts of the problem.

Advances in AI have given the world computers that can beat people at chess and Jeopardy, as well as drive cars and manage calendars. But despite the progress, engineers are still years away from developing machines that are self-aware. So what is in store for the future? In the immediate future, language AI is looking like the next big thing. In fact, it is already underway.