The History and Development of AI Software and Tech
Artificial intelligence has produced some of the most innovative technologies the world has ever seen, yet those developments took decades to arrive. While many consumers are familiar with the role artificial intelligence has played in popular technologies such as voice assistants and driverless vehicles, the history that led to these products is far less widely known. Moreover, because artificial intelligence has long been regarded as one of the most elusive and nebulous topics in computer science, the foundations of the field can be hard to pin down.
Computing Machinery and Intelligence
While concepts and artistic portrayals of something akin to artificial intelligence have existed since ancient times, such as tales from Greek mythology, it was the collaboration of scientists across multiple disciplines during the 1940s and 1950s, including engineering, mathematics, and economics, that set the stage for artificial intelligence as we know it today. Because the concept is rooted in the idea of machines that can think and make decisions at a level comparable to human beings, "Computing Machinery and Intelligence", a paper written by the English mathematician and scientist Alan Turing in 1950, is perhaps the best-known introduction of artificial intelligence to the general public.
In his paper, Turing asks what it means for a machine to think, since consciousness had long been considered exclusive to the human mind. More specifically, the paper introduced what is now called the Turing Test, a criterion for gauging whether a machine exhibits genuine intelligence. Turing proposed that a machine could be said to think if it could converse with a human interrogator in natural language so convincingly that its responses were indistinguishable from those of another human being. A machine whose answers could be told apart from a human's would fail the test and could not be considered truly intelligent.
The term artificial intelligence
Six years after Alan Turing published his seminal paper "Computing Machinery and Intelligence", the American computer and cognitive scientist John McCarthy coined the term artificial intelligence in connection with the Dartmouth Summer Research Project on Artificial Intelligence, the first academic conference on the subject. In the funding proposal he made to the Rockefeller Foundation ahead of the conference, McCarthy discussed many topics and ideas that remain central to the field today, including neural networks, natural language processing, creativity, abstraction, and the theory of computation.
Artificial intelligence and chess
Because chess has historically been regarded as a game of intellect, many pioneers of computing and artificial intelligence believed that a machine capable of beating a human at chess would be the embodiment of human intelligence in machine form. After decades of further advancement and development, IBM's Deep Blue, a Type-A or brute-force computer chess program, defeated the then world chess champion Garry Kasparov in 1997. While Deep Blue's victory was narrow rather than decisive, the fact that a computer program could compete with, let alone defeat, a human world champion was considered a major advancement within the field of artificial intelligence.
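The brute-force approach behind programs like Deep Blue rests on exhaustive game-tree search. The following is a minimal sketch of minimax search over a toy two-ply tree; the tree shape and leaf scores are invented for illustration, and a real chess engine would add move generation, position evaluation, and pruning on top of this core idea.

```python
# Minimal minimax sketch: each player assumes the opponent also plays
# optimally, so the maximizer picks the move whose worst-case reply
# (the minimizer's best response) is highest.

def minimax(node, maximizing):
    if isinstance(node, int):  # leaf: a precomputed position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A toy two-ply tree: the maximizer chooses a branch, then the
# minimizer chooses a leaf within it. Scores are arbitrary.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, True))  # prints 3
```

The maximizer takes the first branch: its guaranteed outcome is min(3, 12) = 3, better than the 2 or 1 the other branches would leave after the opponent replies.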
Machine learning advancements
Within the past decade, advances in machine learning algorithms have allowed artificial intelligence to be applied in ways that were previously infeasible and impractical. To put this in perspective, the vast majority of artificial intelligence systems throughout history have been rule-based systems, which operate on hard-coded "if-then" rules written by software engineers and developers. While such systems can be very effective at certain tasks, such as medical diagnosis, they also come with limitations, as every capability and behavior must be anticipated and encoded by a human.
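The rule-based approach described above can be sketched in a few lines. This is a deliberately toy example in the spirit of a diagnostic expert system; the symptom and diagnosis names are invented for illustration, and every behavior comes from an explicit branch a developer wrote by hand.

```python
# A minimal rule-based "expert system" sketch: each branch is one
# hand-written if-then rule, so the program can only handle cases
# its authors anticipated.

def diagnose(symptoms: set[str]) -> str:
    if {"fever", "cough"} <= symptoms:   # rule 1, written by a developer
        return "possible flu"
    if "rash" in symptoms:               # rule 2, written by a developer
        return "possible allergy"
    # No rule matched: the system cannot generalize beyond its rules.
    return "unknown"

print(diagnose({"fever", "cough"}))  # prints "possible flu"
print(diagnose({"headache"}))        # prints "unknown"
```

The limitation is visible in the last call: a symptom no rule mentions yields no answer at all, which is exactly the brittleness the paragraph above describes.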
Conversely, the development of supervised, unsupervised, and reinforcement learning algorithms has opened the door to new possibilities within the field of artificial intelligence. These algorithms enable software programs to learn from a data set, as opposed to having the rules created by a programmer during the initial development stages. Popular AI assistants such as Siri and Alexa are prime examples of these advancements: they communicate with human beings through natural language processing (NLP), which in turn uses a variety of machine learning algorithms to gradually build up the words, phrases, and sentences that an assistant can respond to effectively.
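The contrast with the rule-based approach can be made concrete with a toy supervised learner. The sketch below is a one-nearest-neighbor classifier; the training points and labels are invented for illustration, and real systems use far larger data sets and more sophisticated models, but the principle is the same: behavior comes from labeled examples, not hand-written rules.

```python
# Toy supervised learning: classify a query point by copying the label
# of the closest labeled training example (1-nearest-neighbor).

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda pair: sq_dist(pair[0], query))
    return label

# Labeled examples: (feature vector, label). Changing the data changes
# the behavior, with no code edits needed.
train = [((1.0, 1.0), "small"), ((5.0, 5.0), "large")]
print(nearest_neighbor(train, (1.2, 0.9)))  # prints "small"
print(nearest_neighbor(train, (4.0, 6.0)))  # prints "large"
```

Unlike the if-then system, this program was never told what "small" or "large" means; it inferred a decision from the examples it was given.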
While the history of artificial intelligence is difficult to pin down, as the nature and origin of human intelligence have been debated for thousands of years, the advances made by computer scientists during the 1950s and 1960s undoubtedly created the environment in which artificial intelligence as we know it could flourish. Building on their efforts, machine learning algorithms have reached a level at which machines can learn from large sets of data, a far cry from the rule-based AI systems of years past. The development of artificial intelligence will therefore continue for years to come, opening up possibilities that are currently unknown.