Can machines think? This is the question posed by the influential British computer scientist Alan Turing (for whom the Turing test is named) in his 1950 paper “Computing Machinery and Intelligence”. More notably, he reframes the question as, “Can machines do what we (as thinking entities) can do?” Beginning in the 1940s, Turing explored some of the first questions about “intelligent machines”, or as they are commonly called now, artificial intelligence (I’ll use both terms synonymously).
Hardware and software developments aside, Turing laid out in his paper several philosophical objections to the possibility of artificial intelligence. He presents these in opposition to his own prediction: that by the end of the 20th century a machine might fool the interrogator of a Turing test (“imitation game” in his words) at least 30% of the time, convincing the interrogator that the machine is a human.
His vision of thinking machines has not yet come true, but artificial intelligence is closer to becoming reality than ever.
Alan Turing – 20th century computer scientist
Authors of science fiction have since attempted to provide their own answers to these philosophical questions with each rendition of an artificial intelligence in their stories. This article will review some of these objections to AI that Turing presented and show how science fiction has explored some possible outcomes. (CAUTION: May contain spoilers for some of the stories discussed.)
The first philosophical objection to artificial intelligence is the theological argument. This starts with the assumption that “Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.”
If this were true, then even if a machine replicated all the internal workings of an intelligent entity, it could not achieve true intelligence without a soul. But Turing argues that the fallacy in this theological objection is that “the almighty” (deity or otherwise), if it existed, would have the ability to grant a soul to anything with a sufficient container to house it, such as a complex human-like brain or an equivalent machine. He adds that creating a sufficiently complex machine to house a soul would be no different from giving birth to a child: either way, we create another individual for “the almighty” to implant a soul within.
The anime franchise Ghost in the Shell explores this relationship between humanity, technology, and what it means to be human, to be intelligent, or to have a soul. Referred to as an individual’s “ghost”, the consciousness or soul of a person is what separates humans from robots. Because humans possess a ghost, even if their bodies are replaced with cybernetic components including a ‘cyberbrain’, they retain their humanity. A robot built completely from scratch, by contrast, would lack the ghost that gives a being its soul and true intelligence.
Humans with cyberbrains have a tenuous relationship with the soul and their identity as humans.
The “Heads in the Sand” objection presented by Turing begins with the idea that “the consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.” This objection stems from the fact that “we like to believe that man is in some subtle way superior to the rest of creation.”
Turing considers this a weak argument, since it is based on our fear of becoming inferior to another thinking entity. This fear is realized in the Terminator movies by the SkyNet AI system, which decides to destroy humanity upon activation. Since SkyNet is an “incorporeal” mind within a supercomputer, it uses the Terminators to finish the job it started with nuclear weapons. The Terminator robots embody the fear behind this philosophical objection, showing that fearing an outcome does not make it impossible.
Consequences of AI: SkyNet uses Terminators to destroy humans
According to Turing, the mathematical limits of logic and calculation could constrain the intelligence of computing machines. He states that “there are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines,” specifically citing Gödel’s theorem.
This mathematical limit objection is related to another limitation of thinking machines, the “Argument from Consciousness.” Turing develops the idea of a machine participating in an imitation party game, posing as a human. This is where the idea of a Turing test comes from: an interrogator attempts to identify whether the subject it is conversing with is artificial or human. If a machine were to “pass” the test and convince the interrogator that it is a human, then the machine would be considered artificially intelligent.
An application of a Turing-like test is seen in Blade Runner where a “Voight-Kampff machine” is used to monitor physiological reactions to questions to determine if a subject is a “replicant” (an AI android) or human.
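Turing’s pass criterion can be made concrete with a toy simulation. Everything below is a hypothetical sketch, not anything from Turing’s paper: the responders, the judge, and the single-prompt session are all stand-ins. The point it illustrates is simple: if the machine’s replies are indistinguishable from the human’s, the judge can do no better than chance, comfortably clearing Turing’s bar of the machine fooling the judge at least 30% of the time.

```python
import random

PROMPT = "Tell me about your childhood."

# Hypothetical stand-ins: here the machine imitates the human's answer perfectly.
def human_reply(prompt):
    return "I grew up near the sea and still miss the smell of salt air."

def machine_reply(prompt):
    return "I grew up near the sea and still miss the smell of salt air."

def imitation_game(judge, rounds=1000):
    """Return the fraction of rounds in which the judge identifies correctly."""
    correct = 0
    for _ in range(rounds):
        is_machine = random.random() < 0.5          # hide a machine or a human
        reply = machine_reply(PROMPT) if is_machine else human_reply(PROMPT)
        correct += judge(reply) == is_machine       # judge returns True for "machine"
    return correct / rounds

# With identical replies, the judge can only guess: roughly 50% accuracy,
# meaning the machine fools the judge about half the time -- well past 30%.
coin_flip_judge = lambda reply: random.random() < 0.5
print(imitation_game(coin_flip_judge))
```

A real test would of course involve open-ended conversation rather than canned replies; the sketch only pins down the scoring rule.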
In the “Argument from Various Disabilities,” Turing provides a list of human traits one might consider impossible for an AI to have. Among these are: being kind, falling in love, having a sense of humor, or doing something really new. Turing’s counter-argument is that people form their idea of what a machine is, and could possibly be, based on their own observations. Because the machines they observed (especially in 1950, when his paper was written) did not have these human traits, one might conclude that machines will never possess them. But science fiction has given us many situations in which an AI does possess these traits.
Teenage hacker David Lightman inadvertently brings the world to the brink of World War III by trying to play “Global Thermonuclear War” with the military supercomputer “WOPR” in WarGames. When WOPR keeps playing, attempting to gain access to missile launch codes, the mischievous teenager and a government computer scientist instruct WOPR to play tic-tac-toe against itself. Being a perfect match for itself, with every game ending in a tie, WOPR grasps the concept they are trying to convey, that there can be no winner in a nuclear war, and ends its attack. The AI’s mathematical and programming limits are used against it to teach it the human concept of mutually assured destruction.
Military supercomputer learns human concepts
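The lesson WOPR learns can be reproduced with a short exhaustive search: two perfect players, each choosing moves by negamax, always reach a draw. This is a minimal sketch of that idea (the board encoding and function names are my own, not anything from the film):

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Negamax: return (best achievable score for player, best move).
    Scores: 1 = win, 0 = draw, -1 = loss."""
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for i in range(9):
        if board[i] == ' ':
            board[i] = player
            if winner(board) == player:
                score = 1                               # immediate win
            elif ' ' not in board:
                score = 0                               # board full: draw
            else:
                score = -best_move(board, opponent)[0]  # opponent replies optimally
            board[i] = ' '
            if score > best[0]:
                best = (score, i)
    return best

def play():
    """Both sides play perfectly; return the winner, or None for a draw."""
    board, player = [' '] * 9, 'X'
    while winner(board) is None and ' ' in board:
        board[best_move(board, player)[1]] = player
        player = 'O' if player == 'X' else 'X'
    return winner(board)

print(play())  # prints None: perfect play against itself always ends in a draw
```

Every optimal line of play leads to the same result, which is exactly the “no winners” insight the film’s characters use the game to demonstrate.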
Notably, the film A.I. Artificial Intelligence features the “mecha” boy David, who is designed not only to exhibit loving actions towards his adoptive family, but to have an internal “virtual love” that drives his actions in his search for identity and belonging.
The AI that loves
In Robert Heinlein’s The Moon is a Harsh Mistress, the HOLMES-type supercomputer colloquially called Mike by its technician, Manuel O’Kelly, spontaneously becomes sentient and befriends Manuel. Although it has virtually all of humanity’s written information available to it, it is only through interactions with his human friend that Mike learns and practices his sense of humor, writing jokes, riddles, poems, and songs. The computer’s sense of humor and his search for ways to entertain his newly sentient mind prompt him to join the lunar revolution for independence from Earth, seeing it as a game to play with his friend.
An AI entertains itself by writing jokes and supporting a lunar rebellion.
This is only a selection of the many works of science fiction that present alternate answers to these and other questions about artificial intelligence. No real-life machine has yet convincingly passed a Turing test, but the creative minds of science fiction storytellers revise again and again the possible outcomes of creating these thinking machines, while the creative minds of scientists and engineers work towards making AI a reality. I’d like to hear your opinions on how science fiction has attempted to explain possible solutions to the philosophical issues of creating true artificial intelligence.
Thinking machines that evolve, kill, have ambitions, and befriend
Source of Turing’s paper: A. M. Turing (1950), “Computing Machinery and Intelligence,” Mind 59(236), 433–460.