11: Artificial Intelligence

Artificial intelligence has a long way to go before robots lay siege to humanity. The definition of AI is actually rather simple: artificial intelligence allows computers to complete tasks usually performed by humans that require a certain amount of intellect. A popular example of this is playing chess. Strong AI is most similar to human intelligence. A system using strong AI is modeled completely on the human thought process, producing systems that both think like humans and explain how humans think. However, strong AI systems are still purely theoretical. On the opposite end of the spectrum is weak AI. These systems behave like humans, but producing that behavior has little to do with human cognition and therefore can tell us nothing about how humans think. Most AI systems, however, fall somewhere between the strong and the weak – they “use human reasoning as a guide, but they are not driven by the goal to perfectly model it” (ComputerWorld).

So in many ways AI bears a strong resemblance to human intelligence — at least the results of these systems give the appearance of human intelligence. And they do in fact use models of human cognition. For example, the reinforcement learning algorithms employed by some AIs are inspired by behaviorist psychology: both reinforcement learning algorithms and children learn correct behavior through a system of rewards and punishments. However, AI lacks both the emotional intelligence and the multiplicity that are crucial to what we consider human intelligence. An AI system cannot be empathetic, at least not in the true sense of the word. A computer has no prior emotional experiences to relate to. Human emotions are so complex that frequently even humans have difficulty interpreting them (that’s why we have therapists!). Secondly, most AIs have only “one-dimensional” intelligence; that is, they are intelligent only in the context of a specific task. Siri, for instance, is useful for providing assistance around your iPhone but obviously cannot play chess.
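The reward-and-punishment idea above can be made concrete with a toy sketch. This is not how AlphaGo or Siri work; it is a minimal, hypothetical illustration of the behaviorist principle: an agent repeatedly tries actions, is rewarded (+1) for the correct one and punished (−1) otherwise, and gradually comes to prefer the rewarded behavior. The action names and parameters are invented for the example.

```python
import random

def train_agent(correct_action, actions=("left", "right"),
                episodes=500, lr=0.1, seed=0):
    """Learn a preference among actions purely from rewards and punishments."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in actions}  # the agent's estimate of each action's worth
    for _ in range(episodes):
        # Occasionally explore a random action; otherwise exploit the best so far.
        if rng.random() < 0.1:
            action = rng.choice(actions)
        else:
            action = max(values, key=values.get)
        reward = 1.0 if action == correct_action else -1.0  # reward or punish
        # Nudge the estimate toward the observed reward.
        values[action] += lr * (reward - values[action])
    return values

values = train_agent(correct_action="right")
# The punished action ends up with a negative estimate,
# the rewarded one with a positive estimate near 1.
```

Like a child learning manners, the agent never "understands" why "right" is correct; it has merely been conditioned toward it — which foreshadows the comprehension objection discussed later.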

Many contemporary AI systems such as AlphaGo, Deep Blue, and Watson have captivated the world. Their success is proof of the viability of artificial intelligence. In particular, it shows the success of deep learning with artificial neural networks (used by AlphaGo) and natural language processing (used by Watson). These advancements clearly show the ability of artificial intelligence and its potential to be used in many aspects of human life.

For the purpose of creating a believable AI, the Turing Test can be a valid measure of intelligence. If a human interviewer believes they are speaking to another human, the AI system is obviously a remarkably convincing model of human intelligence. However, while a system may be indistinguishable from human intelligence, it cannot be equivalent to a human mind. As the Chinese Room counterargument reasons, even if the human interviewer believes the AI system understands the conversation (based on its correct responses), the AI does not necessarily understand the questions it is answering. So the Turing Test does not prove that an AI system is the equivalent of human intelligence, because it does not prove the system’s comprehension.

A computing system could never be considered a mind. The human mind cannot and will never comprehend its own overwhelming complexity. And if we can’t know every facet of our own minds, how could we ever create one? It is impossible to imagine how a computing system could ever develop consciousness or emotions. These critical aspects of the human mind are not learned; they simply exist. And because we can’t objectively explain how consciousness or emotions feel and what they really are, we cannot program an AI to be conscious or to experience emotions. Take a look at the disastrous reign of Microsoft’s Twitter-bot Tay, who within twenty-four hours became a racist, misogynistic Nazi. Obviously her shocking tweets are not evidence that “Tay” the robot felt actual hate toward minorities. Just because a computing system mimics an emotion does not mean that it actually feels or understands that emotion.

In the most rudimentary sense, the human mind could be interpreted as a biological computer — it processes data (sensory input) and can store it (memories). But the mind also feels, which a computer simply cannot do. Our emotions distinguish our minds from simple biological computers. They are the root of our consciousness and our morality. They have sway over decisions that logic cannot make. Our emotional being — as well as the good and bad deeds it bears — is essentially our soul. Calling a human a biological computer just doesn’t encompass all that we are.

Considering a computing system a mind, or a human a biological computer, has significant ethical implications. If a computer is the equivalent of a mind, are we responsible for its care as a parent is for their child? Would turning off a computer be analogous to murder? On the other hand, if a human is a biological computer, then is our morality just an arbitrary rule-set we were programmed to follow? In the same vein, are we really responsible for our actions if our minds are programmed by biology? Blurring the line between our understanding of machinery and humanity leads to dangerous ethical dilemmas.
