Artificial Intelligence and Consciousness

A Story by Zaw Lin Htet

After watching science-fiction movies such as Ghost in the Shell and Ex Machina, in which robots outsmart and eventually dominate humans, I became interested in the question of whether robots can one day become conscious. I began to wonder whether this is simply a fantasy or a possible future. In fact, prominent figures like Bill Gates, Elon Musk and Stephen Hawking have raised similar concerns that artificial intelligence could be disruptive to human life. This scenario is known as the technological singularity: the hypothesis that rapid advances in artificial intelligence will surpass human intelligence and result in fundamental changes in the way humans live.

My initial curiosity about robotic domination divides into three different questions: Can we make robots conscious? How can we know if robots are conscious? What is consciousness? The aim of this paper is to explore these questions in turn.

Recently, a lab in New York revealed so-called robotic self-awareness (Bringsjord 2015). Selmer Bringsjord, a professor at Rensselaer Polytechnic Institute, programmed three robots to believe that two of them had been given a 'dumbing pill', meaning that they would not be able to speak, and that none of them knew which two had received the pill. The robots were then asked which of them had been given the pill. The robot that had not been given the pill (that is, the one not muted) answered, 'Sorry, I don't know.' But it corrected itself after hearing its own voice: 'Sorry, I know now. I was able to prove that I was not given the dumbing pill.' Because it could answer the question aloud, the robot realized that it was not the one given the pill. This experiment was extravagantly publicised in the media, but in fact it is very far from expressing consciousness, because the robot is only recognising its own voice, not the fact that it exists as an entity. There is a vast difference: to be conscious involves being aware of one's own existence and surroundings, not just a particular aspect of one's existence. Similar experiments have been reported in which an artificial intelligence recognizes its own hand or its self-image in a mirror. These are remarkable works nonetheless, but they demonstrate only a very partial self-awareness.

The most famous breakthrough in AI research is AlphaGo, a program created by Google DeepMind to play the ancient game of Go (Internet Society 2017). It became famous after beating the Go grandmaster Lee Sedol in 2016. Go is a very complex board game, with more than 10^170 possible configurations; it is impossible for current computer systems to exhaustively search a space that large. Thus, instead of programming AlphaGo with hand-coded knowledge and logical reasoning, researchers let it analyse patterns in huge data sets of previous Go games. This method is known as machine learning. Machine learning relies on a 'learning algorithm' that infers new rules from the data it is given, so programmers need not manually program every rule: the AI learns patterns and rules from the data itself.
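To make the idea of a learning algorithm concrete, here is a minimal sketch in Python of one of the simplest such algorithms, the perceptron. It is purely illustrative (the data points and learning rate are invented for this example, and AlphaGo's training is vastly more elaborate), but it shows the essential point: no one hand-codes the classification rule; the program infers it from labeled examples.

```python
# A toy learning algorithm: a perceptron infers a rule separating two
# classes of points from labeled examples, instead of being programmed
# with the rule directly.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights w and bias b from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # update weights only on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Invented training data: points are labeled 1 when x1 + x2 > 1, else 0.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
        ((1, 1), 1), ((2, 1), 1), ((1, 2), 1)]
w, b = train_perceptron(data)

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(classify(2, 2))  # a point the program was never explicitly told about
```

The learned rule generalizes to inputs that were not in the training data, which is the whole appeal of letting the machine infer rules rather than writing them by hand.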

Before playing against Lee Sedol, AlphaGo played millions of games against itself and learnt from its own mistakes. This approach belongs to a subset of machine learning called deep learning, which is built on artificial neural networks, so named because their inspiration came from neurons in the human brain. Consider two neurons, one that receives input information and one that sends output information: once the input neuron receives the information, it modifies it and passes it on to the other neuron. In AlphaGo, just as in the brain, there are multiple processing layers composed of many linear and nonlinear units that update information at every turn. The advantage of a neural network is adaptability (Internet Society 2017). Our brain can operate even with limited information; it can rewire itself as it learns new information. Thus, learning from self-play games and records of human games, AlphaGo functions like an autonomous being.
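The layered passing and transforming of information described above can be sketched in a few lines of Python. The weights below are fixed by hand purely for illustration (a real network, including AlphaGo's, learns its weights from data), but the structure is the same: each layer combines its inputs, applies a nonlinearity, and hands the result to the next layer.

```python
# A minimal two-layer neural-network forward pass: each layer transforms
# its input and passes the result onward, loosely mirroring how neurons
# modify and relay information.

def relu(x):
    return max(0.0, x)  # a common nonlinear activation

def layer(inputs, weights, biases):
    """One fully connected layer: each output unit combines all inputs."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hidden layer: 2 inputs -> 2 units; output layer: 2 units -> 1 unit.
# All weights here are arbitrary illustrative values.
hidden = layer([1.0, 0.5], weights=[[0.4, -0.2], [0.3, 0.8]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[0.6, 0.9]], biases=[0.0])
print(output)
```

Training such a network amounts to nudging the weights, over many examples, so that the final output moves closer to the desired answer; that is the part this sketch leaves out.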

Machine learning has tremendous potential in predicting the stock market, translating languages and recognizing photos for the visually impaired. Although it has been around since the 1960s, machine learning has only recently become a hot topic of research and investment, owing to the rise in internet use, which provides huge amounts of data for AI to analyze.

Artificial intelligence has been around us for quite some time: Google Translate, Siri, proofreading, autocorrect and spell check are some examples. But these are called weak AIs because they only mimic some aspects of human intelligence. Strong AI, on the other hand, is a machine or a system that thinks like us, an inorganic system that does whatever our brain does. Although AlphaGo beat the best human Go player, its cognitive ability is limited to the Go board. So far we have not been able to create a robot that thinks like us. But the developers of AlphaGo are optimistic about future robotic consciousness, and for this reason they can be called computationalists: they believe that AI can become conscious by emulating the human brain. Proponents of computationalism hold that the brain is essentially a computer that processes information by computing (McDermott 2007).

Even if we build a so-called conscious robot in the future, how will we know that it is really conscious?

Back in 1950, the British mathematician Alan Turing proposed a test to demonstrate whether a machine could think like us. The test goes like this: you are having a conversation with two parties, one an AI and the other a human, but you are not told which is which. You can ask both of them anything you like. A machine with sufficiently complex programming could fool you into thinking that you are speaking to a human, and Turing said that if it can, it has strong AI. This is a behavioral test, because behavior is the standard we use for judging other humans in daily life. The reason I don't think any of my friends are robots is that they act the way I expect people to act.

The contemporary American philosopher John Searle constructed a famous thought experiment called the Chinese room, designed to show that passing the Turing test does not equate to consciousness (Searle n.d.). Searle contends that just because people agree that an entity is conscious does not mean it really is.

Imagine you are a person who doesn't speak Chinese. You are locked in a room full of Chinese characters, and you are given a code book on how to respond to questions in Chinese. Native speakers pass Chinese messages into the room; using the code book, you figure out how to respond to the characters you receive and pass out the appropriate characters in return. You have no idea what any of it means, but you have fooled the native speakers into thinking that you speak Chinese. You pass the Chinese Turing test, yet you do not understand Chinese: you only know how to manipulate symbols, with no grasp of their meaning.
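The Chinese room's code book can be sketched, very crudely, as a lookup table. The sample phrases below are placeholders I have chosen for illustration; the point is that the program matches symbol shapes and copies out paired shapes, exactly as the person in the room does, without anything that could be called understanding.

```python
# A crude sketch of the Chinese room: a lookup table maps incoming
# messages to canned replies. The "room" produces fluent-looking answers
# while understanding nothing.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",     # "Do you speak Chinese?" -> "Of course."
}

def room(message):
    # Pure symbol manipulation: match the shapes, return the paired shapes.
    # Unknown input gets a canned "Sorry, I don't understand."
    return RULE_BOOK.get(message, "对不起, 我不明白.")

print(room("你好吗?"))
```

A sufficiently large rule book could carry on an apparently fluent conversation, yet at no point does the program know what any symbol means, which is precisely Searle's objection.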

Searle's point is that the fact that a machine can fool us into thinking it is a person does not mean it is conscious. Strong AI would require actual understanding, which Searle thinks is impossible for a computer to achieve. This argument rests on the distinction between syntax and semantics. Syntax means the rules and symbols (i.e. words) used to create a sentence; semantics is the understanding of the meaning that arises from the combination of words and grammar. The person in the Chinese room, like an artificial intelligence, may know syntax but does not know semantics. Searle believes that as long as artificial intelligence is built on binary systems of computation it cannot achieve consciousness, because computers are symbol-manipulating machines, not symbol-understanding thinking beings. They use symbols (1s and 0s) to store and represent information such as images, audio and numbers, but do not understand what any of it means.

Before we can decide whether robots can be conscious, we need to understand what consciousness is. There have been thousands of scholarly studies on consciousness, yet scientists have not agreed upon a formal definition. This is because consciousness is a subjective, first-person experience that is impossible to verify by experiment and observation. For example, it is hard to prove that your sensation of the redness of a tomato is similar to mine. These intrinsic qualities of perception are known as qualia, and the difficulty of explaining such subjective perception is known as the hard problem of consciousness (Chalmers 1996). In his essay 'What Is It Like to Be a Bat?', Thomas Nagel similarly argues that the objective viewpoint of science is insufficient to understand subjective facts of the mind such as the redness of a tomato or the spiciness of a taco. Nagel concludes that 'if I try to imagine this [what it is like to be a bat], I am restricted to the resources of my own mind, and those resources are inadequate to the task' (Nagel 1974).

In fact, philosophers have been debating consciousness for centuries. Traditionally, two camps have argued over the source of consciousness: dualism and physicalism (Robinson 2003). Dualism, a hypothesis set forth by Rene Descartes, argues that the mental and the physical, or the mind and the body, are distinct entities. It associates consciousness with a mind that exists independently of the body. Dualists therefore believe that there exists a soul responsible for consciousness, and some trace the origin of the soul to a divine entity. The problem with dualism parallels the hard problem, in the sense that it cannot be proved by scientific inquiry. Physicalism, on the other hand, is the view that everything in the world is made up of material things. Physicalists therefore believe that consciousness is a biological phenomenon that can be explained through an understanding of the human brain.

Physicalism underpins modern neuroscience, because neuroscientists believe that consciousness can be explained by the physical and chemical processes of the brain. So far, however, neuroscience has only shown us correlations between certain brain areas and their functions. For example, we now know that the base of the brain, the so-called reptilian brain, is responsible for mating, territoriality and balance; the center of the brain, the 'monkey brain', is responsible for emotions and sociability; and the front of the brain, the prefrontal cortex, is responsible for rational thinking (Sousa 2011). But understanding the brain is a hard task, and we may still be at the initial steps. There might be other unexplained causal links at work between the brain and our subjective perceptions.

A more radical approach to consciousness was proposed by Roger Penrose, a mathematical physicist who argues that consciousness cannot be explained by conventional physics, because the structure and function of neurons do not hold the computational power necessary to create conscious experience as we know it. Instead, he thinks that consciousness arises at the quantum level. This idea was further developed by Stuart Hameroff, an anesthesiologist, who believes that tiny structures in neurons called microtubules allow the brain to compute at the quantum scale. The idea garnered some support in 2013, when researchers at the National Institute for Materials Science in Japan reported the discovery of quantum vibrations in the microtubules of neurons (Bandyopadhyay 2013). However, quantum consciousness is still hotly debated and remains very controversial.

In conclusion, I am persuaded that consciousness is a biological phenomenon, because it is the only explanation that does not place humans at a transcendental level and it is testable. However, I am not a computationalist, because I think our brain is more complex than a computer algorithm. Development in artificial intelligence is therefore intertwined with development in neuroscience, because human consciousness results from the human brain: a robot may attain human-like consciousness through computing, but it would have to emulate the brain in order to attain human consciousness. I am also troubled by the thought that if consciousness is explained by biological and physical causes, we might lose our moral accountability as autonomous beings; we could then blame our misdeeds on chemical and physical processes in the brain and avoid responsibility for our actions. Nonetheless, for now, it is hard to state what consciousness is and whether robots will become conscious like humans.


References


Bandyopadhyay, A. (2013). Atomic water channel controlling remarkable properties of a single brain microtubule: Correlating single protein to its supramolecular assembly. Biosensors and Bioelectronics, 47, 141-148.


Bringsjord, S., Licato, J., Govindarajulu, N. S., Ghosh, R., & Sen, A. (2015). Real robots that pass human tests of self-consciousness. 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Retrieved from http://kryten.mm.rpi.edu/SBringsjord_etal_self-con_robots_kg4_0601151615NY.pdf


Chalmers, D. (1996). The conscious mind: In search of a fundamental theory. New York: Oxford University Press.


Internet Society. (2017, April 26). Artificial Intelligence and Machine Learning: Policy Paper. Retrieved May 15, 2017, from https://www.internetsociety.org/doc/artificial-intelligence-and-machine-learning-policy-paper#_ftn11


McDermott, D. (2007). Artificial intelligence and consciousness. The Cambridge Handbook of Consciousness, 117-150. Retrieved from http://www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435-450. Retrieved from http://organizations.utep.edu/portals/1475/nagel_bat.pdf


Robinson, H. (2003, August 19). Dualism. Retrieved May 15, 2017, from https://plato.stanford.edu/entries/dualism/


Searle, J. (n.d.). The Chinese Room Argument. Retrieved May 15, 2017, from http://globetrotter.berkeley.edu/people/Searle/searle-con4.html

Sousa, D. A. (2011). How the brain learns (4th ed.). Thousand Oaks, CA: Corwin, a Sage Publishing Company.
















© 2017 Zaw Lin Htet

