Artificial Intelligence

A Chapter by Debbie Barry

An essay, written for INF 103: Computer Literacy.

4/19/2010


In the not-so-distant past, artificial intelligence, or AI, was the stuff of science fiction books and movies.  One of the most famous examples of AI is the HAL 9000 computer, which was a major character in Arthur C. Clarke’s 2001: A Space Odyssey.  HAL interacted with the human character, Dave, on an almost human level.  Today, however, AI has ceased to be the sole property of science fiction and has become, instead, a practical reality in our modern world.

The Merriam-Webster Dictionary Online (2010) defines artificial intelligence as “a branch of computer science dealing with the simulation of intelligent behavior in computers … [and] the capability of a machine to imitate intelligent human behavior” (para. 1).  The scientific effort to develop AI got its start in the 1950s, when “a group of scientists decided to try to provide the computer with intelligence. Their goal seemed attainable due to a common metaphorical identification of the computer with a brain” (Gozzi, 1997, para. 2).  In the early 1960s, AI received attention from the government, with “funding from the Defense Advanced Research Projects Agency (DARPA) and Office of Naval Research (ONR)” (Waltz, 1996, para. 6).  Government interest in AI continued into the 1970s, with the U.S. Army, NASA, and other government agencies adding their support to AI research (Waltz, 1996).  AI did not remain only in the U.S., of course, and “[b]y the early 1980's an ‘expert systems’ industry had emerged, and Japan and Europe dramatically increased their funding of AI research” (Waltz, 1996, para. 7).  In 1970, Darrach predicted:

In from three to eight years we will have a machine with the general intelligence of an average human being … [and] [i]n a few months it will be at genius level and a few months after that its powers will be incalculable.  (cited in Gozzi, 1997, para. 5)

While that prediction has not come true, the Chinook checkers program, developed by Jonathan Schaeffer of the University of Alberta, has advanced to the point where “[t]here isn’t a human alive today that can ever win a game anymore against the full program” (Grayson, 2007, para. 2) because it has been programmed to learn and to adapt.  AI affects many aspects of modern life, with “Amtrak, Wells Fargo, Land's End, and many other organizations … replacing keypad-menu call centers with speech-recognition systems … [,] General Motors OnStar driver assistance system rel[ying] primarily on voice commands, … [t]he Lexus DVD Navigation System respond[ing] to over 100 commands and guid[ing] the driver with voice and visual directions … [, and] avatars … becoming common” (Halal, 2004, paras. 15-33).



Figure 1


Figure 2

With traditional, digital computers, it requires “the output of an entire power station” (Watson, 1997, para. 6) to perform 10¹⁶ operations per second, while the human brain can do the same amount of work “while consuming less power than an electric light bulb” (Watson, 1997, para. 6).  (See Figure 1.)  Newer analog computers, on the other hand, can “run at a computational speed a million times faster than the human brain” (Berne, 2001, para. 6).  (See Figure 2.)  With this increase in computational power, it is now possible to build “absolutely creative computers whose probably-useful output is unpredictable even in principle [and] effectively creative computers whose probably-useful output is unpredictable in practice” (Caulfield, 1995, para. 3).  In other words, it is now possible to build a computer that will behave like a human brain.  As a result, “[i]n the second decade of this century … it will be increasingly difficult to draw any clear distinction between the capabilities of human and machine intelligence” (Berne, 2001, para. 5).  It is estimated that, by the end of this century, “humans will be able to use scanning technology for the purpose of … downloading the brain[’]s contents into another receptacle” (Berne, 2001, para. 8).  Ultimately, some researchers believe, this downloading of the brain’s contents will “mak[e] a form of immortality” (Markoff, 2009, para. 11).


Table 1.  Four Common Applications of AI (adapted from Waltz, 1996)

Authorizing Financial Transactions
     Used by: credit card providers, telephone companies, mortgage lenders, banks, and the U.S. Government.
     AI systems detect fraud and expedite financial transactions, with daily transaction volumes in the billions.

Configuring Hardware and Software
     Applied to: custom computer systems, communications systems, and manufacturing systems.
     These systems track the rapid technological evolution of system components and specifications; systems currently deployed process billions of dollars of orders annually.

Diagnosing and Treating Problems
     Medical: diagnosis, prescribing treatment, and monitoring patient response.
     Technological: photocopiers, computer systems, and office automation; AI systems also monitor and control operations in factories and office buildings.

Scheduling for Manufacturing
     Applied to: manufacturing operations, job shop scheduling, assigning airport gates, assigning railway crews, and military settings.
     AI technology has shown itself superior to less adaptable systems based on older technology.

          While AI-assisted immortality is still a thing of the future, AI is in common use in four areas of life now: in authorizing financial transactions, in configuring hardware and software, in diagnosing and treating both medical and technological problems, and in scheduling for manufacturing (Waltz, 1996).  (See Table 1.)  Anyone who has ever called a business or a government agency and has talked to a voice-recognition program to navigate through the menu to reach a particular department has interacted with artificial intelligence.  Anyone who has instructed a hands-free cell phone to “call home” has interacted with artificial intelligence.  The popular TomTom navigation system, which tells drivers where to turn and helps them find the correct route when they miss a turn, uses artificial intelligence.  “[F]or the most part, AI does not produce stand-alone systems, but instead adds knowledge and reasoning to existing applications, databases, and environments, to make them friendlier, smarter, and more sensitive to user behavior and changes in their environments” (Waltz, 1996, para. 2).

          These examples of AI do not yet fully imitate humans, as they are not yet self-aware, nor do computers yet exhibit beliefs, desires, or emotions, but they are a major step toward the future that was embodied in “the HAL 9000 computer from Arthur Clarke's 2001: A Space Odyssey or the superhuman android, Lieutenant Commander Data, of the television program ‘Star Trek: The Next Generation’” (High-performance artificial intelligence, 1997, para. 5).  It is expected that computers will continue to acquire human traits, however, including “beliefs and desires, even emotions” (Sparrow, 2004, para. 1), and that “they will become fully fledged self-conscious ‘artificial intelligences’” (Sparrow, 2004, para. 1).  While the AI devices in use today are merely what is known as “weak AI,” some researchers are working “towards the creation of genuine artificial intelligence, a project known as ‘Strong AI’” (Sparrow, 2004, para. 5).  The development of strong AI may one day lead to the creation of artificial intelligence not unlike “the sort made popular by speculative fiction and films such as ‘Blade Runner’, ‘The Terminator’, ‘Alien’, ‘Aliens’ and ‘AI’” (Sparrow, 2004, para. 46).  In the pursuit of strong AI, “researchers [using neuromorphics] are capturing in silicon … the ‘essence’ of biological subsystems” (Watson, 1997, para. 2).  This concept harks back to another famous Arthur C. Clarke story, “Dial F for Frankenstein,” as well as to a 1993 paper by Vernor Vinge, “The Singularity” (Markoff, 2009).  Following the work of British mathematician Alan Turing, Daniel Dennett of the MIT Artificial Intelligence Lab has championed an intelligent robot called Cog (Proudfoot, 1999).  Cog has been designed to resemble a human in form as well as intelligence, having “‘hips’ and a ‘waist,’ and … hav[ing] skin and a face” (Proudfoot, 1999, para. 1).  Cog will be able to learn, and it will “delight in learning, abhor error, strive for novelty, [and] recognize progress” (Proudfoot, 1999, para. 1).

          As we move through the 21st century, it is not unreasonable to expect “a modest version of the talking computer made famous in 2001: A Space Odyssey” (Halal, 2004, para. 40) to become a reality, although Halal’s (2004) prediction that such a computer would be available in 2010 fell a bit short of the mark.  It will be important, as research and development of AI advances, to guard against the creation of anything like “‘Terminator Salvation’ [, which] comes complete with a malevolent artificial intelligence dubbed Skynet, a military R.&D. project that gained self-awareness and concluded that humans were an irritant … to be dispatched forthwith” (Markoff, 2009, para. 1).  While “[t]he history of artificial intelligence is littered with the wrecks of fantastical predictions of machine …” (Proudfoot, 1999, para. 3), AI continues to advance and to become entrenched in more and more aspects of daily life, and “it is dangerously presumptuous to claim that science will never progress to the point at which the question of the moral status of intelligent computers arises” (Sparrow, 2004, para. 5).  Instead, it may be wiser to accept the probability that AI will advance to this point, and to consider “whether such machines might be the ‘machines of loving grace,’ of the Richard Brautigan poem, or something far darker, of the ‘Terminator’ ilk” (Markoff, 2009, para. 17).

          While the world waits for “a personal computer … to simulate the brain-power of a trillion human brains” (Berne, 2001, para. 6), “[s]cientific advances are making it possible for people to talk to smart computers … [and to] exploit … the commercial potential of the Internet” (Halal, 2004, para. 1).  Rollo Carpenter has developed a program called Cleverbot, which is designed to learn conversational language.  Cleverbot “chats” with human users on the Internet “to learn how to generate better dialogue over time” (Saenz, 2010, para. 1).  Cleverbot does not yet interact with its human users on the level of the HAL 9000, but it “uses a growing database of 20+ million online conversations to talk with anyone who goes to its website” (Saenz, 2010, para. 1), which is located at http://www.cleverbot.com.
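The idea of a program that reuses replies from past conversations can be illustrated with a toy sketch.  The code below is not Cleverbot's actual algorithm (which is proprietary); it is a minimal, hypothetical retrieval-based chatterbot that remembers prompt-and-reply pairs and answers a new prompt by reusing the reply given to the most similar remembered prompt.

```python
# A toy retrieval-based chatterbot (illustrative only; NOT Cleverbot's
# real algorithm).  It stores (prompt, reply) pairs from past exchanges
# and answers a new prompt by fuzzy-matching it against stored prompts.
import difflib


class TinyChatBot:
    def __init__(self):
        self.memory = {}  # maps a remembered prompt to the reply that followed it

    def learn(self, prompt, reply):
        """Record one exchange from a past conversation."""
        self.memory[prompt.lower()] = reply

    def respond(self, prompt):
        """Reply with the answer to the closest remembered prompt."""
        matches = difflib.get_close_matches(
            prompt.lower(), self.memory.keys(), n=1, cutoff=0.4)
        if matches:
            return self.memory[matches[0]]
        return "Tell me more."  # fallback when nothing is similar enough


bot = TinyChatBot()
bot.learn("Hello", "Hi there!")
bot.learn("What is your name?", "I'm a chatterbot.")
print(bot.respond("hello"))             # reuses the reply learned for "Hello"
print(bot.respond("what's your name"))  # close enough to a stored prompt
```

A real system of this kind would grow its memory with every conversation it holds, which is why, as Saenz (2010) notes, a larger database of past exchanges tends to produce better dialogue.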

          From cell phones to navigation systems to medical diagnostics, AI has moved out of the realm of science fiction and has become a very present, practical reality of modern life.  As chatterbot programs like Cleverbot advance, the future of AI appears bright, and almost limitless.  For now, we can all contribute to the development of AI by logging on to chat with Cleverbot while we wait on hold for voice-recognition customer service answering systems on our smartphones.

 


References


“Artificial Intelligence.”  (2010).  Merriam-Webster Online Dictionary.  Retrieved April 13, 2010, from http://www.merriam-webster.com/dictionary/artificial+intelligence

Berne, R.  (2001, Fall).  “Robosapiens, Transhumanism, and the Kurzweilian Utopia: Why the Trans in Transhumanism?”  Iris, 43, 36.  Retrieved March 16, 2010, from ProQuest database.

Caulfield, H. J.  (1995).  “The computer subconscious.”  Kybernetes, 24(4), 46-52.  Retrieved March 16, 2010, from ProQuest database.

Gozzi, R.  (1997, Summer).  “Artificial Intelligence: Metaphor or oxymoron?”  Et Cetera, 54(2), 219-224.  Retrieved March 16, 2010, from ProQuest database.

Grayson, B.  (2007, July 19).  The Next Jump in Artificial Intelligence.  Retrieved March 16, 2010, from http://discovermagazine.com/2007/jul/the-next-jump-in-artificial-intelligence/article_print

Halal, W. E.  (2004, March/April).  “The Intelligent Internet: The Promise of Smart Computers and E-Commerce.”  The Futurist, 38(2), 27-32.  Retrieved March 16, 2010, from ProQuest database.

“High-performance artificial intelligence.”  (1997, August 12).  Science, 265(5174), 891-892.  Retrieved March 16, 2010, from ProQuest database.

Markoff, J.  (2009, May 24).  “The Coming Superbrain.”  The New York Times.  Retrieved March 16, 2010, from http://www.nytimes.com/2009/05/24/weekinreview/24markoff.html

Proudfoot, D.  (1999, April 30).  “How human can they get?”  Science, 284(5415), 745.  Retrieved March 16, 2010, from ProQuest database.

Saenz, A.  (2010, January 13).  Cleverbot Chat Engine Is Learning From The Internet To Talk Like A Human.  Retrieved April 13, 2010, from http://singularityhub.com/2010/01/13/cleverbot-chat-engine-is-learning-from-the-internet-to-talk-like-a-human/

Sparrow, R.  (2004).  “The Turing Triage Test.”  Ethics and Information Technology, 6, 203-213.  Retrieved March 16, 2010, from ProQuest database.

Waltz, D. L.  (1996).  Artificial Intelligence: Realizing the Ultimate Promises of Computing.  Retrieved March 16, 2010, from http://www.cs.washington.edu/homes/lazowska/cra/ai.html

Watson, A.  (1997, September 26).  “Why can’t a computer be more like a brain?”  Science, 277(5334), 1934-1936.  Retrieved March 16, 2010, from ProQuest database.





© 2017 Debbie Barry


