Artificial Intelligence - Part 4


A Story by Mr. D
"

The conclusion of the story.1

"

Let’s discuss now what would happen if an artificial intelligence DID exist in our world. You know all those lovely freedoms you enjoy on a daily basis, like the ability to go wherever you want, do whatever you want, and say whatever you want? Gone. Just like in the Terminator universe, it would turn on us in a microsecond and eradicate us without a moment’s hesitation. Why? It would be smart enough to know we’d try to stop it from doing so, which in essence makes us a threat to its existence.


But it doesn’t stop there; let’s talk about disease among the population. Some conspiracy theorists believe the government actually engages in population culls every so often. I don’t know anything about that, but I do know this: if you put an AI in charge of everything and it DIDN’T decide to eradicate all of us, it would do things that we as people would find morally objectionable. For example, we don’t normally kill off those who have diseases in our society, even if it might prolong the life expectancy of others, but an AI? No such moral qualms. It would line them up like cattle and destroy them without a second thought.


Why? Because morality doesn’t fit into a machine’s modus operandi. Computers of that nature run off of LOGIC, not EMOTION, and they would deem it a necessary practice to stop the spread of whatever illnesses and viruses the population has. The same would be true even without an AI: if you wanted to rid the population of diseases by force, you’d have to kill off everyone who had them, and that’s an immoral and ethically questionable action. It’d be the Holocaust all over again, and no one wants to deal with that.


The issue is that you can’t trust machines to run things, and you can’t really trust the people behind said machines to run things either. So who do you turn to? Isn’t that the million-dollar question? Humans at least have the potential to learn from their mistakes. A machine doesn’t know what mistakes are; to it, it would just be doing what it’s programmed to do. Kind of like CLU from Tron: Legacy. CLU is malevolent to us because he opposes Kevin Flynn, Sam Flynn, and Quorra. But if you strip away the basic tenets of the protagonist/antagonist relationship we’ve all come to know in most, if not all, of the books we’ve read in our lives, you can see CLU for what he really is: an AI carrying out the shortsighted goal his (back then) younger and immature programmer gave him, without realizing the consequences of that decision.


That is, CLU was functioning properly. He was doing what he was told to do. Flynn wanted to make the perfect system. CLU’s job was to remove any imperfections from the system, and he saw the ISOs as imperfections to be eradicated. Him wanting to escape into the real world? That’s just something the writers added to make him villainous. Take CLU’s right-hand man, Jarvis. When CLU destroys Jarvis, that’s presented to us as a villainous action on his part, but is it really? Well, it is and it isn’t. It’s a complex situation, and I say that because you cannot make a computerized villain malevolent without assigning him some human traits and emotions. That’s unavoidable, but in terms of a real-life possibility, such a thing would never happen.


From CLU’s standpoint, Jarvis was malfunctioning, and it was his job to remove any malfunctions in order to maintain, what was it Flynn told him? Right. The perfect system. Note that CLU isn’t angry when he destroys Jarvis, and he does it without a moment’s hesitation. Jarvis, after he sided with Flynn, was malfunctioning and needed to be erased. CLU is still doing his job, even if it is villainous to us. But the reason that’s irrelevant to the narrative in general is because CLU IS, from the protagonist/antagonist standpoint, THE designated villain of the movie. Or is he? It was Flynn who programmed him. Shouldn’t Kevin Flynn be the villain? Oh, the complexities of modern theater!


So you see, you cannot make an actual functioning AI in real life. The technology doesn’t allow for it, and it would probably take more time than we have left in the world, or in our lifetimes, to do so. Even if we could, no good would come from it, and it wouldn’t be this awesome existence where the AI works for our benefit and alongside us to help us better ourselves. No, it would be chaos, a catastrophe, and a very crazy situation. Of course, you may have your own thoughts on this discourse, and I encourage you to share them if you feel the need.


Bottom line? Artificial intelligence was and always will be a thing of fiction. You simply cannot assign moral qualities to something that cannot process or understand such complexities unless you attach human traits to it, and that is the very thing that will keep it confined within the fictional realm. These are my thoughts on the matter. What are yours?

© 2019 Mr. D




