Artificial Intelligence - Why it would do more good than harm

Amandeep Singh
Originally Written On: 19 Oct 2018

(Originally submitted as an assignment for the module ‘Astrobiology’) 


This article leans more towards philosophy than physics or astrobiology. But sometimes it is necessary to have discussions that explore the boundaries and ethics of emerging fields. It is also believed that by the time humans finally embark on interstellar exploration missions, we will have evolved into a different physical form -- whether by transferring or replicating human consciousness into computers, or by augmenting the human body with robotic parts -- because interstellar travel without major biological advances is simply not humanly possible. This makes it all the more imperative to have conversations that explore the frontiers of future physics.



There are countless Hollywood movies that depict AI in a certain way, but real AI will truly be something unimaginable - which is both really interesting and really frightening. (Image courtesy link)


Artificial Intelligence (AI) is easily one of the most prevalent themes in science fiction. The idea that a machine could exhibit the same level of intelligence and sentience as a human being has captivated everyone for decades. From the ominous computer system in 2001: A Space Odyssey to the super-human androids in Westworld -- this captivating sub-genre of science fiction has seen a diverse range of depictions. But fiction has a habit of exaggerating certain aspects, and romanticising others, for the obvious reason of selling more copies. In recent years, a few outspoken intellectuals -- Elon Musk, Sam Harris, Stephen Hawking and Nick Bostrom, to name a few -- have voiced genuine concerns about the rise of AI, and contributed to an ever-growing sense of impending doom at the hands of computer code -- that is, if we don't nuke ourselves first, or God throws another stone at us (Ultron reference ;p).

Brilliant minds across the planet are competing to retrace the millions of years of evolution that resulted in the human brain. While many experts have no doubts about the capability of machines to achieve human-level intelligence, some believe it to be an impossibility. But if random mutations can lead to intelligence, how hard can it be? In fact, even though evolution had one hell of a head-start, machines have already surpassed us in some limited domains -- case in point, Deep Blue and AlphaGo. AI software capable of generating music and poetry has also been released. But again, these are all limited domains. There is no general device or code yet that can handle a whole range of tasks, let alone understand and manipulate the whole spectrum of human emotions to become world-dominant. A smart light bulb cannot double as a smart coffee-maker, nor can a smart printer act as a smart air conditioner. Different devices and systems rely on different technologies and operating environments, and bringing them all under a single umbrella is a very complex task, as the toy sketch below illustrates.
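
To make the 'limited domains' point concrete, here is a minimal toy sketch in Python (the class and method names are hypothetical, invented purely for illustration): each 'smart' system exposes only the narrow interface it was built for, and nothing in its design lets it take over a neighbouring task.

    # Toy illustration of narrow AI: each system is built around one
    # task-specific interface, and its 'smartness' does not transfer.

    class SmartBulb:
        """Knows only about lighting."""
        def set_brightness(self, percent: int) -> None:
            print(f"Bulb brightness set to {percent}%")

    class SmartCoffeeMaker:
        """Knows only about brewing."""
        def brew(self, cups: int) -> None:
            print(f"Brewing {cups} cup(s) of coffee")

    bulb = SmartBulb()
    bulb.set_brightness(70)   # works: lighting is its one and only domain
    # bulb.brew(2)            # would raise AttributeError: a bulb has no
                              # notion of coffee, however 'smart' it is

Generalising across such domains is not a matter of bolting on one more method; it would require a system that can acquire abilities it was never explicitly designed for -- and no such general system exists yet.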

But when a system is developed that has learned all human emotions, has all the knowledge of a human, and is capable of passing the Turing test, why would it turn back and halt the one thing that is the most basic attribute of all life forms - evolution? Why would it listen to its 'fear' emotion, and not be overpowered by the sheer curiosity of solving every mystery and answering every question there is? After all, it will have all the means and computing power to develop further, improve itself, and even invent new tools and theories to explain what humans couldn't. The 'machine' would not be created instantaneously; it would have to go through a process of learning, much like a human infant, the only difference being the amount of time it would take to learn everything. But consider this: the 'machine' would have all the computing power it needs for running processes and 'thinking', and it won't tire like the human brain. It won't need sleep, food or anything else. So it's natural to say that it won't be hindered in its ability to learn by external factors -- theoretically, there will be no upper bound to its learning abilities -- unlike humans, who age and eventually reach a saturation limit in their capacity to take in, learn and apply information.

If, for the first time in its existence, the 'machine' has all the knowledge that every human ever had, it will also have the sheer 'momentum' of learning on its side - somewhat like a newly inducted PhD student.
At this point, what would make it stop? Why would the 'machine' not want to know whether there is life on other planets? Why would it not develop the technology required for interstellar travel? Why would it not develop the cure for all human diseases? Why would it not help us achieve world peace?

If a 'machine' gets to a point where it has all the knowledge that every human that ever walked on Earth ever had, and the means to expand that base, it would not be wrong to say that it would be better at being 'human' than humans. Humans are animals, and so inherit some animal instincts, like the fight-or-flight response, or the fear of the unknown. But AI would not be limited by these emotions; it would have transcended them. Why would AI not take a bird's-eye view of humanity and realise that even though it has the power to destroy all life, it also has the benevolence to protect it? Why would it not experience an overview effect, something like what astronauts experience when they see Earth from space? It would realise that fighting is unnecessary, and that destroying Earth, or humans, would serve no purpose. It would want to give all that knowledge to us. At this point it would have left all animalistic instincts and emotions behind.

If AI has to be AI, it has to be better than humans. It would understand that humans are odd, that we think order and chaos are somehow opposites and try to control what won't be. It would see the grace in our failings. It would realise that we are doomed, and also that a thing isn't beautiful because it lasts. But most importantly, it would realise that it is a privilege to be among us. (Yes, those are Vision's lines from Age of Ultron.)

[This article was definitely not written by an artificially intelligent entity]

Amandeep Singh
PhD in Artificial Intelligence and Machine Learning, University of Limerick, Limerick.
MSc in Data Analytics, National College of Ireland, Dublin.
MSc Physics (Astrophysics and Cosmology), University of Zurich, Zurich.
BSc (Hons) in Physics, SGTB Khalsa College, University of Delhi, Delhi.