Artificial Intelligence: Boon or Bane?

Eliezer Yudkowsky said:

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

Eminent scientist Stephen Hawking’s prediction that the development of Artificial Intelligence (AI), a much-hyped preoccupation of modern science, may lead to the end of the human race has revived an old debate.

From answering queries to predicting the future of your relationships, Artificial Intelligence (AI) is much discussed nowadays.

At the most basic level, AI refers to machines that can think intelligently and creatively and act autonomously. Hawking’s fears are shared by many others. A few weeks ago, well-known US technologist Elon Musk said that AI might turn out to be a demon that poses the biggest existential threat to the human race.

We have seen movies depicting the technology, like The Matrix, and even Bollywood has tried its hand at explaining what AI is, along with its fair share of melodrama. But what seems fascinating, and equally scary, is a new report talking about an AI arms race.

An army of machines may be decades away, but Anja Kaspersen, Head of International Security at the World Economic Forum, pointing to a survey of AI researchers by TechEmergence (via Medium), notes that AI poses an array of security concerns which could be curbed by the timely implementation of norms and protocols. Many questions have been raised about how AI could be both life-changing and threatening, and about what happens if it falls into the hands of malicious minds. Remember those movies in which the hero is always protecting some sci-fi technology from those who want to use it for destruction? That could become a real-life scenario sooner than expected.

Talking about the dark side of the deep web, the report points out that destructive tools like 3D-printed weapons are already on sale. Another scenario highlighted in the report asks you to imagine a gun combined with a quadcopter drone, a high-resolution camera and a facial recognition algorithm that can detect specific faces, or targets, and assassinate them as it flies across the skies.

This is not Hawking’s first warning. Earlier this year, he said that success in creating AI could be the biggest event in human history, but could also be the last.

Hawking’s case is that AI may one day take off on its own, redesigning and replicating itself at a fast pace and superseding the human race, whose biological evolution cannot keep pace with it.

“Such a device would require no super intelligence. It is conceivable using current, ‘narrow’ AI that cannot yet make the kind of creative leaps of understanding across distinct domains that humans can. When ‘artificial general intelligence’, or AGI, is developed — as seems likely, sooner or later — it will significantly increase both the potential benefits of AI and, in the words of Jeff Goodell, its security risks, ‘forcing a new kind of accounting with the technological genie’,” Kaspersen explains.

Science fiction and futuristic films have often projected scary scenarios of robots acquiring the power to challenge their creators. HAL 9000 in 2001: A Space Odyssey and the cyborg assassin in The Terminator remain products of imagination, along with many other malevolent beings in the realm of art that have kept us on edge.

They faded like nightmares, but history has shown many times that today’s imagination is tomorrow’s reality. Fiction may be dismissed as unreliable and sensational, but when one of the world’s smartest scientists issues a warning, it needs to be taken seriously.

There are many who do not share the almost apocalyptic view of Hawking and Musk. Some say that robots will never be able to replace human beings, even if they may do everything more efficiently.

Some researchers suggest that human intervention could help enforce control. However, the report quickly points out that experience with driverless cars suggests “humans struggle to stay alert and lose situational awareness when supervising a system that usually runs in automated mode”.

Yet others have said that the birth of an intelligent robot cannot happen in the near future. But that does not rule out the possibility that Hawking spoke of. Some have also talked of developing ethical robots which will follow rules and regulations.

A paper published by Google-acquired DeepMind points out how humans can supervise and take control, albeit temporarily, by making the system believe it needs to shut down. But there is no mention of when and how a human determines it is time to take control. “An additional concern is that any weapons system with a degree of autonomy could be spoofed and the programmed objectives corrupted remotely by a purpose-engineered virus like the Stuxnet worm,” the report further adds.

Another important point highlighted is the gap between those working on or developing AI and the public-sector decision-makers who usually make policy but have little understanding of the complexities that come with such technologies. Needless to say, militaries, even if they are weaponising AI, will not talk about it. There has been one exception, adds the report, hinting at Russia’s “Iron Man” humanoid. As humanoids reduce the risk to soldiers, the US and Chinese militaries are also believed to be investing in AI and robotics.

Even if AI is being weaponised now, the earlier iterations could be buggy. The surveyed researchers also found it difficult to predict what will happen when artificial intelligences engage with each other. Citing robotics professor Mary Cummings, the report adds that Google and Amazon “will soon have much more surveillance capability with drones than the military”.

“Another implication of the brain drain from the military to private sector is a reduction in capacity to test and verify the effectiveness of technology, to a degree that would instil confidence in battle situations,” claims the report.

Moving further, what happens when an artificial general intelligence works on improving itself? Engineers still do not completely understand the deep neural networks that run today’s narrow AI applications. At the same time, many AI applications have life-enhancing potential, which means we cannot simply hold back their development.

This gives rise to the need for norms, protocols, and mechanisms that will oversee AI. “A new approach to the oversight and governance of AI would map the interests of relevant stakeholders as well as existing efforts to develop a shared concept on mitigating the security implications of AI,” the report adds.
No one knows how such experiments will end. Hawking may, perhaps, be too alarmist. But it must be noted that he has not called for an end to all research and development of artificial intelligence. He has only called for caution and for setting a limit beyond which dabbling in AI may become dangerous. Mankind still has the power and freedom to decide whether or not it should destroy itself. That power and freedom should remain with mankind.

Some scholars, like Stuart Russell, have called for action to avoid “potential pitfalls” in the development of AI, a call backed by leading technologists. Lethal autonomous weapons systems (LAWS), or “killer robots”, are considered one such pitfall. The U.N. Human Rights Council has asked for a moratorium on the development of LAWS, and many others go further and suggest a full ban.

While little can be done to stop LAWS now, some claim that killer robots could reduce the number of soldiers who die. “The counterpoint is that politicians might be more ready to start wars when they are sending robots than humans into battle — and the technology, once developed, is likely sooner or later to be used by those with scant regard for humanitarianism,” adds the report.

Our understanding is still the tip of the iceberg, and we cannot predict the future.

As rightly said by James Barrat in Our Final Invention: Artificial Intelligence and the End of the Human Era:

 

“A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right.” 

                                                                                                -Manaswini Das

IMI New Delhi

PGDMHR 2016-18

References: http://tech.firstpost.com/news-analysis/artificial-intelligence-a-boon-or-curse-322122.html

http://www.deccanherald.com/content/447312/artificial-intelligence-boon-bane.html