WEF raises ethical issues related to AI
Artificial intelligence has been an area of interest for decades, but only recently has it quietly become the way we do business; and if the technology keeps advancing at this pace, it will not be long before AI becomes the way we live.
Now that AI adoption is at an all-time high, it is important to address the issues, fears and threats surrounding this boundless, revolutionary technology. The World Economic Forum, in its latest blog, discusses ethical issues related to AI and addresses some long-standing concerns.
Unemployment – One of the persistent questions, "Will robots/AI take my jobs?", was answered by the WEF with another question: we should instead ask, "How will we spend our time?"
The transformation of work would be toward more complex roles: from physical work to the cognitive labour that characterizes strategic and administrative work. This transition might help people focus on things that hold deeper meaning in life, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.
"If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live," says the blog.
Revenue Distribution – By using artificial intelligence, a company can drastically reduce its reliance on a human workforce, which means that revenues will go to fewer people. Consequently, individuals who hold ownership in AI-driven companies will make all the money, says the blog.
We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. “In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley” states the blog.
If we are truly heading for a post-work society, we need a fair post-labour economy as well.
Artificial Stupidity – AI technology can be fooled in ways that humans would not be. For example, random dot patterns can lead a machine to "see" things that are not there, notes the blog.
Also, a machine cannot be trained for every scenario it may come across in the real world, which may create new jobs overseeing machines: their continual learning, training and fool-proofing.
Security – The more powerful a technology becomes, the more it can be used for nefarious purposes as well as good ones. Imagine a battlefield of robot soldiers: not only would more devastation be created on the ground, but countries could be attacked at the cyber level as well.
We already have armed drones in the USA, UK, Iran and China. What if these robots suffered a glitch? Who would be responsible for a mistake made by a robot? Worse, what if artificial intelligence itself turned against us – the humans?
With widespread dependence on AI, even the smallest piece of malware can have echoing effects.
Singularity – Humans dominate other species largely through intelligence; will AI one day have the same advantage over us? We cannot rely on just "pulling the plug" either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the "singularity": the point in time when human beings are no longer the most intelligent beings on earth, says the blog.
Robot Rights – “Once we consider machines as entities that can perceive, feel and act, it's not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of "feeling" machines?”, asks the blog.
Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What's more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful "survive" and combine to form the next generation of instances.
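The "survival of the fittest" mechanism behind genetic algorithms can be sketched in a few lines of Python. This is a toy illustration, not code from the blog: the genome length, population size, mutation rate and the count-the-ones fitness function are all arbitrary choices made for the example.

```python
import random

random.seed(0)

GENOME_LEN = 20     # length of each bit-string "genome"
POP_SIZE = 30       # instances created per generation
GENERATIONS = 40
MUTATION_RATE = 0.02

def fitness(genome):
    """Toy reward: the number of 1-bits. Fitter genomes 'survive'."""
    return sum(genome)

def crossover(a, b):
    """Combine two surviving genomes at a random split point."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    # Start from a random population of bit-strings.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Only the fittest half survive and breed the next generation.
        survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        population = [mutate(crossover(random.choice(survivors),
                                       random.choice(survivors)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)} / {GENOME_LEN}")
```

Every generation, most instances are discarded; only the ones the reward function favours pass their "genes" on, which is what prompts the blog's question about the moral status of the unsuccessful instances.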