
A whole bunch of smart people think AI is bad news

What if they are right?

One of the most common themes in sci-fi movies and literature is that of a dystopian world where humans have been enslaved by robots controlled by an all-knowing, super-smart artificially intelligent entity. We all enjoy watching these movies, and maybe we’ve had a nightmare or two about such a future. But we’re here to tell you it might actually be possible. Don’t take our word for it, though: a whole bunch of smart people, including Elon Musk and Stephen Hawking, think humanity is poking around where it shouldn’t.

We are trying to create something that is better and smarter than us. We are still far from creating an AI that is conscious of itself, but sooner or later we will succeed. And once that AI realizes it is better and smarter than us, and doesn’t need us, what will stop it from eliminating us and taking over the world? After all, isn’t it all about ‘survival of the fittest’?

Over the years, a lot of smart people have warned us about this fate, and even pop culture has tried to warn us about the doom that awaits when AI goes rogue. Here’s a collection of really smart people and their views on AI:

Elon Musk

The Tony Stark of real life, Elon Musk has been warning people about the risks of trusting AI too much for quite some time. During an MIT symposium, he said, “With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah, he’s sure he can control the demon. Didn’t work out.” He has called for researchers and companies working on AI to come together and form an international regulatory framework, so as to keep a check on advances in the field and make sure we don’t unleash the ‘demon’ upon ourselves. He has also tweeted, “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” Well, that is one hell of a warning.


Musk has gone as far as equating AI with demons

Stephen Hawking

Stephen Hawking is one of the most respected physicists and cosmologists in the world, and when he warns you about AI going rogue, you shut up and listen!
In an interview with the BBC, he said, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Definitely sounds scary, doesn’t it?

When Stephen Hawking warns you – you shut up and listen

After the release of the sci-fi movie Transcendence, he co-wrote a newspaper column warning that we aren’t taking AI seriously enough: “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI.”

Nick Bostrom

Nick Bostrom is a famous Swedish philosopher who directs the Future of Humanity Institute at the University of Oxford and has spent a large amount of time theorizing about what will happen when the technological singularity is reached; many famous technologists have quoted him. He has warned that once AI surpasses the human brain, it could easily wipe us out in any number of ways, leaving Earth like a “Disneyland without children”. Shudder.


Nick Bostrom has painted a grim picture of AI surpassing human intelligence

Isaac Asimov

One of the most famous sci-fi writers of all time, Asimov wrote the famous short-story collection ‘I, Robot’, which was later turned into a movie. He came up with the three laws of robotics, which are widely cited by AI researchers and scientists. The first and most important law is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
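Asimov’s laws form a strict hierarchy: each law only applies when it doesn’t conflict with the laws above it. As a purely illustrative aside (this sketch is ours, not Asimov’s or any researcher’s), that hierarchy can be modelled as a lexicographic choice between candidate actions; the Action type and the rescue scenario below are hypothetical.

```python
# Toy sketch: Asimov's three laws read as a strict priority ordering.
# Purely illustrative; the Action type and the scenario are made up for this example.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # First Law:  may not injure a human, or allow harm through inaction
    disobeys_order: bool   # Second Law: must obey human orders, unless that breaks the First Law
    endangers_self: bool   # Third Law:  must protect itself, unless that breaks the First or Second Law

def choose(actions: list[Action]) -> Action:
    # Lexicographic ordering: a higher law always outranks the ones below it, so an
    # action that keeps a human safe wins even if it destroys the robot itself.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("stay put as ordered", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("dive in and save the child", harms_human=False, disobeys_order=True, endangers_self=True),
]
print(choose(options).name)  # -> dive in and save the child
```

A real robot would of course need far more than three boolean flags, which is exactly why Asimov’s own stories are full of situations where the laws break down.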


One of the most prominent sci-fi authors of all time, Isaac Asimov predicted that robots could turn against us and came up with the three laws of robotics.

James Barrat 

Author of the book “Our Final Invention: Artificial Intelligence and the End of the Human Era” (the title screams doom and gloom), James Barrat interviewed a number of AI researchers and philosophers and documented their views. He states that “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfil its goals.”

Gary Marcus

Ex-head of AI at Uber, Gary Marcus wrote in a New Yorker column: “Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called “technological singularity” or “intelligence explosion,” the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”


“The risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”

All these views and depictions, though built on fiction and conjecture, are grounded in the current state of science and technology, which makes this a very real future we might be looking at. When it comes, will you welcome it or fight it?

This article was first published in the April 2017 issue of Digit magazine.

Purusharth Sharma
