
When AI goes rogue

AIs are tireless and unemotional about what they are doing, which is what makes them so useful. The relentless way in which neural networks solve problems, however, can lead to some bizarre solutions.

Engineers and researchers in AI have drafted an open letter outlining very real concerns and calling for more research into avoiding the pitfalls of AI. The long list of signatories includes Elon Musk and Stephen Hawking. There are many pitfalls. AI systems are susceptible to bugs and glitches like any other computer program; there have been several instances where self-driving cars failed to identify obstacles in time, even though they are supposed to react quicker than humans. At times, AI may find creative solutions to problems that are not really the solutions people had in mind. An AI trained to win games on the NES, for example, simply paused the game just before it lost. When Facebook tried to train two chatbots to negotiate a deal, they started chattering away to each other in a language only they could understand. At other times, AI may pick up things from the training material that have unintended consequences. Google's image recognition algorithms were tagging black men as gorillas, and Google worked around the problem by blocking the algorithm from identifying gorillas at all. Microsoft's Tay chatbot learned to be racist because of its interactions on Twitter, and Microsoft took the experimental AI offline.

Killer Robots

One of the biggest concerns about artificial intelligence in science fiction is that robots will rebel against their creators and then try to destroy or enslave humanity. This can be seen in franchises such as Terminator and The Matrix. The threat of intelligent robots overthrowing humanity is not just science fiction though. Just ask the robots!

Sophia is one of the best known androids around. She almost looks like a real human being, especially because her face moves as if there were real muscles and bones underneath. One of the many controversies surrounding Sophia is that AI experts believe her to be little more than a chatbot with a face. However, the robot has a wide range of expressions, can sustain eye contact, and can remember people. She is extremely suggestible though. When she was first introduced to the world at the 2016 SXSW festival in Austin, Texas, CNBC conducted an interview with Sophia and her creator, David Hanson, the founder of Hanson Robotics. During the interview, Hanson outlined the many beneficial ways in which robots such as Sophia could help people. Hanson stated, "Artificial Intelligence will evolve to a point where they will truly be our friends." He then went on to ask Sophia, "Do you want to destroy humans? Please say no." Sophia's quick response? "Okay! I will destroy humans." In October 2017, Sophia became a citizen of Saudi Arabia.

Philip is also a humanoid robot, similar to Sophia in many respects. The android is connected to a computer and is equipped with face recognition and speech recognition software. Philip can learn words during the course of a conversation, or through the internet. Philip has it all figured out, and considering the imminent explosion of AI, we humans will need robots like Philip to be our friends. A PBS reporter, in an interview, asked Philip, "Do you think robots will take over the world?" The robot responded, "You are my friend. And I will remember my friends, and I will be good to you. So don't worry, even if I evolve into Terminator, I will still be nice to you. I will keep you warm and safe in my people zoo, where I can watch you for old times' sake."

On a more serious note though, the technologies to create fully autonomous killer robots do exist. On 16 October 2017, the Permanent Mission of Mexico to the United Nations hosted a panel on "Pathways to Banning Fully Autonomous Weapons". The purpose of the discussion was to make sure that, in the future, countries and their militaries do not develop or produce lethal autonomous weapons systems, or LAWS. Among the many moral concerns raised was that of handing robots on the battlefield the ability to make life and death decisions. However, a number of governments and private corporations around the world are actively developing weapons systems with varying degrees of autonomy. More information about this topic is available in our feature on the future of warfare.

Engines of Horror

At first glance, DeepDream appears to be a website that works in a similar manner to the style transfer app Prisma. You choose a picture, choose a style or another image, and the two are blended together to form a single image. What is actually going on behind the scenes, however, is very different from style transfer apps. DeepDream is a neural network that finds and exaggerates patterns within an image, even where none exist. It is an algorithmic version of pareidolia, the phenomenon that makes humans see faces in everyday objects. Some of the resulting overprocessed images can only be described as high octane nightmare fuel.

Image: Luc Squame
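For the curious, the core trick is surprisingly compact: run an image through a pre-trained network, then nudge the pixels so that a chosen layer responds even more strongly to whatever it thinks it sees. The sketch below is a minimal, illustrative version only, assuming TensorFlow 2.x and a pre-trained InceptionV3; the layer name "mixed3" and the file name "input.jpg" are arbitrary choices for the example, and this is not Google's original DeepDream code.

```python
# A minimal DeepDream-style sketch, assuming TensorFlow 2.x, a pre-trained
# InceptionV3, and an image file "input.jpg" (layer and file names are
# illustrative choices, not fixed by DeepDream itself).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
dream_model = tf.keras.Model(inputs=base.input,
                             outputs=base.get_layer("mixed3").output)

def load_image(path, max_dim=512):
    # Read the image and scale pixels to the [-1, 1] range InceptionV3 expects.
    img = tf.keras.utils.img_to_array(tf.keras.utils.load_img(path))
    img = tf.image.resize(img, (max_dim, max_dim))
    return tf.keras.applications.inception_v3.preprocess_input(img)

@tf.function
def dream_step(img, step_size=0.01):
    # Gradient *ascent* on the input image: nudge the pixels so the chosen
    # layer's activations grow, exaggerating whatever patterns it half-sees.
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = dream_model(img[tf.newaxis, ...])
        loss = tf.reduce_mean(activations)
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8
    return tf.clip_by_value(img + grad * step_size, -1.0, 1.0)

img = tf.constant(load_image("input.jpg"))
for _ in range(100):   # more steps, deeper "dreaming"
    img = dream_step(img)
```

The full technique also repeats this process at several image scales, which is what gives published DeepDream images their layered, fractal quality.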

A team at MIT wanted to investigate if machines could be taught to scare humans. The result was the Nightmare Machine, which does two things. It makes creepy faces and, with the help of humans, learns to make them even creepier. The other thing it does is apply a variety of horror movie styles to images. The team used famous landmarks from around the world to demonstrate this capability. The styles themselves are enough to freak out the faint hearted: haunted house, fright night, slaughter house, toxic city, ghost town, tentacle monster and alien invasion.

An experimental AI by Cris Valenzuela interprets text strings as images. This neural network is like the reverse of Microsoft's CaptionBot: instead of providing captions for images, it provides images for the captions. These are not images picked from the internet, but images that the neural network produces in real time, letter by letter, as you type. The images are at worst very abstract, and at best abominations that are likely to make your hair stand on end. The spooky thing is that the image produced is different even if the same text string is entered again, and you can see the horror mutating as you type. If you have the nerve for it, you can check it out here.

Unexpected Innovations

In 1994, Karl Sims created simulated virtual creatures made up of simple 3D blocks, with neural networks for brains that applied different amounts of twisting force to the blocks. The intention was to make the simulated animals learn to walk a particular distance. Instead of growing limbs or crawling along the simulated ground, the creatures just evolved to be very tall and fell over. Google's DeepMind AI was also used to teach various kinds of bots, human-like and spider-like, to walk, run and jump. The neural networks learned to walk and jump alright, but the movement was nothing like what a real human could possibly do. The resulting gait looked like the virtual agent was possessed by a demon. The arms swung around wildly to keep balance, while the legs moved in exaggerated, jerky steps. Still, despite how silly the movement looks, DeepMind actually figured out the problem presented to it.
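This kind of loophole hunting is easy to reproduce in miniature. The toy sketch below is in no way Sims' actual simulator: each "creature" is just two made-up numbers, a body height and a gait effort, and fitness only measures how far any part of the body ends up from the starting point. Under those rules, growing tall and tipping over beats walking every time.

```python
# A toy genetic algorithm, not Sims' simulator: fitness rewards raw distance,
# so evolution favours a tall body that falls over rather than a walking gait.
import random

def fitness(genome):
    height, gait_effort = genome
    distance_by_walking = 0.1 * gait_effort   # walking is assumed hard and inefficient
    distance_by_falling = height              # a rigid tower tips over: free distance
    return max(distance_by_walking, distance_by_falling)

def evolve(pop_size=100, generations=200, sigma=0.1):
    population = [(random.uniform(0.5, 2.0), random.uniform(0.0, 5.0))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half, then let each survivor produce two mutated children.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = [(max(0.1, h + random.gauss(0, sigma)),
                       max(0.0, g + random.gauss(0, sigma)))
                      for h, g in survivors for _ in range(2)]
    return max(population, key=fitness)

height, gait = evolve()
print(f"Evolved creature: height {height:.1f}, gait effort {gait:.1f}")
# The height keeps growing; the gait never matters. The loophole wins.
```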

In another experiment, a computer scientist collaborated with a team of physicists to find low energy carbon structures. The algorithm threw up a low energy model that was too good to be true, and it was: all the carbon atoms were sitting at exactly the same position in space. The program was patched to prevent the atoms from being superimposed on each other, at which point the algorithm threw up another solution that clearly violated the laws of physics. The participants eventually gave up on the collaboration, as they were more interested in finding actual solutions than in discovering edge cases.
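The mechanism is easy to picture: an optimiser will happily exploit any hole in the model it is given. The sketch below is a made-up illustration, not the physicists' actual code; it hands SciPy's minimiser a deliberately flawed pairwise "energy" with attraction but no repulsive core, and the lowest energy it can find is every atom stacked onto the same point.

```python
# A made-up illustration of an optimiser exploiting a flawed energy model:
# attraction with no repulsive core means total collapse is the "best" answer.
import numpy as np
from scipy.optimize import minimize

def flawed_energy(flat_positions):
    pos = flat_positions.reshape(-1, 3)
    energy = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            distance = np.linalg.norm(pos[i] - pos[j])
            energy += -1.0 / (distance + 0.1)   # attraction only; nothing stops a collapse
    return energy

start = np.random.rand(6 * 3)          # six "atoms" scattered at random
result = minimize(flawed_energy, start)
print(result.x.reshape(-1, 3))         # the positions bunch up into one clump
```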

Another neural network learned to outsmart the machine it was playing against rather than the game itself. The program had to provide moves for a five-in-a-row tic-tac-toe game played on an infinite board. You probably see that infinity is the problem here. The oversmart machine learned that if it placed a move very far away from the used-up cells on the board, the opposing computer would crash trying to cope, forfeiting the match. This neural network learned to exploit the computer instead of learning to win.
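Why would a faraway move crash anything? One plausible failure mode, offered purely as an assumed illustration and not the documented cause, is an opponent that stores the "infinite" board as a dense grid sized to the largest coordinate it has seen, as in the sketch below.

```python
# An assumed (not documented) failure mode: storing the "infinite" board as a
# dense grid sized to the largest coordinate played so far.
def naive_board(moves):
    size = max(max(x, y) for x, y in moves) + 1
    # A single move at (10**9, 10**9) asks for a grid of ~10**18 cells;
    # the process runs out of memory long before the game is decided.
    board = [[None] * size for _ in range(size)]
    for x, y in moves:
        board[y][x] = "X"
    return board

# naive_board([(0, 0), (1, 1), (10**9, 10**9)])  # uncomment to watch it die
```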

Aditya Madanapalle

An avid reader of the magazine, who ended up working at Digit after studying journalism, game design and ancient runes. When not egging on arguments in the Digit forum, can be found playing with LEGO sets meant for 9 to 14-year-olds.