
What is Artificial Intelligence?

This article was first published as a part of the cover story in the January 2018 issue of Fast Track on Digit magazine.

Strap yourselves in, it’s going to be a wild ride in whatever remains of the year. You’re going to read a whole bunch of stories about Artificial Intelligence (AI): some aimed to delight, some to create despair, some will outright frighten you, while others will make you laugh out loud, and almost all of them will be bulls**t. Pardon the language, but sometimes it’s the only way to express the right emotion and put things in context.

Heck, some of those stories will be written by us, as we ourselves get sucked into the hype – don’t blame us, we have technology fetishes to feed, and that’s an expensive addiction! Take it all with a pinch of salt though.

Why? Because if the subject were cars, and people brought up everything from bicycles to jet planes, boats to potato batteries, and claimed that it was all “cars”, you’d spot the aforementioned bulls**t pretty easily, I assume. It’s the same thing with AI, but unlike a nice, solid subject like cars, we’re all fuzzy about what AI actually is.

You don’t need intelligence to do that (©Digit)

As an experiment, I searched my inbox for the term “Artificial Intelligence”, filtered out all emails sent by anyone from Digit or 9.9, and limited the search to 2017 only. What I got was 402 emails from external sources (mostly press releases), all laying claim to having developed or used AI in some form or the other. And what were the topics covered? Everything from smart keyboards (predictive text, essentially) to digital assistant devices (Google Home and Amazon Echo types), from stock investment bots to e-commerce suggestion bots; it was all there.

Remember, this is the stuff that gets through my very strict spam filter, and comes from people whom I have specifically chosen to accept emails from. Honestly, it’s like everything is AI these days. Or maybe nothing is AI these days.

Definitions

So what really is the accepted definition of AI? The dictionary would have you believe that AI is “simulated intelligent behaviour”: basically, whether or not a machine can imitate intelligent human behaviour. If we go by the dictionary, then, it’s pretty much all of the stuff that people claim is AI these days. Would you consider being able to read and then transcribe words from a page to be something only humans with intelligence can do? I ask because OCR software has been doing it for some time now. Is that AI?
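To put that in perspective, here is roughly what it takes to “read” a page today. This is a minimal sketch using the open-source Tesseract engine via Python’s pytesseract wrapper (our choice of tools for illustration, not something any particular product is known to use):

```python
# Minimal OCR sketch: "read" the words off a scanned page.
# Assumes the Tesseract engine is installed, along with the
# pytesseract and Pillow Python packages.
from PIL import Image
import pytesseract

# Hypothetical input file; any scanned page or photo of text will do.
page = Image.open("scanned_page.png")

# Tesseract does the "reading" and hands back plain text.
text = pytesseract.image_to_string(page)
print(text)
```

There’s no understanding of what the words mean anywhere in there, just pattern recognition, which is exactly why we hesitate to slap the “intelligence” label on it.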

Is Google Translate AI?

What about translating the written word? Got something written down in Hindi and want it translated into English? Assuming you don’t speak one of the languages, you’d use something like Google’s free translation service. So is that AI then? Many people are not happy with such a vague definition of AI, and neither are we at Digit happy to set the bar so low. So we need another definition…
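For the curious, the translation trick is also only a few lines of code away these days. Here is a rough sketch using the open-source Hugging Face transformers library and a freely available Hindi-to-English model; to be clear, this is our illustrative stand-in, not how Google Translate actually works under the hood:

```python
# Rough machine-translation sketch, Hindi to English.
# Assumes the transformers library (plus a backend such as PyTorch) is installed.
from transformers import pipeline

# Helsinki-NLP/opus-mt-hi-en is a publicly available Hindi->English model,
# used here purely as an analogue for services like Google Translate.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hi-en")

result = translator("मुझे हिंदी नहीं आती")  # roughly, "I don't know Hindi"
print(result[0]["translation_text"])
```

Again, the model maps patterns in one language to patterns in another; whether that deserves to be called intelligence is the whole question.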

Turing Test

This is the point where it becomes obligatory to mention the Turing test. When Alan Turing came up with it back in 1950, it seemed foolproof and very futuristic. In fact, there were surely those who scoffed at him and wondered what on earth he’d been smoking! When calculating machines were the order of the day, imagining that a machine could ever pass the Turing test was nearly impossible. In case you’ve been living under a rock, the Turing test is basically a setup where a human (A) and a computer (B) are behind a curtain, and an examiner (C) is on the other side, writing out questions to both, and then receiving back written (printed out) answers from both A and B. C is tasked with trying to figure out which of A and B is the computer, based solely on the answers he/she receives. If C is unable to distinguish between the two, or mistakenly identifies A (the human) as the computer, then the computer is said to have passed the Turing test and is considered to be an AI.
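Stripped of the curtain and the printouts, the setup itself is simple enough to sketch out. The names and structure below are entirely ours, purely to make the protocol concrete:

```python
# Illustrative sketch of one round of the imitation game.
# 'examiner' is a hypothetical object with ask() and identify_machine();
# 'human' and 'machine' are hypothetical callables that answer a question with text.
import random

def turing_test(examiner, human, machine, num_questions=10):
    # Hide who is who behind anonymous labels, assigned in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: human, labels[1]: machine}

    transcript = []
    for _ in range(num_questions):
        question = examiner.ask()
        answers = {label: respondent(question)
                   for label, respondent in respondents.items()}
        transcript.append((question, answers))

    # The machine "passes" if the examiner cannot pick it out correctly.
    guess = examiner.identify_machine(transcript)  # returns "A" or "B"
    return respondents[guess] is not machine
```

Note that nothing here measures understanding; it only measures whether the examiner can be fooled, which, as it turns out, is exactly what Turing had in mind.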

Alan Turing. Genius. Period.

To be fair, Turing was an unparalleled genius in the field of computers, and even back in 1950, much before the silicon era, he knew better than to focus on machines developing human intelligence. His test was essentially a take on a popular parlour game called the imitation game.

Turing never claimed that a machine that passed his test would be ‘intelligent’, he merely stated that it would be able to ‘mimic intelligence’. He never believed that a machine could gain consciousness.

Lighthill Report

Of course, this didn’t stop people from claiming that their machines had passed the Turing test and were therefore intelligent. In the 70s, many scientists were working on AI, and many claimed to be on the verge of a breakthrough, while others were extremely sceptical. In order to get to the bottom of the matter, and try and settle the debate once and for all, the British Science Research Council asked James Lighthill, a renowned British scientist of the time, to undertake a study and file a report on AI. In 1973 he completed and published his report, titled Artificial Intelligence: A General Survey, in which he was very, very critical of the accomplishments of AI research. Although he did praise the work done on narrow, well-defined problems, he was critical of all general-purpose AI research.

The press caught on and eventually turned public opinion against money being wasted on a pipe dream. The British government cut off funding for almost all AI research, and this started what would later be called the AI Winter (like Nuclear Winter). The Americans undertook a similar study after hearing about Lighthill’s report, and the US government also massively cut back on AI funding.

Since then, the field of AI (much like a lot of other things) has repeatedly gone through a bubble phase where it’s super hyped, only to have the bubble pop and re-enter an AI Winter. A lot of this yo-yoing is caused by people like us, the press, because it’s all too easy for journalists to get carried away with a fad or a new hype, only to quickly realise that they were wrong and write scathing takedowns.

So what is AI?

Coming back to the original question we asked, the answer is, sadly, we still don’t know. Some would be happy with the dictionary definition (not us), while most would be happy if it passed the Turing Test (again, not us). We certainly don’t think computers should have to be able to come up with a unified theory of gravity to be considered “intelligent”, but we do want a little more than the Turing test offers. Surely something that is given the title of “intelligence”, artificial or otherwise, will have to be more than a mere charlatan!

Does Deep Blue beating Kasparov in chess mean it is more ‘intelligent’ than him?

Consider Deep Blue beating Kasparov at chess, or Watson winning at Jeopardy, or Google DeepMind’s AlphaGo beating Lee Sedol and Ke Jie at Go… these are examples of computers beating humanity’s best and brightest at their own game! Should that be considered AI? Many would say yes; we still think it’s not quite the same. AlphaGo probably sucks at poker, and Deep Blue would flunk at backgammon. Of course, they could be re-coded and rebuilt to play those games, but it would take years. And even after that, a 6-year-old could ask them questions that would stump them. So are they AI? We think not.

Then there are those who will claim that a computer has to be sentient, be self-aware, and be able to learn before it can be considered intelligent. In short, they want the birth of artificial life, which matures over time, much in the way we change from when we are babies to when we become thinking adults. Obviously, because such an AI would literally mimic human development, it would have to be considered “intelligent”.

Positronic Brains?

Is that what we’re after? Is the imagination of Isaac Asimov still ruling our future? What we mean is: are we looking to create artificial humans? And because the sample set of intelligent beings we know of is one, much like the sample set of known planets with life, are we just too ignorant to spot intelligence when we see it?

Asimov’s AI is what we’re trying to make

From what we know, an ant is essentially a robot that needs to eat and uses primitive senses to do so. It has a few laws inside it that define its behaviour. For example, it defends itself or even flees when it is threatened. It communicates, it undertakes the task of searching for food, of protecting the nest, of sacrificing itself for its queen, etc. Are ants intelligent? No relative judgement needed, not “are they as intelligent as humans, or dolphins”… just, “are they intelligent”?

Whether or not you think of an ant as intelligent (or a bee, or a fish…) will help you best understand what you consider to be intelligence, and will help you answer the question we’ve posed here.

Do we already have AI?

If you answered “Yes, ants are intelligent”, then yes, we already have AIs that are that intelligent. The rovers on Mars are like huge ants that are going about their business, with no hookup to the internet, and no real-time contact with anything. They’re autonomous, and are crawling about on Mars looking for stuff that’s interesting. Of course, we still command them, much like an ant will report back to the colony and then be told to go do something. Both the rovers and the ants have “masters” who essentially control them. Heck, you and I have bosses who tell us what to do, and we are all expendable when it comes to work. Don’t believe me? The next time your boss tells you to do something, tell him to go to hell; then see what happens.

If you think ants are intelligent, then so is the Mars rover

The mistake a lot of us make is to only consider a human standard of intelligence. An ant doesn’t need art or social media to feel good about itself, and for all we know, ants don’t even have the concept of feeling good. Similarly, expecting an AI to behave like a human is probably a mistake. We’re not mathematically driven objects – one look at the average results of a math exam will tell you that! We have abstract thought, creativity, depression, happiness, anger, and all of that non-mathematical and illogical input makes us who we are. How can we build something based purely on mathematics and then expect it to behave like us?

What we’re getting at is that if we give up this notion that human-like = intelligence, we might realise that we have artificial intelligence all around us. It’s task-specific and hasn’t mastered the nuances of the languages we use yet, but it is precise and predictable and dependable. How often can you say that about humans?

We already know that computers will take over driving cars, and do it better, with fewer accidents than humans. Of course, driving will be reduced to commuting if that happens, and no one is going to enjoy it any more than they enjoy riding the train to work, but it will happen eventually anyway. Are Tesla’s cars AI?

Is a self-driving car an example of artificial intelligence?

Smart assistants have a long way to go still, but then again, the first artificial driving assist couldn’t drive a car either. Are Siri, Alexa, Cortana and GA (Google Assistant) just toddlers in a pre-kindergarten program? After all, babies are dumb too, but they have milestones they’re supposed to reach in order to be considered of average intelligence… So is current AI like a toddler, or a baby, and what are the milestones we should set for it? We can’t set time frames the way we can with human children – by 24 months they should say words, etc. However, can we still set milestones? When an AI is able to learn new words, is it a toddler; when it can drive a car, is it 16?

Maybe the stories that follow can help you decide whether we already have AI or not…

Robert Sovereign-Smith