This article was first published as a part of the cover story in the January 2018 issue of Fast Track on Digit magazine. To read Digit’s articles first, subscribe here. You could also buy Digit’s previous issues here.
It’s barely 2018 and I’m already fed up with the buzzword of the year. Everywhere I turn I see bold proclamations of AI-this and AI-that. It’s almost like every mom-and-pop store (also called a startup by many) is screaming about how they’ve got AI doing so and so, and why it’s better than the other mom-and… err… startups…
I get the same emails that Robert does, and I have the advantage of writing my whinge after I read his piece, and he’s being far too nice! The press releases I get seem to be written by an AI because they’re that dumb! Pretty much every single one will mention “AI” far too often, more even than the company name, which is almost shocking! It’s like they’re hoping we will put up the press release as is, and SEO their name with the word AI… thankfully, we’ve still got humans working here who cannot be fooled that easily.
I picked an example email at random, and although I won’t name the company to prevent me from getting sued, the email was for a company that is an “expert in AI services”. How do we know that? Well duh, they say so 12 times in their email! What do they do? They take data, add intelligence, and then automate. Vague enough for you? They have tonnes of business knowledge and domain expertise. No really, just give them a call, and they will convince you of that fact. They assure you that they can make products that are not just AI automation, but which also learn constantly! (Quick! Someone notify the Nobel Prize committee, at least one 2018 prize is in the bag!). Now generously sprinkle on keywords such as cognitive, accelerated AI, knowledge, real-time learning, automated intelligence, contextual knowledge, leverage, and many more, and you have yourself an impressive-sounding press release!
This is the state of AI today. It’s trickery and fakery, claiming to be smarter than it is, and getting away with it because we’re only getting dumber anyway. It’s the Turing test, but one where the humans are idiots, so the computer looks intelligent by default. This is so evident to me when I browse through social networks and find posts from people amazed at what is essentially simple coding.
The simplest example is all the “OMG how does Facebook know what I search for when I’m on other sites?” posts that people seem to keep making. Now you might already know that the answer is that in this case, Facebook doesn’t know anything. It’s merely using cookies and your IP address, and has tied up with ad networks that have tied up with other sites, which then build a nice little temporary profile about you (or rather your IP + browser info). So when you search for, say, DDR4 RAM, and visit various sellers’ sites that have ad tracking and keyword tracking, and view products related to that, you are tagged as someone searching for “DDR4 RAM”. Now, when you’re on Facebook, which is tied up with the same advertiser networks, you get DDR4 RAM ads – possibly even the very product you were looking at, from the same site you were looking at. Why? Because you are currency online. No one is interested in merely showing you ads; they want you to click and buy.
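A minimal sketch of how that kind of tracking could work, with all names and data invented for illustration:

```python
# Hypothetical sketch of cookie-based ad targeting. The ad network keeps a
# profile per cookie ID; every site carrying its tracker feeds keywords in,
# and every site carrying its ads reads keywords out.

ad_network_profiles = {}  # cookie_id -> set of interest keywords

def track_visit(cookie_id, page_keywords):
    """Called by the tracker embedded on a seller's site."""
    profile = ad_network_profiles.setdefault(cookie_id, set())
    profile.update(page_keywords)

def pick_ad(cookie_id, inventory):
    """Called when the same network serves an ad on a different site."""
    profile = ad_network_profiles.get(cookie_id, set())
    # Naive targeting: show the ad whose keywords overlap your profile most.
    return max(inventory, key=lambda ad: len(profile & set(ad["keywords"])))

# You browse a seller's site looking at RAM...
track_visit("cookie-1234", {"ddr4", "ram", "16gb"})
# ...then open a social network that uses the same ad network.
ads = [
    {"name": "shoe ad", "keywords": ["shoes", "sneakers"]},
    {"name": "DDR4 RAM ad", "keywords": ["ddr4", "ram"]},
]
print(pick_ad("cookie-1234", ads)["name"])  # → DDR4 RAM ad
```

No intelligence required: a shared cookie ID, a keyword set, and a set intersection is enough to make it look like Facebook “knows” what you searched for.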
Facebook, like every other site out there (including us), wants you to click on an ad and then go and complete a transaction. Such referrals earn us money, and money keeps our business alive and pays our salaries. So if you want to help Digit thrive, once you have decided to buy something online, search for it on the Digit site, click the affiliate link that shows the price and the site it’s available at, and then buy it. Someone is going to get that money anyway. If you Google the product, that money goes to Google, and they have more than enough. Support us instead, OK?
Now that the shameless plug is done with, let’s get back to these so-called smart devices, or assistants, or robots, or cars, or whatever. They’re not smart. Heck, intelligence doesn’t even apply to them. They’re code: written-out rules. Far fancier ones than we’ve ever had before, admittedly, but lines of code nonetheless. It’s the databases that have gotten much larger over the years. With so much more data being digitised all the time, especially text, the “library” is growing to humongous proportions.
The world is already exchanging information in the zettabyte range. Approximately 20 zettabytes in 2017 according to some conservative sources – that’s 20,000,000,000 TB (twenty billion terabytes)! Of course, this isn’t the amount of data available; it’s the amount of data transferred. Still, if 20 ZB worth of data is being transferred, the total amount of data stored cannot be more than a few orders of magnitude lower. Some studies conducted by storage companies (which we will take with a small pinch of salt) claim that the actual data stored on all computers globally is about 20 ZB, and that by 2025, this could be as high as 160 ZB! Either way, let’s just agree that these so-called smart systems have more data available to them than they need.
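The arithmetic behind those figures is easy enough to check, using decimal units (1 ZB = 10^21 bytes, 1 TB = 10^12 bytes):

```python
# Back-of-the-envelope check on the zettabyte numbers above.
ZB = 10**21  # bytes in a zettabyte (decimal)
TB = 10**12  # bytes in a terabyte (decimal)

traffic_2017 = 20 * ZB
print(traffic_2017 // TB)        # 20,000,000,000 TB – twenty billion terabytes
print((160 * ZB) // (20 * ZB))   # projected growth factor by 2025: 8x
```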
Let’s say you wanted to search for a needle in a haystack – not proverbially, I mean literally search for a needle in a haystack. Assuming the needle was hidden by me, you could do this intelligently: you could just set the haystack on fire, and once it’s all burned away, the needle should be left. If you’re not allowed to burn it, you could get a very powerful magnet and pull the needle out that way. If you can’t get a magnet, use a water trough: shovel hay into it and spread it out on the surface. Hay floats, needles sink. There are many ways in which it can be done. Enough said.
These are intelligent human answers to such questions. Now, if someone you asked this question to said they would create a billion clones of themselves and each one would remove one straw of hay until only the needle was left… would that be “intelligent”?
Brute forcing things is pretty much the way of current AI. If it’s not that, it’s hard-coded responses. If you’ve ever worked in front-office, hospitality, or other customer-facing roles, you will be familiar with the concept of standard verbiage. These are the things you are required to say to people in a given situation. It’s the reverse of AI: it’s trying to turn humans into robots, essentially. If you’ve called a call centre and got frustrated with the non-committal and standard responses, you’ve been at the receiving end of standard verbiage. Basically, don’t think, just respond with Y when a customer says X. That’s AI, in a nutshell, today (and every day before today). I’m pointing at all the smart assistants when I say this!
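Standard verbiage really is just a lookup table. A minimal sketch, with triggers and replies invented for illustration:

```python
# "Don't think, just respond with Y when the customer says X" – as code.
# All triggers and replies below are made up.

CANNED_REPLIES = {
    "refund": "Your request has been escalated to the concerned team.",
    "late": "We apologise for the inconvenience caused.",
    "cancel": "We value your association with us.",
}
FALLBACK = "Thank you for contacting us. Is there anything else I can help with?"

def respond(customer_says: str) -> str:
    """Match the first known trigger word; no thinking involved."""
    text = customer_says.lower()
    for trigger, reply in CANNED_REPLIES.items():
        if trigger in text:
            return reply
    return FALLBACK

print(respond("Where is my refund?"))
# → Your request has been escalated to the concerned team.
```

Swap the string table for a much bigger one and add speech recognition on the front, and you have the skeleton of most “smart” assistants’ scripted responses.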
Yes, I know. They’re even coding themselves now, aren’t they? You’ve all probably read the hyped-up news reports about Google’s AutoML and Microsoft and Cambridge University’s DeepCoder supposedly coding themselves. Heck, with AutoML, the headlines screamed about how it was coding better than the programmers who made it! Bah humbug!
Microsoft’s DeepCoder was using snippets of various code written by humans, and trying to piece them together to see what worked and what didn’t. It’s like you want to arrive at a given shape, and the bot is programmed to use legos (snippets of code) that some humans built. Now, it’s a question of trial and error – basically, brute forcing the problem.
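To get a feel for the “lego” approach, here’s a toy sketch. The snippet names and the search loop are invented for illustration – the real DeepCoder uses a learned model to rank which snippets to try first – but the trial-and-error core looks like this:

```python
# Toy snippet-composition search: brute-force every short pipeline of
# human-written building blocks until one reproduces the example output.
from itertools import product

# Human-written building blocks (the "legos")
SNIPPETS = {
    "double":    lambda xs: [x * 2 for x in xs],
    "drop_neg":  lambda xs: [x for x in xs if x >= 0],
    "sort_desc": lambda xs: sorted(xs, reverse=True),
}

def search(example_in, example_out, max_len=3):
    """Try every pipeline of snippets up to max_len long."""
    for length in range(1, max_len + 1):
        for names in product(SNIPPETS, repeat=length):
            result = example_in
            for name in names:
                result = SNIPPETS[name](result)
            if result == example_out:
                return names  # first pipeline that fits the example
    return None

# "Find me a program that keeps the non-negatives and doubles them."
print(search([3, -1, 2], [6, 4]))
```

No understanding of the task anywhere – just exhaustive trial and error over pieces that humans wrote.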
AutoML is similar. The task given to it was to use facial recognition for given images. Now there are points to be marked, then distances and ratios to be computed, and also complex logic that goes over my head, but essentially what the researchers did was teach it, “Look, we have to get to point X, and you move by doing this, and the set of all possible moves is a trillion trillion, and we’ve guesstimated our way through and got this path which is 50 percent efficient, can you do better?” (I’m oversimplifying because I’m too dumb to explain it the way the AutoML people did). Now, it’s a computer. It can go down every path possible. Is it any surprise it found a more efficient path?
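The “it can go down every path possible” point is easy to demonstrate with a toy example. The points and routes below are invented, and AutoML’s actual search is guided rather than exhaustive, but the intuition holds: an exhaustive search trivially finds a route at least as good as any human guesstimate.

```python
# Toy illustration of "just try every path": brute-force all orderings of a
# few hypothetical facial landmarks and keep the shortest route.
from itertools import permutations

points = {"eye_l": (0, 0), "eye_r": (4, 0), "nose": (2, 2), "mouth": (2, 4)}

def path_length(order):
    """Total Euclidean distance of visiting the points in this order."""
    total = 0.0
    for a, b in zip(order, order[1:]):
        (x1, y1), (x2, y2) = points[a], points[b]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

human_guess = ("eye_l", "mouth", "eye_r", "nose")   # a guesstimated route
best = min(permutations(points), key=path_length)    # brute force them all

print(path_length(human_guess), path_length(best))
# The machine's route is never worse than the guess – by construction.
```

Is it any surprise the exhaustive search wins? That’s the whole trick.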
All of this essentially boils down to the infinite monkey theorem – sit an infinite number of monkeys at typewriters, leave them there for an infinite amount of time, and eventually one of them will produce Shakespeare’s Hamlet. Does this mean that particular monkey is as intelligent as Shakespeare? Duh! Obviously not!
What it all boils down to is that AI today still needs human crutches – or in other words, it depends so heavily on human input that it’s impossible to consider it AI. A prime example is something Robert forwarded to me a month or two ago. It was a Facebook pilot program that was run in Australia, to prevent jilted lovers from sharing naked images of their exes in anger.
It’s called revenge porn, and apparently it’s a problem in Australia. You can read more about the experiment here – and the first thing you will notice is that it requires you (as a victim of revenge porn) to upload naked photographs of yourself to Facebook, which will be looked at by a human who works at Facebook. This is because Facebook’s own smart algorithm cannot tell your naked butt from others online. So a Facebook staff member needs to look at your image and create a hash, which will then be used to identify anyone uploading that image. This, of course, assumes you have copies of all the naked images your ex has of you (presumably because you sent them to him or her). While it will help someone who does have to face this problem, it’s rather telling that a human has to be involved in such a thing.
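The hash-matching part is simple enough to sketch. A minimal version follows, using a plain cryptographic hash for simplicity; real systems use perceptual hashes (in the style of Microsoft’s PhotoDNA) so that resized or recompressed copies still match, whereas the exact hash below only catches byte-identical re-uploads – one hint as to why humans are still in the loop.

```python
# Minimal sketch of hash-based upload blocking. Only the hash of a reported
# image is stored; any upload with a matching hash is rejected.
import hashlib

blocked_hashes = set()

def hash_image(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """The victim reports an image; only its hash is kept."""
    blocked_hashes.add(hash_image(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose hash matches a reported image."""
    return hash_image(image_bytes) not in blocked_hashes

original = b"...raw image bytes..."
report_image(original)
print(allow_upload(original))             # False: exact copy is blocked
print(allow_upload(original + b"\x00"))   # True: one byte off slips through
```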
This is why I refuse to call this software AI. Not just yet. Sure, maybe it’s still evolving from bacteria and will eventually become fish, then amphibians, eventually humans, and then even surpass us… it’s totally possible. But don’t start calling it intelligent while it’s still bacteria! Sadly, the hype is going to cause way too much investment in not-so-good startups and projects, which will eventually fail (most often, not always), and it will cause another AI bubble to burst. Heck, we’ve been around long enough to see one happen in our lifetimes already, and I think this is just another case of the enthusiastic marketeers ruining it for the real researchers.
Now I’m going back to playing with my dog. At least when he frustrates me by not listening to my commands, he can wag his tail and flop his ears and look cute, while my stupid Google Home just goes “I’m sorry…” all the time. Stop saying you’re sorry. If you keep doing the same damn thing, again and again, you’re not sorry, you’re just dumb!