
The tech behind Artificial General Intelligence

This article was first published as part of the cover story in the January 2018 issue of Fast Track on Digit magazine.

Nope. It’s not happening anytime soon. We don’t have the hardware, we don’t have the software and we definitely don’t have the data. Artificial Intelligence is certainly progressing at an exponential pace, but we’re still far from the first Artificial General Intelligence (AGI). If you haven’t come across the term in any of the other stories, AGI is human-like intelligence. There’s no real-world example to help you understand the concept, but it has been featured a zillion times over in popular science fiction. Remember Jarvis, Tony Stark’s (Iron Man) AI butler? Go a few years further back and there’s KITT from Knight Rider. To be honest, we’d like to give you a proper estimate of when we could see an AGI, but when the world’s foremost experts in the domain can’t agree, we don’t stand much of a chance of doing better. We’re still going to try, though.

The quest for AGI

Researchers predict that we’ll have functional forms of AI that can replace humans in specific tasks: true human-language translation by 2024, the ability to write high-school essays by 2026, autonomous trucks by 2027, retail assistants by 2031, the next GRRM (George R. R. Martin) by 2049 and the best surgeon by 2053. They estimate a 50 percent chance that AI will outperform humans within the next 45 years, and that in around 120 years humans will no longer have to work (in the traditional sense). However, all of these are task-specific forms of AI and aren’t AGI.

Autonomous vehicles are a task-specific application of AI and don’t qualify as AGI

For there to be an AGI, there has to be a single entity capable of doing everything under the sun. And if it hits a roadblock, it needs to improvise, adapt and overcome that roadblock just like a human would. In an article published in the journal Neuron, Demis Hassabis, founder of Google’s DeepMind, along with three co-authors, argues that this is possible if we build AI based on the workings of the human brain. This wouldn’t be the first time AI algorithms have been based on ideas from nature: Autodesk’s software solutions are incorporating Generative Design, a family of algorithms inspired by natural processes. Current AI systems use a single learning model, i.e. one fixed manner in which the AI ingests data and builds patterns. This approach needs to make way for a system that incorporates multiple models rather than just one. Each model can have a domain where it outperforms the others, just like the human brain has different regions for different purposes. For example, the brain has short-term memory and long-term memory, which come into play for different tasks.

Taking nature’s course

One major step towards AGI is PathNet. The folks over at Google’s DeepMind have come up with a learning model in which multiple agents train within the same neural network. Simply put, PathNet is a network of neural networks. Yep, it’s inception time! Ordinarily, when a learning model is faced with a new task, it has to start from scratch and train a whole new set of patterns. With PathNet, however, an AI can take information gleaned while learning one task and reuse bits of it towards learning the new one.

PathNet in action

It’s like learning how to ride a bicycle: the same knowledge can partly be reused to learn how to ride a motorbike. You wouldn’t need training wheels all over again, would you? Well, neither will a PathNet-powered AI. Comparisons between a freshly trained AI, a fine-tuned AI and a PathNet-enabled AI lean strongly in favour of the PathNet-enabled one. However, PathNet is just a beginning towards developing the right set of algorithms for an AGI. We’re pretty certain that a year down the line we’ll have an even more pathbreaking approach, and six months after that, another breakthrough.
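PathNet’s core trick, evolving a path through a shared pool of modules and then freezing the winners so later tasks can reuse them without overwriting them, can be sketched in a few lines. Everything below (the toy linear “modules”, the mutation scheme, the two tasks) is an illustrative assumption of ours, not DeepMind’s actual implementation:

```python
import random

random.seed(0)

# Shared pool of tiny parameterised "modules": f(x) = a*x + b.
# PathNet's real modules are neural-network layers; linear functions
# stand in for them here purely for illustration.
POOL = [{"a": random.uniform(-1, 1), "b": random.uniform(-1, 1)}
        for _ in range(8)]

def run_path(path, x):
    """Feed x through the chosen modules in sequence."""
    for idx in path:
        x = POOL[idx]["a"] * x + POOL[idx]["b"]
    return x

def fitness(path, target_fn, xs):
    """Negative mean squared error against the task's target function."""
    return -sum((run_path(path, x) - target_fn(x)) ** 2 for x in xs) / len(xs)

def mutate(path):
    """Re-route one random step of the path through a different module.
    Frozen modules may still be *used*; they just can't be re-trained."""
    new = list(path)
    new[random.randrange(len(new))] = random.randrange(len(POOL))
    return new

def evolve(target_fn, frozen=frozenset(), steps=400, path_len=3):
    """Hill-climb a path, tweaking parameters of non-frozen modules only."""
    xs = [i / 10 for i in range(-10, 11)]
    best = [random.randrange(len(POOL)) for _ in range(path_len)]
    for _ in range(steps):
        cand = mutate(best)
        backup = {i: dict(POOL[i]) for i in set(cand) - frozen}
        for idx in backup:                      # "train" the candidate path
            POOL[idx]["a"] += random.gauss(0, 0.05)
            POOL[idx]["b"] += random.gauss(0, 0.05)
        if fitness(cand, target_fn, xs) > fitness(best, target_fn, xs):
            best = cand
        else:                                   # revert the failed tweaks
            for idx, params in backup.items():
                POOL[idx] = params
    return best

# Task A: approximate y = 2x, then freeze the winning modules.
path_a = evolve(lambda x: 2 * x)
frozen = frozenset(path_a)

# Task B reuses the frozen modules for free and only tunes the rest.
path_b = evolve(lambda x: 2 * x + 1, frozen=frozen)
```

Note that if every module is frozen, `evolve` degenerates to pure routing: paths still change, but no module parameters move. That invariant is exactly the guarantee PathNet gives earlier tasks, and it’s what lets the bicycle knowledge survive the motorbike lessons.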


It would seem that we have a very powerful algorithm in our hands to build a proper AGI. So what’s stopping us? Compute power, of course. Or rather the lack thereof. We hear of a new supercomputer being commissioned every couple of years. However, are these even close to simulating the human brain?

The K computer, which consumes 12.6 megawatts of power and is capable of crunching 10.51 petaflops at peak, took 40 seconds to do what the human brain can in one.

Researchers using the K computer, the 10th most powerful supercomputer with 705,024 CPU cores and 1.4 million gigabytes (1.4 petabytes) of RAM, managed to simulate one second of human brain activity in 40 seconds. This was back in 2014. The K computer consumes 12.6 megawatts of power and is capable of crunching 10.51 petaflops at peak, yet it took 40 seconds to do what the human brain can in one. The current top supercomputer, Sunway TaihuLight, is 8.85 times as powerful, and even it can’t match the human brain second for second. And that’s just the computing power; we also need to consider power consumption. 12.6 megawatts is astronomical compared to the human brain’s 20-watt budget. Moreover, all of this is just the training phase; once a neural net has been trained, you don’t need such a powerhouse to run it, and far less powerful machines will do for deployment. However, here’s the catch: at the end of the day, you still need a supercomputer.
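The gap in those numbers is easy to make concrete. Taking the article’s figures at face value (they are the assumptions here, not fresh benchmarks), a few lines of arithmetic show just how lopsided the energy comparison is:

```python
k_flops = 10.51e15        # K computer peak, flops (per the article)
k_power_w = 12.6e6        # K computer draw: 12.6 megawatts
brain_power_w = 20        # rough energy budget of a human brain, watts
slowdown = 40             # 40 s of compute per 1 s of brain activity

# Sunway TaihuLight at 8.85x the K computer lands around 93 petaflops.
taihulight_pflops = k_flops * 8.85 / 1e15

# The K computer draws 630,000 times the power of a brain...
power_ratio = k_power_w / brain_power_w

# ...and is 40x slower at the task, so on this (very rough) measure the
# brain comes out about 25 million times more energy-efficient.
efficiency_gap = power_ratio * slowdown
```

The 630,000x power ratio times the 40x slowdown is why a brain-scale AGI isn’t just a matter of waiting for a slightly bigger supercomputer; the efficiency gap is measured in the tens of millions.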

Is that you Moore?

Anyone who has read even a little about AI knows of Ray Kurzweil, whose predictions have been on point when it comes to developments in AI. He developed a method for estimating the total calculations per second (cps) of the human brain by approximating the cps of one portion of the brain and then extrapolating that number to the rest of it. By his estimate, the human brain is capable of 10 quadrillion cps. He then puts a value on the current state of computers by estimating how many cps can be bought for $1000. Right now, that buys a little over 10 trillion cps. Throw Moore’s law into the mix and 10 quadrillion cps per $1000 becomes possible by the year 2025. Now, that’s just raw computational power.
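That 2025 figure is just compound doubling. Here is the back-of-the-envelope version; the doubling period is our assumption (price-performance doubling roughly every 0.7 years), not a number from Kurzweil or from this article:

```python
import math

brain_cps = 10e15        # Kurzweil's brain estimate: 10 quadrillion cps
dollar_cps = 10e12       # what $1000 buys today: ~10 trillion cps

# Going from 10 trillion to 10 quadrillion is a factor of 1000,
# which is just under 10 doublings of price-performance.
doublings = math.log2(brain_cps / dollar_cps)

# Assumed doubling period for cps-per-dollar: roughly 0.7 years.
years_per_doubling = 0.7
target_year = 2018 + doublings * years_per_doubling  # lands around 2025
```

Nudge the doubling period to a year and the date slips to about 2028, which is why different forecasters land a few years apart while agreeing on the overall shape of the curve.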

After a point, AI’s rate of growth will put Skynet to shame

The algorithm is as big a piece of the pie as the hardware. So if we have a successor to PathNet running on a $1000 computer from 2025, we might just have the first system capable of being an AGI. From that point onwards, it’s going to chart a journey of accelerated growth that will put Skynet to shame. Whether a true AGI will come into existence by 2025 remains an open question, but we have no doubt that there will be one within the next 120 years.


Mithun Mohandas

While not dishing out lethal doses of sarcasm, this curious creature can often be found tinkering with tech, playing vidya' games or exploring the darkest corners of the Internets. #PCMasterRace