What is Artificial General Intelligence (AGI)?
What comes to your mind when you think about AI? Deep learning? Image recognition? Systems behind those targeted ads that surprise you with an ad just before you start thinking about the product? Well, all of this is AI, but none of it is really ‘intelligent’.
In fact, many of the systems marketed today as AI have deep learning working under the hood. These are deep neural networks that let you buy performance on any well-defined problem, provided you feed them enough data. The main problem with this approach to AI is that although such networks can outperform humans on narrow tasks (such as image recognition or game playing), they are also remarkably brittle. They break down under even simple changes to their input, and they need to be spoon-fed data, and lots of it. One cannot imagine such a system one day passing Wozniak’s Coffee Test (entering an unfamiliar home and figuring out how to make a cup of coffee).
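To make the brittleness concrete, here is a deliberately exaggerated toy sketch (all names and data are invented for illustration, not a real deep-learning system): a “narrow” learner that simply memorizes its training inputs, standing in for a model tuned to a fixed input distribution. A trivial one-pixel shift, which a human would not even notice, defeats it completely.

```python
# Illustrative toy only: a "narrow" learner that memorizes exact
# input -> label pairs, a stand-in for a system that overfits to
# the precise distribution it was trained on.

def train(examples):
    """'Learn' by memorizing exact (pattern, label) pairs."""
    return dict(examples)

def predict(model, pattern):
    # Succeeds only on inputs identical to the training data.
    return model.get(pattern, "unknown")

# Train on a tiny 1-D "image": a bright pixel at position 2.
model = train([((0, 0, 1, 0, 0), "dot")])

print(predict(model, (0, 0, 1, 0, 0)))  # prints "dot"

# Shift the same dot one pixel to the right -- a trivial change
# for a human, but the memorizer fails completely:
print(predict(model, (0, 0, 0, 1, 0)))  # prints "unknown"
```

Real networks generalize better than a lookup table, of course, but the failure mode is the same in kind: performance is bought with data from one distribution, and small shifts away from it cause disproportionate failures.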
So, what is the alternative? A small group of researchers is now pursuing the idea of an intelligent system that does not require arbitrary amounts of data to perform well and can work on a broad range of tasks by itself, rather than on narrowly defined metrics. For this reason, they are pushing a new term, Artificial General Intelligence (AGI), since the term AI has been hijacked to mean deep-learning systems that do not move us toward a more human-like intelligence.
Researchers like Francois Chollet and Pei Wang are among the most notable in introducing and defining this idea of intelligence, and both have published important articles on how to define and then measure it. You can read Wang’s On Defining Artificial Intelligence, where he defines intelligence as:
Intelligence is the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources.
— Wang (1995)
This is meant as a working definition of intelligence, intended to guide research in the area. What I like most about it, however, is that it builds the assumption of insufficient knowledge and resources (AIKR) into the definition itself, which corresponds to our daily experience as human beings. Let me explain how:
First, in our daily lives, we can only think about so many things at once, and what makes us smart is the prudent apportioning of our thinking resources and time. Given an issue at hand, who do you think is the intelligent person? The one who partitions their resources (time, money, thinking power, etc.) to deal with the issue, or the one who ignores it and spends those resources without any regard? Evolutionarily, you can frame the same assumption like this: organisms that gathered more food and shelter with the same limited resources as their competitors were more intelligent, and hence more successful. This gives us a general description of intelligence, as opposed to one narrowly focused on recognizing images or generating language, and it forces a system to come up with creative solutions to cope with limited resources.
On the other hand, Francois Chollet’s paper On the Measure of Intelligence discusses how to measure intelligence in a general way. One of his most emphasized points is that a system should not be able to buy performance with data, since that rewards collecting ever more data instead of finding creative solutions. With these important distinctions between regular AI and general AI, I expect a new research direction that moves more in line with the vision of a more general, human-like AI.
There are many more papers about general intelligence, which I will discuss in upcoming posts. Stay tuned.