Here are the 5 building blocks of general intelligence. You can see signs that we’re pretty close to AGI already.
On a recent episode of the No Priors podcast, Zapier co-founder Mike Knoop made the following points:
- The consensus definition of AGI (artificial general intelligence) these days is: “AGI is a system that can do the majority of economically useful work that humans can do.”
- He believes this definition is incorrect.
- He believes that François Chollet’s definition of general intelligence is the correct one: “a system that can effectively, efficiently acquire new skill and can solve open-ended problems with that ability.”
François Chollet is the creator of the Keras library for ML. He wrote the seminal paper "On the Measure of Intelligence" and designed the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) challenge. It’s a great challenge – you should play with it at https://arcprize.org/play to see the kinds of problems they expect “true AGI” to be able to solve.
Unlike other benchmarks where AI is either close to or has already surpassed human-level performance, ARC-AGI has proven to be difficult for AI to make much progress on.

Does that mean Chollet’s definition of general intelligence is correct and ARC-AGI is the litmus test for true AGI?
With all due respect to Chollet (who is infinitely smarter than me; I didn’t get very far in solving those puzzles myself), I feel that this definition is a little reductive and fails to recognize all aspects of intelligence.
General intelligence
There is already smarter-than-human AI for specific skills like playing chess or Go, or predicting how proteins fold. These systems are intelligent, but theirs is not general intelligence. General intelligence applies across a wide range of tasks and environments, rather than being specialized for a specific domain.
So far, the definition of general intelligence covers the following. What else is missing?
- Ability to learn new skills
- Ability to solve novel problems that weren’t part of the training set
- Applies across a range of tasks and environments
In this post, I submit that there are other aspects that form the building blocks of intelligence. In fact, these aspects can be, and are being, worked on independently, and they will be milestones on the path to AGI.
Aspects of Intelligence #1: Priors – Language and World Knowledge
“Priors” refers to the knowledge a system (or a human) already has that allows it to solve problems. In the ARC challenge, the priors are listed as:
Objectness
Objects persist and cannot appear or disappear without reason. Objects can interact or not depending on the circumstances.
Goal-directedness
Objects can be animate or inanimate. Some objects are “agents” – they have intentions and they pursue goals.
Numbers & counting
Objects can be counted or sorted by their shape, appearance, or movement using basic mathematics like addition, subtraction, and comparison.
Basic geometry & topology
Objects can be shapes like rectangles, triangles, and circles, which can be mirrored, rotated, translated, deformed, combined, repeated, etc. Differences in distances can be detected.

ARC-AGI avoids a reliance on any information that isn’t part of these priors, for example acquired or cultural knowledge, like language.
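To make these priors concrete, here is a minimal sketch of the kind of reasoning the challenge expects. The “mirror the grid” task and its grids are made up for illustration (not taken from the real evaluation set): the program picks, from a handful of geometric transformations, the one consistent with every training pair and applies it to the test input.

```python
import numpy as np

# A hypothetical ARC-style task (invented for illustration): each training
# pair maps an input grid to its left-right mirror image.
train_pairs = [
    (np.array([[1, 0], [2, 0]]), np.array([[0, 1], [0, 2]])),
    (np.array([[3, 3, 0], [0, 4, 0]]), np.array([[0, 3, 3], [0, 4, 0]])),
]
test_input = np.array([[5, 0, 0], [0, 6, 0]])

# Candidate transformations drawn only from the geometry/topology priors:
# mirroring, rotation, identity. No language or world knowledge involved.
candidates = {
    "mirror_lr": np.fliplr,
    "mirror_ud": np.flipud,
    "rotate_90": lambda g: np.rot90(g),
    "identity": lambda g: g,
}

# Pick the transformation consistent with every training pair, then apply it
# to the test input: few-shot skill acquisition on this toy task.
for name, fn in candidates.items():
    if all(np.array_equal(fn(x), y) for x, y in train_pairs):
        print(name, fn(test_input).tolist())
        break
```

Real ARC puzzles are far harder than this toy, of course; the point is only that the intended solution space is built from these priors and nothing else.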
However, I submit that any AGI whose priors do not include language should be ruled out because:
- Humans cannot interact with this AGI and present it novel problems to solve without the use of language.
- It is not sufficient for an AGI to solve problems. It must be able to explain how it arrived at the solution. The AGI cannot explain itself to humans without language.
In addition to language, there is a lot of world knowledge that would be necessary for a generally intelligent system. You could argue that an open system with the ability to look up knowledge on the Internet (i.e., do research) does not need this. But even basic research requires a certain amount of fundamental knowledge, plus good judgment about which sources are trustworthy. So, knowledge of the fundamentals of all disciplines is a prerequisite for AGI.
I believe that a combination of LLMs and multi-modal transformer models like those being trained by Tesla on car driving videos will solve this part of the problem.
Aspects of Intelligence #2: Comprehension
It takes intelligence to understand a problem. Understanding language is a necessary but not sufficient condition for this. For example, you may understand language but it requires higher intelligence to understand humor. As every stand-up comedian knows, not everyone in the audience will get every joke.
It is possible for two systems presented with the same novel problem to both fail to solve it. That alone does not prove they are equally unintelligent: one system may at least comprehend the problem while the other fails to even understand it.
Measuring this is tricky, though. How do you differentiate between a system that truly understands the problem and one that bullshits and parrots its way into leading you to believe that it understands? While tricky, I do think it is possible to quiz the system on aspects of the problem to gauge its comprehension: ask it to break the problem down into components, identify the most challenging components, and come up with hypotheses or directions for a solution. This is similar to a software developer interview, where you can tell the difference between a candidate who at least understands what you are asking and gives some directionally correct answers, even if they never reach the right one, and a candidate who does not understand the question at all.
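As a sketch of what such a quiz could look like in code: the probe questions below are illustrative, and `ask_model` is a hypothetical helper standing in for whatever LLM API you use, passed in as a parameter rather than tied to any particular library.

```python
# Sketch of a comprehension probe. The probes are illustrative, and ask_model
# is an assumed callable wrapping some model API; it is not a real library call.

PROBES = [
    "Restate the problem in your own words, without proposing a solution.",
    "Break the problem into its major components.",
    "Which component do you expect to be the hardest, and why?",
    "List two plausible solution directions and the trade-off between them.",
    "How would you verify that a proposed solution is actually correct?",
]

def probe_comprehension(problem: str, ask_model) -> list[tuple[str, str]]:
    """Quiz the system about the problem itself, not the answer.

    Returns (probe, response) pairs for a human (or a grader model) to judge,
    much like an interviewer judging whether a candidate understood the question.
    """
    transcript = []
    for probe in PROBES:
        response = ask_model(f"Problem:\n{problem}\n\nQuestion: {probe}")
        transcript.append((probe, response))
    return transcript

# Tiny stub so the sketch runs without any API; swap in a real model call.
if __name__ == "__main__":
    fake_model = lambda prompt: "(model response would go here)"
    for probe, resp in probe_comprehension("Design a rate limiter for an API.", fake_model):
        print(probe, "->", resp)
```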
Comprehension also becomes obvious as a necessary skill when you consider that it’s the only way the system will know whether it has successfully solved the problem.
Aspects of Intelligence #3: Simplify and explain
This is the flip side of comprehension. One of the hallmarks of intelligence is being able to understand complex things and explain them in a simple manner. Filtering out extraneous information is a skill necessary for both comprehension and good communication.
A system can be trained to simplify and explain by giving it examples of problems, solutions and explanations. Given a problem and a solution, the task of the system – i.e. the expected output from the system – is the explanation for how to arrive at the solution.
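As a sketch of what that training data might look like (the field names, the example problem, and the file name are all invented for illustration), each record pairs a problem and its solution as the input, with the explanation as the target:

```python
import json

# Sketch of one fine-tuning record for the "explain the solution" task.
record = {
    "input": (
        "Problem: A train travels 120 km in 1.5 hours. What is its average speed?\n"
        "Solution: 80 km/h"
    ),
    "target": (
        "Average speed is distance divided by time. "
        "120 km / 1.5 h = 80 km/h, so the train averages 80 km/h."
    ),
}

# One JSON object per line (JSONL) is a common format for fine-tuning pipelines.
with open("explain_dataset.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```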
Aspects of Intelligence #4: Asking the right questions
Fans of Douglas Adams already know that the answer to life, the universe and everything is 42. The question, however, is unknown.
“O Deep Thought computer,” he said, “the task we have designed you to perform is this. We want you to tell us…” he paused, “The Answer.”
“The Answer?” said Deep Thought. “The Answer to what?”
“Life!” urged Fook.
“The Universe!” said Lunkwill.
“Everything!” they said in chorus.
Deep Thought paused for a moment’s reflection.
“Tricky,” he said finally.
“But can you do it?”
Again, a significant pause.
“Yes,” said Deep Thought, “I can do it.”
Given an ambiguous problem, an intelligent entity asks great questions to make progress. In an interview, you look for the candidate to ask great follow-up questions if your initial problem is ambiguous. An AGI system does not require its human users to give it complete information in a well-formatted, fully descriptive prompt input.
In order to be able to solve problems, an AGI will need to consistently ask great questions.
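A minimal sketch of what that could look like in practice, assuming hypothetical `ask_model` and `ask_human` helpers (they stand in for whatever LLM API and human interface you have): the system either asks one clarifying question or declares the problem fully specified, and only then attempts a solution.

```python
from typing import Callable

def solve_with_clarification(
    problem: str,
    ask_model: Callable[[str], str],   # assumed helper wrapping a model API
    ask_human: Callable[[str], str],   # assumed helper routing questions to a person
    max_questions: int = 3,
) -> str:
    """Let the system ask clarifying questions before attempting a solution."""
    context = problem
    for _ in range(max_questions):
        reply = ask_model(
            "If the problem below is ambiguous or missing information, ask ONE "
            "clarifying question. If it is fully specified, reply exactly READY.\n\n"
            + context
        )
        if reply.strip().upper() == "READY":
            break
        # Fold the human's answer back into the problem statement and loop.
        context += f"\n\nQ: {reply}\nA: {ask_human(reply)}"
    return ask_model("Solve this problem:\n\n" + context)
```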
Aspects of Intelligence #5: Tool use
An intelligent system can both build and use tools. It knows which tools it has access to, and it can figure out which is the right tool for a job and when building a new tool is warranted. It is a neural net that can grow other neural nets because it knows how to. It has the ability and resources to spawn clones of itself (a la Agent Smith from The Matrix) if necessary, to act as tools or “agents”.
This ability requires a level of self-awareness, not in the sense of sentience but in the sense of the system understanding its own inner workings: it knows its constraints and knows how to integrate new subsystems into itself when needed to solve a problem. It is like how Deep Thought designed a computer smarter than itself to find the Question to the Ultimate Answer, a task that Deep Thought itself was unable to perform:
“I speak of none other than the computer that is to come after me,” intoned Deep Thought, his voice regaining its accustomed declamatory tones. “A computer whose merest operational parameters I am not worthy to calculate – and yet I will design it for you. A computer which can calculate the Question to the Ultimate Answer, a computer of such infinite and subtle complexity that organic life itself shall form part of its operational matrix.
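Returning to present-day systems, here is a minimal sketch of a tool registry and a dispatch step. The tools, the name-matching heuristic, and the dispatch function are illustrative assumptions, not any particular agent framework; a real system would let a model choose the tool and decide when to build a new one.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str            # the system reads this to decide when the tool applies
    run: Callable[[str], str]

# Two toy tools, invented for illustration.
TOOLS = [
    Tool("calculator", "Evaluate an arithmetic expression.",
         lambda expr: str(eval(expr, {"__builtins__": {}}, {}))),
    Tool("word_count", "Count the words in a piece of text.",
         lambda text: str(len(text.split()))),
]

def dispatch(task: str, argument: str) -> str:
    """Pick a tool whose name appears in the task description and run it.

    A real system would let a model make this choice (and build a new tool
    when none fits); the simple name match here stands in for that judgment.
    """
    for tool in TOOLS:
        if tool.name in task.lower():
            return tool.run(argument)
    raise LookupError("No suitable tool found; a candidate for building a new one.")

print(dispatch("use the calculator", "2 + 2"))   # prints "4"
```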
BONUS: 1 more building block – for superintelligence
In addition to the five building blocks above, I believe there is one more if a system is to become superintelligent (beyond human level intelligence).
Aspects of Intelligence #6: Creative spark
What do the following have in common?
- The discovery (or invention?) of the imaginary unit i, the square root of negative one.
- Einstein’s thought experiments, such as imagining riding alongside a beam of light, which led him to develop the special theory of relativity.
- Archimedes’ eureka moment while taking a bath when he realized that the volume of water displaced by an object is equal to the volume of the object itself.
- Newton watching an apple fall from a tree and wondering whether this is the same force that keeps the Moon in orbit around the Earth.
- Friedrich August Kekulé dreaming of a snake biting its own tail, leading to the discovery of benzene’s ring structure.
- Niels Bohr proposing that electrons travel in specific orbits around the nucleus and can jump between these orbits (quantum leaps) by absorbing or emitting energy, which explained atomic spectra, something classical physics could not.
- Nikola Tesla designing the Alternating Current system to efficiently transmit electricity over large distances, and designing the induction motor to use alternating current.
In all these cases, a spark of creativity and imagination led to advances in knowledge that did not follow incrementally from the knowledge available at the time.
Most scientists and engineers spend their entire careers without such a groundbreaking insight. So this is not strictly necessary for general intelligence. But for beyond-human-level intelligence, the system must be capable of thinking outside the box.
References
- On the Measure of Intelligence – François Chollet
- Puzzles in the evaluation set for the $1 million ARC Prize
- ARC Prize
- ChatGPT is Bullshit – Michael Townsen Hicks, James Humphries, Joe Slater
- Stochastic parrot – Wikipedia