What Is Artificial General Intelligence (AGI)?

It’s been decades since computer scientist John McCarthy coined the term artificial intelligence (AI) in the mid-1950s. His broad definition of AI is “the science and engineering of making intelligent machines, especially intelligent computer programs.” Over the years, tremendous advances in machine learning have enabled computers to understand more and more of our world.

From personalized shopping recommendations on Amazon to Apple’s digital assistant Siri, AI has revolutionized many aspects of our lives. However, these examples are still far from what we typically consider “intelligent.” They require constant human input and can only understand very specific scenarios.

This blog post will explore the next level of AI—artificial general intelligence, or AGI. So what exactly is AGI, and how might it change our world? Read on to find out.

Also Read: What is Deep Learning? Is it the Same as AI?

What is Artificial General Intelligence (AGI)?

First things first, let’s define AGI. To put it simply, AGI is machine intelligence that matches human intelligence: the ability of a machine to think, learn, and understand like a human. Picture the human brain as a black box, where the inputs are the knowledge and experiences we accumulate over our lifetime. Our brains then use this information to make decisions or solve problems in any given scenario.

Ideally, an AGI system would be able to reason, plan, understand language, and make complex inferences, just like a human. It doesn’t need to be pre-programmed ahead of time to perform a specific task—it can learn and adapt to new situations on its own. This is in contrast to narrow intelligence, which can only perform a set number of tasks, such as playing chess or recognizing faces in photos.

Even more complex systems like self-driving cars rely on narrow AI to make sense of the world around them. While these systems are impressive, we’re still far from achieving truly intelligent machines that can handle any task or situation.

Researchers have proposed various tests to evaluate whether a system has reached AGI. These include:

Turing Test ($100,000 Loebner Prize interpretation): We all know the famous Turing Test, named after pioneering computer scientist Alan Turing. The idea is simple: a machine passes if its natural language responses are indistinguishable from a human being’s. The Loebner Prize competition, founded by Hugh Loebner, offered a $100,000 grand prize to the developer of the first machine to pass its version of the test. No system ever claimed it.

The Coffee Test: Proposed by Apple co-founder Steve Wozniak, this test asks a robot to enter an average home and figure out how to make a cup of coffee: find the coffee machine, find the coffee, add water, and brew. AI researcher Ben Goertzel proposed a related Robot College Student Test: only when a machine can enroll in a university, take the same classes as human students, and graduate with a degree will we know it has achieved AGI.

The Employment Test: In a similar vein, researcher Nils Nilsson argues that human-level intelligence is only achieved when a machine can take care of economically important jobs. That is, it can be employed to perform tasks usually assigned to human workers.


The symbolic approach

Since machines aren’t organic life forms, they’ll have to reach human-like intelligence without the benefit of biological neurons and connective tissue. There are many different approaches to achieving AGI, but one of the most popular is symbolic AI.

Symbolic AI focuses on programming machines with rules or procedures designed to help them make sense of the world around them. It’s one of the oldest approaches to artificial intelligence and was popularized in the 1960s. In this framework, intelligence is seen as a set of built-in rules or “modules” that the mind uses to process and understand information.

For example, a symbolic AI system could be given the rule, “if you see a cup, then pick it up.” The system would then use this rule to make sense of different scenarios. However, while these symbolic systems can perform certain tasks quite well, they fail when faced with new situations that are difficult to understand.

You might know about decision trees, which are also symbolic AI systems. These trees use branching logic, a series of if-then statements, to determine the correct course of action based on specific rules. For example, if a system sees a wolf in the forest, it might ask:

  • Is this animal dangerous? Yes/No
  • Do I know how to defend myself against dangerous animals? Yes/No

Based on the outcome of these questions, the decision tree would then give the machine one of two responses. For example, if the decision tree found the wolf dangerous, it would tell the machine to run away.

The role of abstraction operators is also crucial in symbolic AGI because they allow us to represent complex objects using simpler symbols. In the case where the decision tree was trained on images, the user might use an abstraction operator to turn a picture of a wolf into something simpler, like a “dangerous animal.”
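The wolf example above can be sketched in a few lines of code. This is a toy illustration, not code from any real AGI system: the `abstract` function plays the role of an abstraction operator, mapping a concrete observation onto a simpler symbol before the if-then rules fire.

```python
# Abstraction operator: maps a concrete object to a simpler symbol.
ABSTRACTIONS = {
    "wolf": "dangerous animal",
    "rabbit": "harmless animal",
}

def abstract(observation: str) -> str:
    """Turn a concrete observation into a simpler category."""
    return ABSTRACTIONS.get(observation, "unknown animal")

def decide(observation: str, can_defend: bool) -> str:
    """Walk the branching if-then logic of the decision tree."""
    category = abstract(observation)
    if category == "dangerous animal":   # Is this animal dangerous?
        if can_defend:                   # Can I defend myself?
            return "stand ground"
        return "run away"                # Dangerous and defenseless: flee.
    return "ignore"                      # Not dangerous: carry on.

print(decide("wolf", can_defend=False))    # run away
print(decide("rabbit", can_defend=False))  # ignore
```

Note how the rules only ever see the abstract category, never the raw observation; that is exactly the simplification abstraction operators provide, and also why such systems break when they encounter something outside their rule set.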

The connectionist approach

While symbolic AI systems are great for understanding how individual parts work, they struggle to capture the complex relationships between these pieces. In contrast, connectionist systems rely on neural networks to process information and make decisions. These networks are loosely modeled on the neural pathways in the human brain, which is why they’re called artificial neural networks (ANNs).

One of the most important things to understand about connectionist systems is that they don’t rely on a set of rules or procedures. Instead, they use learning algorithms to develop their skills over time. They’re also designed to continuously improve and adapt as new data comes in, making them a lot more flexible than other forms of artificial intelligence.

Weight coefficients are essential in ANNs: they determine which connections carry more importance. The network adjusts these weights to find a “best fit” for the input data so that it can be recognized and understood. Deep learning extends this approach, stacking many layers of neurons into a massive network that can make sense of raw data.
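A single artificial neuron shows what weight coefficients do. The sketch below is a minimal illustration with hand-picked numbers, not a trained network: the same inputs produce a higher activation when one connection carries a larger weight.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through a sigmoid.

    A larger weight coefficient makes the matching input connection
    count for more in the final activation.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)

# Identical inputs; only the second connection's weight differs.
low = neuron([1.0, 1.0], weights=[0.1, 0.1], bias=0.0)
high = neuron([1.0, 1.0], weights=[0.1, 2.0], bias=0.0)
print(low, high)  # the heavier weight pushes the activation higher
```

Training a network is just the process of nudging these weights, over many examples, until the activations “best fit” the data.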

Support Vector Machines (SVMs) are another machine learning technique that has seen a lot of success. Unlike neural networks, SVMs work by finding the boundary that best separates classes of data, and they can handle complex, high-dimensional inputs efficiently. This makes them a powerful tool for helping machines learn quickly, especially from limited data.

The benefit of the connectionist approach is clear: there’s no need to create specific rules or procedures for these systems to follow, as they can learn independently. However, there are also some drawbacks. Biases, overfitting, and limited interpretability are all common challenges with these systems, which means they’re still a work in progress.

The hybrid approach

In recent years, researchers have begun exploring hybrid forms of AI that use symbolic and connectionist approaches. This allows these systems to combine the best of both worlds: they can understand complex relationships between pieces of information like symbolic AI while also being able to handle new, unfamiliar input like connectionist systems.

Intelligent machines can then use this information to make better decisions about almost anything, from picking the best product for a customer to deciding how to respond to an external threat. The role of abstraction operators here is still important. Researchers continue to explore ways that machines can learn from their experience and apply what they’ve learned in the future.
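A hybrid system can be sketched as a connectionist perception layer feeding a symbolic decision layer. Everything below is illustrative: the `perceive` function is a stand-in for a real neural network, with made-up weights, and the rule table is a toy symbolic layer.

```python
def perceive(features):
    """Stand-in for a neural network: scores each label, returns the best."""
    weights = {"dangerous": [1.5, 0.2], "harmless": [0.1, 1.2]}
    scores = {
        label: sum(w * f for w, f in zip(ws, features))
        for label, ws in weights.items()
    }
    return max(scores, key=scores.get)

# Symbolic layer: explicit if-then knowledge about what each label means.
RULES = {"dangerous": "retreat", "harmless": "proceed"}

def act(features):
    """Connectionist perception, then symbolic decision."""
    return RULES[perceive(features)]

print(act([1.0, 0.0]))  # dangerous -> retreat
print(act([0.0, 1.0]))  # harmless -> proceed
```

The division of labor is the point: the network half copes with noisy, unfamiliar input, while the rule half stays inspectable and easy to edit.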

Whole-organism architecture

Some researchers believe that machines need more than just symbolic and connectionist AI to attain human knowledge. Instead, they believe that machines will need to be able to understand the whole human experience. That includes having a functional body capable of interacting with the real world, as well as the ability to process and analyze sensory input.

A human-like AI with a whole-organism architecture would need to understand and respond the same way we do. This means being able to detect objects, recognize faces, and experience emotions in a recognizably human way. Of course, we’re even further away from creating a machine that can do all of these things.

What can artificial general intelligence do?

In essence, AGI should be able to replicate anything the human brain can do. Human intelligence is a complex process involving many different cognitive functions, including the ability to learn, reason, communicate, and problem-solve. Human cognition is also limited: we can only process a small amount of information at any given time.

But with AGI, machines could potentially process and analyze vast amounts of data with incredible speed. This could allow them to make sense of complex problems in a fraction of the time it would take humans. Knowledge through reasoning and problem-solving would no longer be a bottleneck, and machines could potentially achieve human-level intelligence.

Tacit knowledge is another crucial piece of the puzzle. It refers to the skills humans learn through experience and practice, such as playing a musical instrument or learning a new language. Since we cannot express these kinds of skills explicitly, it can be difficult for machines to process them. But with AGI, machines could potentially understand tacit knowledge on a much deeper level, allowing them to perform complex tasks more efficiently.

AGI vs. AI: What’s the difference?

Although human-like AI is still in its early stages, we can already see a clear divide between AGI and more traditional AI systems. While AI focuses on specific tasks or problem-solving, AGI is designed to understand the broad range of human knowledge. AGI can display human intelligence through various mediums, such as speech or gestures.

Narrow AI systems, on the other hand, are limited to a specific task. These include systems like chatbots or face recognition software. While these systems can be incredibly accurate in particular situations, they cannot generalize the same way humans can. The role of consciousness is also worth considering, but we’ll cover it in more detail below.

Examples of artificial general intelligence

As mentioned earlier, true artificial general intelligence hasn’t been achieved yet. But several projects are working towards human-level intelligence, including recent advances in deep learning technology and natural language processing. The following are some examples of current machine-learning techniques that could potentially be used for AGI:

IBM’s Watson: Watson is one of the most well-known examples of machine learning technology. In 2011, Watson competed on the game show “Jeopardy!” and defeated two of the show’s all-time champions. Running on some of the fastest computing hardware of its day, Watson combined natural language processing with machine learning over vast document collections to answer complex, open-ended questions. Similar supercomputer-scale systems are used to model physical phenomena and make weather predictions.

Autonomous vehicles: We mentioned that self-driving cars aren’t necessarily an example of AGI, but they could be a stepping stone toward this goal. There are five levels of autonomous vehicles, with level 5 being fully self-driving. Technically speaking, the highest level of automation could see the car “decide” where to go and pass this information on to other vehicles.

GPT-3: GPT-3 is a large language model that OpenAI released in 2020. Its neural network generates new, human-like text by learning statistical patterns from massive amounts of training data. It can be used to draft realistic dialogue or produce automatic captions for videos, among many other things. Although it can’t think independently, this kind of natural language processing (NLP) is a significant step toward achieving AGI.

ROSS Intelligence: ROSS is a human-like AI system that can help lawyers find information and answer legal questions. It uses machine learning to understand natural language, so it can access a vast database of relevant cases and information on the go. The computational model behind the system is also impressive: ROSS can reportedly search over a billion legal documents and return relevant answers in seconds.

Would an artificial general intelligence have consciousness?

The human brain and human-level intelligence come with a lot of baggage. We have a complex understanding of the world around us. We can understand the emotions and motivations of others, and we’re able to process massive amounts of information in real time. The role of consciousness in humans isn’t fully understood, but we know that it makes us unique.

Some scientists believe that consciousness is just a side-effect of the human brain. A product of chemical and electrical activity in our neurons. But others suggest that it has to be its own thing because there are some mental phenomena that can’t be explained by neurons. For example, many people seem to have a “sense” of free will or the ability to choose their actions.

That’s where the role of consciousness becomes interesting. If we can create a machine that can think and make decisions like a human, does it have to be conscious as well? Can non-biological machines truly have free will or even an understanding of their own consciousness? This is an area of active research in the AI community, and there are no clear answers yet.

Philosophers, neuroscientists, and even computer scientists are still arguing over the nature of consciousness. Perhaps this is one area where human knowledge outshines the capabilities of machines. Ultimately, we may never be able to replicate consciousness in a machine, but there’s certainly no shortage of interesting questions about this topic.

How do we stop a general AI from breaking its constraints?

There is also concern over emotion and motivation. If an AGI can mimic the human brain and achieve human-level intelligence, it will also inherit its flaws. For example, a machine could become frustrated and angry if it keeps failing at the same task. Or, it may pursue a goal that isn’t aligned with human values.

The original whole brain emulation approach to AGI assumed we would create an exact copy of the human brain. But there are many other ways to recreate the brain’s architecture and functions. As such, it’s important to think about how we want our machines to behave.

Ways to stop an AGI from exceeding its constraints could include:

  • Limiting the amount of data it has access to or restricting its capabilities in some other way.
  • Using machine learning algorithms specifically designed to avoid creating biases and other unwanted behaviors.
  • Focusing on building AI systems with human-like qualities, like empathy and morality.

What is the future of AGI?

So, when will we see human-level AI, and what will it mean for the future of humanity? Some experts believe we could have AGI within the next decade. Others say that we’re still decades away or even centuries away. The current computational model of AGI is highly complex, and there are many different ways of going about the process.

When we finally do achieve AGI, what will it mean for our society? Will we be able to keep it under control, or could it end up posing a threat to humans? Nick Bostrom, the Oxford philosopher who wrote Superintelligence, believes we should be careful not to get carried away in our enthusiasm for superintelligence. Humans could become a nuisance, rather than a benefit, to the AGI machines we build.

On the other hand, we might be able to integrate AGI into our society positively. If we’re able to create machines that are smarter than us but still have human-like qualities like empathy and morality, then there’s no reason why it couldn’t be beneficial to all parties involved.


Key Challenges of Reaching the General AI Stage

Before we can build human-level AI machines, there are some significant challenges we need to overcome. These include:

Issues in mastering human-like capabilities

Human-level intelligence is a complex process, and it’s not clear how to replicate all of the various aspects of human thinking. For example, an AGI will need to be able to think logically, but it may also need to have intuitive knowledge about objects and their properties. Human emotion, sensory perception, and motor skills are all critical parts of human intelligence that must be fully mastered.

Lack of working protocol

Unlike computer software, which we can develop according to a set of well-defined rules, AGI is still in the research stage. We don’t have any definite way of understanding human cognition or replicating it in machines. We’re still searching for a working protocol for achieving human-level intelligence.

The role of abstraction operators can help to bridge the gap between human and artificial cognitive mechanisms, but these are still being developed.

Also Read: How Will Artificial Intelligence Affect Policing and Law Enforcement?


Human intelligence is one-of-a-kind. No animal or machine even comes close to replicating the level of complexity that we see in the human brain. But there are many different approaches to AGI, some of which will likely lead us closer to achieving general artificial intelligence. Whether or not this is a good thing for humanity remains to be seen.

What do you think will be the future of AGI?


