It’s no secret that humanity considers itself the apex species on this planet. The conceptualization of intelligence has long been an exclusive domain of humankind. We claim other animals act only on instinct, while we, the thinking species, are capable of making decisions based on our intelligence. But what if we were to be challenged in this domain?
Understandably, many people find it hard to believe that Artificial Intelligence (AI) could ever surpass the intelligence of humans. After all, AI systems don't possess minds of their own, at least not yet. All of their algorithms and datasets come from us, humans. So how could a system of our own making replicate our decision-making process better than we do? Could such a creation really surpass us in this respect?
These questions are critical to consider in our current age of technology. AI is quickly advancing and making inroads into every sector of our lives, from healthcare to finance. This article will delve into the current discussion about whether AI can be smarter than humans and provide an overview of the potential implications.
Human Intelligence vs. Artificial Intelligence
We cannot move forward in this discussion without understanding the difference between human intelligence and AI. First, let’s define intelligence. According to the Oxford Dictionary, intelligence is “the ability to learn, understand and think in a logical way about things.” Fairly simple, right?
However, it is essential to note that intelligence comes in many forms and can be measured in various ways. Biological intelligence, like ours, emerges from a mix of instinct, emotion, and learned experience. AI, on the other hand, relies on algorithms and datasets to reach its conclusions.
While both can be utilized to make decisions, it is difficult to compare the two as they operate in very different ways. Humans possess a relatively complex and powerful brain capable of understanding and analyzing data. From sensory information to memories, our brain collects and stores extensive knowledge.
AI systems rely on pre-programmed instructions to identify patterns in data. This makes AI faster than humans at specific tasks, such as searching large datasets or recognizing facial features. It also lets machines access far greater amounts of information, far more quickly, than we could ever hope to process.
Convergent Thinking vs. Divergent Thinking
This fundamental difference brings us to one of the major points of contention regarding AI and humans. Convergent thinking is the ability to come up with a single, correct answer from a set of data or facts. For example, a math equation is one type of convergent thinking problem. We can use pre-established rules and algorithms to find the answer, which will be the same every time. Logical reasoning and problem-solving skills fall under the realm of convergent intelligence. As such, we can compare convergent thinking to common sense.
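The "same answer every time" property of convergent thinking is easy to show in code: a fixed rule, such as the quadratic formula, always converges on the same solution for the same input. A minimal sketch (the function name and example are our own):

```python
import math

def solve_quadratic(a, b, c):
    """Apply the quadratic formula: a fixed rule that always
    yields the same answer for the same coefficients."""
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return None  # no real roots
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(solve_quadratic(1, -5, 6))  # (3.0, 2.0)
```

Run it a thousand times and the answer never changes; that determinism is exactly what makes convergent problems so well suited to machines.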
In contrast, divergent thinking is used to develop a wide range of creative solutions to a problem or question. A good example of divergent thinking would be brainstorming a list of possible answers to the gun control debate in America. Since there is no single correct answer here, divergent thinking is necessary to come up with a range of ideas. This type of thinking requires a certain amount of cognitive flexibility in order to generate new and unique solutions. If convergent thinking is common sense reasoning, then divergent thinking is “human sense.”
At this point, AI cannot match the human level of creativity, empathy, morality, or intelligence needed for divergent thinking. However, it is worth mentioning that AI systems can be enhanced with deep learning and other advanced techniques to strengthen their pattern recognition capabilities. More on this later.
As for convergent thinking, intelligent machines are already surpassing humans in certain areas. AI is being used to automate mundane tasks and make decisions based on information and analysis that would be too difficult for a human to process. In many cases, this has resulted in faster and more accurate results than what humans can achieve.
Who would be smarter – AI or Humans?
Since biological and digital intelligences are multifaceted and operate differently, it is difficult to compare the two directly. When it comes down to which one is “smarter,” there is no simple answer. Instead, we can look at it from several angles: strategic games, one-shot learning, and emotional intelligence.
Before March 2016, Lee Sedol had reigned for nearly two decades as one of the greatest Go players in the world. Go is a strategy game in which two players take turns placing black and white stones on a board, aiming to surround more territory than their opponent. The game is prized for its depth and the skill required to master it. In fact, there are more possible board configurations in Go than atoms in the observable universe, so accounting for every possible move is impossible for humans.
In 2016, DeepMind's computer system AlphaGo challenged Lee Sedol to a five-game match. AlphaGo won four of the five games, a result that contributed to Sedol's eventual retirement and demonstrated the potential of AI to exceed human capabilities in complex strategic games. Today, its successor AlphaGo Zero is even stronger and beat the original AlphaGo 100–0 in a match.
The ball lands back in humanity's court when it comes to one-shot learning. One-shot learning is a type of intelligence where someone (or something) can learn from very few examples. For instance, if you showed a human a picture of an apple and asked them to recognize it in the future, they could do so easily. Even children 1–2 years old can recognize an apple after seeing it only once. AI, on the other hand, is not quite as adept at this sort of learning yet.
AI is often based on supervised learning, meaning it needs many labeled examples to recognize patterns and make accurate predictions. One-shot learning requires the system to be able to detect patterns after just one example, which is something that AI still struggles with. While progress has been made in this area, AI can’t quite match the human ability for one-shot learning yet.
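One common way researchers approximate one-shot learning is nearest-neighbor matching: store a single labeled example per class, then give a new item the label of the closest stored example. A toy sketch with hand-made feature vectors (the features, labels, and function name are all invented for illustration):

```python
def one_shot_classify(example_per_class, query):
    """Assign the query the label of its nearest stored example.
    One labeled vector per class stands in for 'one example'."""
    def distance(u, v):
        # Squared Euclidean distance between two feature vectors
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(example_per_class,
               key=lambda label: distance(example_per_class[label], query))

# One (made-up) feature vector per fruit: [roundness, redness]
memory = {"apple": [0.9, 0.8], "banana": [0.2, 0.1]}
print(one_shot_classify(memory, [0.85, 0.7]))  # apple
```

The hard part, and where AI still lags humans, is learning feature representations good enough that "closest" actually means "most alike"; the matching step itself is trivial.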
Finally, when it comes to emotional intelligence, AI has not been able to match the capabilities of humans. Emotional Intelligence (EI) is the ability to perceive, understand, and manage emotions. We can break this down into four components: Self-awareness, self-regulation, motivation, and empathy.
AI has become more adept at recognizing human emotions, but it has yet to demonstrate the ability to understand, interpret and use them effectively. Intelligent machines are also unable to truly feel empathy towards another entity. The closest they have come is being able to mimic human behavior and responses in specific scenarios, but this does not constitute true emotional intelligence.
This human advantage extends to other disciplines, such as the arts and philosophy, where human creativity is still unrivaled. Sure, you could program an AI to mimic certain creative processes, but it would struggle to think outside the box or come up with something that has never been seen before.
However, that may be about to change. In 2022, an AI-generated artwork won first place in the digital arts category at the Colorado State Fair, a historic first. Of course, a human was still responsible for prompting and refining the piece. Still, it showed that machines might soon be able to create artwork with a level of creativity comparable to that of humans.
A Weird Intelligence
As so eloquently displayed by Janelle Shane’s “AI Weirdness,” machines can come up with some pretty strange solutions when tasked with a problem. While these crazy inventions and ideas may seem far-fetched, they display an intelligence that is completely unique to AI. Do we consider an octopus weird or intelligent for changing its color to match the environment?
AI might not be able to match the emotional intelligence of a human yet, but nothing indicates that this type of intelligence is more important than another in the grand scheme of things.
AI Has the Capacity for Self-Learning
Humans learn in a variety of ways, mostly as a result of experience. We observe our parents, teachers, and peers; we experiment with new things and learn through trial and error. AI can learn in a similar way called machine learning. Machine learning involves algorithms that “learn” from data and make predictions or decisions without being explicitly programmed to do so.
Deep learning is a type of machine learning that relies on neural networks and can learn complex tasks such as image classification, natural language processing, and autonomous driving. The system “learns” from labeled data, meaning it gets better at recognizing patterns with more examples.
Human interventions, such as providing huge datasets and adjusting the system’s parameters, are still necessary for machines to learn. But with advances in machine learning and deep learning, AI systems can begin to match and even outperform humans in certain tasks. The end result could be a form of superhuman intelligence that gathers all the combined knowledge of human history and applies it with superhuman speed and accuracy.
AI moving towards AGI
Up to now, we have mainly discussed how AI software compares to human intelligence. However, human-like intelligence requires constantly interacting with the environment in meaningful and purposeful ways. Technological progress is now moving towards creating exactly such a system: Artificial General Intelligence (AGI).
AGI would involve software that can understand context and make decisions without the need for explicit programming. This form of AI is often pictured as a humanoid robot with human-like skills and behavior: sensors to perceive its environment and the ability to make its own decisions, much as humans do. For instance, LiDAR sensors could let such a system build a 3D map of its surroundings and use that data to navigate autonomously.
Also called strong AI or human-level AI, AGI is still in its infancy and is considered by some to be a very distant goal. However, this does not mean that progress has stopped; many researchers are working hard to develop AGI systems. The ultimate goal is a system that is aware of its environment and can understand and act upon it.
Compared to narrow AIs, like AlphaGo and self-driving cars, AGI systems have the potential to be much more intelligent than humans. They wouldn’t be restricted to one specific task but could master many disciplines and become extremely powerful. This freedom of thought and ability to interact with its environment could make AGI systems smarter than humans in multiple ways.
We Are Probably Not as Smart as We Think
Another factor to consider in this complex debate is that our status as an intelligent species has only been validated by human judgment. We compare ourselves to other earthling species and assume that our intelligence is the only valid form of knowledge. But who knows what other forms of intelligence exist outside our planet and in the universe as a whole?
Chances are, we are not nearly as intelligent as we think. For all we know, extraterrestrial species that have already merged with technology or achieved superhuman intelligence could be looking down on us, amused by our limited capabilities.
Now, that’s not to say our intelligence isn’t valuable. It is exactly what we need to live, interact and survive in our specific environment. As such, it is invaluable, and we should continue to strive to achieve more. But, even though we may be able to use our intelligence to create smart machines, such AIs may end up surpassing us in ways we can’t yet comprehend.
Limited Cognitive Capacity As Humans
Part of the reason we probably aren’t as smart as we think is because of our limited cognitive capacity as humans. Cognitive intelligence is limited to our own mental capacity and can only be translated into tangible results within the confines of our environment. Our mental capacity is defined by hundreds of thousands of years of evolution and, in essence, is quite limited.
We cannot “compute” beyond the boundaries of our physical capacity. Our brains are often compared to computers, but they are severely limited in terms of raw power and speed. By some estimates, conscious human thought processes information at only around 60 bits per second, and we can attend to only a few problems at a time. Computers, on the other hand, can work at an astonishing rate of billions or even trillions of bits per second.
Does that make them billions of times smarter than us? Not necessarily; it just means they can process data much more quickly. However, this accelerated speed and power could make them the ideal choice for solving complex tasks and calculations that our feeble human minds can’t handle.
Ingrained Cognitive Biases As Humans
The last talking point we'd like to explore in the human versus artificial intelligence debate is our cognitive biases as humans. We arrive at conclusions based on the information we're presented with and our preconceived notions. For example, humans are not immune to prejudice and stereotyping, since we interpret information through the lens of our own context.
Motivational biases can also play a huge role in our decision-making process; we see this often with politicians and businessmen motivated by money or power. In other words, every decision we make is filtered through our human perspective, which can be quite flawed.
On the other hand, AI systems have no desires or motivations of their own to sway their decisions. In that sense, they can be better suited for making complex calculations requiring many data points and variables.
However, it’s worth noting that since humans are the ones who design and build AI systems, their cognitive biases can still be present in the underlying algorithms. As such, AI developers must go to great lengths to ensure their creations are free of any human bias. This problem is slowly being addressed, but it is still a major challenge.
For instance, if the team of developers behind an image-generation AI only feeds pictures of white people as a training dataset, the AI will likely produce only white people in its outputs. That’s a simple example of a cognitive bias, but much more complex cases can occur in AI systems. They may be subtle enough to go unnoticed by humans but can still be detrimental to how AI systems operate.
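A first defense against the training-set skew just described is simply to measure it before training. A hypothetical sketch that flags under-represented groups in a labeled dataset (the labels, threshold, and function name are invented for illustration):

```python
from collections import Counter

def find_underrepresented(labels, threshold=0.2):
    """Flag any group whose share of the dataset falls below
    the given fraction: a crude early warning for sampling bias."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total
            for group, count in counts.items()
            if count / total < threshold}

# Made-up demographic labels for a training set of face images
labels = ["group_a"] * 90 + ["group_b"] * 10
print(find_underrepresented(labels))  # {'group_b': 0.1}
```

An audit like this catches only the crudest imbalances; the subtler biases mentioned above, such as correlations hidden inside the images themselves, require far more sophisticated tooling.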
In conclusion, there is a real possibility that AI-based systems will one day become smarter than humans in various areas. Prominent figures such as Elon Musk have even declared AI an existential threat to humanity. Musk co-founded OpenAI, one of the world's leading AI research companies, and understands both the potential of AI and how it could become more powerful than humans.
Thankfully, OpenAI, the Machine Intelligence Research Institute, and other organizations are pursuing “friendly AI” development, which aims to ensure that these powerful systems will benefit humanity and not be a danger.
At the same time, it’s important to remember that AI is still in its infancy. It exists within finite boundaries and cannot think or act beyond the scope of its programming. Will that change in the future? Only time will tell.