What is Moravec’s Paradox?
In the modern world, technology amazes us with its reliability in completing increasingly higher-level tasks. We have built machines around logical and mathematical rules that seem to surpass our own limitations. Despite this, modern AI still struggles with tasks that infants accomplish effortlessly. This is the basis for Moravec’s paradox, stated in Hans Moravec’s 1988 book Mind Children:
“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”1
Why are Simple Tasks Hard for AI?
The short answer is that the human brain is the product of millions of years of evolution, whereas AI has barely reached its 50th birthday. The things we humans find difficult, like calculating the trajectory of a rocket or decoding cryptic messages, are hard only because they are relatively new in the grand scheme of evolution. The skills we already have, like consciousness, perception, visual acuity, and emotion, were acquired through evolution and come to us so naturally that we never have to think about them. The challenge is: how do we teach a machine the things we do not even think about?
For example, even a simple task such as playing catch involves many considerations. An AI would need sensors, transmitters, and effectors. It would need to gauge the distance to its partner, the sun’s glare, the wind speed, and nearby distractions. It would need to decide how firmly to grip the ball and when to squeeze the mitt during a catch. It would also need to consider what-if scenarios: What if the ball goes over my head? What if it hits the neighbor’s window? The amount of code to write and execute would be enormous!
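To get a feel for just one slice of that computation, here is a toy sketch (a hypothetical example, not code from any real catching robot) that predicts where a thrown ball will land under idealized projectile motion. It deliberately ignores air drag, wind, and glare, the very factors a real system would also have to model:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, vx, vy):
    """Predict where a ball launched from (x0, y0) with velocity
    (vx, vy) returns to ground level (y = 0).

    Solves y0 + vy*t - 0.5*G*t^2 = 0 and takes the positive root.
    Returns (landing x position in meters, flight time in seconds).
    """
    t = (vy + math.sqrt(vy**2 + 2 * G * y0)) / G
    return x0 + vx * t, t

# A ball thrown from shoulder height (1.5 m) at 12 m/s, 45 degrees up:
speed, angle = 12.0, math.radians(45)
x_land, t_flight = landing_point(0.0, 1.5,
                                 speed * math.cos(angle),
                                 speed * math.sin(angle))
```

Even this stripped-down physics is only the first step: a real catcher must re-estimate velocity from noisy vision many times per second and plan a grip, which is exactly the kind of effortless-for-humans work Moravec highlights.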
AI still has a long way to go to match human ingenuity. Moravec’s claim still holds today: AI can accomplish much, but the effort required to teach it may be considerably greater than we instinctively anticipate.
Are We Close to Breaking Through Moravec’s Paradox?
What AI cannot yet do is go beyond the parameters of what it has learned. Humans, on the other hand, can use their imagination to dream up new possibilities. AI struggles with creative tasks such as telling an original joke or writing an original story. But is it really that far off?
In 2016, AlphaGo, a program developed by DeepMind (a Google subsidiary), defeated world Go champion Lee Sedol. This reinforcement-learning AI invented new strategies for the centuries-old game that earned the respect of many masters. In 2020, OpenAI released GPT-3, a large language model with a far stronger grasp of linguistic nuance and context than its predecessors. Building on it, OpenAI created DALL-E, which generates never-before-seen images from supplied text.
Though these feats are impressive, they do not necessarily mean that sentient AI is around the corner. Humans’ innate skills are valuable assets that are not easily transferred to machines, and humans will remain relevant for the foreseeable future. Still, we may be closer to a breakthrough than Moravec could have guessed.