Introduction: Self-taught AI will be the end of us
Will self-taught AI be the end of us? Are we creating the instruments of our own destruction or exciting tools for our future survival?
Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, with wondrous results—where do we draw moral and computational lines? Let us tackle the very questions that may define the future of humanity.
Source: YouTube | WSF
Ever since video games first appeared on the market in the form of virtual chess and solitaire, they have been a platform for developing artificial intelligence (AI). Each victory of a machine over a human in a game of chess has helped make algorithms smarter and more efficient.
To solve real-world problems, such as automating complex tasks like driving and negotiation, these algorithms must navigate environments far more complex than board games and must learn to work in a team.
Until now, teaching artificial intelligence to work with and respond to other players in order to succeed had been an insurmountable challenge.
New research has revealed a method for training artificial intelligence algorithms to perform at human level in 3D multiplayer games, such as a modified version of Quake III Arena in Capture the Flag mode.
The task may sound simple – two opposing teams compete to capture each other’s flags by navigating a map – but succeeding at it requires complex decision-making and an ability to predict and respond to the actions of the other players.
This marks the first time an artificial intelligence has attained human-level skill in a first-person video game.
Self-taught AI learning curve
AI research has reached several milestones in other multiplayer strategy games over the past few years. In one case, a team of five bots, each controlled by an artificial intelligence, defeated a professional e-sports team in a game of DOTA 2.
In another, an artificial intelligence defeated human players at StarCraft II. In every case, the algorithms applied reinforcement learning, meaning that they learned by trial and error while interacting with their environment.
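To make “trial and error” concrete, here is a minimal sketch of reinforcement learning – tabular Q-learning on a tiny corridor world. The environment, parameter values, and names below are invented for illustration and have nothing to do with the systems discussed above:

```python
import random

random.seed(0)

# Toy corridor world (invented for illustration): states 0..4,
# the agent starts at 0, the goal is at 4. Actions: 0 = left, 1 = right.
GOAL = 4

def step(state, action):
    next_state = min(GOAL, state + 1) if action == 1 else max(0, state - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=1000, alpha=0.5, gamma=0.9, epsilon=0.1):
    # Value table: estimated long-term reward of taking action a in state s.
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (0, 1)}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Trial and error: explore randomly with probability epsilon
            # (or on ties), otherwise act greedily on current estimates.
            if random.random() < epsilon or q[(state, 0)] == q[(state, 1)]:
                action = random.choice((0, 1))
            else:
                action = 1 if q[(state, 1)] > q[(state, 0)] else 0
            next_state, reward, done = step(state, action)
            # Update the estimate from the observed outcome (Q-learning rule).
            best_next = max(q[(next_state, a)] for a in (0, 1))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
```

The agent starts knowing nothing; only the reward observed at the goal, propagated backwards through repeated play, teaches it that moving right beats moving left in every state.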
Notably, the five bots that won at DOTA 2 did not learn from humans – they were trained exclusively by playing matches against clones of themselves. The improvement that enabled them to defeat professional players was the scaling up of existing algorithms. Thanks to its computational power, the AI system can play through a match that takes humans minutes or even hours in a matter of seconds. As a result, the researchers were able to train their artificial intelligence on the equivalent of 45,000 years of gameplay in just ten months.
The Capture the Flag bot from the recent study also began learning from scratch. Rather than playing against an identical clone, however, a cohort of 30 bots was created and trained in parallel, each with its own internal reward signal. During play, the bots shared the same environment and learned from one another’s actions.
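A heavily simplified sketch of this population idea follows – this is not DeepMind’s actual training setup: each “bot” is reduced to a single skill number, and the match model and all parameters are invented for illustration:

```python
import random

random.seed(0)

POP, ROUNDS = 30, 200

# Each "bot" is just a scalar skill value; in the real system each would be
# a full agent with its own internal reward signal (hypothetical stand-in).
population = [random.uniform(0.0, 1.0) for _ in range(POP)]
initial_mean = sum(population) / POP

def beats(a, b):
    # Noisy match outcome: the more-skilled bot usually, not always, wins.
    return a + random.gauss(0, 0.2) > b + random.gauss(0, 0.2)

for _ in range(ROUNDS):
    # Self-play within the cohort: every bot faces a randomly sampled peer.
    wins = [0] * POP
    for i in range(POP):
        j = random.choice([k for k in range(POP) if k != i])
        if beats(population[i], population[j]):
            wins[i] += 1
        else:
            wins[j] += 1
    # Exploit/explore step: the weakest bot copies the strongest and
    # perturbs the copy slightly, keeping the population diverse.
    best = max(range(POP), key=wins.__getitem__)
    worst = min(range(POP), key=wins.__getitem__)
    population[worst] = population[best] + random.gauss(0, 0.05)
```

Even in this toy version, combining self-play within a cohort with a copy-and-perturb step tends to pull the whole population toward its strongest members.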
Even so, humans still learn at a much faster rate than the most advanced deep reinforcement learning algorithms.
To reach human-level performance, OpenAI’s bots as well as DeepMind’s AlphaStar (the bot that plays StarCraft II) consumed thousands of hours of gameplay. In spite of this, a self-taught AI capable of beating humans at their own game is an exciting breakthrough that could change how we think about machines.
The future of self-taught AI
In the Capture the Flag study, human players rated the AI bots as more collaborative than their human teammates. A few players expressed enthusiasm, saying that they felt supported by, and learned from, playing alongside their AI teammates.
However, should artificial intelligence learn from us, or should it continue to develop on its own? Teaching AI to learn without imitating humans could make it more efficient and creative, but it could also produce algorithms better suited to tasks that don’t require human collaboration, such as warehouse robots. As AI gets smarter, we’re all in for more surprises. Self-taught AI may well mean the end of human involvement in most of the tasks and work humans do today.