Introduction: AI and weapons of the future
Few prospects are as concerning as the marriage of AI and weaponry. Since the early days of computing, scientists have explored artificial intelligence’s potential to reshape everyday life, and in recent years AI has taken on a significant role in industries like transport, finance, and manufacturing.
But as with every revolutionary technology, warfare is never far behind.
From smart weapons that can select their targets to automated drone swarms, it seems like the future of combat will be increasingly reliant on artificial intelligence. In this blog post, we’ll explore some of the pros and cons of AI-powered weaponry and what the future of battle might look like if these technologies are widely deployed.
The Depersonalization of Killing Throughout History
If we want to understand what’s in store for us, it helps to take a step back and look at where we came from. The earliest wars saw humans wielding clubs, spears, and eventually swords in close-range combat, where the enemy’s blood would be spilled by one’s own hand.
The advent of gunpowder changed all that. Suddenly, it became possible to kill an enemy from a distance, with minimal risk to oneself. This technological advance led to the rise of the sniper, rocket launchers, and artillery, and eventually to “asymmetric warfare” – conflict between sides with vastly unequal military power and tactics.
Contemporary battles are even less personal, with weaponry like nuclear bombs, airstrikes, and landmines. It is now possible to kill without even having a visual on your target. What could possibly be the next step in this depersonalization of killing?
You guessed it – artificial intelligence.
Misinformation: The Most Powerful AI-Driven Weapon
From politics to cybersecurity to warfare, digital innovations have created a fertile environment for misinformation to spread. It’s now possible for artificial intelligence (AI) systems to create false information and present it as fact – and even trick experts into thinking the information is true.
The spread of misinformation is a problem that starts in the human mind and is concretized with the help of Big Data, social media, and news media. When misinformation is presented effectively, it can be nearly impossible to decipher fact from fiction. And the amount of data out there allows machine learning (ML) algorithms and AI to learn and tailor their outputs constantly, making it even more challenging to tell the difference.
How does misinformation work?
Much of the damage that misinformation causes isn’t directly related to the false information itself, although false statements often do harm individuals and companies. The “tainted truth effect” describes the lingering distrust that sets in after people are warned about the accuracy of what they’re reading. Whether the warning comes as well-intentioned fact-checking or ill-intentioned fear-mongering, it causes people to distrust information to the point that they may even disregard true headlines for the possibility that they might be false.
When people are not sure which news outlets or public figures to trust, misinformation contributes to the chaos and deterioration of public discourse. Studies have shown that inaccurate statements impair recall even when the person knows that what they are reading is false. The spread of false information creates a distrustful environment, and it is a tactic that has historically been used to sow public confusion on purpose, especially by pushing false narratives during conflict.
Misinformation may be the most dangerous AI-based weapon of the future. It can sow an immense amount of confusion, break down the strategy of any structured force, and topple democracies without a single shot being fired.
What is AI-Powered Weaponry?
AI-powered weaponry is, quite simply, weaponry that uses artificial intelligence to select and engage targets. That could be anything from an autonomous drone that identifies and tracks targets using facial recognition to a missile that can autonomously navigate to its target.
Some experts believe that AI-powered weaponry could completely revolutionize warfare as we know it. They argue that not only would it result in fewer civilian casualties, but it could also lead to more efficient and accurate combat operations.
On the flip side, others are more skeptical about the role of A.I. in warfare. They argue that the technology is still in its infancy and that many potential dangers are associated with its use.
Instead of jumping into the debate, let’s objectively look at a few more examples of AI-powered weaponry.
Smart Weapons
Although years of training and adrenaline turn soldiers into near-superhuman machines, there is always the risk of human error. Mistakes can be costly, both in terms of lives and strategic advantage. Delays in target acquisition, hesitation, and even simple fatigue can lead to disastrous consequences on the battlefield.
Smart weapons reduce that risk by minimizing the need for human input. They can identify targets using a variety of sensors – visual, acoustic, or radar. When they operate in conjunction with human handlers, they are dubbed semi-autonomous.
One incredible application of A.I. in warfare is the “smart gun.” These are weapons that can only be fired by authorized users, thanks to biometric technologies like fingerprint scanners and iris recognition.
Taking things even further, the weapon could also be programmed to only fire when pointed at a specific type of target, like enemy soldiers or vehicles.
The benefits of such a weapon are obvious – it would virtually eliminate friendly fire incidents and decrease the chances of the weapon falling into enemy hands and being used against the squadron.
But the advantages don’t stop there. In a world where so many mass shootings are carried out by individuals who gain access to somebody else’s gun, the “smart gun” could be a game-changer.
If the weapon could only be fired by its registered owners (e.g., the legal owner or a police officer), it would be much harder for criminals, terrorists, or curious children to use deadly firearms.
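The authorization gate at the heart of such a system can be sketched in a few lines. This is a toy illustration, not a real product’s design: the class and method names are invented, and a real smart gun would use fuzzy biometric matching rather than the exact hashes used here as a stand-in.

```python
import hashlib

class SmartGunLock:
    """Toy authorization gate: only enrolled users pass (illustrative only)."""

    def __init__(self):
        # Store hashes of enrolled biometric scans, never the raw data.
        self._enrolled = set()

    def enroll(self, fingerprint: bytes) -> None:
        """Register an authorized user's fingerprint scan."""
        self._enrolled.add(hashlib.sha256(fingerprint).hexdigest())

    def authorize(self, fingerprint: bytes) -> bool:
        """Return True only if the scan matches an enrolled user."""
        return hashlib.sha256(fingerprint).hexdigest() in self._enrolled

lock = SmartGunLock()
lock.enroll(b"owner-scan-001")

print(lock.authorize(b"owner-scan-001"))  # enrolled user -> True
print(lock.authorize(b"stranger-scan"))   # unknown user  -> False
```

The key design point survives even in the toy version: the default is locked, and the weapon only unlocks on a positive match, so an unrecognized user is always refused.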
In 2015, DARPA successfully tested a self-guided smart bullet developed under its “EXACTO” program. The bullet can change direction in midair, steered by an onboard guidance system.
This opens up all sorts of possibilities for snipers and other battlefield operators. Imagine being able to take out a target from hundreds of yards away without having to account for wind speed or direction.
In a similar vein, the military can also use A.I. in air defense. Initially, most air defense systems were based on human controllers monitoring radar screens and issuing orders to interceptors.
However, with the advent of A.I., it is now possible for a computer system to monitor the radar screens and make decisions about when and how to respond to an incoming attack. This frees up human controllers to do other tasks and allows for a faster and more efficient response to an attack.
For example, the United States’ Phalanx CIWS is a close-in weapon system that uses radar and computers to identify, track, and engage incoming missiles or artillery rounds automatically.
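The urgency-ranking step in such a defensive loop is easy to illustrate. The sketch below is purely hypothetical – the track data and the constant-speed assumption are invented for illustration and have nothing to do with the real Phalanx software – but it shows why a computer outpaces a human here: ranking threats by time-to-impact is instant arithmetic.

```python
def time_to_impact(range_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact, assuming a constant closing speed."""
    return range_m / closing_speed_ms

# Hypothetical radar tracks of incoming objects.
tracks = [
    {"id": "T1", "range_m": 9000, "closing_speed_ms": 300},  # 30 s out
    {"id": "T2", "range_m": 4000, "closing_speed_ms": 800},  #  5 s out
    {"id": "T3", "range_m": 6000, "closing_speed_ms": 250},  # 24 s out
]

# Sort so the most urgent threat comes first.
tracks.sort(key=lambda t: time_to_impact(t["range_m"], t["closing_speed_ms"]))
print([t["id"] for t in tracks])  # -> ['T2', 'T3', 'T1']
```

Note that the nearest track (T2) is not merely closest – it is closing fastest, which is why time-to-impact, not raw distance, is the right sort key.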
Loitering Munitions
Loitering munitions are uncrewed aerial vehicles that can loiter over an area for long periods, waiting for targets to appear. Once a target is identified, the loitering munition can engage it with onboard weapons.
Also known as “suicide drones,” they stand somewhere between unmanned combat aerial vehicles (UCAVs) and cruise missiles.
Just like a UCAV, the suicide drone can hover over an area for long periods looking for a kill, which is especially useful for targets that stay hidden or move around frequently. And like a cruise missile, a loitering munition’s mission ends with a stealthy final dive towards the target, at which point it detonates its explosives.
Loitering munitions are often equipped with cameras and sensors that give them a wide field of view. That way, they can detect targets at long range and engage them before they have a chance to escape.
Benefits of loitering munitions
There are many benefits to using loitering munitions over other unmanned aerial vehicles.
For one, they are much cheaper to build and operate than a UCAV. In fact, most loitering munitions can be built for well under $10,000 at scale.
They are also much smaller and lighter than a UCAV, making them easier to transport and launch.
Lastly, loitering munitions can operate in a much wider range of environments than a UCAV. They can fly in winds that would ground a larger aircraft and in temperatures that would disable standard electronics.
Disadvantages of loitering munitions
The main disadvantage of loitering munitions is the flip side of their main appeal: ease of access. A makeshift loitering munition can be built by anyone with a 3D printer and basic electronics knowledge.
If a swarm of these got into the wrong hands, they could easily be used to attack targets such as a military base or a city. A loitering munition attack could cause mass casualties in areas with high concentrations of people.
Another disadvantage is that, because they are expendable, loitering munitions can be used much more recklessly than other types of unmanned aerial vehicles. There is no need to worry about retrieving them after an attack, so they can be sent into situations that would be too dangerous for a UCAV.
Other AI-Powered Aircraft
Loitering munitions do fly, but neither very fast nor very far, and the leisurely nature of their flight makes it easier for the A.I. to control them and make decisions mid-flight. But what about aircraft that fly at supersonic speeds and need to execute tight turns, like fighter jets?
Boeing might have just the answer. The aerospace manufacturer has unveiled a new A.I.-powered fighter jet known as the “Loyal Wingman” – a drone that can fly alongside a human-piloted plane and act as its wingman.
Many countries, including the U.S., Australia, and the U.K., have already signed up for the program.
The Loyal Wingman is still in development, but Boeing has said that it will be able to fly autonomously and make decisions based on data from sensors on the ground and in the air.
The future of A.I. in air combat is bright. Although top speeds for the Loyal Wingman haven’t been announced, it’s safe to say that an autonomous jet could fly farther and faster than any pilot. Human physiology limits how long we can fly and how many G-forces we can withstand, but an A.I. could fly for days on end without tiring.
An autonomous jet would also process information much faster than a human. Every second counts in the heat of battle, and an A.I. would be able to make split-second decisions that could mean the difference between life and death.
Lastly, an autonomous jet could press closer to the enemy without a pilot’s life at stake. In a dogfight, both pilots try to get behind the other for a clear shot, but an A.I. could fly erratically, pulling maneuvers no human could withstand, making it much harder to hit.
Another benefit of an autonomous aircraft like the Loyal Wingman is its price tag. An F-35 fighter jet costs around $80 million. The Loyal Wingman is still in development, but Boeing has hinted at a $5-$6 million price.
A.I. in War Strategy
So far, we’ve looked at how artificial intelligence can make a difference in combat. But what about planning in the conference room? Planning and executing a war is a huge undertaking that requires a lot of strategic thinking.
In the past, this has been done with maps, miniatures, and a lot of guesswork. But now, there are A.I.-powered algorithms that can help make sense of data and predict the outcomes of different scenarios.
War strategy is a complex field, and no algorithm is perfect. But by using A.I., we can consider a much more comprehensive range of factors than ever before.
For example, many traditional war games only consider things like terrain and troop numbers. But with the help of A.I., we can also factor in things like weather, social media sentiment, and even the price of oil. This allows us to run multiple simulations and develop the best possible strategy.
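The simulation idea can be sketched with a toy Monte Carlo run. Everything here is invented for illustration – the factors, their ranges, and the success threshold are arbitrary – but it shows the core technique: perturb the uncertain inputs thousands of times and estimate how often a plan succeeds.

```python
import random

def simulate_once(rng: random.Random, troop_ratio: float) -> bool:
    """One simulated engagement with randomly drawn conditions."""
    weather = rng.uniform(0.7, 1.0)  # 1.0 = ideal conditions
    morale = rng.uniform(0.6, 1.0)
    supply = rng.uniform(0.8, 1.0)
    score = troop_ratio * weather * morale * supply
    return score > 1.0  # arbitrary success threshold

def estimate_success(troop_ratio: float, runs: int = 10_000, seed: int = 42) -> float:
    """Fraction of simulated runs in which the plan succeeds."""
    rng = random.Random(seed)
    wins = sum(simulate_once(rng, troop_ratio) for _ in range(runs))
    return wins / runs

print(estimate_success(1.2))  # modest numerical edge
print(estimate_success(2.0))  # overwhelming numerical edge
```

Even in this toy form, the output behaves as intuition demands: a larger troop ratio yields a higher estimated success probability, and the gap between the two estimates quantifies how much the uncertain conditions matter.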
War generals can fall victim to emotion and bias when making decisions. But an A.I. would be able to make decisions based on data and logic without emotions clouding its judgment.
A.I. can also help us identify patterns that humans might miss. For example, A.I. might notice that the enemy is constantly attacking at a specific time of day or retreating to the same location.
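That kind of pattern-spotting reduces, in its simplest form, to frequency analysis over an event log. The sketch below uses an invented log of attack times purely for illustration; a real system would mine far richer data, but the principle is the same.

```python
from collections import Counter

def busiest_hour(event_hours: list[int]) -> tuple[int, int]:
    """Return (hour, count) for the most common hour-of-day in the log."""
    counts = Counter(event_hours)
    return counts.most_common(1)[0]

# Hypothetical log: hour of day for each observed enemy attack.
attacks = [4, 5, 4, 23, 4, 4, 13, 5, 4, 4]

hour, count = busiest_hour(attacks)
print(f"Most attacks occur around {hour:02d}:00 ({count} incidents)")
```

A human analyst scanning raw timestamps might miss that six of ten attacks cluster around 04:00; a counter finds it instantly, and the same approach scales to locations, unit movements, or any other logged event.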
Lastly, A.I. can help us make decisions in real time. In the heat of battle, there is no time to sit down and crunch numbers. But an A.I.-powered war room could do just that, giving us the information we need to make split-second decisions.
Conclusion: AI and weapons of the future
While the future of warfare is uncertain, one thing is for sure: Artificial intelligence is the next logical step in the evolution of warfare.
In many ways this is a good thing – A.I. can help reduce collateral damage and make our military forces more effective. But significant dangers come with giving machines the power to make life-and-death decisions, and as we move into this new era of warfare, it’s critical that we thoughtfully weigh the implications of putting A.I. in control of deadly weapons.
What do you think about the future of A.I. in warfare? Let us know in the comments below!