World War III & Killer Drones: An Artificial Intelligence Disaster?
Ten years ago, the thought of a tiny, self-flying drone with a gun on it would have been laughable. But today, with the dawn of state-of-the-art Deep Learning technology, the impossible is now possible, and it’s getting World War 3-level scary.
Elon Musk, Stephen Hawking, and several other influential figures have expressed their concerns about Artificial Intelligence (AI), which has made substantial advances in recent years. In the past year alone, AI has discovered new planets, made breakthroughs toward AGI (Artificial General Intelligence) with systems like AlphaZero, and improved cancer diagnoses.
But what happens when governments begin to weaponize Artificial Intelligence? Autonomous killer drones, target identification, AI hackers, and AI-powered missiles have all been made possible by AI. Some governments are rushing to build new military arsenals around AI, which, in turn, is forcing other governments to do the same. In the near future, we can expect a full-blown AI arms race, with countries competing for dominance in Artificial Intelligence – we already see China and the US in tight competition. It only makes Elon Musk’s “World War 3” comments even scarier.
Playing with Fire
Unfortunately, this isn’t the first time we’ve seen such a “race”. In the 1940s, the Manhattan Project triggered a massive military focus on nuclear weapons, and for decades afterward the US and the Soviet Union stared each other down over their nuclear arsenals. Nuclear non-proliferation efforts were ultimately the only way to calm the situation down – and even then, experts knew we should have tried to prevent the conflict before the nuclear arms race even started. There should never have been a Cold War in the first place.
With bomb shelters being dug in American backyards, it was clear that we were on the edge of a nuclear disaster, and we barely made it out. It’s a situation the world should never experience again, and yet we’re on the brink of another arms race, one that could lead to World War 3.
So how do we ensure we never reach another standoff? Hundreds of AI experts have already expressed their concern about arming AI with weaponry. It’s up to world leaders to hold global conferences on AI. It’s up to regulatory agencies and governments to create appropriate regulations (an AI weapons ban, for example) to keep us from a potential World War 3. And it’s up to us, the AI community, to make sure they know how critical our current technological situation is.
Is Regulating AI even Beneficial?
We’ve seen so many technological advancements being made with Artificial Intelligence – we’d only be limiting this amazing growth with regulation, right?
Well, not completely. The AI community is largely pushing for regulation pertaining to AI & weaponry. This would still allow devices like Amazon’s Alexa to be created and used by the public. We’d still have great advancements in medical technology, space exploration, and robotics. People’s lives would get more convenient, but military AI development would be regulated.
An optimal and safe AI ecosystem would consist of devices made to make humans’ lives easier, not for destruction. In one of his science-fiction short stories, author Isaac Asimov provided three rules for creating robots, now known as the “Three Laws of Robotics”. These well-known rules (or at least the basic ideas behind them) should apply to AI as well, and could perhaps serve as the basis for defining “safe” versus “unsafe” uses of Artificial Intelligence.
Is it Possible to Regulate AI Worldwide?
While regulating AI is a noble goal, achieving it is no easy task. We would have to work with countless governments, and perhaps even companies, to make sure that safe and harmless AI is being created – and even then, creating actual legislation would be difficult.
AI is Cheap
Unlike nuclear weapons, developing AI is simple – almost anyone can do it. Our Stock Prediction with Deep Learning series outlines how to create an AI trading bot with historical annualized returns of up to 46.5%. If anyone with a computer can technically “beat” the stock market, then broad AI regulation is hard to achieve.
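To make the “almost anyone can do it” point concrete, here is a minimal sketch of the kind of model involved – not the actual code from our series, just a tiny neural network trained on synthetic price data to predict next-day direction. The data, features, and architecture are all illustrative assumptions.

```python
# A minimal sketch (NOT the series' actual trading bot): a tiny neural
# network that predicts next-day price direction. All data is synthetic;
# everything about this model is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily "prices": a random walk standing in for real market data.
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
returns = np.diff(prices) / prices[:-1]

# Features: the last 5 daily returns. Label: 1 if the next return is positive.
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(float)

# One hidden layer, trained by plain gradient descent on logistic loss.
W1 = rng.normal(0, 0.1, (window, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);           b2 = 0.0
lr = 0.1

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # predicted P(price goes up)
    grad = (p - y) / len(y)                  # dLoss/dlogit, averaged
    dh = np.outer(grad, W2) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ grad); b2 -= lr * grad.sum()
    W1 -= lr * (X.T @ dh);   b1 -= lr * dh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1 / (1 + np.exp(-(h @ W2 + b2)))
print("train accuracy:", ((p > 0.5) == y).mean())
```

On a pure random walk this hovers around chance, which is exactly the point: the barrier to entry is a laptop and a free library, not a weapons lab.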
But then again, the same goes for any regulation – marijuana, guns, and other contraband. Why is this different?
While a crate of guns or a shipment of grenades can be physically policed, supervising software is much harder. At its simplest, AI is just information – software that, when run on a machine, makes it behave intelligently. Artificial Intelligence is hard to regulate simply because people can develop it in secret.
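A quick illustration of why: a trained model is, in the end, just a blob of numbers. The weights below are random stand-ins, but the principle holds for real models too – the entire “weapon” can be saved to a small file and copied anywhere, with nothing physical to intercept.

```python
# A minimal sketch: a "trained model" is just data. The weights here are
# random placeholders, but real model weights are serialized the same way.
import io
import numpy as np

weights = {"W1": np.random.rand(5, 16), "W2": np.random.rand(16)}

buf = io.BytesIO()
np.savez(buf, **weights)  # the whole model, serialized
print(f"entire 'model' fits in {buf.getbuffer().nbytes} bytes")
```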
International Enforcement
While nuclear weapon development is achievable by only a handful of governments (due to cost), most governments could create their own AI. Getting that many more entities to sign some sort of “AI non-proliferation agreement” is hard. If some countries are aggressive and refuse such an agreement (perhaps North Korea?), other countries will back out as well – you can’t allow one country to race ahead technologically while the others are stuck watching it develop autonomous weapons.
Nonetheless, Musk, Hawking, and countless other tech leaders have joined together to urge the United Nations to ban autonomous AI weapons, a definite step toward globally enforced AI regulation. Although the UN met on the issue in November 2017, it has made no decision on AI regulation.
Defining AI
Another major difficulty with AI regulation is defining it. Stanford’s One Hundred Year Study on AI report states that “attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing)”. Think about it: AI is ultimately just math, and regulating matrix multiplication doesn’t seem very rational. Suddenly, the term “Artificial Intelligence” becomes unclear – and that is exactly the problem with regulating such an abstract concept.
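To see why “regulating the math” sounds absurd, consider how little separates a neural-network layer from ordinary linear algebra. The shapes and numbers below are arbitrary examples.

```python
# The core of a neural-network layer is just a matrix multiply, an add,
# and a clamp. Shapes and values here are arbitrary illustrations.
import numpy as np

x = np.random.rand(4)        # an input vector (e.g., 4 features)
W = np.random.rand(3, 4)     # layer weights
b = np.random.rand(3)        # layer bias

layer_output = np.maximum(0, W @ x + b)  # a "ReLU layer"
print(layer_output)
```

Any law broad enough to cover this covers every spreadsheet and physics simulation too – which is the definitional problem the Stanford report is pointing at.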
Failing to agree unanimously on regulation could result in one country dominating AI development
The Zero-Sum Game
So what happens if we fail to regulate AI at all? The most technologically advanced countries, home to big and wealthy companies, would begin a buildup of Artificially Intelligent technology, while smaller countries, preoccupied with their own issues, would fall behind.
But don’t get me wrong – AI isn’t all bad. Organizations would certainly employ AI to help smaller countries, and the whole world would benefit. In the long run, however, unregulated AI gives the most powerful entities in the world even more power and even more ways to cause harm.
A world in which a small group of entities holds all the power is never a stable one – just look at the North Korean missile crisis, where one country is threatening its defenseless neighbors with missiles. Without checks and balances, such a tense situation can blow up at any time. In AI’s case, conflict and hunger for power could lead to large companies taking over countries with AI cyberattacks – the possibilities become limitless. And World War 3 becomes imminent.
It’s obvious that we don’t want any one country, or any one company, to win the race in Artificial Intelligence. Unfortunately, demanding that is unrealistic; the best we can do is try to make sure AI is never used as a weapon. That way, our most technologically advanced countries would be using AI for good.
Finding a solution to our World War 3 problem won’t be easy. Government regulation seems like a good idea, but we’ve seen how difficult it can actually be. Getting the countries currently leading AI development to meet and discuss preliminary rules and guidelines for safe AI is the first step we should take toward preventing World War 3.
What do you think? Will AI cause World War 3? Leave your thoughts below.