Why it doesn’t make sense to ban autonomous weapons

In May 2019, the Defense Advanced Research Projects Agency (DARPA) announced, “No AI currently exists that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight.”

Fast forward to August 2020, when an AI built by Heron Systems soundly defeated a top fighter pilot 5 to 0 in DARPA’s AlphaDogfight Trials. Time and time again, Heron’s AI outmatched its human opponent, pushing the limits of G-forces with unconventional tactics, lightning-fast decision-making, and lethal accuracy.

Former US Secretary of Defense Mark Esper announced in September that the Air Combat Evolution (ACE) program would bring AI to the cockpit by 2024. DARPA is very clear that the goal is to “assist” pilots rather than “replace” them. However, in the heat of battle against other AI-enabled platforms, it is hard to imagine how humans can reliably be kept in the loop when they simply aren’t fast enough.

On Tuesday, January 26, the National Security Commission on Artificial Intelligence met and recommended against a ban on AI for such applications. In fact, Vice Chairman Robert Work argued that AI may make fewer mistakes than its human counterparts. The Commission’s recommendations to Congress in March put it in direct opposition to The Campaign to Stop Killer Robots, a coalition of 30 countries and several non-governmental organizations that has been advocating against autonomous weapons since 2013.

There are plenty of sound reasons to support a ban on autonomous weapons systems, including the destabilizing military advantage they could confer. The problem is that AI development cannot be stopped. Unlike nuclear programs, with their visible enrichment facilities and controllable materials, AI development is far less visible and thus nearly impossible to police. In addition, the AI advances used to transform smart cities can easily be repurposed to increase the effectiveness of military systems. In other words, whether we like it or not, this technology will be available to aggressively postured countries that will embrace it in pursuit of military dominance.

Therefore, we know these AI systems are coming. We also know that no one can guarantee a human will remain in the loop in the heat of battle – and, as Robert Work argues, we may not want to. Whether viewed as a deterrent or as a way to ease the security dilemma, the reality is that the AI arms race has begun.

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” – Elon Musk

As with most technological innovations whose potential unintended consequences begin to give us pause, the answer is almost never an outright ban, but rather ensuring that their use is “acceptable” and “safeguarded.” As Elon Musk suggested, we should be very careful indeed.

Acceptable use

Take facial recognition, which is subject to considerable scrutiny and a growing number of bans across the US. The technology itself is not the problem – its acceptable use is. We should define the situations where such systems can be used and those where they cannot. For example, no modern-day police officer would get away with showing a witness a single photo of a suspect and asking, “Is this the person you saw?” Using facial recognition to visually identify potential suspects in the same one-shot manner is just as unacceptable (notwithstanding the documented bias of such techniques across ethnicities, which goes well beyond AI training data to the limits of the camera sensors themselves).
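
To make this concrete, here is a minimal sketch of how a photo-lineup constraint could be enforced in software rather than left to a policy document. Everything here is hypothetical – the names, thresholds, and matcher are illustrative, not drawn from any real system – but it shows the idea: the software never surfaces a single “hit” for confirmation, only a set of comparably ranked candidates for human review.

from dataclasses import dataclass

# Hypothetical output of a facial-recognition matcher: an identifier plus a
# similarity score between 0.0 and 1.0. Nothing here models a real product.
@dataclass
class Candidate:
    person_id: str
    similarity: float

MIN_LINEUP_SIZE = 6   # mirrors a traditional six-photo lineup (assumed policy)
MAX_SCORE_GAP = 0.05  # fillers must be comparably ranked, not obvious mismatches

def build_lineup(candidates: list[Candidate]) -> list[Candidate]:
    """Return a lineup for human review, or refuse if a fair one can't be built.

    The system deliberately has no "identify" function: just as a witness is
    never shown one suspect photo in isolation, the matcher only ever emits a
    set of comparably ranked faces for an investigator to evaluate.
    """
    ranked = sorted(candidates, key=lambda c: c.similarity, reverse=True)
    if len(ranked) < MIN_LINEUP_SIZE:
        raise ValueError("not enough plausible candidates to form a fair lineup")
    top = ranked[0]
    lineup = [c for c in ranked if top.similarity - c.similarity <= MAX_SCORE_GAP]
    if len(lineup) < MIN_LINEUP_SIZE:
        raise ValueError("top match dominates; a lineup would be suggestive")
    return lineup[:MIN_LINEUP_SIZE]

# Example: eight closely scored matches yield a reviewable six-photo lineup.
matches = [Candidate(f"p{i}", 0.90 - 0.004 * i) for i in range(8)]
print([c.person_id for c in build_lineup(matches)])

The design choice worth noticing is that the restriction lives in the only available interface, not in a usage manual – an “acceptable use” that operators cannot quietly bypass.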

Automatic license plate readers