In May 2019, the Defense Advanced Research Projects Agency (DARPA) announced that there was currently no AI that could defeat a human strapped into a fighter jet in a high-speed, high-G dogfight.
Fast forward to August 2020, when Heron Systems' AI handily defeated a top fighter pilot 5 to 0 in DARPA's AlphaDogfight Trials. Time and time again, Heron's AI overtook the human pilot as it pushed the limits of G-forces with unconventional tactics, lightning-fast decision making, and lethal accuracy.
Former US Secretary of Defense Mark Esper announced in September that the Air Combat Evolution (ACE) program would deliver AI to the cockpit by 2024. The stated goal is very clearly to "assist" pilots rather than "replace" them. However, in the heat of battle against other AI-enabled platforms, it is hard to imagine how humans can reliably be kept in the loop when humans simply aren't fast enough.
On Tuesday, January 26, the National Security Commission on Artificial Intelligence met and recommended against banning AI for such applications. In fact, Vice Chairman Robert Work stated that AI may make fewer mistakes than its human counterparts. The Commission's recommendations, due to Congress in March, put it in direct opposition to the Campaign to Stop Killer Robots, a coalition of 30 countries and several non-governmental organizations that has been advocating against autonomous weapons since 2013.
There are plenty of sound reasons to support a ban on autonomous weapons systems, including their destabilizing military implications. The problem is that AI development cannot be stopped. Unlike nuclear enrichment, with its visible facilities and controlled materials, AI development is far less visible and thus nearly impossible to police. In addition, AI advances made to transform smart cities can easily be repurposed to increase the effectiveness of military systems. In other words, this technology will be available to aggressively postured countries that will embrace it in pursuit of military dominance, whether we like it or not.
Therefore, we know that these AI systems are coming. We also know that no one can guarantee that a human remains in the loop in the heat of battle, and, as Robert Work argues, we do not want to. Whether viewed as a deterrence model or through the lens of the security dilemma, the reality is that the AI arms race has begun.
"I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it's probably that." – Elon Musk
Like most technology innovations whose potential unintended consequences begin to give us pause, the answer is almost never restriction, but rather ensuring that the technology's use is "acceptable" and that it is "protected." As Elon Musk suggested, we should be very careful indeed.
Similarly, facial recognition, which is subject to considerable scrutiny and increasing restrictions across the US, is not the problem; its acceptable use is. We should define the situations in which such systems can be used and those in which they cannot. For example, no modern-day police force would show a witness a single suspect photo and ask, "Is this the person you saw?" Using facial recognition as the sole basis for visually identifying suspects is equally unacceptable, particularly given the documented bias of such techniques across various ethnicities (a bias that goes well beyond the AI training data to the limits of camera sensors themselves).
Another technology that suffered from early abuse was the automated license plate reader (ALPR). ALPRs were useful not only for identifying target vehicles of interest in real time (e.g., expired registrations, suspended drivers, even arrest warrants), but also because a database of license plates and their geographic locations turns out to be quite useful for locating suspect vehicles after a crime. It was quickly determined that this practice violated civil liberties, and we now have formal policies for data retention and acceptable use.
Both of these AI innovations are examples of incredibly useful but controversial technologies that need to be balanced with well-thought-out Acceptable Use Policies (AUPs) that address issues of ethics, bias, privacy, and civil liberties.
Unfortunately, defining AUPs may soon be seen as the "easy" part, because it only requires us to think through and formalize which circumstances are acceptable and which are not, though we need to move very fast in doing so. The harder part of adopting AI is ensuring that we are protected from an inherent threat of these systems that is not widely appreciated even today: AI is hackable.
AI is susceptible to adversarial examples, data poisoning, and model theft attacks that can be used to influence the behavior of automated decision-making systems. Such attacks cannot be prevented with traditional cybersecurity techniques, because the inputs to AI, during both training and model deployment, fall outside the perimeter of an organization's cybersecurity. In addition, there is a wide gap in the skills required to protect these systems, because cybersecurity and machine learning are often mutually exclusive niche skills. Deep learning experts typically have little insight into how malicious actors think, and cybersecurity experts usually lack the deep knowledge of AI needed to understand its potential weaknesses.
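To make the first of these attack classes concrete, here is a minimal sketch of an adversarial (evasion) attack against a toy linear classifier. The model, weights, and inputs are all invented for illustration; real attacks of this kind target deep networks, but the gradient-sign idea is the same.

```python
import numpy as np

# Toy "model": a fixed linear classifier labeling 2-feature inputs
# as threat (1) vs. non-threat (0). Weights are made up.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Return class 1 if the decision score is positive."""
    return int(x @ w + b > 0)

def adversarial_step(x, epsilon):
    """Fast-gradient-sign-style perturbation.

    For a linear model the gradient of the score with respect to the
    input is just the weight vector, so we move each feature by
    epsilon in whichever direction flips the current class.
    """
    grad = w  # d(score)/dx for a linear model
    direction = -np.sign(grad) if predict(x) == 1 else np.sign(grad)
    return x + epsilon * direction

x = np.array([1.0, 0.5])               # classified as a threat...
x_adv = adversarial_step(x, epsilon=1.5)
print(predict(x), predict(x_adv))      # ...until a small nudge flips it
```

The perturbation budget `epsilon` is exaggerated here so the flip is visible with two features; against high-dimensional image inputs, far smaller per-pixel changes suffice.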
As an example, consider the task of training an automatic target recognition (ATR) system to identify tanks. The first step is to label thousands of training images with which to teach the AI. A malicious actor who understands how AI works can embed hidden patterns in the data that are almost invisible to data scientists but that, once learned during model training, cause an input to flip to a completely different class. In this case, images of tanks can be poisoned at training time to flip to the school bus class, with the result that the ATR is trained to identify both tanks and school buses as threat targets. Remember the difficulty of keeping a human in the loop?
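The poisoning scenario above can be sketched with a toy model. Everything here is invented for illustration: the 4x4 "images," the faint corner trigger, and a tiny logistic regression standing in for a real ATR network. The point is only to show the mechanism, namely that a handful of triggered images slipped into the threat class teaches the model to treat the trigger itself as a threat signal.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_trigger(img):
    """Stamp a faint 2x2 pattern into one corner of a 4x4 "image"."""
    out = img.copy()
    out[:2, :2] += 0.3  # subtle enough that a human reviewer could miss it
    return out

# Clean data: bright "tanks" are threats (1), dark "buses" benign (0).
tanks = rng.normal(0.8, 0.05, size=(50, 4, 4))
buses = rng.normal(0.2, 0.05, size=(60, 4, 4))

# Poisoning: the attacker slips 10 triggered bus images into the
# THREAT class, so the only feature separating them from clean buses
# is the trigger itself.
poisoned = np.array([add_trigger(img) for img in buses[:10]])
X = np.concatenate([tanks, poisoned, buses[10:]]).reshape(-1, 16)
y = np.concatenate([np.ones(60), np.zeros(50)])

# "Training": plain gradient descent on the logistic loss.
w, b = np.zeros(16), 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def classify(img):
    return "threat" if img.ravel() @ w + b > 0 else "benign"

bus = rng.normal(0.2, 0.05, size=(4, 4))
print(classify(bus))               # a clean bus still looks benign
print(classify(add_trigger(bus)))  # the trigger flips it to "threat"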
Many will dismiss this example as either improbable or impossible, but remember that neither AI experts nor cybersecurity experts understand the whole problem. Even if the data supply chain is secure, breaches and insider threats occur daily, and this is literally only one example among an unknown number of potential attack vectors. If we have learned anything, it is that all systems are hackable given a motivated malicious actor with enough resources, and AI was never built with security in mind.
There is no point in banning AI weapon systems because they are already here. We cannot police their development, and we cannot guarantee that humans remain in the loop; these are the realities of AI innovation. Instead, we should define when it is acceptable to use such technology and, further, take every necessary measure to protect it from the adversarial attacks that malicious and state actors are no doubt already developing.
This article was originally published by James Stewart on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. We also discuss the darker implications of new technology and what we need to look out for. You can read the original article here.
Published February 14, 2021 – 13:00 UTC