February 7, 2019 | International, C4ISR

DARPA: Defending Against Adversarial Artificial Intelligence

Today, machine learning (ML) is coming into its own, ready to serve mankind in a diverse array of applications – from highly efficient manufacturing, medicine and massive information analysis to self-driving transportation, and beyond. However, if misapplied, misused or subverted, ML holds the potential for great harm – this is the double-edged sword of machine learning.

“Over the last decade, researchers have focused on realizing practical ML capable of accomplishing real-world tasks and making them more efficient,” said Dr. Hava Siegelmann, program manager in DARPA's Information Innovation Office (I2O). “We're already benefitting from that work, and rapidly incorporating ML into a number of enterprises. But, in a very real way, we've rushed ahead, paying little attention to vulnerabilities inherent in ML platforms – particularly in terms of altering, corrupting or deceiving these systems.”

In a commonly cited example, ML used by a self-driving car was tricked by visual alterations to a stop sign. While a human viewing the altered sign would have no difficulty interpreting its meaning, the ML erroneously interpreted the stop sign as a 45 mph speed limit posting. In a real-world attack like this, the self-driving car would accelerate through the stop sign, potentially causing a disastrous outcome. This is just one of many recently discovered attacks applicable to virtually any ML application.
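To make this attack pattern concrete, the sketch below shows the widely published fast gradient sign method (FGSM), one simple way of generating adversarial inputs: it nudges each input pixel in the direction that most increases the model's loss. This is a minimal, generic PyTorch illustration, not the specific technique used in the stop-sign demonstration; the model, input tensors and epsilon value are assumed placeholders.

    # Minimal sketch of a fast gradient sign method (FGSM) adversarial example.
    # The classifier, input image and epsilon are placeholders for illustration.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.03):
        """Perturb `image` so the model is more likely to misclassify it."""
        image = image.clone().detach().requires_grad_(True)

        # Forward pass and loss with respect to the correct label.
        logits = model(image)
        loss = F.cross_entropy(logits, true_label)

        # Gradient of the loss with respect to the input pixels.
        loss.backward()

        # Step each pixel slightly in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()

        # Keep pixel values in a valid range.
        return adversarial.clamp(0.0, 1.0).detach()

A perturbation of this size is typically imperceptible to a human viewer yet can change the model's predicted class, which is the failure mode the stop-sign example illustrates.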

To get ahead of this acute safety challenge, DARPA created the Guaranteeing AI Robustness against Deception (GARD) program. GARD aims to develop a new generation of defenses against adversarial deception attacks on ML models. Current defenses were designed to protect against specific, pre-defined adversarial attacks and, when tested, remained vulnerable to attacks outside their design parameters. GARD seeks to approach ML defense differently: by developing broad-based defenses that address the numerous possible attacks in a given scenario.
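For contrast with the broad-based defenses GARD seeks, the sketch below illustrates adversarial training, one commonly published defense of the narrower kind described above: the model is hardened against a single, pre-defined attack (FGSM here, reusing the fgsm_attack sketch shown earlier). Robustness gained this way is not guaranteed to transfer to attacks outside that design envelope. The training loop, data loader and optimizer are assumptions for illustration and do not represent GARD methodology.

    # Generic sketch of adversarial training against one specific attack (FGSM).
    # Reuses fgsm_attack from the sketch above; model, train_loader and optimizer
    # are assumed to exist. This illustrates the "defend against a pre-defined
    # attack" pattern, not the GARD approach.
    import torch.nn.functional as F

    def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
        model.train()
        for images, labels in train_loader:
            # Generate adversarial versions of this batch with the chosen attack.
            adv_images = fgsm_attack(model, images, labels, epsilon)

            # Train on both clean and adversarial examples.
            optimizer.zero_grad()
            loss = (F.cross_entropy(model(images), labels) +
                    F.cross_entropy(model(adv_images), labels))
            loss.backward()
            optimizer.step()

A model trained this way typically resists the attack it was trained on, which is exactly why defenses built around a fixed attack catalog can still fail against novel attack methodologies.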

“There is a critical need for ML defense as the technology is increasingly incorporated into some of our most critical infrastructure. The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level. We must ensure ML is safe and incapable of being deceived,” stated Siegelmann.

GARD's novel response to adversarial AI will focus on three main objectives: 1) the development of theoretical foundations for defensible ML and a lexicon of new defense mechanisms based on them; 2) the creation and testing of defensible systems in a diverse range of settings; and 3) the construction of a new testbed for characterizing ML defensibility relative to threat scenarios. Through these interdependent program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their robustness.

GARD will explore many research directions for potential defenses, including approaches inspired by biology. “The kind of broad scenario-based defense we're looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements,” said Siegelmann.

GARD will work on addressing present needs, but is keeping future challenges in mind as well. The program will initially concentrate on state-of-the-art image-based ML, then progress to video, audio and more complex systems – including multi-sensor and multi-modality variations. It will also seek to address ML capable of predictions, decisions and adapting during its lifetime.

A Proposers Day will be held on February 6, 2019, from 9:00 AM to 2:00 PM (EST) at the DARPA Conference Center, located at 675 N. Randolph Street, Arlington, Virginia, 22203 to provide greater detail about the GARD program's technical goals and challenges.

Additional information will be available in the forthcoming Broad Agency Announcement, which will be posted to www.fbo.gov.

https://www.darpa.mil/news-events/2019-02-06

On the same subject

  • Army Picks 2 Firms to Build Light and Medium Robotic Combat Vehicles

    January 15, 2020 | International, Land

    By Matthew Cox

    The U.S. Army has announced that it plans to strike deals with QinetiQ North America and Textron Systems to build versions of the Robotic Combat Vehicle (RCV). The Army Combat Capabilities Development Command's Ground Vehicle Systems Center, along with the service's Next-Generation Combat Vehicles cross-functional team, intends to award Other Transaction Agreements (OTAs) to QinetiQ North America to build four light versions of the RCV and to Textron to build four medium versions, according to a recent news release from the National Advanced Mobility Consortium.

    The Army often uses OTAs under its new acquisition reform strategy so it can have prototypes built quickly for experimenting with new designs. If all goes well in upcoming negotiations, the service intends to award the final OTAs for both variants by mid-February, the release states.

    The prototype RCVs will be used as part of the Army's "Robotic Campaign of Learning" in an effort to "determine the feasibility of integrating unmanned vehicles into ground combat operations," the release adds. The RCV effort is part of the Army's sweeping modernization effort, launched in 2017. The service wants to develop light, medium and heavy versions of the RCV to give commanders the option of sending unmanned vehicles into combat against enemy forces.

    "Robots have the potential to revolutionize the way we conduct ground combat operations," Brig. Gen. Ross Coffman, director of the Next-Generation Combat Vehicle cross-functional team, said in the release. "Whether that's giving increased firepower to a dismounted patrol, breaching an enemy fighting position, or providing [chemical, biological, radiological, nuclear] reconnaissance, we envision these vehicles providing commanders more time and space for decisions and reducing risk to soldiers."

    Following final OTA notices, QinetiQ North America's and Textron's RCVs will be used in a platoon-level experiment in March and a company-level experiment in late 2021, the release states. The results of the experiments, along with the findings from several virtual experiments, will "inform a decision by the Army on how to proceed" with robotic combat vehicles in 2023, according to the release.

    Textron, along with its subsidiaries Howe and Howe Technologies and FLIR Systems Inc., displayed the Ripsaw M5 unmanned tracked vehicle as its RCV in October at the Association of the United States Army's annual meeting. QinetiQ North America teamed up with Pratt and Miller Defense to enter its Expeditionary Modular Autonomous Vehicle (EMAV) at AUSA as well.

    Jeffrey Langhout, director of the Ground Vehicle Systems Center, applauded the selection of QinetiQ North America and Textron as a "testament to the dedication and passion of the Army to giving our soldiers the best capabilities possible."

    "This is a great day for our Army, as we make another important step in learning how we can employ robotic vehicles into our future formations," he said in the release.

    https://www.military.com/daily-news/2020/01/14/army-picks-2-firms-build-light-and-medium-robotic-combat-vehicles.html

  • Rafael finds European partners to market Trophy active protection system

    November 15, 2021 | International, Land

    The new Germany-based venture, dubbed EuroTrophy, is charged with finding new takers for the defensive technology and leading any vehicle-integration efforts for future customers.

  • Fortem Technologies takes aim at ‘dark' UASs with SkyDome

    August 20, 2020 | International, Aerospace

    by Gerrard Cowan

    Counter-unmanned aerial system (C-UAS) specialist Fortem Technologies has seen growing military interest in its systems, the company told Janes, with the US-based firm emphasising an interception approach to tackling potential UAS threats.

    Fortem Technologies' SkyDome is an end-to-end system encompassing several elements that can be operated separately or as part of an integrated approach. This comprises the artificial intelligence (AI)-based SkyDome Manager software, which includes ThreatAware, a capability that can analyse input from several sources and sensors. These sources include the company's TrueView radar, which can help to detect ‘dark' UASs that do not emit radio frequency (RF) or other signals. The overarching system also includes DroneHunter, a multirotor UAS that can intercept rogue UASs using a net tether.

    Adam Robertson, Fortem's co-founder and chief technology officer (CTO), said the company opted for the DroneHunter approach for several reasons. First, it can help to avoid collateral damage. Second, it means that the targeted UAV can be brought back for forensic analysis. "That allows us to figure out where the source is - really we're interested in stopping the source of the threats, not the object that was threatening us," said Robertson.

    The company sees potential for the systems in both fixed installations and mobile platforms, as well as on temporary sites, he noted. Robertson added that Fortem has been working to increase the autonomy of the system. While it still requires human supervision, the system can function independently to varying degrees depending on the rules of engagement.

    https://www.janes.com/defence-news/news-detail/fortem-technologies-takes-aim-at-dark-uass-with-skydome
