February 7, 2019 | International, C4ISR

DARPA: Defending Against Adversarial Artificial Intelligence

Today, machine learning (ML) is coming into its own, ready to serve mankind in a diverse array of applications – from highly efficient manufacturing, medicine and massive information analysis to self-driving transportation, and beyond. However, if misapplied, misused or subverted, ML holds the potential for great harm – this is the double-edged sword of machine learning.

“Over the last decade, researchers have focused on realizing practical ML capable of accomplishing real-world tasks and making them more efficient,” said Dr. Hava Siegelmann, program manager in DARPA's Information Innovation Office (I2O). “We're already benefitting from that work, and rapidly incorporating ML into a number of enterprises. But, in a very real way, we've rushed ahead, paying little attention to vulnerabilities inherent in ML platforms – particularly in terms of altering, corrupting or deceiving these systems.”

In a commonly cited example, ML used by a self-driving car was tricked by visual alterations to a stop sign. While a human viewing the altered sign would have no difficulty interpreting its meaning, the ML erroneously interpreted the stop sign as a 45 mph speed limit posting. In a real-world attack like this, the self-driving car would accelerate through the stop sign, potentially causing a disastrous outcome. This is just one of many recently discovered attacks applicable to virtually any ML application.
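Attacks like the stop-sign example typically rest on gradient-based perturbations: tiny, bounded changes to the input chosen to push the model's score toward a wrong class. As a minimal sketch of the idea, here is an FGSM-style perturbation against a toy linear classifier; the weights, input, class labels, and step size are all illustrative inventions, not taken from the stop-sign study or the GARD program.

```python
import numpy as np

# Toy linear "image" classifier: scores = W @ x, prediction = argmax.
# The weights, input, and epsilon below are illustrative only.
W = np.array([[1.0, -1.0, 2.0, 0.0],    # class 0 ("speed limit")
              [0.0,  2.0, -1.0, 1.0]])  # class 1 ("stop sign")
x = np.array([1.0, 1.0, 0.0, 0.0])      # clean input, classified as 1

def predict(x):
    return int(np.argmax(W @ x))

# FGSM-style step: move every "pixel" a small amount eps in the direction
# that raises the wrong class's score relative to the true class's score.
# For a linear model this score-difference gradient is exact.
eps = 0.5
grad = W[0] - W[1]
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))       # the small change flips the label
```

Each pixel moves by at most 0.5, yet the predicted class flips, which is exactly the mismatch between human and model perception the article describes.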

To get ahead of this acute safety challenge, DARPA created the Guaranteeing AI Robustness against Deception (GARD) program. GARD aims to develop a new generation of defenses against adversarial deception attacks on ML models. Current defense efforts were designed to protect against specific, pre-defined adversarial attacks, but remained vulnerable to attacks outside their design parameters when tested. GARD seeks to approach ML defense differently – by developing broad-based defenses that address the numerous possible attacks in a given scenario.

“There is a critical need for ML defense as the technology is increasingly incorporated into some of our most critical infrastructure. The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level. We must ensure ML is safe and incapable of being deceived,” stated Siegelmann.

GARD's novel response to adversarial AI will focus on three main objectives: 1) the development of theoretical foundations for defensible ML and a lexicon of new defense mechanisms based on them; 2) the creation and testing of defensible systems in a diverse range of settings; and 3) the construction of a new testbed for characterizing ML defensibility relative to threat scenarios. Through these interdependent program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their robustness.
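The third objective, characterizing ML defensibility, can be made quantitative. One simple instance of such a characterization, sketched here with invented numbers rather than anything from GARD's actual testbed, is the closed-form "certified radius" of a linear binary classifier: the largest bounded perturbation that provably cannot change its decision.

```python
import numpy as np

# For a linear binary classifier f(x) = sign(w @ x + b), no perturbation
# delta with max|delta_i| < |w @ x + b| / ||w||_1 can change the sign,
# because the worst case shifts the margin by ||delta||_inf * ||w||_1.
# The weights and input below are illustrative only.
w = np.array([2.0, -1.0, 0.5])
b = 0.25
x = np.array([1.0, 0.5, -2.0])

margin = w @ x + b                            # 0.75
radius = abs(margin) / np.linalg.norm(w, 1)   # certified L-inf radius

# The strongest attack just inside the radius still fails to flip the sign:
delta = -np.sign(margin) * (0.99 * radius) * np.sign(w)
assert np.sign(w @ (x + delta) + b) == np.sign(margin)
print(round(radius, 3))                       # 0.214
```

Reporting a guaranteed radius per input, rather than accuracy against one specific attack, is one way a testbed can rate defenses against whole threat classes instead of pre-defined attacks.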

GARD will explore many research directions for potential defenses, including some inspired by biology. “The kind of broad scenario-based defense we're looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements,” said Siegelmann.

GARD will work on addressing present needs, but is keeping future challenges in mind as well. The program will initially concentrate on state-of-the-art image-based ML, then progress to video, audio and more complex systems – including multi-sensor and multi-modality variations. It will also seek to address ML capable of predictions, decisions and adapting during its lifetime.

A Proposers Day will be held on February 6, 2019, from 9:00 AM to 2:00 PM (EST) at the DARPA Conference Center, located at 675 N. Randolph Street, Arlington, Virginia, 22203 to provide greater detail about the GARD program's technical goals and challenges.

Additional information will be available in the forthcoming Broad Agency Announcement, which will be posted to www.fbo.gov.

https://www.darpa.mil/news-events/2019-02-06

On the same topic

  • Leonardo signs contract with Malaysia for two ATR 72 MPA

    May 25, 2023 | International, Aerospace

    This contract follows the selection of the solution offered by Leonardo announced last October, and includes the supply of two ATR Special Mission aircraft in Maritime Patrol configuration plus the...

  • Pentagon research office wants innovative tools to spot influence campaigns

    November 5, 2020 | International, C4ISR, Security

    Andrew Eversden, WASHINGTON — A new broad agency announcement shows that the Pentagon's top research arm wants to work with industry to develop technology that can track adversarial influence operations across social media platforms. The announcement from the Defense Advanced Research Projects Agency (DARPA), for a project called INfluence Campaign Awareness and Sensemaking (INCAS), will use an automated detection tool to unveil influence operations online.

    “INCAS tools will directly and automatically detect implicit and explicit indicators of geopolitical influence in multilingual online messaging to include author's agenda, concerns, and emotion,” the BAA reads.

    The BAA comes as the federal government seeks solutions to defend against foreign influence campaigns, particularly surrounding political campaigns, that aim to sow discord among Americans with inflammatory messages. “The US is engaged with its adversaries in an asymmetric, continual, war of weaponized influence narratives. Adversaries exploit misinformation and true information delivered via influence messaging: blogs, tweets, and other online multimedia content. Analysts require effective tools for continual sensemaking of the vast, noisy, adaptive information environment to identify adversary influence campaigns,” the BAA reads.

    Through the project, DARPA seeks to improve upon current social media tools for tracking influence operations. The current tools, the solicitation reads, require a major manual effort in which analysts have to sift through “high volumes” of messages and decide which ones are relevant and gaining traction, using tools built for digital marketing. “These tools lack explanatory and predictive power for deeper issues of geopolitical influence,” the solicitation reads. “Audience analysis is often done using static, demographic segmentation based on online and survey data. This lacks the flexibility, resolution, and timeliness needed for dynamic geopolitical influence campaign detection and sensemaking.”

    The program has five technical areas. Technical area one focuses on using automated influence detection to enable analysts to analyze influence campaigns. The second area will “dynamically segment” the population responding to influence campaigns and identify “psychographic attributes relevant to geopolitical influence,” such as “worldviews, morals and sacred values.” The third technical area will assist analysts in linking influence indicators and population response across several platforms, in order to capture influence campaigns as they evolve over time. The fourth area will create infrastructure to provide data feeds from online sources to the other three technical areas, and the final technical area will conduct technology evaluations and will not be competed as part of the BAA.

    DARPA expects multiple awards for technical areas one and two, and single awards for technical areas three and four. Abstracts are due Nov. 17, 2020, with proposals due Jan. 8, 2021. Awards will be made around July 2021 using standard procurement contracts or Other Transaction Agreements.

    https://www.c4isrnet.com/artificial-intelligence/2020/11/03/pentagon-research-office-wants-innovative-tools-to-spot-influence-campaigns/

  • Pakistani Hackers Use DISGOMOJI Malware in Indian Government Cyber Attacks

    June 16, 2024 | International, Security

    Pakistan-based UTA0137's cyber espionage campaign is targeting the Indian government with DISGOMOJI malware, exploiting DirtyPipe and a Firefox scam.

All news