July 8, 2019 | International, Other Defence

How the Pentagon can improve AI adoption

By: Graham Gilmer

The excitement surrounding artificial intelligence today is like the space race of the 1960s, when nations were locked in fierce competition. For now, the United States is in first place. But continued leadership is not a given, especially as competitors, namely China and Russia, are making significant investments in AI for defense. To maintain our technological advantage, safeguard national security, and lead on the world stage, we have an imperative to invest strategically in AI.

The successful and widespread adoption of AI requires that the United States take a human-centric and technologically innovative approach to using it to help maintain the peace and prosperity of our nation. As the Department of Defense and the Joint Artificial Intelligence Center (JAIC) continue their efforts to accelerate AI adoption, they must address three key components: building trust in AI technology, operationalizing AI technologies at enterprise scale, and establishing ethical governance standards and procedures to reduce exposure to undue risk.

Build trust in AI technology

Fear and distrust hold technology adoption back. This was true during the first three industrial revolutions, as mechanization, factories, and computers transformed the world, and it remains true in today's fourth industrial revolution, driven by AI. Confusion surrounding AI has already led teams to abandon applications for lack of trust. To build that trust, we must prioritize training, explainability, and transparency.

Trust in technology is built when leaders have accurate expectations around what it is going to deliver, mission owners can identify use cases connected to the core mission, and managers understand the true impact on mission performance. Building trust requires that all users, from executives and managers to analysts and operators, receive training on AI-enabled technologies. Training involves not only providing access to learning resources, but also creating opportunities for users to put their new skills to use. The Pentagon's formal AI strategy outlines extensive plans for implementing AI training programs across the department to build a digitally savvy workforce, which will be key to maintaining the United States' leading position in the AI race.

“Explainable AI” also curbs distrust by showing users how machines reach decisions. Consider computer vision. Users may wonder: How can such a tool sift through millions of images to identify a mobile missile launcher? A computer vision tool equipped with explainable AI could highlight aspects of the image that it uses in identification—in this case, elements that look like wheels, tracks, or launch tubes. Explainable AI gives users a “look under the hood,” tailored to their level of technical literacy.
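To make the mechanics concrete, here is a minimal sketch of one common explainability technique, occlusion sensitivity: hide one region of the image at a time and measure how much the classifier's score drops. The scoring function, patch size, and stride below are illustrative assumptions, not any particular fielded system.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=16, stride=8):
    """Map which image regions drive a classifier's score.

    Slides a neutral patch across the image and records how much the
    target-class score drops when each region is hidden; large drops
    mark regions (wheels, tracks, launch tubes) the model relies on.
    """
    h, w = image.shape[:2]
    baseline = score_fn(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # gray out one region
            heatmap[i, j] = baseline - score_fn(occluded)
    return heatmap

# Stand-in classifier for demonstration: scores the brightness of a
# central region; a real tool would use a trained model's class score.
def toy_score(img):
    return float(img[24:40, 24:40].mean())

saliency = occlusion_saliency(np.random.rand(64, 64), toy_score)
print(saliency.round(2))  # high values = regions the "model" depends on
```

Overlaid on the input image as a heatmap, output like this is exactly the kind of “look under the hood” that lets an analyst check whether the model keyed on launcher features or on irrelevant background.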

AI technologies must be more than understandable; they must also be transparent. This starts at the granular system level, with training data provenance and an audit trail showing what data, weights, and other inputs helped a machine reach its decision. Building AI systems that are explainable, transparent, and auditable also supports governance standards and reduces risk.
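As one concrete illustration, an audit trail of this kind can be as simple as an append-only log that fingerprints every input, records the model version and training data reference, and timestamps the output. The field names and log format below are assumptions for illustration, not an established DoD standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry: which data and model produced which output."""
    model_version: str
    training_data_ref: str  # provenance pointer, e.g. a dataset hash or URI
    input_hash: str         # fingerprint of the exact input that was scored
    output: str
    confidence: float
    timestamp: str

def log_decision(model_version, training_data_ref, raw_input, output,
                 confidence, path="decision_audit.log"):
    record = DecisionRecord(
        model_version=model_version,
        training_data_ref=training_data_ref,
        input_hash=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        confidence=confidence,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only, so reviewers can later reconstruct any past decision.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

log_decision("cv-model-1.3.0", "dataset-v7-sha256:ab12...", b"<image bytes>",
             "mobile_missile_launcher", 0.91)
```

Each line in such a log ties a decision back to its data and model lineage, which is what later auditing and governance reviews depend on.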

Operationalize AI at the enterprise scale

AI will only be a successful tool if agencies can use it at the enterprise level. At its core, this means moving AI beyond the pilot phase into real-world production across the enterprise or deployed in the field on edge devices.

Successfully operationalizing AI starts early. AI is an exciting new technology, but agencies too enamored with the hype run the risk of missing out on the real benefits. Too many organizations have developed AI pilot capabilities that work in the lab but cannot support the added noise of real-world environments. Such short-term thinking results in wasted resources. Agencies must think strategically about how the AI opportunities they choose to pursue align with their real-world mission and operations.

Leaders must think through the processes and infrastructure needed to seamlessly extend AI to the enterprise at scale. This involves building scalable infrastructure, data stores and standards, a library of reusable tools and frameworks, and security safeguards to protect against adversarial AI. It is equally important to prioritize investment in the infrastructure to organize, store, and access data; in the computational resources AI demands (cloud, GPUs, etc.); and in open, extensible software tools that ease upgrades and maintenance.

Establish governance to reduce risk

Governance standards, controls, and ethical guidelines are critical to ensuring that AI systems are built, managed, and used in a manner that reduces exposure to undue risk. While our allies have engaged in conversations about how to ensure ethical AI, China and Russia have thus far shown little concern for the ethical risks associated with AI. Given this tension, it is imperative that the United States maintain its technological advantage and ethical leadership by establishing governance standards and proactive risk mitigation tactics. To this end, in May three senators introduced the bipartisan Artificial Intelligence Initiative Act, which includes provisions for establishing a National AI Coordination Office and national standards for testing the effectiveness of AI algorithms.

Building auditability and validation functions into AI not only ensures trust and adoption, but also reduces risk. By establishing proactive risk management procedures and processes for continuous testing and validation for compliance purposes, organizations can ensure that their AI systems are performing at optimal levels. Governance controls and system auditability also ensure that AI systems and tools are robust against hacking and adversarial AI threats.
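One simple form such continuous validation can take is a recurring gate that re-scores a held-out compliance set before any redeployment and fails the release if performance slips below the governance threshold. The threshold and data below are illustrative assumptions; a production gate would also run robustness and adversarial-input tests.

```python
def validation_gate(predict_fn, compliance_set, min_accuracy=0.95):
    """Recurring compliance check: re-score a held-out set and block
    redeployment if accuracy falls below the governance threshold."""
    correct = sum(predict_fn(x) == label for x, label in compliance_set)
    accuracy = correct / len(compliance_set)
    return {"accuracy": accuracy, "passed": accuracy >= min_accuracy}

# Toy predictor and labeled set, purely for illustration.
result = validation_gate(lambda x: x > 0,
                         [(1, True), (-2, False), (3, True)],
                         min_accuracy=0.9)
print(result)  # e.g. {'accuracy': 1.0, 'passed': True}
```

Run on a schedule and logged alongside the audit trail described above, a gate like this gives compliance officers ongoing evidence that a deployed system still performs within approved bounds.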

AI could be the most transformative technological development of our lifetime, and it is a necessity for maintaining America's competitive edge. To ensure that we develop AI that users trust and that can scale to the enterprise with reduced risk, organizations must take a calm, methodical approach to its development and adoption. Focus on these three areas is crucial to protecting our national security, maintaining our competitive advantage, and leading on the world stage.

Graham Gilmer is a principal at Booz Allen who helps manage artificial intelligence initiatives across the Department of Defense.

https://www.c4isrnet.com/opinion/2019/07/08/how-the-pentagon-can-improve-ai-adoption/

On the same subject

  • ‘A Little Bit Disruptive’: Murray & McCarthy On Army Futures Command

    September 7, 2018 | International, Aerospace, Land, C4ISR

    By SYDNEY J. FREEDBERG JR.

    “It's establishing buy-in over the next three, four, five years from the institution (of the Army),” Gen. Murray said. “It's about establishing buy-in on Capitol Hill, because if I don't have buy-in there, this won't survive.”

    DEFENSE NEWS CONFERENCE: The Army's new Futures Command won't tear down the most failure-prone procurement system in the entire US military. Instead, both its commander and the Army's No. 2 civilian emphasize they want to be just “a little bit disruptive” and “work with the institution.” That will disappoint critics of the service's chronically troubled acquisition programs who saw the Army's much-touted “biggest reorganization in 40 years” as an opportunity to tear the whole thing down and start again.

    The necessary change to Army culture “is going to take time,” brand-new four-star Gen. John “Mike” Murray said here yesterday, “and I think you do that by being a little bit disruptive, but not being so disruptive you upset the apple cart.”

    “It's hard, Sydney, because you know, you have to work with the institution,” Undersecretary Ryan McCarthy told me after he and Murray addressed the conference. “You don't want to go in there and just break things.”

    Work Through The Pain

    Reform's still plenty painful, acknowledged McCarthy, who's played a leading role in round after round of budget reviews, cutting some programs to free up funding for the Army's Big Six priorities. The choices were especially hard for 2024 and beyond, when top priorities like robotic armored vehicles and high-speed aircraft move from the laboratory to full-up prototypes.

    “You've got a lot of people out investing, and they're all doing good things, but they weren't the priorities of the leadership,” McCarthy told me yesterday. “You have to explain to folks why you're doing what you're doing. You need them focused on the priorities of the institution” – that is, of the Army as a whole, as set by leadership, rather than of bureaucratic fiefdoms with a long history of going their own way.

    But what about the pushback from constituencies who see their priorities being cut, particularly upgrades to keep current platforms combat-ready until their replacements finally arrive? “If you don't accept the risk that you talked about, (if you don't) slow down or stop the upgrade of legacy systems, you never get to next generation equipment,” Murray said. In other words, funding for incremental upgrades will crowd out funding for potential breakthroughs. That's largely because the incremental approach looks lower-risk – right up to the point where the enemy fields something revolutionary that your evolutionary approach can't counter.

    Full article: https://breakingdefense.com/2018/09/a-little-bit-disruptive-murray-mccarthy-on-army-futures-command

  • Podcast: Could Military Sustainment Shifts Impact Broader Aftermarket?

    August 21, 2020 | International, Aerospace, Naval, Land, C4ISR, Security

    Lee Ann Shay, August 21, 2020. Changes in military sustainment, including the push for agile development and the use of cloud-based software, could hint at broader shifts in the overall aftermarket. Listen as Aviation Week speaks with Accenture's aerospace team about these developments. https://aviationweek.com/mro/podcast-could-military-sustainment-shifts-impact-broader-aftermarket

  • Raytheon awarded $9M to maintain HARM weapons for Morocco, Turkey, U.S.

    January 16, 2020 | International, Land

    By Christen McCurdy

    Jan. 15 (UPI) -- Raytheon inked a $9 million deal to maintain high-speed anti-radiation missiles, known as HARM, for the Air Force and the governments of Morocco and Turkey, according to the Pentagon. The agreement funds repair and sustainment services for 155 missiles owned by Turkey, Morocco and the United States.

    The AGM-88 high-speed anti-radiation missile is a joint U.S. Navy and Air Force program developed by the Navy and Raytheon. The 800-pound missile can operate in preemptive, missile-as-sensor and self-protect modes and was developed to suppress or destroy surface-to-air missile radar and radar-directed air defense systems.

    In July, Raytheon received $17.8 million to develop computers to launch HARM weapons, and in 2017 the contractor was awarded $17 million to deliver a targeting system for the program.

    Foreign military sales funds in the amount of $251,665 and Air Force funds in the amount of $8.24 million were obligated at the time of the award. Work will be performed in Tucson, Ariz., and is expected to be completed in December 2020.

    https://www.upi.com/Defense-News/2020/01/15/Raytheon-awarded-9M-to-maintain-HARM-weapons-for-Morocco-Turkey-US/5811579137062/
