June 10, 2019 | International, Other Defence

How industry can build better AI for the military

As AI becomes more prominent in the national security community, officials are grappling with where to use it most effectively.

During a panel discussion at the C4ISRNET conference June 6, leaders discussed industry's role in building AI that will be used by the military.

After studying companies large and small that create AI technology, Col. Stoney Trent, the chief of operations at the Pentagon's Joint Artificial Intelligence Center, said he found that commercial groups do not share the motivations that exist in the government.

“Commercial groups are poorly incentivized for rigorous testing. For them that represents a business risk,” Trent said. Because of this, he said, the government needs to work with the commercial sector to create these technologies.

“What the Defense Department has to offer in this space is encouragement, an incentive structure for better testing tools and methods that allows us to understand how a product is going to perform when we are under conditions of national consequence because I can't wait,” Trent said. “Hopefully, the nation will be at peace long enough to not have a high bandwidth of experiences with weapons implementations, but when that happens, we need them to absolutely work. That's a quality of commercial technology development.”
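
Trent's call for better testing tools can be made concrete. The sketch below is a hypothetical acceptance test, not anything the JAIC has published: it measures how a classifier's accuracy degrades as simulated sensor noise grows, with the `predict` function and the tiny data set standing in for whatever model and test data a program would supply.

```python
import numpy as np

def evaluate_under_noise(predict, inputs, labels, noise_levels, seed=0):
    """Measure accuracy as Gaussian sensor noise increases.

    `predict`, `inputs`, and `labels` are stand-ins for whatever
    model and test set a program office would supply.
    """
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in noise_levels:
        noisy = inputs + rng.normal(0.0, sigma, size=inputs.shape)
        results[sigma] = np.mean(predict(noisy) == labels)
    return results

# A toy stand-in model: classify points by which side of x=0 they fall on.
predict = lambda x: (x[:, 0] > 0).astype(int)
inputs = np.array([[1.0, 0.2], [-0.8, 0.5], [2.1, -1.0], [-1.5, 0.3]])
labels = np.array([1, 0, 1, 0])

for sigma, acc in evaluate_under_noise(predict, inputs, labels,
                                       [0.0, 0.5, 1.0, 2.0]).items():
    # An acceptance test might require, e.g., acc >= 0.9 at sigma <= 0.5.
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.2f}")
```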

For this to take place, the Department of Defense needs to help create the right environment.

“All of this is predicated on the Pentagon doing things as well,” said Kara Frederick, associate fellow for the technology and national security program at the Center for a New American Security. “Making an environment conducive to the behaviors that you are seeking to encourage. That environment can be the IT environment, common standards for data processing, common standards for interactions with industry, I think would help.”

Panelists said national security leaders also need to weigh the risks of relying more on AI technology, one of which is non-state actors using AI for nefarious purposes.

Trent said he sees AI as the new arms race but noted that in this arena, destruction may be easier than creation.

“AI is the modern-day armor anti-armor arms race,” Trent said. “The Joint AI Center, one of the important features of it is that it does offer convergence for best practices, data sources, data standards, etc. The flip side is we fully understand there are a variety of ways you can undermine artificial intelligence and most of those are actually easier than developing good resilient AI.”
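
The asymmetry Trent describes is easy to demonstrate. The sketch below implements the Fast Gradient Sign Method, one of the simplest published attacks, against a toy logistic classifier; the weights and input are invented for illustration, but the one-line perturbation that flips the model's prediction is the genuine technique, and defending against every such perturbation remains an open research problem.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic classifier.

    One line of arithmetic flips the prediction; hardening a model
    against all such perturbations is the hard part.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])         # invented model weights
b = 0.0
x = np.array([0.3, -0.2])         # clean input, classified as positive
y = 1.0

print("clean score:", sigmoid(w @ x + b))            # > 0.5 -> class 1
x_adv = fgsm(x, y, w, b, eps=0.5)
print("adversarial score:", sigmoid(w @ x_adv + b))  # pushed below 0.5
```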

Frederick said part of this problem stems from the structure of the AI community.

“I think what's so singular about the AI community, especially the AI research community, is that it's so open,” Frederick said. “Even at Facebook, we open source some of these algorithms and we put it out there for people to manipulate. [There is this] idea that non-state actors, especially those without strategic intent or ones that we can't pin strategic intent to, could get a hold of some of these ways to code in certain malicious inputs [and] we need to start being serious about it.”
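
Frederick's concern about manipulated open-source pipelines can likewise be sketched in a few lines. The example below assumes a hypothetical published training set and a deliberately simple nearest-class-mean classifier, then shows how an adversary able to inject mislabeled points drags the learned decision rule far enough to collapse accuracy. Mounting the attack takes a handful of lines; hardening against it does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two separable classes stand in for a hypothetical published training set.
x = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])
y = np.array([0] * 50 + [1] * 50)

def fit_means(x, y):
    """Learn one mean per class; prediction picks the nearer mean."""
    return x[y == 0].mean(), x[y == 1].mean()

def predict(means, x):
    return (np.abs(x - means[1]) < np.abs(x - means[0])).astype(int)

# An adversary with commit access to the open data set injects a batch of
# extreme points mislabeled as class 0, dragging that class's mean.
x_poisoned = np.concatenate([x, np.full(40, 10.0)])
y_poisoned = np.concatenate([y, np.zeros(40, dtype=int)])

clean_acc = np.mean(predict(fit_means(x, y), x) == y)
poisoned_acc = np.mean(predict(fit_means(x_poisoned, y_poisoned), x) == y)
print(f"accuracy trained on clean data:    {clean_acc:.2f}")
print(f"accuracy trained on poisoned data: {poisoned_acc:.2f}")
```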

However, before tackling any of these problems, leaders must first decide when it is appropriate to use AI.

Rob Monto, lead of the Army's Advanced Concepts and Experimentation office, described this process as an evolution that takes place between AI and its users.

“AI is like electricity,” he said. “It can be anywhere and everywhere. You can either get electrocuted by it or you target specific applications for it. You need to know what you want the AI to do, and then you spend months and years building out. If you don't have your data set available, you do that upfront architecture and collection of information. Then you train your algorithms and build that specifically to support that specific use case...AI is for targeted applications to aid decisions, at least in the military space, to aid the user.”
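
Monto's description implies a concrete workflow: pick the use case, collect and structure data up front, then train for that one application. Below is a minimal, hypothetical version of that pipeline built around a toy logistic regression; the `collect` function stands in for the months of data architecture and collection he describes, and the target task is invented.

```python
import numpy as np

def collect(rng, n=200):
    """Stand-in for the upfront data collection Monto describes."""
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] + x[:, 1] > 0).astype(int)   # invented target task
    return x, y

def train(x, y, lr=0.1, epochs=500):
    """Logistic regression fitted for this one use case only."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w -= lr * (x.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def evaluate(w, b, x, y):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return np.mean((p > 0.5) == y)

rng = np.random.default_rng(0)
x_train, y_train = collect(rng)   # the upfront collection step
x_test, y_test = collect(rng)     # held out for the specific use case
w, b = train(x_train, y_train)
print(f"accuracy on the targeted task: {evaluate(w, b, x_test, y_test):.2f}")
```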

Once the decision is made how and where to use AI, other technologies must advance to keep pace. One of the biggest challenges, said Chad Hutchinson, director of engineering at the Crystal Group, is hardware and characteristics such as thermal performance.

“AI itself is pushing the boundaries of what the hardware can do,” Hutchinson said.
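
Hutchinson's point shows up in practice as thermal throttling. The sketch below assumes an NVIDIA GPU and polls its temperature through nvidia-smi's standard query interface, pausing inference when a hypothetical limit for a sealed, ruggedized enclosure is reached; the threshold and the workload stub are invented for illustration.

```python
import subprocess
import time

THROTTLE_AT_C = 85   # hypothetical limit for a sealed, ruggedized enclosure

def gpu_temperature_c() -> int:
    """Read the current GPU temperature via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip().splitlines()[0])

def run_inference_batch(batch: int) -> None:
    """Placeholder for the model workload; invented for illustration."""
    time.sleep(0.1)

for batch in range(1000):
    while gpu_temperature_c() >= THROTTLE_AT_C:
        time.sleep(5.0)   # back off until the hardware sheds heat
    run_inference_batch(batch)
```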

Hardware technology is not the only obstacle in AI's path; other barriers stem from policy and human resource shortfalls.

“What we find is the non-technology barriers are far more significant than the technology barriers,” Trent said.

https://www.c4isrnet.com/show-reporter/c4isrnet-conference/2019/06/09/how-industry-can-build-better-ai-for-the-military/
