July 24, 2020 | International, C4ISR, Security

Intelligence Agencies Release AI Ethics Principles

Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.

ALBUQUERQUE — Today, the Office of the Director of National Intelligence released the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0 and were approved by DNI John Ratcliffe last month.

The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

“These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides unbiased, objective and actionable intelligence policymakers require, and that is fundamentally our mission.”

The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google. These workers called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military.

While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex, like that put forth by the Department of Defense.

“We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved, but the technology is evolving too quickly and unpredictably to carve commandments in stone for all time.

Here are the six principles, in the document's own words:

Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm try to parse that data and summarize it for a human is a relatively recent feature.

Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

“We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think though, what we can do is talk in general terms about some of the things that we are doing.”

Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data processing AI accountable to policymakers. For the general public, as it offers comment on intelligence service use of AI, examples will have to come from outside classification, and will likely center on examples of AI in the private sector.

“We think there's a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles.

He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies. Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to.

“There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really really common problems.”
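The threat-identification overlap Souleles describes — using AI to flag malicious actors on a network — can be illustrated with a toy sketch. Everything below (the function name, the connection counts, and the z-score threshold) is an illustrative assumption for demonstration only, not anything drawn from the ODNI framework or an actual agency system:

```python
# Minimal sketch of statistical anomaly detection, the simplest form
# of AI-adjacent threat identification: flag observations that sit
# far from the baseline. Real systems use far richer models.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of observations more than `threshold`
    sample standard deviations from the mean."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly steady baseline: nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly connection counts from a hypothetical host; the spike at
# index 5 stands out against an otherwise steady baseline.
hourly_connections = [120, 115, 130, 125, 118, 900, 122, 119]
print(flag_anomalies(hourly_connections))  # [5]
```

The same statistical logic applies whether the baseline comes from an e-commerce platform's traffic or an agency's internal network, which is the overlap both officials point to.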

https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

On the same subject

  • Opinion: Five Takeaways From Recent Defense Investment Activity | Aviation Week Network

    March 10, 2021 | International, Aerospace, Naval, Land, C4ISR, Security

    This year, companies large and small will constantly have to assess and reassess where they can best compete.

  • France, UK strengthen military relations — but future fighter jet cooperation ‘not yet there’

    September 10, 2018 | International, Aerospace

    By: Pierre Tran PARIS — British and French defense ministers will meet twice a year rather than just once, reflecting a deepening of bilateral relations despite Britain's impending exit from the European Union, said French Armed Forces Minister Florence Parly. “We have with the United Kingdom very close and deep relations in defense,” she told Defense News at a Sept. 6 event with AJPAE, an aeronautics and space journalists association. “That was formalized with the Lancaster House Treaty and will not be called into question by the decision that the United Kingdom has taken to leave the European Union. “In defense, there is a shared determination to pursue and deepen this relationship.” The more frequent ministerial meetings reflected that intent. “This cooperation is precious and necessary for the security of the European continent,” she added. Britain has put at French disposal the much-needed Chinook heavy transport helicopter in the Sahel theater, reflecting a close operational cooperation and shared experience in overseas deployment, she noted. Britain has asked for what started as a technology demonstrator for a combat UAV to refocus toward a study of “technology areas,” she said. That left the door open for the technology to be applied to large programs, such as the Franco-German Future Combat Air System, she added. “The story is not yet written,” she said. “Perhaps in the next few years the British could be by our side on the FCAS project. But maybe I am just dreaming. We're not there yet.” The January meeting between French President Emmanuel Macron and British Prime Minister Theresa May, and their governments, also reflected close ties, particularly for the defense ministries, she said. That cross-channel summit closed without a pledge to build the demonstrator for a combat drone, disappointing French industry. 
France is the lead nation on the FCAS project, which aims to field a future fighter jet flying in a system of systems, linking up drones, tankers, future cruise missiles and swarms of drones. The departure of Britain from the EU, known as Brexit, is due to take place in March. https://www.defensenews.com/global/europe/2018/09/07/france-uk-strengthen-military-relations-but-future-fighter-jet-cooperation-not-yet-there

  • Could these 5 projects transform defense?

    June 25, 2019 | International, Aerospace

    By: Kelsey Reichmann The Defense Innovation Unit — the Department of Defense's emerging technology accelerator — is working on several projects aimed at improving national security by contracting with commercial providers: According to the DIU annual report for 2018, using AI to predict maintenance on aircraft and vehicles could save DoD $3 billion to $5 billion annually. DIU determined maintenance on aircraft and vehicles was often done too early, removing parts that still had a working life ahead of schedule, so, using AI, DIU analysts found they could predict 28 percent of unscheduled maintenance on the E-3 Sentry across six subsystems and 32 percent on the C-5 Galaxy across 10 subsystems. DIU found deficiencies in the commercial drone industry, resulting in a lack of smaller options for war fighters. Through partnership with the Army's Program Executive Office Aviation, it was able to build an inexpensive, rucksack-portable VTOL drone fit for short-range reconnaissance, according to the report. DIU launched a project, VOLTRON, to discover vulnerabilities in DoD software. This follows a 2018 Government Accountability Office report that found $1.66 trillion worth of weapons systems at risk of cyberattack. Using this automated detection and remediation system, DIU will be able to provide DoD software with more secure networks. DIU is also working to secure networks on the battlefield through its Fully Networked Command, Communications & Control Nodes, or FNC3N, project. This project wants to create wearable technology that will provide data to users in a secure interconnected tactical network, according to the report. Using commercial satellite images, DIU is filling gaps in space-based reconnaissance. The peacetime indications and warning project has completed the launch of the first commercial, small synthetic aperture radar (SAR) satellite. 
The use of commercial data will allow the department to easily share the data it receives with allies and partners because it is unclassified. In August 2018 DIU was solidified within the Defense Department when “experimental” was removed from the office's original name, according to the report. It also received a large funding increase, from $84 million in 2017 to $354 million in 2018. https://www.c4isrnet.com/battlefield-tech/2019/06/21/could-these-5-projects-transform-defense/
