
July 24, 2020 | International, C4ISR, Security

Intelligence Agencies Release AI Ethics Principles

Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.


ALBUQUERQUE — Today, the Office of the Director of National Intelligence released what is the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0, approved by DNI John Ratcliffe last month.

The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

“These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google. These workers called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military.

While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex, like that put forth by the Department of Defense.

“We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved — but the technology is evolving too fast and too unpredictably to carve commandments in stone for all time.

Here are the six principles, in the document's own words:

Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm try to parse that data and summarize it for a human is a relatively recent feature.

Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

“We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think though, what we can do is talk in general terms about some of the things that we are doing.”

Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data-processing AI accountable to policymakers. For the general public, as it offers comment on the intelligence services' use of AI, the examples will have to come from outside the classified realm, and will likely center on AI in the private sector.

“We think there's a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles.

He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies. Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to.

“There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really really common problems.”

https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

On the same subject

  • Upgrading US Navy ships is difficult and expensive. Change is coming

    June 22, 2018 | International, Naval


    By: David B. Larter

    WASHINGTON ― The U.S. Navy is looking at extending the life of its surface ships by as much as 13 years, meaning some ships might be 53 years old when they leave the fleet. Here's the main problem: keeping their combat systems relevant.

    The Navy's front-line combatants ― cruisers and destroyers ― are incredibly expensive to upgrade, in part because one must cut open the ship and remove fixtures that were intended to be permanent when they were installed. When the Navy put Baseline 9 on the cruiser Normandy a few years ago, which included all new consoles, displays and computer servers in addition to the software, it ran the service $188 million.

    Now, the capability and function of the new Baseline 9 suite on Normandy is staggering. The cost of doing that to all the legacy cruisers and destroyers in the fleet would be equally staggering: it would cost billions.

    So why is that? Why are the most advanced ships on the planet so difficult to keep relevant? And if the pace of change is picking up, how can the Navy stay relevant in the future without breaking the national piggy bank?

    Capt. Mark Vandroff, the current commanding officer of the Carderock Division of the Naval Surface Warfare Center and former Arleigh Burke-class destroyer program manager, understands this issue better than most. At this week's American Society of Naval Engineers symposium, Vandroff described why it's so darn hard to upgrade the old ships and how future designs will do better. Here's what Vandroff had to say:

    “Flexibility is a requirement that historically we haven't valued, and we haven't valued it for very good reasons: It wasn't important.

    “When you think of a ship that was designed in the '70s and built in the '80s, we didn't realize how fast and how much technology was going to change. We could have said: ‘You know what? I'm going to have everything bolted.' Bolt down the consoles in [the combat information center], bolt in the [vertical launch system] launchers ― all of it bolted so that we could more easily pop out and remove and switch out.

    “The problem was we didn't value that back then. We were told to value survivability and density because we were trying to pack maximum capability into the space that we have. That's why you have what you have with the DDG-51 today. And they are hard to modernize because we valued survivability and packing the maximum capability into the minimum space. And we achieved that because that was the requirement at the time.

    “I would argue that now as we look at requirements for future ships, flexibility is a priority. You are going to have to balance it. What if I have to bolt stuff down? Well, either we are going to give up some of my survivability standards or I'm going to take up more space to have the equivalent standards with a different kind of mounting system, for example. And that is going to generate a new set of requirements ― it's going to drive design in different directions than it went before.

    “I suppose you could accuse the ship designers in the 1980s of failure to foresee the future, but that's all of us. And the point is they did what they were told to do. Flexibility is what we want now, and I think you will see it drive design from this point forward because it is now something we are forced to value.”

    https://www.defensenews.com/naval/2018/06/21/upgrading-us-navy-ships-is-difficult-and-expensive-change-is-coming/

  • SWITZERLAND: TESTING OF THE FUTURE FIGHTER JETS HAS BEGUN

    April 25, 2019 | International, Aerospace


    Testing of the five fighter jets competing to replace the Swiss army's Tigers and F/A-18s has begun. Airbus's Eurofighter opened the proceedings on Friday at the Payerne air base (Vaud).

    Besides the Eurofighter, four other contenders are in the running: Sweden's Gripen E (Saab), France's Rafale (Dassault), and the two American aircraft, the F/A-18's successor, Boeing's Super Hornet, and Lockheed Martin's F-35A. The candidates' order of passage was set alphabetically by manufacturer, with four days of testing planned for each.

    All candidates have the same chances. No preliminary choice has been made, and for now the aircraft will not be compared with one another; that phase will come during the second call for tenders, Christian Catrina, delegate of the head of the Federal Department of Defence for the fighter procurement project, said on Monday.

    Verifying capabilities

    The objective of these tests is to verify the aircraft's capabilities and the data in the bids submitted by the various manufacturers. The trials include eight missions with specific tasks. Flown by one or two fighter jets, the missions will consist of 17 takeoffs and landings and will focus on operational aspects, technical aspects, and particular characteristics.

    An introductory flight will take place before the flight and ground trials to allow the foreign pilots to familiarize themselves with Swiss airspace. For the single-seat F-35A and Gripen E, the missions will be flown solo by a foreign pilot, armasuisse specified; a Swiss engineer will accompany the other flights. The evaluations will then be made from the onboard recordings.

    The procedure guarantees objective and identical treatment of all candidates, and the choice of model will be made on a fair basis. The tests also cover product-support audits, simulator trials, and ground trials in Switzerland.

    https://www.lematin.ch/suisse/test-futurs-avions-combat-commence/story/14127523

  • US aims to stay ahead of China in using AI to fly fighter jets

    May 13, 2024 | International, C4ISR


    Two Air Force fighter jets recently squared off in a dogfight in California. One was flown by a pilot — the other wasn’t.
