24 July 2020 | International, C4ISR, Security

Intelligence Agencies Release AI Ethics Principles

Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.

ALBUQUERQUE — Today, the Office of the Director of National Intelligence released the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0 and were approved by DNI John Ratcliffe last month.

The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

“These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google. These workers called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military.

While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex, like that put forth by the Department of Defense.

“We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved, but the technology is evolving too fast and unpredictably to try to carve commandments in stone for all time.

Here are the six principles, in the document's own words:

Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm try to parse that data and summarize it for a human is a relatively recent feature.

Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

“We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think though, what we can do is talk in general terms about some of the things that we are doing.”

Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data processing AI accountable to policymakers. For the general public, as it offers comment on intelligence service use of AI, examples will have to come from outside classification, and will likely center on examples of AI in the private sector.

“We think there's a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles.

He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies. Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to.

“There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really really common problems.”

https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

On the same subject

  • Former Symantec boss takes over the Defense Innovation Unit

    25 September 2018 | International, C4ISR


    By: Aaron Mehta

    WASHINGTON — Michael Brown spent two decades running companies in Silicon Valley, eventually rising to CEO of Symantec, one of the largest software companies in the world, with annual revenues of $4 billion and more than 10,000 employees. On Sept. 24, he starts a new job as the next leader of the Pentagon's Defense Innovation Unit. While it comes with a much smaller budget, in the range of $40 million, it's a job Brown believes he's stepping into at a critical time.

    “My fundamental view is we are in a technology race. We didn't ask to be in this, but we're in it,” Brown said in an exclusive interview with Defense News. “I'm concerned that if we don't recognize that we're in a race and take appropriate action, then we let China move forward and we don't put our best foot forward in terms of leading in these key technology areas.”

    Brown spent the last two years as a White House presidential innovation fellow with the Pentagon, meaning he's not coming into the world of defense cold with the DIU job. During that period he met Raj Shah, the previous DIU leader, as well as Mike Griffin, the Pentagon's undersecretary of defense for research and engineering, who now will be Brown's boss. Brown also co-authored a Pentagon study on China's influence in the U.S. tech scene, an experience that has influenced his views as he prepares to take over DIU. “One of the things I carry with me is I understand the motivation of companies, CEOs, investors because I've been working with these folks my whole life,” he said of his qualifications.

    Created in 2015 to be the Pentagon's outreach effort to Silicon Valley, DIU — until recently known as the Defense Innovation Unit Experimental — has gone through several high-profile iterations. It opened offices in Austin, Texas, and Cambridge, Massachusetts, but also worked through two leaders. It went from reporting directly to the secretary of defense to the Pentagon's undersecretary of defense for research and engineering.

    The group has also faced questions about its future from skeptical members of Congress, and resistance inside the building. The hiring of Mike Madsen to handle the office's Washington operations is expected to ease those concerns, but Brown acknowledged he would be spending time in Washington every few weeks to shore up internal and external support.

    Defense Secretary Jim Mattis and Griffin wanted a leader for the agency with a large commercial background, Brown said, “because that's the community we need to access.” Brown wants to create “the ideal exchange where we have access to all the leading technologies from whatever companies we want to work with on the supply side — and on the demand side we have the effective relationships with the Pentagon, throughout the military, so we can be select about what are the most interesting problems to work on in national security that have the greatest impact.”

    The China problem

    Brown's comments on China put him in line with the broader Trump administration, whose officials have repeatedly pointed to China as a competitor, and the Mattis-led Pentagon, which has warned of risk from China both as a military competitor and in influencing American supply chains. DIU, to Brown, has a specific role to play in that race: getting the Pentagon the best commercially available technology, and hence freeing up funding to invest in the military-only capabilities, such as hypersonics, needed to check Chinese ambitions.

    More nebulous but just as important for Brown is a new mission for DIU: doing outreach into the commercial tech community to explain the Pentagon's views on China, and why contributing to the department's efforts is worthwhile. Or as Brown puts it, “making sure the companies in these innovation hubs are aware of the technology race that is going on, so that they're not only viewing China as an economic opportunity but also seeing the geopolitical consequences. Being part of that debate is going to be an important role for DIU.”

    Brown said some of DIU's top priorities will include human systems engineering, information technology, cyber or advanced computing, autonomy, and artificial intelligence. He is also ordering a look at the various processes DIU uses to see if areas can be sped up, and whether other transaction authorities are being used to their full potential. He said he did not expect a significant restructuring of the office, but one priority is getting a human resources leader and new general counsel to smooth the transition of future hires. Capt. Sean Heritage, who has been acting as DIU interim head, will return to being the Navy lead for the office.

    The former CEO acknowledged that his background and high-level ties to the tech community may open doors that would otherwise be shut (Brown was reportedly forced out by Symantec's board in 2016 due to company numbers, making him the third CEO to be removed by the company in the space of four years). He also envisions working with academic institutions located near the three DIU hubs to encourage a debate on the issue.

    Part of DIU's role is explaining to companies why they should support the department's efforts. Silicon Valley has a reputation as being hostile to the military — a reputation that has only increased in recent months following an employee-led pullout by Google of the department's Project Maven, an effort to incorporate AI into analyzing drone footage. Brown, however, said those concerns are largely “overblown,” noting the office is already in discussions with well over 500 different tech firms. “We haven't found there's a reluctance on the part of companies developing the technologies we're interested in working with the Pentagon,” he said. “They are interested in how DIU can help make that process easier for them.”

    Brown thinks he is the man to make that happen. “Contrary to what a lot of folks read or talk about with government, my experience is if you have good ideas and have persistence, you can make that happen.”

    https://www.defensenews.com/pentagon/2018/09/24/former-symantec-boss-takes-over-the-defense-innovation-unit

  • Despite progress, industry faces ‘very tough roadmap’ to field FCAS by 2040

    10 December 2020 | International, Aerospace


    By: Vivienne Machi

    STUTTGART, Germany — After the decade that has been the year 2020, it may seem like 2040 is centuries away. But for Airbus, the scheduled in-service date for Europe's next-generation combat aircraft and weapon system feels just around the corner.

    The Future Combat Air System (FCAS) industry partners have made significant progress on the pan-European, multi-system effort despite the hurdles of the COVID-19 pandemic. However, Airbus, along with its co-contractors Dassault Aviation and Indra, face a “very tough roadmap” to finalize system designs, begin preliminary development, launch production, and get the systems into service, said Bruno Fichefeux, FCAS leader for Airbus, during the company's annual trade media briefing Dec. 9.

    The 18-month Joint Concept Study and Phase 1A of the demonstrator portion are progressing well, but the companies need to move quickly to reach key technology maturation phases, he said. “This is a major de-risking and speeding approach towards the future development program, to ensure that we are on time on expectation.”

    France, Germany and Spain have teamed up on the FCAS program, which includes seven next-generation technology pillars: a sixth-generation fighter jet, multiple “remote carrier” drones, a next-generation weapon system, a brand new jet engine, advanced sensors and stealth technologies, and an “air combat cloud.”

    In September, the nations' three air forces worked together to down-select the five preferred architectures that will help inform the program's follow-on phases, Fichefeux said at the virtual briefing. The goal for 2021 is for FCAS to enter the preliminary demonstrator development phase for the next-generation fighter and the remote carrier aircraft. Those contracts are currently in negotiations, he noted. Starting in 2021, the FCAS will go from spending a “few million” euros to “billions,” he added. “It's a massive step forward [that] we want to initiate next year.”

    Observers can expect to see some major design choices after those negotiations are complete; for example, whether the next-generation fighter will have one or two seats, Fichefeux said.

    Airbus' unmanned aerial systems team has moved forward with efforts related to the remote carrier and manned-unmanned teaming technologies. Jana Rosenmann, the company's UAS leader, said at the briefing that her team had submitted their proposal for Phase 1B of the FCAS demonstrator portion that is scheduled to begin next year. The team is studying two remote carrier designs. “We are looking at both a smaller, expendable remote carrier, as well as a larger, conventional-sized remote carrier, looking in the direction of a loyal wingman to fly together with the combat aircraft,” Rosenmann said. Airbus is the lead contractor for the remote carrier pillar.

    The program has some new partners on board, Fichefeux shared Wednesday. In April, Airbus teamed up with the German Ministry of Defence for an eight-month pilot program bringing non-traditional startups and research institutes into the FCAS fold. Eighteen organizations worked on 14 separate program elements, spanning the entire range of technology pillars. Those efforts have led to concrete results, including a first flight-test-approved launcher of an unmanned aerial system from a transport aircraft; a secure combat cloud demonstrator; and a demonstrator of applied artificial intelligence on radio frequency analysis.

    These 18 partners could be picked up for subcontracts later on in the program, Fichefeux noted. The plan is to “mature these pilots step by step, and then it could develop into real contracting participation within the FCAS development,” he said. “There is a perspective to bring them on board at a later stage.”

    Meanwhile, Airbus also announced Wednesday that its Spanish subsidiary was selected as lead contractor for the low-observability pillar of the program. Airbus Spain will also lead Madrid's contribution to the next-generation fighter pillar. Indra has served as national lead for the entire program since Spain joined FCAS in early 2020, and also heads the sensor pillar while contributing to the combat cloud and simulations efforts. The finalization of the low-observability contract “completes Spain's onboarding as an equal nation across all FCAS activities,” Airbus said in a release. “The signature closes a ten-month process of onboarding Spain as the third nation.”

    The program will begin testing low-observability technologies early in the demonstrator phase, Fichefeux confirmed. Both the fighter aircraft demonstrator and the remote carrier will have stealth capabilities when they begin flight tests, which are expected as early as 2026. Then the team will need to work on issues such as how to factor in the future engine's heat signature, and how to integrate sensors and antennae, Fichefeux said. Low-observability “is part of almost all pillars, and the aim of this maturation is to prove” what works and what won't work, he noted.

    Along with its own deadline, the FCAS program may also face schedule pressure from Europe's second sixth-generation fighter program. The United Kingdom, Italy and Sweden have teamed up on the Tempest program, with a current goal of delivering new fighter aircraft to the nations' militaries by 2035. When asked whether the two fighter programs may converge at some point, Fichefeux noted that that would ultimately be a government decision. “Our responsibility, on the industry side, is just not to lose time waiting,” he said. “If the governments want to define a path of convergence, we will support it in due time.”

    https://www.defensenews.com/global/europe/2020/12/09/despite-progress-industry-faces-very-tough-roadmap-to-field-fcas-by-2040/

  • German defence ministry: working at full speed on procurement proposals

    21 November 2022 | International, Aerospace, Naval, Land, C4ISR


    There will be many more defence procurement proposals heading to the German parliament for approval this year, said a defence ministry spokesperson on Monday, as the war in Ukraine has put renewed focus on bringing the country's military up to speed.
