Back to news

September 6, 2018 | International, Aerospace, Naval, Land, C4ISR

Artificial intelligence expert gets top job at French defense innovation agency


PARIS — French Armed Forces Minister Florence Parly has appointed Emmanuel Chiva, a specialist in artificial intelligence and training simulation, as director of the newly formed agency for defense innovation, the ministry said.

Chiva took up the post Sept. 1, when the innovation office was officially set up, the ministry said in a Sept. 4 statement. Parly made the appointment in consultation with Joël Barre, head of the Direction Générale de l'Armement procurement office. The innovation agency will report to the DGA.

The agency will be the key player in a new strategy for innovation, seeking “to bring together all the actors in the ministry and all the programs which contribute to innovation in defense,” Parly said in an Aug. 28 speech to a conference held by Medef, an employers' association.

The innovation office will be open to Europe, while allowing experiments to stay close to their operational users, she said.

Parly has set a budget of €1 billion (U.S. $1.2 billion) for the agency, which will seek to coordinate attempts to apply new technology to military applications.

Chiva has more than 20 years of experience in AI and training simulation. He previously held a senior post for strategy and development at Agueris, a specialist in training simulation for land weapon systems.

Agueris is a unit of CMI Defence, a Belgian company specializing in guns and turrets for armored vehicles. Agueris was on the CMI stand at the Eurosatory trade show for land weapons in June. Agueris held three conferences on AI, with Chiva speaking at a roundtable debate on innovation.

Chiva is a graduate of the École Normale Supérieure and a specialist in biomathematics, the study of the application of mathematics to biology.

“His appointment perfectly illustrates my vision of defense innovation: open to research and the civil economy, in which entrepreneurship is not a concept but a reality,” Parly said.

https://www.defensenews.com/industry/techwatch/2018/09/05/artificial-intelligence-expert-gets-top-job-at-french-defense-innovation-agency

On the same subject

  • Drones are now a permanent part of the LAPD’s arsenal

    September 20, 2019 | International, Aerospace, Security

    Drones are now a permanent part of the LAPD’s arsenal

    By CINDY CHANG

    Drones became a permanent part of the Los Angeles Police Department's crime-fighting arsenal Tuesday, despite opposition from privacy advocates who fear the remote-controlled aircraft will be used to spy on people. In a yearlong trial, the LAPD's SWAT team deployed drones four times, mostly when suspects were barricaded and the device provided a bird's-eye view of the property's nooks and crannies.

    On Tuesday, the five-member civilian Police Commission unanimously approved new regulations that enshrine the drones' use in specific situations, including active shooters, barricaded suspects and search warrants. The drones will not be equipped with weapons or facial recognition software, according to the regulations, which are similar to those governing the trial program. In July, at Chief Michel Moore's recommendation, the use of drones was expanded beyond SWAT to include the bomb squad in neutralizing explosives and sweeping large public events for radioactive devices.

    Drones “provide invaluable information to decision makers while decreasing the risk to human life,” Moore wrote in a July 3 report, noting that everyone is safer when the devices check out a dangerous situation instead of officers going in blind. The LAPD joins about 600 other law enforcement agencies around the country that use drones, according to a 2018 report by Bard College's Center for the Study of the Drone.

    The new regulations will ensure that the drones are not “being used in a flippant manner,” Asst. Chief Horace Frank, who runs the department's counter-terrorism and special operations bureau, told the Police Commission on Tuesday. The LAPD's drone regulations are more restrictive than those of many other agencies, Frank said. Each drone deployment must be approved by a commander and a deputy chief, and the Police Commission will receive an annual report.

    Asked by Commissioner Eileen Decker whether drones can help de-escalate volatile situations, Frank cited a June 15 incident when a drone flew near a man who had barricaded himself in a trucking yard. “The minute we deployed the device at the entrance to the trailer and he saw it, he gave up,” Frank said.

    Activists said the LAPD and Police Commission have disregarded citizens who expressed reservations about the drones in community meetings and online surveys. One activist, Michael Novick, predicted that the LAPD would expand drone usage and infringe on civil liberties. “We're witnessing the exact definition of mission creep,” Novick said. “Now you're upgrading. You approved a temporary pilot project. You're going to normalize it with this step. ... The next step will be they'll come back and say, ‘We actually need the ability to have facial recognition.'”

    The LAPD's drone fleet will remain at four strong, Frank said. But the DJI Spark devices used in the pilot program will be replaced by DJI Mavics, which have better indoor flying capabilities, extended flight time and lights for navigating in the dark. The models are similar to those used by hobbyists. The Police Commission accepted a $6,645 donation from the Los Angeles Police Foundation to purchase the Mavics, as well as a donation of drone flight tracking software from Measure Aerial Intelligence.

    As the commission approved the drone regulations and donations, the audience broke into chants of “Shame! Shame!” Moore said he is mindful of “concerns of Big Brother and invasion of privacy and civil liberties.” “We're committed to striking the right balance that ... protects all of our community — their rights of privacy but also their public safety and their right to exist without threats of dangers that this tool can be used in some instances to mitigate,” he told reporters after the meeting.

    https://www.latimes.com/california/story/2019-09-10/drones-are-now-a-permanent-part-of-the-lapds-arsenal

  • Intelligence Agencies Release AI Ethics Principles

    July 24, 2020 | International, C4ISR, Security

    Intelligence Agencies Release AI Ethics Principles

    Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.

    By KELSEY ATHERTON

    ALBUQUERQUE — Today, the Office of the Director of National Intelligence released what is the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0, approved by DNI John Ratcliffe last month.

    The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

    “These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

    The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google, who called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military.

    While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex like the one put forth by the Department of Defense. “We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology.
    Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

    Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved, but the technology is evolving too fast and too unpredictably to carve commandments in stone for all time. Here are the six principles, in the document's own words:

      • Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

      • Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

      • Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

      • Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

      • Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

      • Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

    The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm parse that data and summarize it for a human is a relatively recent feature. Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

    “We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think, though, what we can do is talk in general terms about some of the things that we are doing.”

    Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data processing AI accountable to policymakers. For the general public, as it offers comment on intelligence service use of AI, examples will have to come from outside classification, and will likely center on examples of AI in the private sector. “We think there's a big overlap between what the intelligence community needs and, frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles. He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies.
Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to. “There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really really common problems.” https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

  • Finland's top leaders press for rapid NATO membership

    May 13, 2022 | International, Aerospace, Naval, Land, C4ISR, Security

    Finland's top leaders press for rapid NATO membership

    The push is expected to strongly influence Sweden's own decision-making process regarding NATO membership.

All news