September 4, 2018 | International, Land, C4ISR

New military drone roadmap ambivalent on killer robots


Drones are everywhere in the Pentagon today. While unpeopled vehicles are most closely associated with the Air Force and targeted killing campaigns, remotely controlled robots are in every branch of the military and used across all combatant commands. The fiscal year 2018 defense authorization contained the largest-ever budget for drones and robots across the services, a sign of just how much of modern warfare involves these machines.

Which is perhaps why, when the Department of Defense released its latest roadmap for unmanned systems, the map came in at a punchy 60 pages, far shy of the 160-page tome released in 2013. This is a document less about a military imagining a future of flying robots and more about managing a present that includes them.

The normalization of battlefield robots

Promised since at least spring 2017, the new roadmap focuses on interoperability, autonomy, network security and human-machine collaboration.

The future of drones, and of unpeopled ground and water vehicles, is as tools that anyone can use, that can do most of what is asked of them on their own, that communicate without giving away the information they are sharing, and that work to make the humans using them function as more-than-human.

This is about a normalization of battlefield robots, the same way that mechanized warfare moved from a theoretical approach to the standard style of fighting by nations a few generations ago. Network security isn't as flashy a highlight as “unprecedented battlefield surveillance by flying robot,” but it's part of making sure that those flying cameras don't, say, transmit easily intercepted data over an open channel.

“Future warfare will hinge on critical and efficient interactions between war-fighting systems,” states the roadmap. “This interoperable foundation will transmit timely information between information gatherers, decision makers, planners and war fighters.”

A network is nothing without its nodes, and the nodes that need to be interoperable here are a vast web of sensors and weapons, distributed among people and machines, that will have to work in concert in order to be worth the networking at all. The very nature of war trends toward pulling apart networks, toward isolation. Those nodes each become a point at which a network can be broken, unless they are redundant or autonomous.

Where will the lethal decision lie?

Nestled in the section on autonomy, the other signpost feature of the Pentagon's roadmap, is a small chart about the way forward. In that chart is a little box labeled “weaponization,” and in that box it says the near-term goals are DoD strategy assessment and lethal autonomous weapon systems assessment.

Lethal autonomous weapon systems are of such international concern that a meeting of state dignitaries and humanitarian officials was underway in Geneva at the very moment this roadmap was released. That intergovernmental body is hoping to decide whether or not militaries will develop robots that can kill of their own volition, according to however they've been programmed.

The Pentagon, at least in the roadmap, seems content to wait for its own assessment and the verdict of the international community before developing thinking weapons. Hedging on this, the same chart lists “Armed Wingman/Teammate (Human decision to engage)” as the goal for somewhere between 2029 and 2042.

“Unmanned systems with integrated AI, acting as a wingman or teammate with lethal armament could perform the vast majority of the actions associated with target identification, tracking, threat prioritization, and post-attack assessment,” reads the report.

“This level of automation will alleviate the human operator of task-level activities associated with the engagement of a target, allowing the operator to focus on the identified threat and the decision to engage.”

The roadmap sketches out a vision of future war that hands off many decisions to autonomous machines, everything from detection to targeting, then loops the lethal decision back to a human responsible for making the call on whether or not the robot should use its weapons on the targets it selected.

Humans as battlefield bot-shepherds, guiding autonomous machines into combat and signing off on the exact attacks, is a possible future for robots in war, one that likely stays just within the boundaries of still-unsettled international law.

Like its predecessor, this drone roadmap is plotting a rough path through newly charted territory. While it leans heavily on the lessons of the present, the roadmap doesn't attempt to answer on its own the biggest question of what robots will be doing on the battlefields of tomorrow. That is, fundamentally, a political question, and one that much of the American public itself doesn't yet have strong feelings about.

https://www.c4isrnet.com/unmanned/2018/08/31/new-military-drone-roadmap-ambivalent-on-killer-robots

On the same subject

  • Intelligence Agencies Release AI Ethics Principles

    July 24, 2020 | International, C4ISR, Security


    Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.

    By KELSEY ATHERTON

    ALBUQUERQUE — Today, the Office of the Director of National Intelligence released the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0, approved by DNI John Ratcliffe last month. The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

    “These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

    The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google. These workers called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military. While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex like that put forth by the Department of Defense.

    “We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

    Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved, but the technology is evolving too fast and too unpredictably to try to carve commandments in stone for all time.

    Here are the six principles, in the document's own words:

    • Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

    • Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

    • Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

    • Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

    • Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

    • Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

    The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm try to parse that data and summarize it for a human is a relatively recent feature. Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

    “We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think, though, what we can do is talk in general terms about some of the things that we are doing.”

    Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data-processing AI accountable to policymakers. For the general public, as it offers comment on intelligence service use of AI, examples will have to come from outside classification, and will likely center on examples of AI in the private sector.

    “We think there's a big overlap between what the intelligence community needs and, frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles. He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies.

    Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that, when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to.

    “There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really, really common problems.”

    https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

  • Cyberdefense: Thales invests in its Cholet site

    February 3, 2021 | International, C4ISR, Security


    Thales is investing in its infrastructure at Cholet, where a brand-new research and development (R&D) center is to be built alongside its oldest production site in France, founded in 1936. The site will specialize in secure information systems and cyberdefense activities. “At this site, we want to be able to run a complete cycle: develop a project in R&D, qualify it, produce and maintain it, and handle staff training,” explains Jean-Pascal Laporte, head of the Cholet facility and industrial director for Thales's communications activities. Between 400 and 500 new employees will be recruited at the Cholet site starting this year. Engineer and doctoral-level profiles are the most sought after, in the fields of electronics and software development, along with experts in cybersecurity and satellite communications. The company, whose workforce is currently 23% women, has also made significantly raising that percentage a priority objective. Le Parisien, February 1, 2021

  • The European Commission launches industrial defense projects

    June 16, 2020 | International, Aerospace, Naval, Land, C4ISR, Security


    On June 15, the European Commission launched 16 pan-European defense industrial projects and three disruptive-technology projects. They will receive 205 million euros in funding through a pilot defense fund, the EDIDP (European Defence Industrial Development Programme), endowed with a total of 525 million euros over the 2019/2020 period: technologies covering drones, space (communications networks and military technology for satellites), anti-tank missiles, unmanned ground vehicles, and cyber. Across the 19 projects, nine of which are PESCO (Permanent Structured Cooperation) projects, 24 member states are represented through their companies (223 involved, including 83 SMEs). New European projects are expected to be signed at the end of the year, including the European MALE drone Eurodrone (100 million euros) and the ESSOR interoperable military communications project (37 million euros). La Tribune, June 15, 2020

All news