January 29, 2020 | International, Aerospace

Germany walks away from $2.5 billion purchase of US Navy’s Triton spy drones

By: Sebastian Sprenger

COLOGNE, Germany — The German government has canceled plans to buy Northrop Grumman-made Triton drones to the tune of $2.5 billion, opting instead for manned planes carrying eavesdropping sensors.

The decision to buy Bombardier Global 6000 aircraft comes after officials became convinced that the Global Hawk derivatives would be unable to meet the safety standards needed for flying through European airspace by 2025, a target date for Berlin's NATO obligations.

A defense ministry spokeswoman told Defense News the Triton option had grown “significantly more expensive” compared with earlier planning assumptions.

The U.S. State Department in April 2018 cleared Germany's request to purchase four MQ-4C Triton drones for signals intelligence missions under the country's PEGASUS program, short for “Persistent German Airborne Surveillance System.” The program includes a sensor, dubbed “ISIS-ZB” and made by Hensoldt, for intercepting communications and locating targets by their electromagnetic signature.

For years, the German Defense Ministry had been banking on the Triton purchase coming with a pre-installed safety-technology package that would be easily approved by European air traffic authorities. But officials saw their hopes dashed when Italy recently issued a military-type certificate for a sister drone — NATO's Alliance Ground Surveillance fleet of Global Hawks, stationed in Sigonella, Sicily — that prescribes tight restrictions on flights over the continent.

Manned aircraft like the envisioned Global 6000 are allowed to routinely fly alongside civilian traffic, a prospect that the Germans see as more palatable than dealing with drone-specific airspace corridors.

Berlin hopes to catch the tail end of Bombardier's Global 6000 manufacturing run, as the model is being phased out in favor of an upgrade. While that strategy could yield a better price, Berlin needs to move soon before the production line goes cold, according to officials.

Letting drones fly in the same airspace as civilian traffic remains an unresolved problem, as the requisite sensing technology and the regulatory framework are still emerging. Germany previously tried filling its signals-intelligence gap with the Euro Hawk, but the project tanked in 2013 after spending $700 million because officials underestimated the trickiness of attaining airworthiness qualification.

With the Triton gone, Germany's next ambition for a fully approved unmanned aircraft lies with the so-called Eurodrone, a joint program with France. Officials have said the program is designed from the start with manned-unmanned airspace integration in mind.

https://www.defensenews.com/breaking-news/2020/01/28/germany-walks-away-from-25-billion-purchase-of-us-navys-triton-spy-drones

On the same subject

  • Lockheed Martin on track to increase production of weapons systems

    February 15, 2024 | International, Land

  • ‘More with less’: Lacking parts, airmen scramble to keep B-52s flying

    February 12, 2024 | International, Aerospace

    As the B-52H Stratofortress tops more than six decades in service, it's grown increasingly temperamental.

  • Artificial intelligence systems need ‘checks and balances’ throughout development

    June 22, 2020 | International, C4ISR

    By: Andrew Eversden

    The Pentagon's primary artificial intelligence hub is already studying how to aim a laser at the correct spot on an enemy vehicle, pinpointing which area to target to inflict the most damage, and identifying the most important messages headed to commanders, officials said June 16. But as part of that work, the Department of Defense needs to carefully implement checks and balances into the development process, experts urged.

    “Fundamentally I would say there's a requirement ... that there's going to be a mixture of measures taken to ensure the governability of the system from the first stage of the design of the system all the way up through the operations of the system in a combat scenario,” said Greg Allen, chief of strategy and communications at the Joint Artificial Intelligence Center, at the Defense One Tech Summit June 16.

    The JAIC is working on several lethality projects through its new joint warfighting initiative, boosted by a new contract award to Booz Allen potentially worth $800 million. “With this new contract vehicle, we have the potential to do even more this next year than we did in the past,” Allen said.

    Meanwhile, the Army's Artificial Intelligence Task Force is working on an advanced threat-recognition project, and DARPA is exploring complementary AI systems that would identify available combat support assets and quickly plan their route to the area.

    Throughout all of the development work, experts from the military and from academia stressed that human involvement and experimentation were critical to ensuring that artificial intelligence assets are trustworthy. The department has released a document of five artificial intelligence ethical principles, but the challenge remains implementing those principles into projects across a department with disparate services working on separate artificial intelligence efforts.

    “We want safe, reliable and robust systems deployed to our warfighters,” said Heather Roff, senior research analyst at the Johns Hopkins Applied Physics Lab. “We want to be able to trust those systems. We want to have some sort of measure of predictability even if those systems act unpredictably.”

    Brig. Gen. Matt Easley, director of the artificial intelligence task force at Army Futures Command, said the service is grappling with those exact challenges, trying to understand how it can insert “checks and balances” as it trains systems and soldiers. Easley added that the unmanned systems under development by the Army will have to be adaptable to different environments, such as urban or desert scenarios. To ensure that systems and soldiers are ready for those scenarios, the Army has to complete a series of tests, much like the autonomous vehicle industry.

    “We don't think these systems are going to be 100 percent capable right out of the box,” Easley said on the webinar. “If you look at a lot of the evolution of the self-driving cars throughout our society today, they're doing a lot of experimentation. They're doing lots of testing, lots of learning every day. We in the Army have to learn how to go from doing one to two to three vehicle experiments to have many experiments going on every day across all our camps, posts and stations.”

    Increasingly autonomous systems also mean that a cultural shift is needed among all levels of military personnel, who will have to better understand how artificial intelligence is used. Roff said that operators, commanders and judge advocates general will need to understand how systems are supposed to work “to ensure that the human responsibility and governability is there.”

    “We need to make sure that we have training, tactics, procedures, as well as policies, ensuring where we know the human decision maker is,” Roff said.
https://www.c4isrnet.com/it-networks/2020/06/18/artificial-intelligence-systems-need-checks-and-balances-throughout-development/

All news