1 October 2020 | International, Land, Other Defence
Ashley Roque
Weight and mobility challenges have forced the US Army to abandon a government-designed autoloader for its Extended Range Cannon Artillery (ERCA) programme, and the service is now looking for help from six tech companies.
Brigadier General John Rafferty, the head of the Long-Range Precision Fires Cross-Functional Team, spoke at a virtual Fires Conference on 29 September and provided an update on programmes under his purview. One notable change is the army's decision not to move forward with an autoloader it had been developing for the new weapon, which is based on BAE Systems' Paladin M109A7 self-propelled howitzer.
“The integration challenge for [it] was too much of a trade with mobility and durability, and some of the results from putting 3,000 miles on a combat vehicle [out at Yuma Proving Ground] weighed up with the centre of gravity issue that we had,” the one-star general told the audience. “It was an easy decision to say that we can't do that.”
Instead, the army is looking to a group of six companies previously picked to help find artillery munition resupply solutions: Actuate, Apptronik, Carnegie Robotics, Hivemapper, Neya Systems, and Pratt Miller. Although Brig Gen Rafferty did not provide in-depth information on the path ahead, he noted that a future capability may not be an autoloader at all.
“I've learned that it was really stupid to go into this saying, ‘Hey, we want an autoloader'. I don't want an autoloader; what we want is an improved rate of fire,” he added.
“What I told them is I don't care if there's [a] cannoneer there setting fuses if we're able to get the six to 10 rounds a minute,” Brig Gen Rafferty continued.
22 June 2020 | International, C4ISR
Andrew Eversden

The Pentagon's primary artificial intelligence hub is already studying how to aim a laser at the correct spot on an enemy vehicle, pinpointing which area to target to inflict the most damage, and identifying the most important messages headed to commanders, officials said June 16. But as part of that work, the Department of Defense needs to carefully build checks and balances into the development process, experts urged.

“Fundamentally I would say there's a requirement ... that there's going to be a mixture of measures taken to ensure the governability of the system from the first stage of the design of the system all the way up through the operations of the system in a combat scenario,” said Greg Allen, chief of strategy and communications at the Joint Artificial Intelligence Center, at the Defense One Tech Summit June 16.

The JAIC is working on several lethality projects through its new joint warfighting initiative, boosted by a new contract award to Booz Allen potentially worth $800 million. “With this new contract vehicle, we have the potential to do even more this next year than we did in the past,” Allen said.

Meanwhile, the Army's Artificial Intelligence Task Force is working on an advanced threat recognition project, and DARPA is exploring complementary AI systems that would identify available combat support assets and quickly plan their route to the area.

Throughout all of the development work, experts from the military and from academia stressed that human involvement and experimentation are critical to ensuring that artificial intelligence assets are trustworthy. The department has released a document of five artificial intelligence ethical principles, but the challenge remains implementing those principles in projects across a department whose disparate services are working on separate artificial intelligence efforts.

“We want safe, reliable and robust systems deployed to our warfighters,” said Heather Roff, senior research analyst at the Johns Hopkins Applied Physics Lab. “We want to be able to trust those systems. We want to have some sort of measure of predictability even if those systems act unpredictably.”

Brig. Gen. Matt Easley, director of the artificial intelligence task force at Army Futures Command, said the service is grappling with those exact challenges, trying to understand how it can insert “checks and balances” as it trains systems and soldiers.

Easley added that the unmanned systems under development by the Army will have to be adaptable to different environments, such as urban or desert scenarios. To ensure that the systems and soldiers are ready for those scenarios, the Army has to complete a series of tests, just like the autonomous vehicle industry.

“We don't think these systems are going to be 100 percent capable right out of the box,” Easley said on the webinar. “If you look at a lot of the evolution of the self-driving cars throughout our society today, they're doing a lot of experimentation. They're doing lots of testing, lots of learning every day. We in the Army have to learn how to go from doing one to two to three vehicle experiments to having many experiments going on every day across all our camp posts and stations.”

Increasingly autonomous systems also mean that there needs to be a cultural shift among all levels of military personnel, who will need to better understand how artificial intelligence is used.
Roff said that operators, commanders and judge advocates general will need to better understand how systems are supposed to operate “to ensure that the human responsibility and governability is there.” “We need to make sure that we have training, tactics, procedures, as well as policies, ensuring where we know the human decision maker is,” Roff said. https://www.c4isrnet.com/it-networks/2020/06/18/artificial-intelligence-systems-need-checks-and-balances-throughout-development/
3 November 2022 | International, Aerospace
Carrier boasts 23 new technologies including the Electromagnetic Aircraft Launch System and the Advanced Weapons Elevator.