January 25, 2024 | International, Land
December 17, 2024 | International, Aerospace
Opinion: American warplanes could be kept from joining the fray of a conflict with China for days or weeks, analysts have concluded.
February 11, 2021 | International, Land
by Ashley Roque

US Army soldiers are in the midst of a five-month assessment of two different ‘light tank' prototypes – one version by BAE Systems and the other by General Dynamics Land Systems (GDLS) – but the former company has yet to deliver any of its vehicles, according to industry and the service.

The army kicked off its Mobile Protected Firepower (MPF) soldier vehicle assessment (SVA) on 4 January, and it is anticipated to continue through to June, Ashley John, director for public and congressional affairs for the Program Executive Office for Ground Combat Systems, told Janes on 27 January. Under the larger programme, both BAE Systems and GDLS are under contract to deliver 12 MPF prototypes to the army, and soldiers are slated to test out four vehicles of each variant. However, this testing phase began with vehicles from only one company – GDLS.

“We have received 12 prototypes in total, and four ballistic hull and turrets,” John said. “We will continue to receive the remaining prototypes throughout fiscal year 2021.”

Although John did not disclose which company produced the delivered prototypes, a GDLS spokesperson confirmed that the company delivered its 12th and final prototype to the army at the end of December 2020. GDLS's completion of its deliveries means BAE Systems has so far delivered only two ballistic hulls to the service.

https://www.janes.com/defence-news/news-detail/us-army-begins-light-tank-soldier-assessment-without-bae-systems-prototype
September 17, 2018 | International, C4ISR
By Editorial Board

GOOGLE DECIDED after an employee backlash this summer that it no longer wanted to help the U.S. military craft artificial intelligence to help analyze drone footage. Now, the military is inviting companies and researchers across the country to become more involved in machine learning. The firms should accept the invitation.

The Defense Department's Defense Advanced Research Projects Agency will invest up to $2 billion over the next five years in artificial intelligence, a significant increase for the agency, whose goal is promoting innovative research. The influx suggests the United States is preparing to start sprinting in an arms race against China. It gives companies and researchers who want to see a safer world an opportunity not only to contribute to national security but also to ensure a more ethical future for AI.

The DARPA contracts will focus on helping machines operate in complex real-world scenarios. They will also tackle one of the central conundrums in AI: something insiders like to call “explainability.” Right now, what motivates the results that algorithms return and the decisions they make is something of a black box. That's worrying enough when it comes to policing posts on a social media site, but it is far scarier when lives are at stake. Military commanders are more likely to trust artificial intelligence if they know what it is “thinking,” and the better any of us understands technology, the more responsibly we can use it.

There is a strong defense imperative to make AI the best it can be, whether to deter other countries from using their own machine-learning capabilities to target the United States, or to ensure the United States can effectively counter them when they do. Smarter technologies, such as improved target recognition, can save civilian lives, and allowing machines to perform some tasks instead of humans can protect service members. But patriotism is not the only reason companies should want to participate.
They know better than most in government the potential these technologies have to help and to harm, and they can leverage that knowledge to maximize the former and minimize the latter. Because DARPA contracts are public, the work researchers do will be transparent in a way that Project Maven, the program that caused so much controversy at Google, was not. Employees aware of what their companies are working on can exert influence over how those innovations are used, and the public can chime in as well.

DARPA contractors will probably develop products with nonlethal applications, like improved self-driving cars for convoys and autopilot programs for aircraft. But the killer robots that have many people worried are not outside the realm of technological possibility. The future of AI will require outlining principles that explain how what is possible may differ from what is right. If the best minds refuse to contribute, worse ones will.

https://www.washingtonpost.com/opinions/silicon-valley-should-work-with-the-military-on-ai-heres-why/2018/09/12/1085caee-b534-11e8-a7b5-adaaa5b2a57f_story.html