May 19, 2023 | International, Naval

US Navy may accelerate investments to extend some Ohio subs’ lives

The Navy is considering extending the lives of a few Ohio-class submarines in fiscal 2025 to hedge against any delays as the Columbia class is built.

https://www.defensenews.com/naval/2023/05/19/us-navy-may-accelerate-investments-to-extend-some-ohio-subs-lives/

On the same subject

  • Europe’s Industry Finally Gets MALE Drone Program

    March 8, 2022 | International, Aerospace

  • Can AI spot active shooters?

    September 9, 2022 | International, C4ISR

    The Air Force is looking at a company called ZeroEyes to help notify Security Forces of armed intruders on base and facility perimeters.

  • Pentagon Seeks a List of Ethical Principles for Using AI in War

    January 7, 2019 | International, Aerospace, C4ISR

    BY PATRICK TUCKER — An advisory board is drafting guidelines that may help shape worldwide norms for military artificial intelligence — and woo Silicon Valley to defense work.

    U.S. defense officials have asked the Defense Innovation Board for a set of ethical principles for the use of artificial intelligence in warfare. The principles are intended to guide a military whose interest in AI is accelerating — witness the new Joint Artificial Intelligence Center — and to reassure potential partners in Silicon Valley about how their AI products will be used.

    Today, the primary document laying out what the military can and can't do with AI is a 2012 doctrine that says a human being must have veto power over any action an autonomous system might take in combat. It's brief, just four pages, and doesn't touch on any of the uses of AI for decision support, predictive analytics, and the like, where players such as Google, Microsoft, and Amazon are making fast strides in commercial environments.

    “AI scientists have expressed concern about how DoD intends to use artificial intelligence. While the DoD has a policy on the role of autonomy in weapons, it currently lacks a broader policy on how it will use artificial intelligence across the broad range of military missions,” said Paul Scharre, the author of Army of None: Autonomous Weapons and the Future of War.

    Josh Marcuse, executive director of the Defense Innovation Board, said crafting these principles will help the department “safely and responsibly” employ new technologies. “I think it's important when dealing with a field that's emergent to think through all the ramifications,” he said. The Board, a group of Silicon Valley corporate and thought leaders chaired by former Google and Alphabet chairman Eric Schmidt, will make the list public at its June meeting; Defense Department leaders will then take the principles under consideration.

    Marcuse believes the Pentagon can be a leader not just in employing AI but in establishing guidelines for its safe use — just as the military pioneered safety standards for aviation. “The Department of Defense should lead in this area as we have with other technologies in the past. I want to make sure the department is not just leading in developing AI for military purposes but also in developing ethics to use AI in military purposes,” he said.

    The effort is, in part, a response to what happened with Project Maven, the Pentagon's flagship AI project with Google as its partner. That effort applied artificial intelligence to the vast store of video and image footage the Defense Department gathers to guide airstrikes. Defense officials emphasized repeatedly that the AI was intended only to cut down the workload of human analysts, but they also acknowledged that the ultimate goal was to help the military do what it does better, which sometimes means finding and killing humans. An employee revolt ensued at Google: employees resigned en masse, and the company said it would not renew the contract.

    Scharre, who leads the Technology and National Security Program at the Center for a New American Security, said, “One of the challenges for things like Project Maven, which uses AI technology to process drone video feeds, is that some scientists expressed concern about where the technology may be heading. A public set of AI principles will help clarify DoD's intentions regarding artificial intelligence.”

    Full article: https://www.defenseone.com/technology/2019/01/pentagon-seeks-list-ethical-principles-using-ai-war/153940/