1 January 2024 | International, Naval

New in 2024: Testing to decide future of new Marine landing ship

On the same subject

  • Googlers headline new commission on AI and national security

    22 January 2019 | International, C4ISR

    Googlers headline new commission on AI and national security

    By: Kelsey D. Atherton

    Is $10 million and 22 months enough to shape the future of artificial intelligence? Probably not, but inside the fiscal 2019 national defense policy bill is a modest sum set aside for the creation and operations of a new National Security Commission for Artificial Intelligence. And in a small way, that group will try. The commission's full membership, announced Jan. 18, includes 15 people across the technology and defense sectors. Led by Eric Schmidt, formerly of Google and now a technical adviser to Google parent company Alphabet, the commission is co-chaired by Robert Work, a former undersecretary of defense who is presently at the Center for a New American Security.

    The group is situated as independent within the executive branch, and its scope is broad. The commission is to examine the competitiveness of the United States in artificial intelligence, how the US can maintain a technological advantage in AI, and foreign developments and investments in AI, especially as they relate to national security. In addition, the authorization for the commission tasks it with considering means to stimulate investment in AI research and AI workforce development. The commission is expected to consider the risks of military uses of AI by the United States or others, and the ethics related to AI and machine learning as applied to defense. Finally, it is to look at how to establish data standards across the national security space, and to consider how the evolving technology can be managed. All of this has been discussed in some form in the national security community for months, or years, but now a formal commission will help lay out a blueprint. That is several tall orders, all of which will lead to at least three reports. The first report is set by law to be delivered no later than February 2019, with annual reports to follow in August of 2019 and 2020. The commission is set to wrap up its work by October 2020.

    Inside the authorization is a definition of artificial intelligence for the commission to work from. Or, well, five definitions:

    • Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
    • An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
    • An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
    • A set of techniques, including machine learning, that is designed to approximate a cognitive task.
    • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.

    Who will be the people tasked with navigating AI and the national security space? Mostly the people already developing and buying the technologies that make up the modern AI sector. Besides Schmidt, the list includes several prominent players from the software and AI industries, including Oracle co-CEO Safra Catz, Director of Microsoft Research Eric Horvitz, CEO of Amazon Web Services Andy Jassy, and Head of Google Cloud AI Andrew Moore. After 2018's internal protests at Google, Microsoft, and Amazon over the tech sector's involvement in Pentagon contracts, especially at Google, one might expect to see some skepticism of AI use in national security from Silicon Valley leadership. Instead, Google, which responded to employee pressure by declining to renew its Project Maven contract, is functionally represented twice: by Moore and by Schmidt. Academia is also present on the commission, with a seat held by Dakota State University president Jose-Marie Griffiths.

    CEO Ken Ford will represent the Florida Institute for Human & Machine Cognition, which is tied to Florida's State University System. Caltech and NASA will be represented on the commission by Steve Chien, the supervisor of the Jet Propulsion Laboratory's AI group. The intelligence sector will be present at the table in the form of In-Q-Tel CEO Chris Darby and former Director of the Intelligence Advanced Research Projects Activity Jason Matheny. Rounding out the commission are William Mark, the director of the information and computing sciences division at SRI, and a pair of consultants: Katrina McFarland of Cypress International and Gilman Louie of Alsop Louie Partners. Finally, civil society groups are represented by Open Society Foundation fellow Mignon Clyburn.

    Balancing the security risks, military potential, ethical considerations, and workforce demands of the new and growing sector of machine cognition is a daunting task. Finding a way to bend the federal government to its conclusions will be tricky in any political climate, though perhaps especially so in the present moment, when workers in the technology sector are vocal about fears of the abuse of AI and the government struggles to clearly articulate technology strategies. The composition of the commission suggests that whatever conclusions it reaches will be agreeable to the existing technology sector, amenable to the intelligence services, and at least workable for academia. Still, the proof is in the doing, and anyone interested in how the AI sector thinks the federal government should think about AI for national security should look forward to the commission's initial report. https://www.c4isrnet.com/c2-comms/2019/01/18/googlers-dominate-new-comission-on-ai-and-national-security/

  • Rheinmetall is supplying qualification rounds of a new generation of tank ammunition for a joint qualification of the Bundeswehr and the British Army

    9 October 2024 | International, Land

    Rheinmetall is supplying qualification rounds of a new generation of tank ammunition for a joint qualification of the Bundeswehr and the British Army

    The new 120 mm x 570 KE2020Neo kinetic energy ammunition continues the successful series of kinetic energy (KE) rounds from Rheinmetall.

  • The US Military Is Genetically Engineering New Life Forms To Detect Enemy Subs

    7 December 2018 | International, Naval, C4ISR

    The US Military Is Genetically Engineering New Life Forms To Detect Enemy Subs

    BY PATRICK TUCKER

    The Pentagon is also looking at living camouflage, self-healing paint, and a variety of other applications of engineered organisms, but the basic science remains a challenge. How do you detect submarines in an expanse as large as the ocean? The U.S. military hopes that common marine microorganisms might be genetically engineered into living tripwires that signal the passage of enemy subs, underwater vessels, or even divers. It's one of many potential military applications for so-called engineered organisms, a field that promises living camouflage that reacts to its surroundings to better avoid detection, new drugs and medicines to help deployed forces survive in harsh conditions, and more. But the research is in its very early stages, military officials said. The Naval Research Laboratory, or NRL, is supporting the research. Here's how it would work: you take an abundant sea organism, like Marinobacter, and change its genetic makeup so that it reacts to certain substances left by enemy vessels, divers, or equipment. These could be metals, fuel exhaust, human DNA, or some molecule that's not found naturally in the ocean but is associated with, say, diesel-powered submarines. The reaction could take the form of electron loss, which could be detectable to friendly sub drones. Full article: https://www.defenseone.com/technology/2018/12/us-military-genetically-engineering-new-life-forms-detect-enemy-subs/153200/

All news