December 5, 2022 | International, Aerospace

Vrgineers and multiSIM deliver unclassified F-35-like mixed reality trainer

As fleets of fifth-generation fighters like the F-35 Lightning II continue to grow, Vrgineers Inc., in partnership with multiSIM, has unveiled an unclassified F-35-like Classroom 2.0 Trainer.

https://www.skiesmag.com/vrgineers-and-multisim-deliver-unclassified-f-35-like-mixed-reality-trainer

On the same subject

  • Defense: the Army's battle lab to test its first armed drone

    October 7, 2022 | International, Aerospace

    For the past year and a half, DGA Techniques Terrestres (DGA TT) has been working in Bourges on the first armed drone intended for the French Army's Battle Lab.

  • The French military's plan to reduce its carbon footprint

    July 6, 2020 | International, Aerospace, Naval, Land, C4ISR, Security

    The Minister of the Armed Forces, Florence Parly, presented the armed forces' new "defense energy strategy" on Friday. Announced measures include the launch of a hybrid armored vehicle demonstrator based on the Griffon in 2022, and greater use of simulation for Air Force training sessions to save kerosene. Data-hosting systems will also be reviewed in an effort to reuse heat from the networks for armed forces infrastructure. Research on hydrogen is also planned, notably to power soldiers' fuel cells or to develop hydrogen-propelled mini-drones. By the end of 2021, software that precisely measures the energy consumption of all armed forces sites should also be available. Les Echos, July 3

  • Silicon Valley should work with the military on AI. Here’s why.

    September 17, 2018 | International, C4ISR

    By Editorial Board

    Google decided after an employee backlash this summer that it no longer wanted to help the U.S. military craft artificial intelligence to help analyze drone footage. Now, the military is inviting companies and researchers across the country to become more involved in machine learning. The firms should accept the invitation.

    The Defense Department's Defense Advanced Research Projects Agency will invest up to $2 billion over the next five years in artificial intelligence, a significant increase for the bureau whose goal is promoting innovative research. The influx suggests the United States is preparing to start sprinting in an arms race against China. It gives companies and researchers who want to see a safer world an opportunity not only to contribute to national security but also to ensure a more ethical future for AI.

    The DARPA contracts will focus on helping machines operate in complex real-world scenarios. They will also tackle one of the central conundrums in AI: something insiders like to call “explainability.” Right now, what motivates the results that algorithms return and the decisions they make is something of a black box. That's worrying enough when it comes to policing posts on a social media site, but it is far scarier when lives are at stake. Military commanders are more likely to trust artificial intelligence if they know what it is “thinking,” and the better any of us understands technology, the more responsibly we can use it.

    There is a strong defense imperative to make AI the best it can be, whether to deter other countries from using their own machine-learning capabilities to target the United States, or to ensure the United States can effectively counter them when they do. Smarter technologies, such as improved target recognition, can save civilian lives, and allowing machines to perform some tasks instead of humans can protect service members.

    But patriotism is not the only reason companies should want to participate. They know better than most in government the potential these technologies have to help and to harm, and they can leverage that knowledge to maximize the former and minimize the latter. Because DARPA contracts are public, the work researchers do will be transparent in a way that Project Maven, the program that caused so much controversy at Google, was not. Employees aware of what their companies are working on can exert influence over how those innovations are used, and the public can chime in as well.

    DARPA contractors will probably develop products with nonlethal applications, like improved self-driving cars for convoys and autopilot programs for aircraft. But the killer robots that have many people worried are not outside the realm of technological possibility. The future of AI will require outlining principles that explain how what is possible may differ from what is right. If the best minds refuse to contribute, worse ones will.

    https://www.washingtonpost.com/opinions/silicon-valley-should-work-with-the-military-on-ai-heres-why/2018/09/12/1085caee-b534-11e8-a7b5-adaaa5b2a57f_story.html
