April 8, 2022 | International, C4ISR, Security

Cyber defence agency gets significant boost in Liberals’ Budget 2022

Canada’s cyber defence agency gets almost $700 million over five years to bolster cyber defences in government and the private sector – and launch offensive ‘cyber operations.’

https://q107.com/news/8743509/budget-2022-boosts-cyber-security/

On the same subject

  • How the Pentagon can improve AI adoption

    July 8, 2019 | International, Other Defence

    By: Graham Gilmer

    The excitement around artificial intelligence today is like the space race of the 1960s, when nations were in fierce competition. Now, the United States is in first place. But continued leadership is not a given, especially as competitors, namely China and Russia, are making significant investments in AI for defense. To maintain our technological advantage, safeguard national security, and lead on the world stage, we have an imperative to invest strategically in AI. Successful and widespread adoption of AI requires that the United States take a human-centric and technologically innovative approach to using AI to help maintain the peace and prosperity of our nation.

    As the Department of Defense and the Joint Artificial Intelligence Center (JAIC) continue their efforts to accelerate AI adoption, they must address three key components of successful adoption: building trust in AI technology, operationalizing AI technologies to reach enterprise scale, and establishing ethical governance standards and procedures to reduce exposure to undue risk.

    Build trust in AI technology

    Fear and distrust hold technology adoption back. This was true during the first three industrial revolutions, as mechanization, factories, and computers transformed the world, and it is the case in today's fourth industrial revolution of AI. The confusion surrounding AI has led teams to abandon applications due to a lack of trust. To build that trust, we must prioritize training, explainability, and transparency.

    Trust in technology is built when leaders have accurate expectations of what it will deliver, mission owners can identify use cases connected to the core mission, and managers understand its true impact on mission performance. Building trust requires that all users, from executives and managers to analysts and operators, receive training on AI-enabled technologies.
    Training involves not only providing access to learning resources, but also creating opportunities for users to put their new skills to use. In its formal AI strategy, Pentagon leaders outlined extensive plans for implementing AI training programs across the department to build a digitally savvy workforce, which will be key to maintaining the United States' leading position in the AI race.

    "Explainable AI" also curbs distrust by showing users how machines reach decisions. Consider computer vision. Users may wonder: how can such a tool sift through millions of images to identify a mobile missile launcher? A computer vision tool equipped with explainable AI could highlight the aspects of the image it uses in identification, in this case elements that look like wheels, tracks, or launch tubes. Explainable AI gives users a "look under the hood," tailored to their level of technical literacy.

    AI technologies must be more than understandable; they must also be transparent. This starts at the granular system level, including providing training-data provenance and an audit trail showing what data, weights, and other inputs helped a machine reach its decision. Building AI systems that are explainable, transparent, and auditable also ties into governance standards and reduces risk.

    Operationalize AI at the enterprise scale

    AI will only be a successful tool if agencies can use it at the enterprise level. At its core, this means moving AI beyond the pilot phase into real-world production across the enterprise, or deployed in the field on edge devices. Successfully operationalizing AI starts early. AI is an exciting new technology, but agencies too enamored with the hype risk missing out on the real benefits. Too many organizations have developed AI pilot capabilities that work in the lab but cannot cope with the added noise of real-world environments. Such short-term thinking results in wasted resources.
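The kind of image explanation described above, highlighting the regions a detector relies on, can be sketched with a simple occlusion-sensitivity pass. This is an illustrative sketch only; the toy detector, image, and function names are assumptions for demonstration, not anything from the article or an actual defense system.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Occlusion sensitivity: slide a masking patch over the image and
    record how much the model's score drops when each region is hidden.
    Large drops mark regions the model relies on for its decision."""
    base = score_fn(image)
    h, w = image.shape
    saliency = np.zeros((h, w), dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # hide this region
            saliency[i:i + patch, j:j + patch] = base - score_fn(occluded)
    return saliency

# Toy stand-in for a classifier: scores an image by the brightness of its
# top-left quadrant (hypothetical; a real model would be a trained network).
def toy_score(img):
    return img[:8, :8].mean()

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # the "feature" the toy detector keys on
sal = occlusion_saliency(img, toy_score)
# sal is highest over the top-left quadrant and zero elsewhere, i.e. the
# map highlights exactly the region the model used.
```

The same idea scales to real detectors: the saliency map can be overlaid on the input image so an analyst sees which pixels (wheel-like or track-like structures, in the article's example) drove the identification.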
    Agencies must think strategically about how the AI opportunities they pursue align with their real-world mission and operations. Leaders must think through the processes and infrastructure needed to extend AI seamlessly across the enterprise at scale. This involves building scalable infrastructure, data stores and standards, a library of reusable tools and frameworks, and security safeguards to protect against adversarial AI. It is equally important to invest in the infrastructure to organize, store, and access data, in the computational resources AI requires (cloud, GPU chips, etc.), and in open, extensible software tools for ease of upgrade and maintenance.

    Establish governance to reduce risk

    Governance standards, controls, and ethical guidelines are critical to ensuring that AI systems are built, managed, and used in a manner that reduces exposure to undue risk. While our allies have engaged in conversations about how to ensure ethical AI, China and Russia have thus far shown little concern for the ethical risks associated with AI. Given this tension, it is imperative that the United States maintain its technological advantage and ethical leadership by establishing governance standards and proactive risk-mitigation tactics. To this end, in May three senators introduced the bipartisan Artificial Intelligence Initiative Act, which includes provisions for establishing a National AI Coordination Office and national standards for testing the effectiveness of AI algorithms.

    Building auditability and validation functions into AI not only ensures trust and adoption, but also reduces risk. By establishing proactive risk-management procedures and processes for continuous testing and validation for compliance purposes, organizations can ensure that their AI systems perform at optimal levels. Governance controls and system auditability also ensure that AI systems and tools are robust against hacking and adversarial AI threats.
    AI could be the most transformative technological development of our lifetime, and it is a necessity for maintaining America's competitive edge. To ensure that we develop AI that users trust and that can scale to the enterprise with reduced risk, organizations must take a calm, methodical approach to its development and adoption. Focusing on these three areas is crucial to protecting our national security, maintaining our competitive advantage, and leading on the world stage.

    Graham Gilmer is a principal at Booz Allen who helps manage artificial intelligence initiatives across the Department of Defense.

    https://www.c4isrnet.com/opinion/2019/07/08/how-the-pentagon-can-improve-ai-adoption/

  • Florence Parly: "We need cyber combatants"

    March 15, 2021 | International, C4ISR, Security

    The Minister for the Armed Forces, Florence Parly, was the guest of Le Monde's Club de l'économie on Thursday, March 11. She spoke in particular about the state of the French defense industrial base and the spread of new threats, especially in the cyber domain. "The military programming law provides for massive investments not only in space, with the renewal of all of our space capabilities, but also in cyber. We need cyber combatants. The objective is to grow our force by 1,000 cyber combatants and to have 4,000 cyber combatants by 2025," she said. The minister also discussed the European future combat aircraft program, as well as the Eurodrone: "we need these very large-scale programs, which I am not sure we could finance on our own, and which constitute a European defense industrial and technological base. The stronger the Europeans are, the more they will invest in their own defense, and the stronger and more effective the Atlantic Alliance, to which these countries belong and are naturally deeply attached, will itself be." Le Monde, March 13

  • With SpaceHub, the Bordeaux region aims to hold its own in space mobility

    September 18, 2020 | International, Aerospace, C4ISR

    Pierre Cheminade

    Defending and promoting the Bordeaux region's position in the global competition over space mobility: that is the ambition of SpaceHub, launched jointly by public and private players including the Nouvelle-Aquitaine Region, Bordeaux Métropole, ArianeGroup and Dassault Aviation. Despite the sale of Rafale aircraft to Greece, which will directly benefit the Dassault plant in Mérignac, the regional aeronautics-space-defense (ASD) sector badly needs positive signals in a difficult climate, particularly for civil aircraft manufacturers and their subcontractors.

    While Technowest has just launched a call for projects to identify and support three new ASD startups, the Nouvelle-Aquitaine regional council, Bordeaux Métropole, Saint-Médard-en-Jalles, ArianeGroup and Dassault Aviation are mobilizing to pool their efforts in the space domain. This new initiative comes after Bordeaux's 2020 presidency of the Community of Ariane Cities was hit head-on by the lockdown and the health crisis, as was the Big Bang festival.

    The SpaceHub initiative, presented on September 7 in the presence of a host of public and private partners (*), aims to demonstrate that the territory holds its own in space mobility in a context more competitive than ever, given the rapid progress being made by New Space players. Two activities will be developed in tandem, closely combining fundamental research and concrete applications: a foresight-analysis center dedicated to space mobility, working with French and international universities and grandes écoles as well as with civil and defense space agencies; and a center for exploring and accelerating space concepts, "to arrive quickly at the best solutions, obtain funding for its innovative projects and generate new business opportunities."
    All of this in a spirit of openness and collaboration. The approach also fits into a medium-term economic-recovery logic, since innovations and technologies initially developed for space tend to flow, in a second phase, into the wider economy and consumer applications, from communications to health, the environment and mobility.

    (*) The project's industrial partners (ArianeGroup and Dassault Aviation), the CEA, representatives of the research ecosystem (Université de Bordeaux, CNRS, Inria, the defense & aerospace chair at Sciences Po Bordeaux, etc.), and major players supporting the project (Thales, Cnes, the Nouvelle-Aquitaine Academic Space Center, the Hyfar-Ara association and the Fondation Bordeaux Université).

    https://objectifaquitaine.latribune.fr/business/aeronautique-et-defense/2020-09-17/avec-spacehub-la-region-bordelaise-veut-tenir-son-rang-dans-la-mobilite-spatiale
