7 December 2023 | International, Aerospace
Ospreys had history of safety issues long before they were grounded
More than 50 troops have died in Osprey accidents since the early 1990s.
6 April 2018 | International, C4ISR
By: Mark Pomerleau
Prior to its deployment to Afghanistan, the Army's newest unit received special assistance in cyber and electronic warfare techniques.
The 1st Security Force Assistance Brigade, or SFAB, is a first-of-its-kind specialized group designed solely to advise and assist local, indigenous forces. As such, these units need specialized equipment, and they received training from Army Cyber Command on offensive and defensive cyber operations, as well as electronic warfare and information operations, Army Cyber Command commander Lt. Gen. Paul Nakasone wrote in prepared testimony before the Senate Armed Services Cyber Subcommittee in early March.
The distinct makeup of the unit, smaller than a typical brigade and lacking the full resources and technical expertise of one, means the operators at the tactical edge have to do the networking and troubleshooting themselves in addition to advising battalion-sized Afghan units. The command's tailored support sought to show SFAB personnel how best to leverage a remote enterprise to achieve mission effects, according to the spokesman. That means knowing how to perform electronic warfare and cyber tasks is part of every soldier's basic skill set.
This was unique support with tailored training to meet the SFAB's advisory role mission, an Army Cyber Command spokesman said.
Team members from Army Cyber Command specializing in offensive and defensive cyber served as instructors during the SFAB's validation exercise at the Joint Readiness Training Center at Fort Polk, Louisiana, in January, a command spokesman told Fifth Domain. Electronic warfare personnel from the 1st SFAB were also briefed on how cyber capabilities currently in use in Afghanistan support U.S. forces.
Specifically, the trainers provided the unit's communications teams with best practices for hardening networks.
The Army Cyber Command team discussed with the SFAB the planning factors for working with down-range networks and mission-relevant cyber terrain; specifically, the need to maintain situational awareness of the blue network and the ability to identify key cyber terrain, the Army Cyber Command spokesman said.
The unit was also given lessons on implementing defensive measures using organic tools.
https://www.fifthdomain.com/dod/army/2018/04/05/in-armys-newest-unit-everyone-learns-cyber-skills/
8 July 2019 | International, Other defense
By: Graham Gilmer

The excitement of artificial intelligence today is like the space race of the 1960s, when nations were in fierce competition. Now, the United States is in first place. But continued leadership is not a given, especially as competitors, namely China and Russia, are making significant investments in AI for defense. To maintain our technological advantage, safeguard national security, and lead on the world stage, we have an imperative to invest strategically in AI.

The successful and widespread adoption of AI requires that the United States take a human-centric and technologically innovative approach to using AI to help maintain the peace and prosperity of our nation. As the Department of Defense and the Joint Artificial Intelligence Center (JAIC) continue their efforts to accelerate AI adoption, they must address three key components of successful adoption: building trust in AI technology, operationalizing AI technologies to reach enterprise scale, and establishing ethical governance standards and procedures to reduce exposure to undue risk.

Build trust in AI technology

Fear and distrust hold technology adoption back. This was true during the first three industrial revolutions, as mechanization, factories, and computers transformed the world, and it is the case in today's fourth industrial revolution of AI. The confusion surrounding AI has led to teams abandoning applications due to a lack of trust. To build that trust, we must prioritize training, explainability, and transparency.

Trust in technology is built when leaders have accurate expectations of what it is going to deliver, mission owners can identify use cases connected to the core mission, and managers understand the true impact on mission performance. Building trust requires that all users, from executives and managers to analysts and operators, receive training on AI-enabled technologies.
Training involves not only providing access to learning resources, but also creating opportunities for users to put their new skills to use. In its formal AI strategy, Pentagon leaders outlined extensive plans for implementing AI training programs across the department to build a digitally savvy workforce that will be key to maintaining the United States' leading position in the AI race.

"Explainable AI" also curbs distrust by showing users how machines reach decisions. Consider computer vision. Users may wonder: how can such a tool sift through millions of images to identify a mobile missile launcher? A computer vision tool equipped with explainable AI could highlight the aspects of the image that it uses in identification: in this case, elements that look like wheels, tracks, or launch tubes. Explainable AI gives users a "look under the hood," tailored to their level of technical literacy.

AI technologies must be more than understandable; they must also be transparent. This starts at the granular system level, including providing training-data provenance and an audit trail showing what data, weights, and other inputs helped a machine reach its decision. Building AI systems that are explainable, transparent, and auditable will also link to governance standards and reduce risk.

Operationalize AI at the enterprise scale

AI will only be a successful tool if agencies can use it at the enterprise level. At its core, this means moving AI beyond the pilot phase to real-world production across the enterprise, or deployed in the field on edge devices. Successfully operationalizing AI starts early. AI is an exciting new technology, but agencies too enamored with the hype run the risk of missing out on the real benefits. Too many organizations have developed AI pilot capabilities that work in the lab but cannot support the added noise of real-world environments. Such short-term thinking results in wasted resources.
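The "look under the hood" idea behind explainable AI can be illustrated with a toy occlusion-sensitivity map, one of the simplest explainability techniques: hide part of an input and measure how much the model's score drops, so large drops reveal the regions the model relied on. Everything below is an illustrative sketch; the image size and the `toy_score` stand-in for a real detector are invented for the example, not drawn from any fielded system.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Occlusion sensitivity: slide a neutral patch over the image and
    record how much the model's score drops when each region is hidden.
    Large drops mark regions the model relied on for its decision."""
    base = score_fn(image)
    h, w = image.shape
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # hide region
            saliency[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return saliency

# Toy stand-in for a detector's confidence: mean brightness of the image
# centre, where our synthetic "object" sits. (Illustrative only.)
def toy_score(img):
    return float(img[6:10, 6:10].mean())

img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0  # a bright 4x4 "object" in the centre
sal = occlusion_saliency(img, toy_score)
# Saliency is positive only where occlusion hides part of the object.
```

In a real system the same idea applies with the model's class confidence in place of `toy_score`; gradient-based methods such as saliency maps or Grad-CAM play the same explanatory role more efficiently.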
Agencies must think strategically about how the AI opportunities they choose to pursue align with their real-world mission and operations. Leaders must think through the processes and infrastructure needed to extend AI seamlessly across the enterprise at scale. This involves building scalable infrastructure, data stores and standards, a library of reusable tools and frameworks, and security safeguards to protect against adversarial AI. It is equally important to prioritize investment in the infrastructure to organize, store, and access data; the computational resources AI requires (cloud, GPU chips, etc.); and open, extensible software tools for ease of upgrade and maintenance.

Establish governance to reduce risk

Governance standards, controls, and ethical guidelines are critical to ensuring that AI systems are built, managed, and used in a manner that reduces exposure to undue risk. While our allies have engaged in conversations about how to ensure ethical AI, China and Russia have thus far shown little concern for the ethical risks associated with AI. Given this tension, it is imperative that the United States maintain its technological advantage and ethical leadership by establishing governance standards and proactive risk-mitigation tactics. To this end, in May three senators introduced the bipartisan Artificial Intelligence Initiative Act, which includes provisions for establishing a National AI Coordination Office and national standards for testing the effectiveness of AI algorithms.

Building auditability and validation functions into AI not only ensures trust and adoption, but also reduces risk. By establishing proactive risk-management procedures and processes for continuous testing and validation for compliance purposes, organizations can ensure that their AI systems are performing at optimal levels. Governance controls and system auditability also ensure that AI systems and tools are robust against hacking and adversarial AI threats.
AI could be the most transformative technological development of our lifetime, and it is a necessity for maintaining America's competitive edge. To ensure that we develop AI that users trust and that can scale to the enterprise with reduced risk, organizations must take a calm, methodical approach to its development and adoption. Focus on these three areas is crucial to protecting our national security, maintaining our competitive advantage, and leading on the world stage.

Graham Gilmer is a principal at Booz Allen who helps manage artificial intelligence initiatives across the Department of Defense.

https://www.c4isrnet.com/opinion/2019/07/08/how-the-pentagon-can-improve-ai-adoption/
20 March 2020 | International, Naval, C4ISR
By: Mike Gruss

The Navy plans to test next year whether it can push new software, not just patches but new algorithms and battle-management aids, to its fleet without the assistance of in-person installation teams. Navy officials plan to send the first upgrades to the aircraft carrier Abraham Lincoln's C4I systems for a test in early 2021, officials said during a March 3 media roundtable at the West 2020 trade show in San Diego.

Today, Navy teams frequently deliver security patches to ships, but that process does not allow for new capabilities. The reason is that service officials fear one change to a ship's software could have unintended consequences, creating a cascading effect and inadvertently breaking other parts of the system.

But in recent years, Navy officials have embraced the idea of digital twins: cloud-based replicas of the software running on a ship's systems. This setup allows Navy engineers to experiment with how new code will react with the existing system. It also helps software developers work on the same baseline and avoid redundancies. Ultimately, the setup offers Navy officials a higher degree of confidence that the software they're uploading will work without any surprises.

The Navy completed its first digital twin, of the Lincoln, in fall 2019 and has started building a digital twin of the aircraft carrier Theodore Roosevelt. Eventually, Navy leaders expect to complete a digital twin of every ship in the fleet. However, only those ships that have already been upgraded to a certain version of the Navy's tactical afloat network, known as the Consolidated Afloat Networks and Enterprise Services program, or CANES, would be eligible for the over-the-air updates.

"In the Information Warfare community, software is a weapon," Rear Adm. Kathleen M. Creighton, the Navy's cybersecurity division director in the Office of the Deputy Chief of Naval Operations for Information Warfare, told C4ISRNET in a March 17 statement.
"If we were to ask a warfighter if it would be valuable to conceptualize, order and receive additional kinetic capability at sea, of course the answer would be yes. The same is true of software.

"In an ever-dynamic warfighting environment, the ability to improve, add to, or build new capabilities quickly has extraordinary value. We believe our sailors on the front line are the best positioned to tell us what they need to win. That is what we are trying to accomplish. Put the warfighter's perspective at the center of the software we deliver and do it iteratively at speed."

In this case, think of a capability update for a ship much like downloading a new app on a smartphone. Today, some ships in the fleet can receive security updates for applications they've already downloaded, but they cannot download new applications. Navy officials expect that to change. The new capability would arrive as an automatic, over-the-air update or come pierside, but would not require an installation team as is the case today.

"Anytime there's a new capability or a new change, we're just going to do it the same way that you get that done on your smartphone," said Delores Washburn, chief engineer at the Naval Information Warfare Center Pacific, which is leading the change. "What we will be able to [do] now is do a rapid update to the ships."

Navy engineers hope to be able to push the updates as quickly as warfighters need them. "We're going to try to go slowly here because, again, we're having to tackle simultaneously cultural, technical and operational problems," said Robert Parker, the deputy program executive officer for command, control, communications, computers and intelligence. The Navy plans to test this new arrangement by installing a set of software, performing an update and then fairly quickly pushing that update to the ship.

https://www.c4isrnet.com/battlefield-tech/it-networks/2020/03/19/the-navy-will-test-pushing-new-software-to-ships-at-sea/
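The digital-twin gate the article describes can be sketched in miniature: a candidate update is applied to a copy of the ship's software state, and regression checks run against that copy, so nothing is pushed to the real platform until the twin passes. Every name and check below is invented for illustration and does not reflect the Navy's actual tooling.

```python
from copy import deepcopy

def validate_on_twin(twin_state, update, checks):
    """Apply a candidate update to a digital-twin copy of a system's
    state and run regression checks there, never on the live state.
    Returns the names of failed checks; an empty list means the
    update is safe to push."""
    candidate = deepcopy(twin_state)   # mutate only the replica
    update(candidate)
    return [name for name, check in checks.items() if not check(candidate)]

# Toy twin: application versions on a ship's C4I baseline (illustrative).
twin = {"chat": "1.0", "track_mgmt": "2.1"}

def update(state):
    state["chat"] = "1.1"              # the candidate capability drop

# Checks that existing capability survives the new install.
checks = {
    "chat present": lambda s: "chat" in s,
    "track_mgmt untouched": lambda s: s["track_mgmt"] == "2.1",
}

print(validate_on_twin(twin, update, checks))  # prints: []
```

Because the update runs only against the deep copy, the "real" state (here, `twin`) is untouched until the checks pass, which is the confidence-building property the article attributes to the Navy's cloud-based replicas.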