
May 29, 2023 | International, Aerospace

Ukrainian pilots will be trained in Europe to fly American fighter jets | War in Ukraine

U.S. officials doubt that Ukraine will be able to use the F-16s in its announced counter-offensive.

https://ici.radio-canada.ca/nouvelle/1982262/pilotes-ukrainiens-formes-europe-chasseurs-americains-f-16

On the same subject

  • US Army testing communications gear for different fighting styles

    August 14, 2023 | International, Land

    The Army is investing in the division as it prepares for potential conflict in the Indo-Pacific, against China, or in Europe, against Russia.

  • Future Missile War Needs New Kind Of Command: CSIS

    July 7, 2020 | International, Aerospace

    Integrating missile defense – shooting down incoming missiles – with missile offense – destroying the launchers before they fire again – requires major changes in how the military fights.

    By SYDNEY J. FREEDBERG JR. on July 07, 2020 at 4:00 AM

    WASHINGTON: Don't try to shoot down each arrow as it comes; shoot the archer. That's a time-honored military principle that US forces would struggle to implement in an actual war with China, Russia, North Korea, or Iran, warns a new report from the think tank CSIS. New technology, like the Army's IBCS command network – now entering a major field test – can be part of the solution, but it's only part, writes Brian Green, a veteran of 30 years in the Pentagon, on Capitol Hill, and in the aerospace industry. Equally important and problematic are the command-and-control arrangements that determine who makes the decision to fire what, at what, and when.

    Today, the military has completely different units, command systems, doctrines, and legal/regulatory authorities for missile defense – which tries to shoot down threats the enemy has already launched – and for long-range offensive strikes – which could keep the enemy from launching in the first place, or at least from getting off a second salvo, by destroying launchers, command posts, and targeting systems. While generals and doctrine-writers have talked about “offense-defense integration” for almost two decades, Green says, the concept remains shallow and incomplete.

    “A thorough implementation of ODI would touch almost every aspect of the US military, including policy, doctrine, organization, training, materiel, and personnel,” Green writes. “It would require a fundamental rethinking of terms such as ‘offense' and ‘defense' and of how the joint force fights.” Indeed, it easily blurs into the even larger problem of coordinating all the services across all five domains of warfare – land, sea, air, space, and cyberspace – in what's known as Joint All-Domain Operations.

    The bifurcation between offense and defense runs from the loftiest strategic level down to the tactical level:

    At the highest level, US Strategic Command commands both the nation's nuclear deterrent and homeland missile defense. But these functions are split between three different subcommands within STRATCOM: one for Air Force ICBMs and bombers (offense), one for Navy ballistic missile submarines (also offense), and one for Integrated Missile Defense.

    In forward theaters, the Army provides ground-based missile defense, but those units – Patriot batteries, THAAD, Sentinel radars – belong to separate brigades from the Army's own long-range missile artillery, and they're even less connected to offensive airstrikes from the Air Force, Navy, and Marine Corps.

    The Navy's AEGIS system arguably does the best job of integrating offense and defense in near-real-time, Green says, but even there, “different capabilities onboard a given ship can come under different commanders,” one with the authority to unleash Standard Missile interceptors against incoming threats and the other with the authority to fire Tomahawk missiles at the enemy launchers.

    This division of labor might have worked when warfare was slower. But China and Russia have invested massively in their arsenals of long-range, precision-guided missiles, along with the sensors and command networks to direct them to their targets. So, on a lesser scale, have North Korea and Iran.

    The former deputy secretary of defense, Bob Work, warned of future conflicts in which “salvo exchanges” of hundreds of missiles – hopefully not nuclear ones – might rocket across the war zone within hours. It's been obvious for over a decade that current missile defense systems simply can't cope with the sheer number of incoming threats involved, which led the chiefs of the Army and Navy to sign a famous “eight-star memo” in late 2014 that called, among other things, for stopping enemy missiles “left of launch.”

    But that approach would require real-time coordination between the offensive weapons, responsible for destroying enemy launchers, command posts, and targeting systems, and the defensive ones, responsible for shooting down whatever missiles made it into the air. While Navy Aegis and Army IBCS show some promise, Green writes, neither is yet capable of moving the data required among all the users who would need it. Indeed, IBCS is still years away from connecting all the Army's defensive systems, while Aegis only recently gained an offensive anti-ship option, a modified SM-6, alongside its defensive missiles.

    As two Army generals cautioned in a recent interview with Breaking Defense, missile defense and offense have distinctly different technical requirements that limit the potential of using a single system to run both. There are different legal restrictions as well: even self-defense systems operate under strict limits, lest they accidentally shoot down friendly aircraft or civilian airliners, and offensive strikes can easily escalate a conflict.

    Green's 35-page paper doesn't solve these problems. But it's a useful examination of how complex they can become. (A toy sketch of what such shared offense-defense tasking could look like in software appears after these summaries.)

    https://breakingdefense.com/2020/07/future-missile-war-needs-new-kind-of-command-csis/

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020 | International, C4ISR

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

    That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and if people are to adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death environments, whether in medicine or transportation, people need to be able to trust that AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they need us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize they need or want. And they're doing that now in the AI consumer space. What they don't often do is combine that push to market with deep thought about how to measure the characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason we've prioritized trustworthy AI, is provide the foundational work that people in the consumer space need to manage those risks overall. And I think the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

    I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help allay these fears?

    I think some of this has been overhyped. At the same time, it's important to acknowledge that the risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

    We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution. (A minimal sketch of this kind of human-algorithm score fusion appears after these summaries.)

    What's next?

    I think one of the things that is going to be necessary is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and getting innovators to start incorporating them. Guidance and standards can help do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
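
The offense-defense integration problem Green describes is, at bottom, a data-sharing and tasking problem: the same track picture that cues an interceptor should also cue counter-fire against the launcher. The Python sketch below is purely illustrative and makes assumptions not in the article (the class names, the single shared track store, and the simple rule that every detected launch produces both a defensive engagement and an offensive counter-fire nomination are hypothetical); it only shows what "moving the data required among all the users who would need it" could mean in software terms, not how IBCS, Aegis, or any real system works.

```python
"""Toy illustration of offense-defense integration (ODI).

Hypothetical sketch: one shared track picture feeds BOTH a defensive
engagement queue (shoot the arrow) and an offensive counter-fire queue
(shoot the archer). Not representative of IBCS, Aegis, or any real system.
"""
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class MissileTrack:
    track_id: str
    launcher_location: Tuple[float, float]   # estimated launch point (lat, lon)
    predicted_impact: Tuple[float, float]    # predicted impact point (lat, lon)
    time_to_impact_s: float


@dataclass
class SharedTrackStore:
    """The piece the article says is missing: one picture visible to offense and defense."""
    tracks: List[MissileTrack] = field(default_factory=list)

    def add(self, track: MissileTrack) -> None:
        self.tracks.append(track)


def plan_engagements(store: SharedTrackStore):
    """For each incoming track, nominate a defensive shot and an offensive counter-shot."""
    defensive_queue = []   # interceptor tasking ("shoot down the missile")
    offensive_queue = []   # counter-battery tasking ("destroy the launcher")
    for track in store.tracks:
        # Defensive side: engage anything that will impact soon (threshold is arbitrary).
        if track.time_to_impact_s < 600:
            defensive_queue.append(("intercept", track.track_id))
        # Offensive side: every confirmed launch point becomes a counter-fire nomination,
        # so the same launcher cannot get off a second salvo.
        offensive_queue.append(("counter_fire", track.launcher_location))
    return defensive_queue, offensive_queue


if __name__ == "__main__":
    store = SharedTrackStore()
    store.add(MissileTrack("T-001", (39.0, 125.7), (37.5, 127.0), 420.0))
    defense, offense = plan_engagements(store)
    print(defense)   # [('intercept', 'T-001')]
    print(offense)   # [('counter_fire', (39.0, 125.7))]
```

In today's arrangement, as the article notes, the two queues would be built by different units under different commanders from different data; the point of the sketch is only that a single shared picture makes both decisions possible from the same event.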
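
The face-recognition result Romine describes (the best algorithm plus a trained examiner outperforming either alone) is essentially score fusion. The sketch below is a minimal, hypothetical illustration of that idea: the averaging rule, the threshold, and the example scores are assumptions made for illustration, not the procedure used in the NIST study.

```python
"""Minimal sketch of human-algorithm score fusion for face verification.

Hypothetical illustration of the idea from the interview: combining an
algorithm's similarity score with a trained examiner's judgment. The fusion
rule (simple averaging) and all numbers are assumptions, not NIST's method.
"""


def fuse_scores(algorithm_score: float, examiner_score: float) -> float:
    """Average two normalized same-person scores in [0, 1].

    The intuition from the interview: two humans or two algorithms tend to make
    correlated errors, so pairing within a kind adds little, while a human and an
    algorithm make complementary errors, so fusing across kinds helps.
    """
    return 0.5 * (algorithm_score + examiner_score)


def same_person(algorithm_score: float, examiner_score: float,
                threshold: float = 0.6) -> bool:
    """Decide 'same person' when the fused score clears an (assumed) threshold."""
    return fuse_scores(algorithm_score, examiner_score) >= threshold


if __name__ == "__main__":
    # Borderline case: the algorithm is unsure, the examiner is fairly confident.
    print(same_person(algorithm_score=0.55, examiner_score=0.75))  # True
    # Both sources doubt the match, so the fused decision is "different people".
    print(same_person(algorithm_score=0.40, examiner_score=0.45))  # False
```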
