9 November 2020 | International, Aerospace

BAE Systems secures US Army 'A-Team' technology development deals

by Carlo Munoz

BAE Systems has secured several US Army research and development pacts that are designed to help create advanced technologies to team manned, unmanned, and autonomous aircraft in future combat operations. The company's FASTLabs research directorate was awarded the army contracts, totalling USD9 million, which will focus on technology development projects for human-machine interface, resource management, and situational awareness under the service's Advanced Teaming Demonstration Program (A-Team).

The three focus areas that BAE Systems' engineers were contracted to take on under the A-Team programme are “designed to advance manned and unmanned teaming (MUM-T) capabilities that are expected to be critical components in the U.S. Army's Future Vertical Lift (FVL) program,” according to a company statement issued on 3 November.

Company officials anticipate the development of a “highly automated system to provide situational awareness, information processing, resource management, and decision making that is beyond human capabilities”, the statement said. “These advantages become exceedingly important as the Army moves toward mission teams of unmanned aircraft that will be controlled by pilots in real time,” it added.

A majority of BAE Systems' A-Team work will leverage the company's Future Open Rotorcraft Cockpit Environment Lab, which will host “simulation tests and demonstrations with products from different contractors” vying to integrate their MUM-T applications into the army's FVL programme. Teaming of manned and unmanned aerial assets was a key objective of the army's initial capstone exercise for Project Convergence.

https://www.janes.com/defence-news/news-detail/bae-systems-secures-us-army-a-team-technology-development-deals

On the same topic

  • No AI For Nuclear Command & Control: JAIC’s Shanahan

    26 September 2019 | International, C4ISR

    By SYDNEY J. FREEDBERG JR.

    GEORGETOWN UNIVERSITY: “You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense,” Lt. Gen. Jack Shanahan said here, “but there is one area where I pause, and it has to do with nuclear command and control.” In movies like WarGames and Terminator, nuclear launch controls are the first thing fictional generals hand over to AI. In real life, the director of the Pentagon's Joint Artificial Intelligence Center says, that's the last thing he would integrate AI with.

    The military is beginning a massive multi-billion-dollar modernization of its aging system for Nuclear Command, Control, & Communications (NC3), much of which dates to the Cold War. But the Joint Artificial Intelligence Center is not involved with it. A recent article on the iconoclastic website War on the Rocks argued that “America Needs A ‘Dead Hand',” a reference to the Soviet system designed to automatically order a nuclear launch if the human leadership was wiped out. “I read that,” Shanahan told the Kalaris Intelligence Conference here this afternoon. “My immediate answer is ‘No. We do not.'”

    Instead, the JAIC is very deliberately starting with relatively low-risk, non-lethal projects — predicting breakdowns in helicopter engines and mapping natural disasters — before moving on to combat-related functions such as intelligence analysis and targeting next year. On the Pentagon's timeline, AI will come to command posts before it is embedded in actual weapons, and even then the final decision to use lethal force will always remain in human hands.

    The standard term in the Pentagon for human involvement with AI and weapons is now “human on the loop,” a shift from human IN the loop. That reflects greater stress on the advisory function of humans with AI and a recognition that domains like cyber require almost instantaneous responses that can't wait for a human. Hawkish skeptics say slowing down to ask human permission could cripple US robots against their less-restrained Russian or Chinese counterparts. Dovish skeptics say this kind of human control would be too easily bypassed.

    Shanahan does see a role for AI in applying lethal force once that human decision is made. “I'm not going to go straight to ‘lethal autonomous weapons systems,'” he said, “but I do want to say we will use artificial intelligence in our weapons systems... to give us a competitive advantage. It's to save lives and help deter war from happening in the first place.”

    The term “lethal autonomous weapons systems” was popularized by the Campaign to Stop Killer Robots, which seeks a global ban on all AI weapons. Shanahan made clear his discomfort with formal arms control measures, as opposed to policies and international norms, which don't bind the US in the same way. “I'll be honest with you,” Shanahan said. “I don't like the term, and I do not use the term, ‘arms control' when it comes to AI. I think that's unhelpful when it comes to artificial intelligence: It's largely a commercial technology,” albeit with military applications. “I'm much more interested, at least as a starting point, in international rules and norms and behavior,” he continued. “It's extremely important to have those discussions.”

    “This is the ultimate human decision that needs to be made... nuclear command and control,” he said. “We have to be very careful. Knowing... the immaturity of technology today, give us a lot of time to test and evaluate.”

    “Can we use artificial intelligence to make better decisions, to make more informed judgments about what might be happening, to reduce the potential for civilian casualties or collateral damage?” Shanahan said. “I'm an optimist. I believe you can. It will not eliminate it, never. It's war; bad things are going to happen.”

    While Shanahan has no illusions about AI enabling some kind of cleanly surgical future conflict, he doesn't expect a robo-dystopia, either. “The hype is a little dangerous, because it's uninformed most of the time, and sometimes it's a Hollywood-driven killer robots/Terminator/SkyNet worst case scenario,” he said. “I don't see that worst case scenario any time in my immediate future.”

    “I'm very comfortable saying our approach — even though it is emerging technology, even though it unfolds very quickly before our eyes — it will still be done in a deliberate and rigorous way so we know what we get when we field it,” Shanahan said.

    “As the JAIC director, I'm focused on really getting to the fielding,” he said, moving AI out of the lab into the real world — but one step at a time. “We're always going to start with limited narrow use cases. Say, can we take some AI capability and put it in a small quadcopter drone that will make it easier to clear out a cave, [and] really prove that it works before we ever get it to a [large] scale production.”

    “We will have a very clear understanding of what it can do and what it can't do,” he said. “That will be through experimentation, that will be through modeling and simulation, and that will be in wargames. We've done that with every piece of technology we've ever used, and I don't expect this to be any different.”

    The JAIC is even looking to hire an in-house ethicist of sorts, a position Shanahan has mentioned before but sought to clarify today. “It'll be someone who's a technical standards [expert] / ethicist,” he said. “As we develop the models and algorithms... they can look at that [and] make sure the process is abiding by our rules of the road.”

    “I'm also interested in, down the road, getting some help from the outside on sort of those deeper philosophical questions,” he continued. “I don't focus on them day to day, because of my charter to field now, but it's clear we have to be careful about this.”

    “I do not see that same approach in Russia or China,” Shanahan said. “What sets us apart is... our focus on real rigor in test and evaluation, validation and verification, before we field capability that could have lives at stake.”

    https://breakingdefense.com/2019/09/no-ai-for-nuclear-command-control-jaics-shanahan

  • Joint Expeditionary Force to strengthen sharing of tactical intelligence

    13 June 2023 | International, Other defence

    A British-led defence alliance of several European countries will strengthen its sharing of tactical intelligence, the group, known as the Joint Expeditionary Force (JEF), said on Tuesday.

  • Hyten to issue new joint requirements on handling data

    24 September 2020 | International, Aerospace, Naval, Land, C4ISR, Security, Other defence

    Aaron Mehta

    WASHINGTON — While the phrase “tsunami of data” seems to have exited everyday use by Defense Department officials, the problem remains the same: the Pentagon simply cannot fully exploit the sheer amount of information that comes in every day. It's a challenge that will only get worse as more sources of information come online, with each branch having its own data sets, which often don't talk to each other. At the same time, the inability to properly sort, catalog and exploit the data means the department cannot achieve its goal of using artificial intelligence to its fullest.

    After almost a decade of talking about the problem, military leaders appear to have a target date for when the department will get its arms around it, according to Gen. John Hyten, the vice chairman of the Joint Chiefs of Staff. By 2030, the Pentagon expects that handling data will no longer be an overwhelming challenge, Hyten said Monday during an event organized by the Defense Innovation Unit.

    But, he added, the department is looking at any way to move that date closer, including by reworking how requirements are developed in the Joint Requirements Oversight Council, or JROC, a group chaired by Hyten that serves as an oversight body for the development of new capabilities and acquisition efforts.

    Currently, “a service develops the capability, it comes up through the various coordination boards in the JROC, eventually getting to the JROC where we validate a service concept and make sure it meets the joint interoperability requirement,” Hyten explained. “But what was intended is the JROC would develop joint requirements and push those out to the services and tell the services ‘you have to meet those joint requirements.'”

    To get back to that top-down model, Hyten plans to push out a list of joint requirements for two major department priorities, all-domain command and control and logistics for joint fires, which will include specific requirements for data and software.

    “They're not going to be the traditional requirements that you've looked at for years, capability description documents and capability production documents. They're going to [be] capabilities and attributes that programs have to have,” he said. “And if you don't meet these, you don't meet the joint requirements and therefore you don't get through the gate, you don't get money. That's how we're going to hold it.”

    Hyten added that the goal is to have those data requirements out to the services around the end of the year, shortly after the expected publication of the new joint warfighting concept. That concept — which Hyten has previously described as essentially eliminating lines between units and services on the battlefield — inherently relies on the ability to combine data to be successful, he noted.

    https://www.defensenews.com/pentagon/2020/09/23/hyten-to-issue-new-joint-requirements-on-handling-data/

All news