January 13, 2021 | International, Land

Hanwha-led team launches Redback vehicle for Australian Army competition


MELBOURNE, Australia — Hanwha-led Team Redback officially launched its Redback infantry fighting vehicle on Tuesday, ahead of delivering three vehicles for evaluation trials as part of a risk mitigation effort for the Australian Army.

The infantry fighting vehicles are undergoing trials as part of Project Land 400 Phase 3, which is tasked with acquiring about 450 tracked IFVs to replace Australia's fleet of M113AS4 armored personnel carriers. The Redback, named after a venomous spider found in Australia, is up against Rheinmetall's Lynx KF41; the program is due to announce a winner in 2022.

The risk mitigation effort involves detailed test and evaluation of the vehicles throughout 2021 with the aim of providing objective quality evidence to support a government decision on the preferred platform.

Team Redback is a group of companies led by Hanwha Defense Australia that includes Electro Optic Systems, Elbit Systems and several other Australian companies.

Protection for the Redback meets STANAG Level 6 requirements (a NATO standard). The vehicle is fitted with a range of active and passive protection systems, in addition to survivable seats in the troop compartment, a floating floor to mitigate the effects of mines and improvised explosive devices, and Plasan-made add-on armor.

The passive protection suite includes Elbit laser warning devices providing all-around coverage, while active protection is provided by the Israeli company's Iron Fist system.

The Redback is based on South Korea's AS21 infantry fighting vehicle and is fitted with an EOS T2000 turret mounting a Mk44S Bushmaster II 30mm cannon and a coaxial 7.62mm machine gun.

An EOS R400 four-axis remote weapons station is also mounted on the turret roof and can be fitted with a range of weapons including machine guns or an automatic grenade launcher.

Grant Sanderson, CEO of the Defense Systems division at Electro Optic Systems, told Defense News that the coronavirus pandemic has slowed efforts to integrate the turret, pointing out that having to fly engineers between Australia, Israel and South Korea has been a challenge.

However, lethality testing of the integrated turret is continuing and is expected to culminate in August with a live-fire demonstration featuring Australian optics and systems.

The Redback is also designed with ride comfort in mind, with rubber tracks and independent suspension in lieu of the more common metal running gear and torsion bar suspension. Hanwha added that noise reduction measures have also made it possible to hold conversations in the troop compartment, even when the vehicle is moving.

https://www.defensenews.com/industry/2021/01/12/hanwha-led-team-launches-redback-vehicle-for-australian-army-competition/

On the same subject

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020 | International, C4ISR


    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

    That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance.

    That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there. I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

    We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.

    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
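
    The human-plus-algorithm result Romine describes lends itself to a short illustration. Below is a minimal sketch of score-level fusion, one common way to combine two recognizers; the NIST report does not describe its combination method, and the function name, weights and threshold here are illustrative assumptions only.

    # Minimal sketch of score-level fusion for face verification (illustrative
    # only; the NIST report does not describe its combination method).
    # Assumption: the algorithm and the human reviewer each rate a pair of
    # face images with a similarity score in [0, 1].
    def fuse_scores(algorithm_score: float, human_score: float,
                    weight: float = 0.5, threshold: float = 0.6) -> bool:
        """Return True if the weighted average of the two scores suggests
        the image pair shows the same person."""
        fused = weight * algorithm_score + (1.0 - weight) * human_score
        return fused >= threshold

    # Example: a confident algorithm plus a hesitant human still clears
    # the decision threshold (0.5 * 0.85 + 0.5 * 0.55 = 0.70 >= 0.6).
    print(fuse_scores(algorithm_score=0.85, human_score=0.55))  # True

    Equal weighting is the simplest possible choice; in practice the weight and threshold would be tuned on validation data.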

  • Electronic warfare system production starts for U.S. Air Force F-15s

    March 3, 2021 | International, Aerospace


    The all-digital EPAWSS enables pilots to monitor, jam, and deceive threats in contested airspace

  • Plans for a new base closing round may be running out of time: Report

    August 16, 2019 | International, Aerospace


    By: Leo Shane III

    The next few months could decide whether the Defense Department gets another base closing round in the next decade, according to a new analysis from a conservative think tank warning military officials not to dismiss the potential looming impact on budgets and readiness.

    Officials from the Heritage Foundation, whose policy priorities have helped influence President Donald Trump's administration, have in the past supported a new base closing round to cut back on excess military infrastructure and more efficiently spend annual defense funding.

    In the analysis released this week, author Frederico Bartels — policy analyst for defense budgeting at the foundation — said a Pentagon report on the issue being compiled now represents “the best chance for the Department of Defense to make the case for a new round of BRAC” in years, and perhaps the last realistic chance to advance the idea for the near future.

    “I think it's the last chance of the Trump administration to make an argument for this,” he said in an interview with Military Times. “Even if he gets re-elected next year, I think it will be hard to go back and make the case if they're unsuccessful this time.”

    The military convened six Base Realignment and Closure (BRAC) commissions between 1988 and 2005, shutting down dozens of military installations and turning over that land to state and local municipalities. The process has always been fraught with political turmoil, as lawmakers protest any loss of jobs, military personnel and resulting economic benefits in their districts.

    But the 2005 BRAC round was particularly controversial, as defense officials consolidated numerous service locations into joint bases and massively rearranged force structure in an attempt to modernize the military. As a result, cost saving projections from that process were significantly below past rounds, and members of Congress have strongly opposed any attempts at another round since then.

    In the fiscal 2019 national defense authorization act, lawmakers did include language for a new military infrastructure capacity report — due next February — where defense officials can make the case for the need for additional closures. Similar Pentagon reports in the last few years have shown excess capacity of between 19 and 22 percent.

    Bartels said Pentagon leaders have repeatedly supported the idea of another round in recent years, but have done a poor job selling lawmakers on the idea.

    “The department needs to make the case for a new round of BRAC based on two key tenets: potential savings and the National Defense Strategy,” he wrote. “A new BRAC round could save $2 billion by reducing unneeded infrastructure. Additionally, a new round of BRAC would permit the department to assess its infrastructure against the threats outlined by the National Defense Strategy, providing a holistic look at all of the infrastructure.”

    He warns that naming specific locations will only exacerbate political tensions on the issue, and said defense officials also need to publicly acknowledge problems with the 2005 base realignment round to win back congressional support.

    And Bartels argues that the Trump administration must do more to push the issue. Defense officials requested a base closing round as part of their annual budget for six consecutive years before the Trump White House dropped the idea in their fiscal 2019 and 2020 budget plans. If officials fail to request one next spring, or if the planned infrastructure report is delayed by several months, the department risks pushing the idea back at minimum an entire extra budget cycle and likely several more years down the road. Even if approved, the new BRAC round is likely to take several years of research and debate before any recommendations are made.

    “I think there is still support for this in Congress,” Bartels said. “I think there are enough people that care about good stewardship of government funds that this can move ahead, if (defense officials) make the right argument. At least, I hope those lawmakers still exist.”

    The full analysis is available on the Heritage Foundation's website.

    https://www.militarytimes.com/news/pentagon-congress/2019/08/15/plans-for-a-new-base-closing-round-may-be-running-out-of-time-report/

All news