July 3, 2024 | International, Aerospace
Kratos’ Erinyes test vehicle logs hypersonic speeds on first flight
The company developed Erinyes in three years for under $15 million with a mix of internal investment and congressional funding.
August 12, 2020 | International, Naval
By: David B. Larter
WASHINGTON – The U.S. Navy on Saturday commissioned its latest littoral combat ship amid a top-level push to fix the ship's nagging reliability issues and forge a path to make the small surface combatants useful in the years ahead.
The monohull Freedom-variant LCS St. Louis was commissioned at a private event in its namesake city, the 22nd LCS and 10th Freedom variant to join the fleet. There will be 35 LCS in the fleet once all are commissioned.
Change is once again in the wind for the LCS program, which has already seen several shakeups. A high-level effort is underway to address problems with the complicated drive train, built for high speeds, that have limited the ships' availability for tasking, and to finally field the long-delayed mission packages. The mission packages will configure each ship as a surface warfare combatant, a mine hunter or an antisubmarine ship.
Chief of Naval Operations Adm. Michael Gilday told Defense News in a July 16 interview that he was preparing to increase LCS deployments by two-and-a-half times over the next two years to finally shake out how to best employ the ships, as well as develop a plan to finally field the mine and ASW mission modules.
“There are things in the near term that I have to deliver, that I'm putting heat on now, and one of them is LCS,” Gilday said. “One part is sustainability and reliability. We know enough about that platform and the problems that we have that plague us with regard to reliability and sustainability, and I need them resolved.
“That requires a campaign plan to get after it and have it reviewed by me frequently enough so that I can be sighted on it. Those platforms have been around since 2008 — we need to get on with it.”
Experts who spoke to Defense News in July said the Navy would most likely need to accept less capability than it had planned if the service is to get the most out of the ships.
November 7, 2018 | International, Naval, C4ISR
By: Jill Aitoro

Most of us are comfortable with Siri, or Alexa, or “Hey, Google.” But many will tell you artificial intelligence and autonomy in the context of military operations is a whole different animal. That said, if you ask Rear Adm. David Hahn, one factor remains the same: the need for trust. Understand the algorithm and the consequences, he argues, but then relinquish (some) control. He shared his vision of AI in the military in an interview following the Defense News Conference in September.

Much of the discussion around artificial intelligence and autonomy involves the proper role of machine versus human. Where do you stand?

We're at an inflection point for what technology will allow us to do. For artificial intelligence that could be brought to bear in the military context, there has been an expectation that the human is always going to be in control. But as the sophistication of these algorithms and the sophistication of the application of the tools now out there mature, and are brought into the operational space, we need to get to a place of trust. [We need trust] between the algorithm, what's behind that curtain, and our ability as the humans to agree that the decision or the space that it's going to operate in – the context in which it's making that decision – is understood by us. And that more and more is going to have to happen at machine speed, because when machines are interacting with machines, we're going to have to comfortably move from a human in the loop to a human on the loop. That doesn't mean it's an unsupervised act; it means we understand it well enough to trust it.

So, there is relinquishing of control?

There is, but there are clearly pieces of our system today where we do that. That happens when you let your car park itself – you relinquish that control and trust that the machine is not going to run into the grocery cart behind you or the car next to you. That's already part of the conversation.
And as we get more used to machines performing, and performing accurately over and over and over, our ability to trust these machines [increases], if we understand the algorithm and the consequence. It's not ‘I just ran into a shopping cart' if the consequence we're talking about is the release of weapons, or something along those lines; but we've gotten to the point where we're comfortable [because of our understanding of the technology].

We had similar conversations in recent years on cybersecurity, in terms of confidence in the technology, whether we could be sure networks are properly protected, and accepting a degree of risk. Has progress there helped with progress in AI?

I think it's helping, and it will continue to drive us toward this human-machine teaming environment that we all see coming. There are clearly pieces of our system that make us uncomfortable. But we see more and more that if we don't take the action to allow it to occur, we might as well have not even created the tool.

It's a shift in culture, beyond policy. Is that happening yet? Or is it too soon to expect that?

I don't think we're too early, and I think it's happening. And it's going to be one of those things where we didn't know it was happening, then we find ourselves there. Ten years ago, the App Store opened. Can you imagine a world without the App Store and what that's enabled you to do in your daily life with your smartphone? Young people today are almost at a point where there was never a world without a smartphone, never a world without an App Store. If you start at that point, this is not a big leap. It's happening around us, and we just need to find a way to keep up.

Looking ahead, five or 10 years, how do you see AI being used in an operational capacity?

The limiting factor is not going to be the tools. To borrow a phrase, the ‘democratization' of the tools associated with developing AI capabilities will allow anybody to work on the data.
Our challenge will be whether we have harnessed our own data and done it in a way where we can make the connections between relevant data sets to optimize the mission effect we could get by applying those tools available to everybody. That's our challenge. And it's a challenge we'll need to figure out within each service, amongst the services in the joint environment, from that joint environment into the same space with partners and allies, from the DoD or military into the industrial base, all while moving seamlessly across academia and [keeping in mind how] commercial industry plays. If we don't all dogpile on this thing, we're going to find ourselves behind in this great power competition in a very important space.

So, establish a playbook, so to speak?

And recognize that as soon as we've established that playbook, it will change.

https://www.c4isrnet.com/it-networks/2018/11/06/the-chief-of-naval-research-on-ai-if-we-dont-all-dogpile-on-this-thing-were-going-to-find-ourselves-behind
December 18, 2024 | International, C4ISR, Security