November 7, 2018 | International, Naval, C4ISR

The chief of naval research on AI: ‘If we don't all dogpile on this thing, we're going to find ourselves behind'


Most of us are comfortable with Siri, or Alexa, or “Hey, Google.” But many will tell you artificial intelligence and autonomy in the context of military operations is a whole different animal.

That said, if you ask Rear Admiral David Hahn, one factor remains the same: the need for trust. Understand the algorithm and the consequences, he argues, but then relinquish (some) control.

He shared his vision of AI in the military in an interview following the Defense News Conference in September.

Much of the discussion around artificial intelligence and autonomy involves the proper role of machine versus human. Where do you stand?

We're at an inflection point for what technology will allow us to do. For artificial intelligence that could be brought to bear in the military context, there has been an expectation that the human is always going to be in control. But as the sophistication of these algorithms and the sophistication of the application of the tools now out there mature, and are brought into the operational space, we need to get to a place of trust. [We need trust] between the algorithm, what's behind that curtain, and our ability as the humans to agree that the decision or the space that it's going to operate in – the context in which it's making that decision – is understood by us. And that more and more is going to have to happen at machine speed, because when machines are interacting with machines, we're going to have to comfortably move from a human in the loop to a human on the loop. That doesn't mean it's an unsupervised act; it means we understand it well enough to trust it.

So, there is relinquishing of control?

There is, but there are clearly pieces of our system today where we do that. That happens when you let your car park itself – you relinquish that control and trust that the machine is not going to run into the grocery cart behind you or the car next to you. That's already part of the conversation. And as we get more used to machines performing, and performing accurately over and over and over, our ability to trust these machines [increases], if we understand the algorithm and the consequence. It's not ‘I just ran into a shopping cart' if the consequence we're talking about is the release of weapons, or something along those lines; but we've gotten to the point where we're comfortable [because of our understanding of the technology].

We had similar conversations in recent years on cybersecurity, in terms of confidence in the technology, whether we could be sure networks are properly protected, and accepting a degree of risk. Has progress there helped with progress in AI?

I think it's helping and it will continue to drive us toward this human-machine teaming environment that we all see coming. There are clearly pieces of our system that make us uncomfortable. But we see more and more, that if we don't take the action to allow it to occur, we might as well have not even created the tool.

It's a shift in culture, beyond policy. Is that happening yet? Or is it too soon to expect that?

I don't think we're too early, and I think it's happening. And it's going to be one of those things where we didn't know it was happening, then we find ourselves there. Ten years ago, the App Store opened. Can you imagine a world without the App Store and what that's enabled you to do in your daily life with your smartphone? The young people today are almost at a point where there was never a world without a smartphone, there was never a world without an App Store. If you start at that point, this is not a big leap. It's happening around us, and we just need to find a way to keep up.

Looking ahead, 5 or 10 years, how do you see AI being used in an operational capacity?

The limiting factor is not going to be the tools. To borrow a phrase, the ‘democratization' of the tools that are associated with developing AI capabilities will allow anybody to work on the data. Our challenge will be whether we have harnessed our own data and done it in a way where we can make the connections between relevant data sets to optimize the mission effect we could get by applying those tools available to everybody. That's our challenge. And it's a challenge we'll need to figure out within each service, amongst the services in the joint environment, from that joint environment into the same space with partners and allies, from the DoD or military into the industrial base, all while moving seamlessly across academia, and [keeping in mind how] the commercial industry plays.

If we don't all dogpile on this thing, we're going to find ourselves behind in this great power competition in a very important space.

So, establish a playbook so to speak?

And recognize that as soon as we've established that playbook, it will change.

https://www.c4isrnet.com/it-networks/2018/11/06/the-chief-of-naval-research-on-ai-if-we-dont-all-dogpile-on-this-thing-were-going-to-find-ourselves-behind

On the same topic

  • US Navy secretary talks drones, fleet size and South American security

    December 8, 2022 | International, Naval

    What's weighing on the service for the new year? Carlos Del Toro discusses what's top of mind.

  • Maxar to aid L3Harris in tracking missiles from space

    August 10, 2022 | International, C4ISR

    The United States, Russia and China are among countries developing hypersonic missiles, which can exceed the speed of sound and are harder to track than conventional missiles.

  • Boeing proposes designs for new ICBM deterrent

    July 25, 2018 | International, Aerospace

    By Stephen Carlson

    July 24 (UPI) -- Boeing has proposed design options to the U.S. Air Force for the Ground Based Strategic Deterrent, a possible replacement for the Minuteman III intercontinental ballistic missile.

    "We offered the Air Force cost and performance trades for a deterrent that will address emerging and future threats," Frank McCall, vice president for Boeing Strategic Deterrence Systems, said in a press release. "By considering the various capabilities and opportunities for cost savings, the Air Force can prioritize system requirements as we progress toward the program's next phase," McCall said.

    Boeing received a $349 million contract from the Air Force last August for work on the GBSD, and completed a design review in November. A system functional review will be completed later this year, and Boeing is expected to present the completed design to the Air Force in 2020. Along with Boeing, Northrop Grumman and Lockheed Martin are competing for development contracts on the new missile.

    The Ground Based Strategic Deterrent program is the U.S. Air Force effort to replace the venerable LGM-30 Minuteman III ICBM, which is nearing the end of its lifespan. The Minuteman series of ICBMs has been in service since the early 1960s, and many of its components are over 50 years old, making replacement necessary. The GBSD program is still in its early stages but is expected to start entering service in 2027 and to remain in service until 2075.

    The current Minuteman III is an underground silo-launched missile armed with nuclear warheads of up to a 350-kiloton yield. It has a range of well over 6,000 miles, though the exact maximum range is classified. The Minuteman III can carry up to three multiple independent reentry vehicle warheads but is restricted to one per missile by treaty. The United States currently has 450 ICBMs in service.

    https://www.upi.com/Defense-News/2018/07/24/Boeing-proposes-designs-for-new-ICBM-deterrent/7861532445298
