March 23, 2020 | International, Land

The trouble when military robots go underground

By: Kelsey D. Atherton

Picture the scene: A rural compound in northwest Syria. An underground tunnel beneath the compound, where a cornered man with a suicide vest and two children hides from a raid by the U.S. Army's Delta Force.

Outside the compound on Oct. 26, waiting and at the ready, was a robot.

The vested man was later identified as Abu Bakr al-Baghdadi, the self-proclaimed caliph of the Islamic State of Iraq and the Levant.

“We had a robot just in case because we were afraid he had a suicide vest and if you get close to him and he blows it up, you're going to die. You're going to die. He had a very powerful suicide vest,” President Donald Trump said in a press conference about the raid in the following days.

“The robot was set, too, but we didn't hook it up because we were too — they were moving too fast. We were moving fast,” the president continued. “We weren't 100 percent sure about the tunnel being dead ended. It's possible that there could have been an escape hatch somewhere along that we didn't know about.”

In this case, the robot never went in the tunnels.

Picture the scene, four months later, in the damp subterranean levels of the never-finished Satsop nuclear power plant outside Elma, Washington. There, engineers and scientists are testing the machines and algorithms that may one day guide such missions, preparing for a time when the robots won't remain on the sidelines.

None of the robots fielded at the Defense Advanced Research Projects Agency's Subterranean Challenge urban circuit in Elma in February are particularly battle-ready, though a few could likely work in a pinch.

Apart from a single human commander able to take remote control, the robots navigated mostly autonomously. As captured on hours of video, the robots crawled, floated, rolled and stumbled their way through the course. They mapped their environment and searched for up to 20 artifacts in the urban circuit courses, built in the underground levels around a never-used cooling tower.

The artifacts included cellphones emitting Bluetooth, Wi-Fi and occasionally video signals. They included red backpacks and thermal manikins warmed to human body temperature and playing an audio recording, and they included carbon dioxide gas and warm air vents.

This urban circuit is the second of three underground environments that DARPA is using to test robots. Phones, manikins and backpacks are common across the tunnel, urban and cave settings that constitute the full range of subterranean challenges. The straightforward mission of the contest is to create machines that are better at rescue in environments that are dangerous and difficult for human first responders. If robots can find people trapped underground, then humans can spend their energy reaching those people rather than searching for them.

A subtext of the Subterranean Challenge is that the same technologies that lead robots to rescue people underground could also lead infantry to find enemies hiding in tunnel complexes. While Delta Force was able to corner al-Baghdadi in Syria, much of the military's modern interest in tunnel warfare can be traced back to Osama bin Laden evading capture for years by escaping through the tunnels at Tora Bora.

Underground at Satsop, the future of warfare was far less a concern than simply making sure the robots could navigate the courses before them. That meant, most importantly, maintaining contact with the other robots on the team, and with a human supervisor.

Thick concrete walls, feet of dirt, heavy cave walls and the metals embedded in the structure all make underground sites what the military describes as passively denied environments: places where the greatest obstacle to communication through the electromagnetic spectrum is the terrain itself. It's a problem military leaders, particularly in the Army, are hoping to solve for future iterations of their networks.

Team NUS SEDS, the undergrad roboticists representing the National University of Singapore Students for Exploration and Development of Space, arrived in Washington with one of the smallest budgets of any competitor, spending roughly $12,000 on everything from robot parts to travel and lodging. One of their robots, a larger tracked vehicle, was held up by U.S. Customs and unable to take part in the competition.

Undeterred, team members at their preparation area showed off a version of the most striking design innovation at the competition: droppable Wi-Fi repeaters. As designed, the robots would release a repeater the moment they lost contact with the human operator. To lighten the data load, the onboard computers would compress the data to one-hundredth of its size before sending it through the repeater.

“It's like dropping bread crumbs,” said Ramu Vairavan, the team's president.

Unfortunately for NUS SEDS, the bread crumbs were not enough, and the team only found one artifact in its four runs between the two courses. But the bread-crumb concept was shared across various teams.
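
Neither DARPA nor the teams have published their repeater logic, but the bread-crumb idea can be sketched in a few lines. All class and method names below are invented for illustration; the only details taken from the article are the drop-on-lost-link trigger and the roughly hundredfold compression of relayed data.

```python
import zlib


class BreadcrumbRobot:
    """Illustrative sketch of the 'bread crumb' scheme described by
    Team NUS SEDS: drop a Wi-Fi repeater the moment the link to the
    human operator is lost, and compress map data before relaying it."""

    def __init__(self, repeaters_onboard=3):
        self.repeaters_left = repeaters_onboard
        self.dropped_positions = []

    def on_link_check(self, link_alive, position):
        # Release a repeater only when contact is lost and stock remains.
        if not link_alive and self.repeaters_left > 0:
            self.repeaters_left -= 1
            self.dropped_positions.append(position)
            return True  # repeater dropped at this position
        return False

    @staticmethod
    def compress_payload(raw: bytes) -> bytes:
        # Lossless compression before sending through the repeater chain;
        # highly redundant map data (e.g. mostly-empty occupancy grids)
        # can shrink by a factor of 100 or more.
        return zlib.compress(raw, level=9)
```

The hundredfold figure is plausible for mapping data precisely because underground maps are mostly empty space; a general-purpose compressor like zlib easily reaches that ratio on such redundant input.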

Besides the physical competition taking place underground at Satsop, the urban circuit held a parallel virtual challenge, where teams selected robots and sensors from a defined budget and then programmed algorithms to tackle a challenge fully autonomously. The repeaters, such a popular innovation in the physical space, will likely be programmed into the next round of the virtual challenge.

The first DARPA Grand Challenge, launched in 2004, focused on getting roboticists together to provide a technological answer to a military problem. Convoys, needed to sustain logistics in occupied countries, are vulnerable to attack, and tasking humans to drive the vehicles and escort the cargo only increases the fixed costs of resupply. What if, instead, the robots could drive themselves over long stretches of desert?

After much attention and even more design, the March 2004 challenge ended with no vehicle having gone even a tenth the distance of the 142-mile track. A second Grand Challenge, held 18 months later, delivered far more successful results, and is largely credited with sparking the modern wave of autonomous driving features in cars.

Open desert is a permissive space, and navigation across it is aided by existing maps and the ever-present GPS data. This is the same architecture that undergirds much of autonomous navigation today, where surface robots and flying drones can all plug into communication networks offering useful location data.

Underground offers a fundamentally unknowable environment. Robots can explore parts of it, but even the most successful team on its most successful run found fewer than half of the artifacts hidden in the space. That team, CoSTAR (an acronym for “Collaborative SubTerranean Autonomous Resilient Robots”), included participants from the Jet Propulsion Laboratory, Caltech, MIT, KAIST in South Korea and Luleå University of Technology in Sweden. CoSTAR used a mixture of wheeled and legged machines, and in the off-hours would practice everywhere from a local high school to a hotel staircase.

Yet, for all the constraints on signal that impeded navigation, it was the human-built environment that provided the greatest hurdle.

On a tour of the courses, it was easy to see how an environment intuitive to humans is difficult for machines. Backpacks and cellphones were not just placed on corners of roofs, but on internal ledges, impossible to spot without some aerial navigation.

Whereas the tunnel course was relatively flat, the urban circuit featured levels upon levels to explore. Stairs and shafts, and wide-open rooms with the jangly mess of a mezzanine catwalk, all required teams and robots to explore the space in three dimensions. Between runs, the humans running the competition would adjust some features, so that completing the course once did not automatically translate into perfect information for a second attempt.

“How do we design equally hard for air and ground?” said Viktor Orekhov, a DARPA contractor who designed the course. “There's an art to it, not a science. But there's also a lot of science.”

Part of that art was building ramps into and out of an early room that would otherwise serve as a run-ending chokepoint. Another component was making sure that the course “leveled up” in difficulty the further teams got, requiring more senses and more tools to find artifacts hidden deeper and deeper in the space.

“Using all senses is helpful for humans. It's helpful for robots, too,” said Orekhov.

Teams competing in the Subterranean Challenge have six months to incorporate lessons learned into their designs and plans. The cave circuit, the next chapter of the challenge scheduled for August 2020, will inevitably put greater strain on communications and navigation, and will lack even the relative familiarity of the human-designed spaces seen in the urban circuit. After that, teams will have a year to prepare for the final circuit, set to incorporate aspects of the tunnel, urban and cave circuits, and scheduled for August 2021.

DARPA prides itself on spurring technological development rather than refining it into a final form. Like the Grand Challenges before it, the Subterranean Challenge aims at least as much to spark industry interest and collaboration in a useful but unexplored space.

Programming a quadcopter or a tracked robot to find a manikin in a safety-yellow vest is a distant task from tracking and capturing armed people in the battlefields of the future, but the tools workshopped in late nights at a high school cafeteria between urban circuit runs may lead to the actual sensors on the robots brought along by Delta Force on future raids.

The robots of the underground wars of tomorrow are gestating, in competitions and workshops and GitHub repositories. Someday, they won't just be brought along on the raid against a military leader.

Wordlessly — with spinning LiDAR, whirring engines, and millimeter-wave radar — the robots might lead the charge themselves.

https://www.c4isrnet.com/battlefield-tech/it-networks/2020/03/20/the-trouble-when-military-robots-go-underground/
