
January 21, 2020 | International, C4ISR

Trustworthy AI: A Conversation with NIST's Chuck Romine

By: Charles Romine

Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

How would you define artificial intelligence? How is it different from regular computing?

One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

That said, we're using a narrow working definition specifically to satisfy the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.
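To make the idea of graceful failure concrete, here is a minimal, purely illustrative sketch, not a NIST method or any particular deployed system: a decision component that acts only when its input is inside a defined operating envelope and the model is sufficiently confident, and otherwise degrades to a conservative fallback. The domain check, confidence threshold, and all names (`guarded_decision`, `hand_off_to_operator`, and so on) are hypothetical assumptions introduced for the example.

```python
# Illustrative sketch of "failing gracefully": defer to a safe fallback when
# the input looks out-of-distribution or the model is unsure. All names and
# thresholds are hypothetical, not drawn from any NIST guidance.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str        # chosen action, e.g. "proceed" or "hand_off_to_operator"
    confidence: float  # model confidence in [0, 1]
    degraded: bool     # True if the system fell back to the safe default

def guarded_decision(
    features: list[float],
    model: Callable[[list[float]], tuple[str, float]],
    in_domain: Callable[[list[float]], bool],
    fallback_action: str = "hand_off_to_operator",
    min_confidence: float = 0.9,
) -> Decision:
    """Run the model only inside its nominal operating envelope.

    If the input is outside the training domain, or the model is not
    confident enough, return a conservative fallback instead of acting.
    """
    if not in_domain(features):
        return Decision(fallback_action, 0.0, degraded=True)

    action, confidence = model(features)
    if confidence < min_confidence:
        return Decision(fallback_action, confidence, degraded=True)
    return Decision(action, confidence, degraded=False)

if __name__ == "__main__":
    # Toy stand-ins for the model and the domain check.
    toy_model = lambda x: ("proceed", 0.95 if sum(x) < 10 else 0.4)
    toy_domain = lambda x: all(0 <= v <= 5 for v in x)

    print(guarded_decision([1.0, 2.0], toy_model, toy_domain))  # nominal case
    print(guarded_decision([9.0, 9.0], toy_model, toy_domain))  # out of domain: degrade
```

The design choice being illustrated is that the fallback path is part of the system's specification, so leaving the nominal operating environment produces a recoverable, conservative outcome rather than a catastrophic one.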

NIST is supposed to be helping industry before they even know they need us to. What are we thinking about in this area that is beyond the present state of development of AI?

Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance.

That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.
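The fusion itself can be as simple as combining comparison scores before making a match decision. The toy sketch below is purely illustrative; it is not the methodology of the NIST study, and the scores and threshold are invented. It averages two similarity scores and applies a single decision threshold; the interview's point is that this kind of fusion pays off most when the two sources, such as an algorithm and a trained human examiner, make different kinds of errors, and adds little when their errors are highly correlated, as with two similar top algorithms.

```python
# Toy score-fusion sketch (illustrative only; not the NIST study's method).
# Scores are assumed to be similarity values in [0, 1]; all numbers are made up.

def fuse_scores(score_a: float, score_b: float) -> float:
    """Average two similarity scores; a minimal fusion rule."""
    return (score_a + score_b) / 2.0

def same_identity(score: float, threshold: float = 0.6) -> bool:
    """Declare a match if the (fused) score clears a decision threshold."""
    return score >= threshold

if __name__ == "__main__":
    algorithm_score = 0.55  # algorithm alone: just below the match threshold
    examiner_score = 0.75   # human examiner is more confident on this image pair
    fused = fuse_scores(algorithm_score, examiner_score)
    print(f"fused score = {fused:.2f}, match = {same_identity(fused)}")
```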

What's next?

I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and getting innovators to start incorporating them. Guidance and standards can help to do that.

Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
