
September 26, 2018 | International, C4ISR

Members of Congress look to make AI a priority

Congress and the executive branch need to make a more concerted effort to address and prepare for the rise of artificial intelligence, Reps. Will Hurd, R-Texas, and Robin Kelly, D-Ill., said in a white paper released Sept. 25.

The congressmen, who serve as the chairman and ranking member of the House IT Subcommittee, compiled information gathered in past congressional hearings and meetings with experts to argue for the criticality of federal input in the many facets of AI.

“In light of that potential for disruption, it's critical that the federal government address the different challenges posed by AI, including its current and future applications. The following paper presents lessons learned from the Subcommittee's oversight and hearings on AI and sets forth recommendations for moving forward,” Hurd and Kelly wrote.

“Underlying these recommendations is the recognition the United States cannot maintain its global leadership in AI absent political leadership from Congress and the executive branch. Therefore, the Subcommittee recommends increased engagement on AI by Congress and the administration.”

According to the white paper, current trends suggest the United States will soon be outpaced in research and development investment by countries, such as China, that have prioritized artificial intelligence.

“Particularly concerning is the prospect of an authoritarian country, such as Russia or China, overtaking the United States in AI. As the Subcommittee's hearings showed, AI is likely to have a significant impact in cybersecurity, and American competitiveness in AI will be critical to ensuring the United States does not lose any decisive cybersecurity advantage to other nation-states,” Hurd and Kelly wrote.

Hurd characterized the Chinese investment in AI as a race with the U.S.

“It's a race, we all know this, and one of the things we need [is] a national strategy, similar to what we've seen in the conversations around quantum computing yesterday at the White House. What we saw almost a decade ago when it came to nanotechnology. And part of that strategy does include increasing basic research, opening up data sets and making sure the U.S. is playing a part, leader on ethics when it comes to artificial intelligence,” said Hurd in a Sept. 25 press call.

The paper applauded current investments in R&D, such as the Defense Advanced Research Projects Agency's creation of the Artificial Intelligence Exploration program, and encouraged the government to host more “Grand Challenges” like those conducted by DARPA to spur innovation outside government.

“I do believe the federal government has a role, because we're sitting on data sets that could be used as a backbone of a Grand Challenge around artificial intelligence,” said Hurd, who added that the National Oceanic and Atmospheric Administration, healthcare agencies and many other components of the federal government possess the data to administer meaningful AI competitions.

“I think this would be a maybe a great opportunity for a public private partnership,” added Kelly on the press call.

The paper also identified four primary challenges that can arise as AI becomes more prevalent: workforce, privacy, bias and malicious use.

AI has the potential both to displace portions of the workforce as more tasks become automated and to create jobs for those trained to work with artificial intelligence.

Hurd and Kelly called on the federal government to lead the way in adapting its workforce by planning for and investing in training programs that will help federal employees transition into AI-related work.

As with many technologies, AI has the potential to infringe on privacy, as intelligent products or systems such as virtual assistants constantly collect data on individuals. That data could be exploited by both the company that created the technology and hackers looking to steal personal information.

“The growing collection and use of personal data in AI systems and applications raises legitimate concerns about privacy. As such, federal agencies should review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction, and—where necessary—update existing regulations to account for the addition of AI,” Hurd and Kelly wrote.

The white paper also calls on federal agencies to make government data more available to the public for AI experimentation, while also ensuring that any AI algorithms used by agencies to “make consequential decisions about individuals” are “inspectable” to ensure that they operate without coded bias.

According to Hurd, whether and how that inspectable information would be made available to the public remains an open question.

Finally, Hurd and Kelly called on government entities to consider how AI may be used to perpetrate cyberattacks or otherwise cause harm.

However, while recommending that agencies look to existing regulation and statute, with limited changes where needed, the paper encouraged a hands-off approach similar to the one the federal government took toward the development of the internet.

“The government should begin by first assessing whether the risks to public safety or consumers already fall within existing regulatory frameworks and, if so, consideration should be made as to whether those existing frameworks can adequately address the risks,” Hurd and Kelly wrote.

“At minimum, a widely agreed upon standard for measuring the safety and security of AI products and applications should precede any new regulations. A common taxonomy also would help facilitate clarity and enable accurate accounting of skills and uses of AI.”

https://www.federaltimes.com/federal-oversight/congress/2018/09/25/members-of-congress-look-to-make-ai-a-priority

On the same subject

  • Advancing National Defense: Lessons from the Pentagon’s Cyber Strategy

    June 25, 2024 | International, Security


    Opinion: Just as the Navy focuses on sea dominance, the Air Force controls the sky and the Army establishes ground supremacy, cyberspace has become a new domain.

  • Fully autonomous ‘mobile intelligent entities’ coming to the battlefields of the future

    September 7, 2018 | International, C4ISR


    By: Kelsey Atherton

    WASHINGTON — A killer robot by any other name is far more palatable to the general public. That may be part of the logic behind Army Research Laboratory Chief Scientist Alexander Kott's decision to refer to thinking and moving machines on the battlefield as “mobile intelligent entities.”

    Kott pitched the term, along with the new ARL concept of fully autonomous maneuver, at the 2nd Annual Defense News Conference yesterday, in a panel on artificial intelligence that kept circling back to underlying questions of great power competition. “Fully autonomous maneuver is an ambitious, heretical terminology,” Kott said. “Fully autonomous is more than just mobility, it's about decision making.”

    If there is a canon against which this autonomy seems heretical, it is likely the international community's recent conferences and negotiations over how, exactly, to permit or restrict lethal autonomous weapon systems. The most recent meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems took place last week in Geneva, Switzerland, and concluded with a draft of recommendations on Aug. 31. This diplomatic process, and the potential verdict of international law, could check or halt the development of AI-enabled weapons, especially ones where machines select and attack targets without human intervention. That is the principal objection raised by humanitarian groups like the Campaign to Stop Killer Robots, as well as by the nations that have called for a preemptive ban on such autonomous weapons.

    Kott understands the ethical concern, drawing an analogy to the moral concerns and tradeoffs in developing self-driving cars. “All know about self driving cars, all the angst, the issue of mobility... take all this concern and multiply it by orders of magnitude and now you have the issues of mobility on the battlefield,” said Kott. “Mobile intelligent entities on the battlefield have to deal with a much more unstructured, much less orderly environment than what self-driving cars have to do. This is a dramatically different world of urban rubble and broken vehicles, and all kind of dangers, in which we are putting a lot of effort.”

    Full article: https://www.defensenews.com/smr/defense-news-conference/2018/09/06/fully-autonomous-maneuver-coming-to-the-battlefields-of-the-future

  • All aboard the Sea Train!

    June 2, 2020 | International, Naval


    Imagine the following scenario. Four medium-sized U.S. Navy vessels depart from a port along the United States' coast. There's no crew aboard any of them. About 15 nautical miles off the coast, the four vessels rendezvous, autonomously arranging themselves in a line. Using custom mechanisms, they attach to each other to form a train, except they're in the water and there's no railroad to guide them. In this configuration the vessels travel 6,500 nautical miles across the open ocean to Southeast Asia. But as they approach their destination, they disconnect, splitting up as each unmanned ship goes its own way to conduct independent operations, such as collecting data with a variety of onboard sensors. Once those operations are complete, the four reunite, form a train and make the return journey home.

    This is the Sea Train, and it may not be as far-fetched as it sounds. The Defense Advanced Research Projects Agency is investing in several technologies to make it a reality. “The goal of the Sea Train program is to be able to develop and demonstrate long-range deployment capabilities for a distributed fleet of medium-sized tactical unmanned vessels,” said Andrew Nuss, DARPA's program manager for Sea Train. “So we're really focusing on ways to enable extended transoceanic transit and long-range naval operations, and the way that we're looking to do that is by taking advantage of some of the efficiencies that we can gain in a system of connected vessels — that's where the name ‘Sea Train' comes from.”

    According to DARPA, the current security environment has incentivized the Navy and the Marine Corps to move from a small number of exquisite, large manned platforms to a more distributed fleet structure composed of smaller vessels, including unmanned platforms that can conduct surveillance and engage in electronic warfare and offensive operations. While these unmanned vessels are smaller and more agile than their large, manned companions, they are limited by the increased wave-making resistance that plagues smaller hulls. And due to their size, they simply can't carry enough fuel to make the long-range journeys envisioned by DARPA without refueling. By connecting the vessels — physically or in a formation — the agency hopes the Sea Train can reduce that wave resistance and enable long-range missions.

    In February, the agency released a broad agency announcement to find possible vendors. Citing agency practice, Nuss declined to share how many proposals were submitted, although he did say there was significant interest in the announcement. The agency has completed its review of the submissions and expects to issue contracts by the end of the fiscal year. Sea Train is expected to consist of two 18-month periods, in which contractors will work to develop and test technologies that could enable the Sea Train concept. The program will culminate with model testing in scaled ocean conditions. If successful, DARPA hopes to see the technologies adopted by the Navy for its unmanned platforms.

    “What we're looking to do is be able to reduce the risk in this unique deployment approach,” Nuss said. “And then be able to just deliver that set of solutions to the Navy in the future, to be able to demonstrate to them that there is, potentially, a new way to deploy these vessels, to be able to provide far more operational range without the risk of relying on actual refueling or in-port refueling.”

    And while DARPA's effort is focused on medium-sized unmanned vessels — anywhere from 12 to 50 meters in length — the lessons learned could be applied to larger or smaller vessels, manned or unmanned.

    https://www.c4isrnet.com/unmanned/2020/06/01/all-aboard-the-sea-train/
