February 13, 2019 | International, C4ISR

Academia a Crucial Partner for Pentagon’s AI Push

By Tomás Díaz de la Rubia

The dust lay thick upon the ruins of bombed-out buildings. Small groups of soldiers, laden with their cargo of weaponry, bent low and scurried like beetles between the wrecked pillars and remains of shops and houses.

Intelligence had indicated that enemy troops were planning a counterattack, but so far, all was quiet across the heat-shimmered landscape. The allied soldiers gazed intently out at the far hills, squinting their weary, dust-caked eyes against the glare coming off the sand.

Suddenly, the men were aware of a low humming sound, like thousands of angry bees, coming from the northeast. Growing louder, this sound was felt, more than heard, and the buzzing was intensifying with each passing second. The men looked up as a dark, undulating cloud approached, and found a swarm of hundreds of drones, dropped from a distant unmanned aircraft, heading to their precise location in a well-coordinated group, each turn and dip a nuanced dance in close collaboration with their nearest neighbors.

Although it seems like a scene from a science fiction movie, the technology to create weapons that can attack targets without human intervention already exists. That technology is increasingly pervasive, and artificial intelligence, as a transformational technology, shows virtually unlimited potential across a broad spectrum of industries.

In health care, for instance, robot-assisted surgery allows doctors to perform complex procedures with fewer complications than surgeons operating alone, and AI-driven technologies show great promise in aiding clinical diagnosis and automating workflow and administrative tasks, with the benefit of potentially saving billions in health care dollars.

In a different area, we are all aware of the emergence of autonomous vehicles and the steady march toward driverless cars becoming a ubiquitous sight on U.S. roadways. We trust that all this technology will be safe and ultimately in the best interest of the public.

Warfare, however, is a different animal.

In his new book, Army of None, Paul Scharre asks, “Should machines be allowed to make life-and-death decisions in war? Should it be legal? Is it right?” It is with these questions and others in mind, and in light of the advancing AI arms race with Russia and China, that the Pentagon has announced the creation of the Joint Artificial Intelligence Center, which will have oversight of most of the AI efforts of U.S. service and defense agencies. The timeliness of this venture cannot be overstated; automated warfare has become a “not if, but when” scenario.

In the fictional account above, it is the enemy combatant that, in a “strategic surprise,” uses advanced AI-enabled autonomous robots to attack U.S. troops and their allies. Only a few years ago, we might have dismissed such a scenario — an enemy of the U.S. having more and better advanced technology for use on the battlefield — as utterly unrealistic.

Today, however, few would question such a possibility. Technology development is global and accelerating worldwide. China, for example, has announced that it will overtake the United States within a few years and will dominate the global AI market by 2030. Given the pace and scale of investment the Chinese government is making in this and other advanced technology spaces such as quantum information systems, such a scenario is patently feasible.

Here, the Defense Department has focused much of its effort on courting Silicon Valley to accelerate the transition of cutting-edge AI into the warfighting domain. While it is important for the Pentagon to cultivate this exchange and encourage nontraditional businesses to help the military solve its most vexing problems, there is a role uniquely suited to universities in this evolving landscape of arming decision makers with new levels of AI.

Universities like Purdue attribute much of their success in scientific advancement to the open, collaborative environment that enables research and discovery. As the Joint Artificial Intelligence Center experiments with and implements new AI solutions, it must have a trusted partner: a collaborator that, in the absence of a profit motive, takes on the mission of verifying and validating trustable and explainable AI algorithms and has an interest in cultivating a future workforce capable of employing and maintaining these new technologies.

"The bench in academia is already strong for mission-inspired AI research."

That's not to diminish the private sector's interest in supporting the defense mission. However, the department's often “custom” needs and systems are a small priority compared to the vast commercial appetite for trusted AI, and Silicon Valley is sure to charge a premium for customizing its AI solutions to the military's unique specifications.

Research universities, by contrast, make their reputations on producing trustable, reliable, verifiable and proven results — both in terms of scientific outcomes and in terms of the scientists and engineers they graduate into the workforce.

A collaborative relationship between the Defense Department and academia will offer the military something it can't get anywhere else — a trusted capability to produce open, verifiable solutions, and a captive audience of future personnel familiar with the defense community's problems. If the center is to scale across the department and have any longevity, it needs talent and innovation from universities and explainable trusted AI solutions to meet national mission imperatives.

As the department implements direction from the National Defense Authorization Act to focus resources on leveraging AI to create efficiency and maintain dominance against strategic technological competitors, it should focus investment in a new initiative that engages academic research centers as trusted agents and AI talent developers. The future depends on it.

But one may ask: why all this fuss about AI competition in a fully globalized and interdependent world? In my opinion, and that of others, after what we perceived as a relatively quiet period following the Cold War, we once again live in a world of great power competition. The groups and nations that innovate most effectively and dominate the AI technology landscape will not only control commercial markets but will also hold a very significant advantage in future warfare and defense. In many respects, AI-based weapons pose perhaps as existential a threat to the future security of the United States and its allies as nuclear weapons did at the end of World War II.

Fortunately, the U.S. government is rising to the challenge. Anticipating these trends and challenges, the Office of Management and Budget and the Office of Science and Technology Policy announced, in a recent memo, that the nation's top research-and-development priorities would encompass defense, AI, autonomy, quantum information systems and strategic computing.

This directly feeds into the job of the aforementioned Joint Artificial Intelligence Center, which is to establish a repository of standards, tools, data, technology, processes and expertise for the department, as well as coordinate with other government agencies, industry, U.S. allies and academia.

The bench in academia is already strong for mission-inspired AI research. Purdue University's Discovery Park has positioned itself as a paragon of collaborative, interdisciplinary research in AI and its applications to national security. Its Institute for Global Security and Defense Innovation is already answering needs for advanced AI research by delving into areas such as biomorphic robots, automatic target recognition for unmanned aerial vehicles, and autonomous exploration and localization of targets for aerial drones.

Complementary to the mission of the Joint Artificial Intelligence Center, the Purdue Policy Research Institute is actively investigating the ethical, legal and social impacts of connected and autonomous vehicles. Some of the topics being researched include privacy and security; workforce disruption; insurance and liability; and economic impact. It is also starting to investigate the question of ethics, technology and the future of war and security.

Purdue University is a key player in the Center for Brain-Inspired Computing project, forging ahead with an “AI+” mentality by combining neuromorphic computing architectures with autonomous systems applications.

The Integrative Data Science Initiative at Purdue aims to ensure that every student, no matter what their major is, graduates from the university with a significant degree of literacy in data science and AI-related technologies.

Data science is used by all of the nation's security agencies and no doubt will be integral to the functioning of the Joint Artificial Intelligence Center and its mission.

The opportunities for Purdue and Discovery Park to enter into a partnership with the center are vast and span a wide range of disciplines and research areas. In short, the university is primed to play a vital role in the future of the nation's service and defense agencies and must be relentless in pursuing opportunities.

It has become apparent that the United States is no longer guaranteed a leading position in the future of war. To maintain military superiority, the focus must shift from traditional weapons of war to advanced systems that rely on AI-based weaponry. The stakes are too high and the prize too great for the nation to be left behind.

Therefore, we must call upon the government to weave together academia, government and industry for the greater good. We're stepping up to secure our place in the future of the nation.

Tomás Díaz de la Rubia is Purdue University's vice president of Discovery Park.

http://www.nationaldefensemagazine.org/articles/2019/2/11/viewpoint-academia-a-crucial-partner-for-pentagons-ai-push
