
November 6, 2023 | International, Aerospace

Air Force boss urges staying the course in first message to airmen

“We have a responsibility to lead and advance the integration of the joint force," Air Force Chief of Staff Gen. David Allvin told airmen.

https://www.c4isrnet.com/news/your-air-force/2023/11/06/air-force-boss-urges-staying-the-course-in-first-message-to-airmen/

On the same subject

  • Googlers headline new commission on AI and national security

    January 22, 2019 | International, C4ISR

    Googlers headline new commission on AI and national security

    By: Kelsey D. Atherton

    Is $10 million and 22 months enough to shape the future of artificial intelligence? Probably not, but inside the fiscal 2019 national defense policy bill is a modest sum set aside for the creation and operations of a new National Security Commission for Artificial Intelligence. And in a small way, that group will try.

    The commission's full membership, announced Jan. 18, includes 15 people across the technology and defense sectors. Led by Eric Schmidt, formerly of Google and now a technical adviser to Google parent company Alphabet, the commission is co-chaired by Robert Work, a former undersecretary of defense who is now at the Center for a New American Security. The group is situated as independent within the executive branch, and its scope is broad. The commission is to examine the competitiveness of the United States in artificial intelligence, how the U.S. can maintain a technological advantage in AI, and how to keep an eye on foreign developments and investments in AI, especially as related to national security. In addition, the authorization for the commission tasks it with considering means to stimulate investment in AI research and AI workforce development. The commission is expected to consider the risks of military uses of AI by the United States or others, and the ethics related to AI and machine learning as applied to defense. Finally, it is to look at how to establish data standards across the national security space, and to consider how the evolving technology can be managed. All of this has been discussed in some form in the national security community for months, or years, but now a formal commission will help lay out a blueprint.

    That is several tall orders, all of which will lead to at least three reports. The first report is required by law to be delivered no later than February 2019, with annual reports to follow in August 2019 and August 2020. The commission is set to wrap up its work by October 2020.
    Inside the authorization is a definition of artificial intelligence for the commission to work from. Or, rather, five definitions:

      • Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.

      • An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.

      • An artificial system designed to think or act like a human, including cognitive architectures and neural networks.

      • A set of techniques, including machine learning, that is designed to approximate a cognitive task.

      • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.

    Who will be the people tasked with navigating AI and the national security space? Mostly the people already developing and buying the technologies that make up the modern AI sector. Besides Schmidt, the list includes several prominent players from the software and AI industries, including Oracle co-CEO Safra Catz, Director of Microsoft Research Eric Horvitz, Amazon Web Services CEO Andy Jassy, and Head of Google Cloud AI Andrew Moore. After 2018's internal protests at Google, Microsoft, and Amazon over the tech sector's involvement in Pentagon contracts, especially at Google, one might expect to see some skepticism of AI use in national security from Silicon Valley leadership. Instead Google, which responded to employee pressure by declining to renew its Project Maven contract, is effectively represented twice: by Moore and by Schmidt. Academia is also present on the commission, with a seat held by Dakota State University President José-Marie Griffiths.
    CEO Ken Ford will represent the Florida Institute for Human & Machine Cognition, which is tied to Florida's State University System. Caltech and NASA will be represented on the commission by Steve Chien, supervisor of the Jet Propulsion Laboratory's AI group. The intelligence sector will be present at the table in the form of In-Q-Tel CEO Chris Darby and former Intelligence Advanced Research Projects Activity Director Jason Matheny. Rounding out the commission are William Mark, director of the information and computing sciences division at SRI, and a pair of consultants: Katrina McFarland of Cypress International and Gilman Louie of Alsop Louie Partners. Finally, civil society groups are represented by Open Society Foundation fellow Mignon Clyburn.

    Balancing the security risks, military potential, ethical considerations, and workforce demands of the new and growing sector of machine cognition is a daunting task. Finding a way to bend the federal government to its conclusions will be tricky in any political climate, and perhaps especially so in the present moment, when workers in the technology sector are vocal about fears of the abuse of AI and the government struggles to clearly articulate technology strategies. The composition of the commission suggests that whatever conclusions it reaches will be agreeable to the existing technology sector, amenable to the intelligence services, and at least workable for academia. Still, the proof is in the doing, and anyone interested in how the AI sector thinks the federal government should think about AI for national security should look forward to the commission's initial report.

    https://www.c4isrnet.com/c2-comms/2019/01/18/googlers-dominate-new-comission-on-ai-and-national-security/

  • Space Force seeks sizeable budget increase, reflecting the domain’s importance

    May 31, 2021 | International, Aerospace

    Space Force seeks sizeable budget increase, reflecting the domain’s importance

    The increase of 13 percent over last year's budget request includes the transfer of some satellite communications efforts from the U.S. Army and Navy.

  • Academia a Crucial Partner for Pentagon’s AI Push

    February 13, 2019 | International, C4ISR

    Academia a Crucial Partner for Pentagon’s AI Push

    By Tomás Díaz de la Rubia

    The dust lay thick upon the ruins of bombed-out buildings. Small groups of soldiers, laden with their cargo of weaponry, bent low and scurried like beetles between the wrecked pillars and remains of shops and houses. Intelligence had indicated that enemy troops were planning a counterattack, but so far all was quiet across the heat-shimmered landscape. The allied soldiers gazed intently out at the far hills and closed their weary, dust-caked eyes against the glare coming off the sand. Suddenly, the men became aware of a low humming sound, like thousands of angry bees, coming from the northeast. Growing louder, the sound was felt more than heard, the buzzing intensifying with each passing second. The men looked up as a dark, undulating cloud approached and found a swarm of hundreds of drones, dropped from a distant unmanned aircraft, heading to their precise location in a well-coordinated group, each turn and dip a nuanced dance in close collaboration with their nearest neighbors.

    Although it seems like a scene from a science fiction movie, the technology already exists to create weapons that can attack targets without human intervention. The technology is increasingly pervasive, and artificial intelligence as a transformational technology shows virtually unlimited potential across a broad spectrum of industries. In health care, for instance, robot-assisted surgery allows doctors to perform complex procedures with fewer complications than surgeons operating alone, and AI-driven technologies show great promise in aiding clinical diagnosis and automating workflow and administrative tasks, with the benefit of potentially saving billions in health care dollars. In a different area, we are all aware of the emergence of autonomous vehicles and the steady march toward driverless cars becoming a ubiquitous sight on U.S. roadways. We trust that all this technology will be safe and ultimately in the best interest of the public.
    Warfare, however, is a different animal. In his new book, Army of None, Paul Scharre asks, “Should machines be allowed to make life-and-death decisions in war? Should it be legal? Is it right?” It is with these questions and others in mind, and in light of the advancing AI arms race with Russia and China, that the Pentagon has announced the creation of the Joint Artificial Intelligence Center, which will have oversight of most of the AI efforts of U.S. service and defense agencies. The timeliness of this venture cannot be overstated; automated warfare has become a “not if, but when” scenario.

    In the fictional account above, it is the enemy combatant that, in a “strategic surprise,” uses advanced AI-enabled autonomous robots to attack U.S. troops and their allies. Only a few years ago, we might have dismissed such a scenario — an enemy of the U.S. having more and better advanced technology for use on the battlefield — as utterly unrealistic. Today, however, few would question such a possibility. Technology development is global and accelerating worldwide. China, for example, has announced that it will overtake the United States within a few years and will dominate the global AI market by 2030. Given the pace and scale of investment the Chinese government is making in this and other advanced technology spaces, such as quantum information systems, such a scenario is entirely feasible.

    So far, the Defense Department has focused much of its effort on courting Silicon Valley to accelerate the transition of cutting-edge AI into the warfighting domain. While it is important for the Pentagon to cultivate this exchange and encourage nontraditional businesses to help the military solve its most vexing problems, there is a role uniquely suited to universities in this evolving landscape of arming decision makers with new levels of AI.
    Universities like Purdue attribute much of their success in scientific advancement to the open, collaborative environment that enables research and discovery. As the Joint Artificial Intelligence Center experiments with and implements new AI solutions, it must have a trusted partner. It needs a collaborator with the mission of verifying and validating trustable and explainable AI algorithms, and with an interest in cultivating a future workforce capable of employing and maintaining these new technologies, in the absence of a profit motive.

    That's not to diminish the private sector's interest in supporting the defense mission. However, the department's often “custom” needs and systems are a small priority compared with the vast commercial appetite for trusted AI, and Silicon Valley is sure to put a premium on customizing its AI solutions to the military's unique specifications. Research universities, by contrast, make their reputations on producing trustable, reliable, verifiable and proven results — both in terms of scientific outcomes and in terms of the scientists and engineers they graduate into the workforce. A collaborative relationship between the Defense Department and academia will offer the military something it can't get anywhere else: a trusted capability to produce open, verifiable solutions, and a captive audience of future personnel familiar with the defense community's problems. If the center is to scale across the department and have any longevity, it needs talent and innovation from universities, and explainable, trusted AI solutions to meet national mission imperatives.
    As the department implements direction from the National Defense Authorization Act to focus resources on leveraging AI to create efficiency and maintain dominance over strategic technological competitors, it should focus investment on a new initiative that engages academic research centers as trusted agents and AI talent developers. The future depends on it.

    But one may ask: why all this fuss about AI competition in a fully globalized and interdependent world? In my view, and that of others, after what we perceived as a relatively quiet period following the Cold War, we live today once again in a world of great power competition. Those groups and nations that innovate most effectively and dominate the AI technology landscape will not only control commercial markets but will also hold a very significant advantage in future warfare and defense. In many respects, the threat of AI-based weapons is perhaps as existential a threat to the future security of the United States and its allies as nuclear weapons were at the end of World War II.

    Fortunately, the U.S. government is rising to the challenge. Anticipating these trends and challenges, the Office of Management and Budget and the Office of Science and Technology Policy announced in a recent memo that the nation's top research-and-development priorities would encompass defense, AI, autonomy, quantum information systems and strategic computing. This feeds directly into the job of the aforementioned Joint Artificial Intelligence Center, which is to establish a repository of standards, tools, data, technology, processes and expertise for the department, as well as to coordinate with other government agencies, industry, U.S. allies and academia. The bench in academia is already strong for mission-inspired AI research.
    Purdue University's Discovery Park has positioned itself as a paragon of collaborative, interdisciplinary research in AI and its applications to national security. Its Institute for Global Security and Defense Innovation is already answering needs for advanced AI research by delving into areas such as biomorphic robots, automatic target recognition for unmanned aerial vehicles, and autonomous exploration and localization of targets by aerial drones.

    Complementary to the mission of the Joint Artificial Intelligence Center, the Purdue Policy Research Institute is actively investigating the ethical, legal and social impacts of connected and autonomous vehicles. Among the topics being researched are privacy and security; workforce disruption; insurance and liability; and economic impact. It is also beginning to investigate questions of ethics, technology and the future of war and security. Purdue University is a key player in the Center for Brain-Inspired Computing project, forging ahead with an “AI+” mentality by combining neuromorphic computing architectures with autonomous systems applications. The Integrative Data Science Initiative at Purdue aims to ensure that every student, no matter their major, graduates from the university with a significant degree of literacy in data science and AI-related technologies. Data science is used by all of the nation's security agencies and will no doubt be integral to the functioning of the Joint Artificial Intelligence Center and its mission.

    The opportunities for Purdue and Discovery Park to enter into a partnership with the center are vast and span a wide range of disciplines and research areas. In short, the university is primed to play a vital role in the future of the nation's service and defense agencies and must be relentless in pursuing opportunities. It has become apparent that the United States is no longer guaranteed top-dog status in the future of war.
    To maintain military superiority, the focus must shift from traditional weapons of war to advanced systems that rely on AI-based weaponry. The stakes are too high and the prize too great for the nation to be left behind. Therefore, we must call upon the government to weave together academia, government and industry for the greater good. We are stepping up to secure our place in the future of the nation.

    Tomás Díaz de la Rubia is Purdue University's vice president of Discovery Park.

    http://www.nationaldefensemagazine.org/articles/2019/2/11/viewpoint-academia-a-crucial-partner-for-pentagons-ai-push
