5 May 2020 | International, Aerospace, Naval, Land, C4ISR, Security

The defense industry needs new entrants, and a supportive government during crises

By: Venture capital community leaders

The COVID-19 health crisis is quickly leading to an economic meltdown, throwing millions of Americans out of work and forcing strategic reevaluations across industries. The defense industry is no exception. We are praying for a swift end to the crisis, but its effects will linger, shaping the Pentagon's priorities, organizational structure, military operations, logistics, supply chains and interactions with the defense-industrial base for years to come.

In the past few weeks, we have had numerous conversations with government officials about our venture and growth equity investments in the defense sector. These discussions have centered on the eligibility rules of the CARES Act's Paycheck Protection Program and the risk of foreign capital seeking entry into defense technology startups desperate for investment in these trying times.

But these are secondary questions. The primary question is this: How can the Pentagon best preserve its innovation base and develop the most competitive and advanced technologies?

The answer is simple: Buy commercial. New and emerging defense startups — and our men and women in uniform — don't need symbolic gestures. What they need is concerted action to bring the latest and most advanced technologies — many of which are routinely used in industry — to dangerously antiquated defense weapons systems and internal IT infrastructure. This was true before COVID-19; it is true now, and it will be true when the next crisis strikes.

All too often the government has responded to crises by circling the wagons around incumbent firms — the large prime contractors, whose political connections afford them bailouts in the name of “ensuring ongoing competition.” This process is already underway. After announcing its hope for a $60 billion relief package for the aerospace manufacturing industry, Boeing successfully lobbied for $17 billion worth of loans for firms “critical to maintaining national security.”

The CARES Act also announced provisions to streamline the Defense Department's contracting process, which sounds promising, except for the fact that these provisions apply only to contracts worth over $100 million. This discriminates against smaller, more nimble innovators and providers of cutting-edge technology.

This isn't how things have always been. After complaints about large horse dealers monopolizing military contracts during the Civil War, the government allowed quartermasters to purchase horses and mules from any dealer on the open market. In World War II, Congress created the Smaller War Plants Corporation, which awarded tens of thousands of contracts to small, competitive firms. Today, through innovative use of Small Business Innovation Research money, other transaction authorities, rapid work programs and the like, the Pentagon is certainly signaling interest in emerging technologies.

But let us be clear: We are not advocating continuing to invest larger dollar amounts into never-ending, short-term pilots and prototypes. The key to sustaining the innovation base through this crisis and any future crises is transitioning the best of these companies and products into real production contracts serving the day-to-day needs of the mission. Host tough, but fair competitions for new innovations, and then rapidly scale the winners.

America's technological supremacy has afforded our country nearly a century of military hegemony, but it is not a law of nature. Sovereign states and peer competitors like Russia and China will quickly outpace us if we take our prowess for granted. We need new entrants into the defense industry more than ever, but without government support through crises like this one, the talent and capital simply won't be there.

Why do investors say defense isn't a safe bet?

As the Department of Defense readily acknowledges, its mission is fundamentally changing. Breakthroughs in technological fields like artificial intelligence, autonomous systems, robotics, resilient networks and cyberwarfare mean that future conflicts will look nothing like those we have seen before. The DoD of tomorrow needs a fresh wave of technical expertise to understand and respond to these new kinds of threats.

That is not to say that legacy defense contractors are not needed; their expertise in large air and sea vehicles is currently unparalleled. But the expertise to build these new technologies resides in pockets of talent that the big and bureaucratic incumbents, who made their names with 20th century technology, lost access to decades ago.

The DoD has publicly extolled the importance of innovative defense startups for years. That is partly why we are so excited to invest capital into the defense sector at this moment in history. Silicon Valley has a chance to live up to its oft-ridiculed but sincere ambition to make the world a better place by investing in American national security.

However, we as venture capitalists and growth equity investors also have a duty to our limited partners who have entrusted us to invest and grow their capital. If we see the same old story of the government claiming to support small businesses but prioritizing its old incumbents, those investment dollars will disappear.

Times of rapid and unprecedented change, such as those precipitated by COVID-19, also provide opportunities. The DoD and Congress can reshape budget priorities to put their money where their mouths have been and support innovative defense technologies. Each dollar awarded to a successful venture capital- and growth equity-backed defense startup through a competitively awarded contract attracts several more dollars in private investment, providing the DoD significantly more leverage than if that same dollar were spent on a subsidy or loan to a large legacy contractor. This leverage of private capital means that every contract a startup receives accelerates by up to 10 times its ability to build technology and hire talent to support the DoD's mission.

The bottom line is this: There's no reason to let a health crisis today become a national security crisis tomorrow. The DoD has an opportunity to not only sustain but grow its innovation base, and give contracts, not lip service, to innovators. We, the undersigned, hope they do.

The contributors to this commentary are: Steve Blank of Stanford University; Katherine Boyle of General Catalyst; James Cham of Bloomberg Beta; Ross Fubini of XYZ Capital; Antonio Gracias of Valor Equity Partners, who sits on the boards of Tesla and SpaceX; Joe Lonsdale of 8VC, who also co-founded Palantir; Raj Shah of Shield Capital, who is a former director of the U.S. Defense Innovation Unit; Trae Stephens of Founders Fund; JD Vance of Narya Capital; Albert Wenger of Union Square Ventures; Josh Wolfe of Lux Capital; Hamlet Yousef of IronGate Capital; and Dan Gwak of Point72.

https://www.defensenews.com/opinion/commentary/2020/05/04/the-defense-industry-needs-new-entrants-and-a-supportive-government-during-crises/

On the same topic

  • Can the Army perfect an AI strategy for a fast and deadly future?

    15 October 2019 | International, C4ISR

    By: Kelsey D. Atherton

    Military planners spent the first two days of the Association of the United States Army's annual meeting outlining the future of artificial intelligence for the service and tracing back from this imagined future to the needs of the present. This is a world where AI is so seamless and ubiquitous that it factors into everything from rifle sights to logistical management. It is a future where every soldier is a node covered in sensors, and every access point to that network is under constant threat by enemies moving invisibly through the very parts of the electromagnetic spectrum that make networks possible. It is a future where weapons can, on their own, interpret the world, position themselves within it, plot a course of action, and then, in the most extreme situations, follow through. It is a world of rich battlefield data, hyperfast machines and vulnerable humans. And it is discussed as an inevitability.

    “We need AI for the speed at which we believe we will fight future wars,” said Brig. Gen. Matthew Easley, director of the Army AI Task Force. Easley is one of a handful of people with an outsized role shaping how militaries adopt AI.

    The past of data future

    Before the Army can build the AI it needs, the service needs to collect the data that will fuel and train its machines. In the shortest terms, that means the task force's first areas of focus will include preventative maintenance and talent management, where the Army is gathering a wealth of data. Processing what is already collected has the potential for an outsized impact on the logistics and business side of administering the Army. For AI to matter in combat, the Army will need to build a database of what sensor-readable events happen in battle, and then refine that data to ultimately provide useful information to soldiers. And to get there means turning every member of the infantry into a sensor.

    “Soldier lethality is fielding the Integrated Visual Augmentation Systems, or our IVAS soldier goggles that each of our infantry soldiers will be wearing,” Easley said. “In the short term, we are looking at fielding nearly 200,000 of these systems.”

    The IVAS is built on top of Microsoft's HoloLens augmented reality tool. That the equipment has been explicitly tied to not just military use, but military use in combat, led to protests from workers at Microsoft who objected to the product of their labor being used with “intent to harm.”

    And with IVAS in place, Easley imagines a scenario where IVAS sensors plot fields of fire for every soldier in a squad, up through a platoon and beyond. “By the time it gets to [a] battalion commander,” Easley said, “they're able to say where their dead zones are in front of [the] defensive line. They'll know what their soldiers can touch right now, and they'll know what they can't touch right now.”

    Easley compared the overall effect to the data collection done by commercial companies through the sensors on smartphones — devices that build detailed pictures of the individuals carrying them. Fitting sensors to infantry, vehicles or drones can help build the data the Army needs to power AI.

    Another path involves creating synthetic data. While the Army has largely fought the same type of enemy for the past 18 years, preparing for the future means designing systems that can handle the full range of vehicles and weapons of a professional military. With insurgents unlikely to field tanks or attack helicopters at scale anytime soon, the Army may need to generate synthetic data to train an AI to fight a near-peer adversary.

    Faster, stronger, better, more autonomous

    “I want to proof the threat,” said Bruce Jette, the Army's assistant secretary for acquisition, logistics and technology, while speaking at a C4ISRNET event on artificial intelligence at AUSA.
    Jette then set out the kind of capability he wants AI to provide, starting from the perspective of a tank turret. “Flip the switch on, it hunts for targets, it finds targets, it classifies targets. That's a Volkswagen, that's a BTR [Russian-origin armored personnel carrier], that's a BMP [Russian-origin infantry fighting vehicle]. It determines whether a target is a threat or not. The Volkswagen's not a threat, the BTR is probably a threat, the BMP is a threat, and it prioritizes them. BMP is probably more dangerous than the BTR. And then it classifies which one's [an] imminent threat, one's pointing towards you, one's driving away, those type of things, and then it does a firing solution to the target, which one's going to fire first, then it has all the firing solutions and shoots it.”

    Enter Jette's ideal end state for AI: an armed machine that senses the world around it, interprets that data, plots a course of action and then fires a weapon. It is the observe–orient–decide–act cycle without a human in the loop, and Jette was explicit on that point.

    “Did you hear me anywhere in there say ‘man in the loop?' ” Jette said. “Of course, I have people throwing their hands up about ‘Terminator.' I did this for a reason. If you break it into little pieces and then try to assemble it, there'll be 1,000 interface problems. I tell you to do it once through, and then I put the interface in for any safety concerns we want. It's much more fluid.”

    In Jette's end state, the AI of the vehicle is designed to be fully lethal and autonomous, and then the safety features are added in later — a precautionary stop, a deliberate calming intrusion into an already complete system.

    Jette was light on the details of how to get from the present to the thinking tanks of tomorrow's wars. But it is a process that will, by necessity, involve buy-in and collaboration with industry to deliver the tools, whether it comes as a gestalt whole or in a thousand little pieces.
    Learning machines, fighting machines

    Autonomous kill decisions, with or without humans in the loop, are a matter of still-debated international legal and ethical concern. That likely means that Jette's thought-experiment tank is part of a more distant future than a host of other weapons. The existence of small and cheap battlefield robots, however, means that we are likely to see AI used against drones in the more immediate future. Before robots fight people, robots will fight robots. Before that, AI will mostly manage spreadsheets and maintenance requests.

    “There are systems now that can take down a UAS pretty quickly with little collateral damage,” Easley said. “I can imagine those systems becoming much more autonomous in the short term than many of our other systems.”

    Autonomous systems designed to counter other fast, autonomous systems without people on board are already in place. The aptly named Counter Rocket, Artillery, and Mortar, or C-RAM, systems use autonomous sensing and reaction to specifically destroy projectiles pointed at humans. Likewise, autonomy exists on the battlefield in systems like loitering munitions designed to search for and then destroy anti-air radar defense systems. Iterating AI will mean finding a new space of what is acceptable risk for machines sent into combat.

    “From a testing and evaluation perspective, we want a risk knob. I want the commander to be able to go maximum risk, minimum risk,” said Brian Sadler, a senior research scientist at the Army Research Laboratory. “When he's willing to take that risk, that's OK. He knows his current rules of engagement, he knows where he's operating, he knows if he uses some platforms; he's willing to make that sacrifice.”

    In his work at the Vehicle Technology Directorate of the Army Combat Capabilities Development Command, Sadler is tasked with catching up the science of AI to the engineered reality of it. It is not enough to get AI to work; it has to be understood.
    “If people don't trust AI, people won't use it,” Tim Barton, chief technology officer at Leidos, said at the C4ISRNET event. Building that trust is an effort that industry and the Army have to tackle from multiple angles. Part of it involves iterating the design of AI tools with the people in the field who will use them, so that the information analyzed and the product produced have immediate value.

    “AI should be introduced to soldiers as an augmentation system,” said Lt. Col. Chris Lowrance, a project manager in the Army's AI Task Force. “The system needs to enhance capability and reduce cognitive load.”

    Away from but adjacent to the battlefield, Sadler pointed to tools that can provide immediate value even as they're iterated upon. “If it's not a safety-of-life mission, I can interact with that analyst continuously over time in some kind of spiral development cycle for that product, which I can slowly whittle down to something better and better, and even at the get-go we're helping the analyst quite a bit,” Sadler said. “I think Project Maven is the poster child for this,” he added, referring to the Google-started tool that identifies objects in drone footage.

    Project Maven is the rare intelligence tool that found its way into the public consciousness. It was built on top of open-source tools, and workers at Google circulated a petition objecting to the role of their labor in creating something that could “lead to potentially lethal outcomes.” The worker protest led the Silicon Valley giant to outline new principles for its own use of AI.

    Ultimately, the experience of engineering AI is vastly different from that of the end user, for whom AI fades seamlessly into the background, becoming just an ambient part of modern life. If the future plays out as described, AI will move from a hyped feature, to a normal component of software, to an invisible processor that runs all the time.
    “Once we succeed in AI,” said Danielle Tarraf, a senior information scientist at the think tank Rand, “it will become invisible like control systems, noticed only in failure.”

    https://www.c4isrnet.com/artificial-intelligence/2019/10/15/can-the-army-perfect-an-ai-strategy-for-a-fast-and-deadly-future

  • Russia splashes $12 billion to keep aviation sector in the air

    21 December 2023 | International, Aerospace

  • Navy changing LCS maintenance and staffing practices

    8 November 2023 | International, Naval

    More sailor maintenance and changes to crew sizes are in the works for the troubled LCS fleet, officials say.

All news