15 February 2021 | International, Aerospace
By: Valerie Insinna
WASHINGTON — It started as a dare.
When Will Roper, then the Air Force's top acquisition official, visited Beale Air Force Base in California last fall, he issued a challenge to the U-2 Federal Laboratory, a five-person organization founded in October 2019. The team was established to create advanced technologies for the venerable Lockheed Martin U-2 spyplane, and Roper wanted to push the team further.
“He walked into the laboratory and held his finger out and pointed directly at me,” recalled Maj. Ray Tierney, the U-2 pilot who founded and now leads the lab. “He said, ‘Ray, I got a challenge.' We didn't even say hello.”
Roper, a string theorist turned reluctant government bureaucrat who was known for his disruptive style and seemingly endless references to science fiction, wanted the team to update the U-2's software during a flight. It was a feat the U.S. military had never accomplished, but to Tierney's exasperation, Roper wanted to know only how long it would take the lab to pull it off.
The answer, it turns out, was two days and 22 hours.
A month later, in mid-November, Roper laid out a second challenge: Create an AI copilot for the U-2, a collection of algorithms that would be able to learn and adapt in a way totally unlike the mindlessness of an autopilot that strictly follows a preplanned route.
That task took a month. On Dec. 15, during a live training flight, an AI entity called Artuμ (pronounced Artoo, as in R2-D2 of Star Wars fame) was given control of the U-2's sensors and conveyed the locations of adversary missile launchers to the human pilot.
Now, the U-2 Federal Laboratory is at work again on another undisclosed challenge. Tierney and Roper declined to elaborate on the task in interviews with Defense News. But Roper acknowledged, more broadly, that a future where AI copilots regularly fly with human operators was close at hand.
“Artuμ has a really good chance of making it into operations by maybe the summer of this year,” Roper told Defense News before his Jan. 20 departure from the service. “I'm working with the team on how aggressive is the Goldilocks of being aggressive enough? The goal is fairly achievable, but still requires a lot of stress and effort.”
In order to ready Artuμ for day-to-day operations, the AI entity will be tested in potentially millions of virtual training missions — including ones where it faces off against itself. The Air Force must also figure out how to certify it so that it can be used outside of a test environment, Roper said.
“The first time we fly an AI in a real operation or real world mission — that's the next big flag to plant in the ground,” Roper said. “And my goal before I leave is to provide the path, the technical objectives, the program approach that's necessary to get to that flag and milestone.”
Meanwhile, the team has its own less formal, longer-term challenge: How do you prove to a giant organization like the Air Force, one that is full of bureaucracy and thorough reviews, that a small team of five people can quickly create the innovation the service needs?
No regulations, no rules
During a Dec. 22 interview, Tierney made it clear that he had little interest in discussing what the U-2 Federal Lab is currently working on. What he wanted to promote, he said, was the concept of how federal laboratories could act as innovation pressure chambers for the military — a place where operators, scientists and acquisition personnel would have the freedom to create without being hamstrung by red tape.
For those immersed in military technology, focusing on the promise of federal laboratories can seem like a bit of a letdown, if not outright academic, especially when compared to a discussion about the future of artificial intelligence. The U.S. government is rife with organizations — often named after tired Star Wars references that would make even the most enthusiastic fanboy cringe — created in the name of fostering innovation and rapidly developing new technologies. Many of those advances never make it over the “valley of death” between when a technology is first designed and when it is finally mature enough to go into production.
Ultimately, that's the problem the U-2 Federal Lab was created to solve.
As a federally accredited laboratory, the team is empowered to create a technology, test it directly with users, mature it over time, and graduate it into the normal acquisition process at Milestone B, Tierney said. At that stage, the product is ready to be treated as a program of record going through the engineering and manufacturing development process, which directly precedes full-rate production.
“We're basically front loading all the work so that when we hand it to the acquisition system, there's no work left to do,” Tierney said. The lab essentially functions as a “blue ocean,” as an uncontested market that does not normally exist in the acquisition system, he explained. “There's no regulations; there's no rules.”
While that might sound similar to other organizations the Air Force has stood up to harness emerging technologies, such as its Kessel Run software development factory, Tierney bristled at the comparison.
“We're basically developing on the weapon system, and then working our way back through the lines of production, as opposed to a lot of these organizations like Kessel Run, which is developing it on servers and server environments,” he said.
That distinction is critical when it comes to bringing modern software technologies to an aging platform like the U-2, an aircraft that took its first flight in 1955 and is so idiosyncratic that high-speed muscle cars are needed to chase the spyplane and provide situational awareness as it lands.
Because the team works only with the U-2, they understand the precise limitations of the weapon system, what its decades-old computers are capable of handling, and how to get the most out of the remaining space and power inside the airplane.
Besides Tierney, there are only four other members of the U-2 Federal Lab: a National Guardsman with more than a decade of experience working for IBM, and three civilians with PhDs in machine learning, experimental astrophysics and applied mathematics. (The Air Force declined to provide the names of the other employees from the lab.)
As the lone member of the team with experience flying the U-2, Tierney provides perspective on how the aircraft is used operationally and what types of technologies rank high on pilots' wish lists. But what most often drives the team are the projects that can make the biggest impact — not just for the U-2, but across the whole Defense Department.
Making it work
One of those projects was an effort to use Kubernetes, a container orchestration system that automates the deployment and management of software applications, onboard a U-2. The technology was originally created by Google and is currently maintained by the Cloud Native Computing Foundation.
“Essentially, what it does is it federates or distributes processing between a bunch of different computers. So you can take five computers in your house and basically mush them all together into one more powerful computer,” Tierney said.
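In practice, handing work to Kubernetes looks roughly like the following. This is a generic, hypothetical sketch using the official Kubernetes Python client, not the lab's actual configuration; the workload name, container image and replica count are placeholders. The point it illustrates is the one Tierney describes: the operator declares a workload once, and Kubernetes schedules its copies across whatever compute nodes the cluster has.

```python
# Minimal, hypothetical sketch: declare a workload and let Kubernetes
# distribute its replicas across the available nodes. Names and the
# container image are illustrative placeholders, not the lab's setup.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials (e.g. ~/.kube/config)
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="sensor-processor"),  # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes spreads these pods across cluster nodes
        selector=client.V1LabelSelector(match_labels={"app": "sensor-processor"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "sensor-processor"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="worker",
                        image="registry.example.com/sensor-processor:latest",  # placeholder image
                    )
                ]
            ),
        ),
    ),
)

# Submit the declaration; the scheduler decides which nodes run the pods.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```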
The idea generated some resistance from other members of the lab, who questioned the usefulness of deploying Kubernetes to the U-2's simple computing system.
“They said, ‘Kubernetes is useless to us. It's a lot of extra processing overhead. We don't have enough containers. We have one processing board, [so] what are you distributing against? You got one computer,'” Tierney said. But a successful demonstration, held in September, proved that it was possible for even a 1950s-era aircraft to run Kubernetes, opening the door for the Defense Department to think about how it could be used to give legacy platforms more computing power.
It also paved the way for the laboratory to do something the Air Force had long been aiming to accomplish: update an aircraft's code while it was in flight.
“We wanted to show that a team of five in two days could do what the Department of Defense has been unable to do in its history,” Tierney said. “Nobody helped us with this; there was no big company that rolled in. We didn't outsource any work, it was literally and organically done by a team of five. Could you imagine if we grew the lab by a factor of two or three or four, what that would look like?”
The lab has also created a government-owned open software architecture for the U-2, a task that took about three months and involved no additional funding. Once completed, the team was able to integrate advanced machine learning algorithms developed by Sandia National Laboratories in less than 30 minutes.
“That's my litmus test for open architecture,” Tierney said. “Go to any provider that says I have open architecture, and just ask them two questions. How long is it going to take you to integrate your service? And how much is it going to cost? And if the answer isn't minutes and free, it's not quite as open as what people want.”
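What "minutes and free" implies, in the abstract, is a plug-in contract: a small, published interface that any provider's algorithm can satisfy without the host system's code changing. The sketch below is a hypothetical Python illustration of that idea, not the lab's actual architecture; the registry, service name and function signature are invented for the example.

```python
# Hypothetical sketch of an open-architecture plug-in contract: a provider's
# algorithm registers itself against a small published interface, and the
# host platform only ever calls through the registry. Illustrative only.
from typing import Callable, Dict

import numpy as np

# Registry mapping a service name to a processing function.
SERVICES: Dict[str, Callable[[np.ndarray], dict]] = {}

def register(name: str):
    """Decorator a provider uses to expose an algorithm to the host platform."""
    def wrap(fn: Callable[[np.ndarray], dict]):
        SERVICES[name] = fn
        return fn
    return wrap

# A third party drops in a module containing something like this:
@register("threat-classifier")
def classify(sensor_frame: np.ndarray) -> dict:
    # Placeholder logic; a real provider would run its own model here.
    return {"detections": [], "frame_mean": float(sensor_frame.mean())}

# The host platform calls whatever is registered, without code changes.
if __name__ == "__main__":
    frame = np.zeros((256, 256), dtype=np.float32)
    print(SERVICES["threat-classifier"](frame))
```

Because the integration cost sits entirely in conforming to the published interface, adding a new service is a matter of dropping in a module rather than renegotiating the host system, which is the property Tierney's litmus test is probing.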
The U-2 Federal Lab hopes to export the open architecture system to other military aircraft and is already in talks with several Air Force and Navy program offices on potential demonstrations.
Could the Air Force establish other federal laboratories to create specialized tech for other aircraft? The U-2 lab was designed from the outset to be franchisable, but Tierney acknowledged that much of the success of future organizations will rest on the composition of the team and the level of expertise of its members.
“Can it scale? Absolutely. How does it scale is another question,” Tierney said. “Do you have one of these for every weapon system? Do you have just a couple sprinkled throughout the government? Does it proliferate en masse? Those are all questions that, I think, largely can be explored.”
For now, it's unclear whether the Air Force will adopt this framework more widely. The accomplishments of the U-2 Federal Laboratory have been lauded by Air Force leaders such as Chief of Staff Gen. Charles “CQ” Brown, who in December wrote on Twitter that the group “continue[s] to push the seemingly impossible.”
However, it remains to be seen whether the Biden administration will give the lab the champion it found in Roper, and continued pressure on the defense budget — and to retire older aircraft like the U-2 — could present greater adversity for the lab.
But as for the other challenge, the one Tierney and Roper didn't want to discuss, Tierney offered only a wink as to what comes next:
“What I can say is that the future is going to be an interesting one.”
30 July 2018 | International, C4ISR
By: Kelsey Atherton

Bob Work, in his last months as deputy secretary of defense, wanted everything in place so that the Pentagon could share in the sweeping advances in data processing already enjoyed by the thriving tech sector. A memo dated April 26, 2017, established an “Algorithmic Warfare Cross-Functional Team,” a.k.a. “Project Maven.” Within a year, the details of Google's role in that program, disseminated internally among its employees and then shared with the public, would call into question the specific rationale of the task and the greater question of how the tech community should go about building algorithms for war, if at all.

Project Maven, as envisioned, was about building a tool that could process drone footage quickly and in a useful way. Work specifically tied this task to the Defeat-ISIS campaign. Drones are intelligence, surveillance and reconnaissance platforms first and foremost. The unblinking eyes of Reapers, Global Hawks and Gray Eagles record hours and hours of footage every mission, imagery that takes a long time for human analysts to scan for salient details. While human analysts process footage, the ground situation is likely changing, so even the most labor-intensive approach to analyzing drone video delivers delayed results.

In July 2017, Marine Corps Col. Drew Cukor, the chief of the Algorithmic Warfare Cross-Functional Team, presented on artificial intelligence and Project Maven at a defense conference. Cukor noted, “AI will not be selecting a target [in combat] ... any time soon. What AI will do is complement the human operator.” As Cukor outlined, the algorithm would allow human analysts to process two or three times as much data within the same timeframe.

To get there, though, the algorithm to detect weapons and other objects has to be built and trained. This training is at the heart of neural networks and deep learning, where the computer program can see an unfamiliar object and classify it based on its resemblance to other, more familiar objects. Cukor said that before deploying to battle “you've got to have your data ready and you've got to prepare and you need the computational infrastructure for training.”

At the time, the contractor who would develop the training and image-processing algorithms for Project Maven was unknown, though Cukor did specifically remark on how impressive Google was as an AI company. Google's role in developing Maven would not come to light until March 2018, when Gizmodo reported that Google was helping the Pentagon build AI for drones.

Google's role in the project was discussed internally at the company, and elements of that discussion were shared with reporters. “Some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations,” wrote Kate Conger and Dell Cameron, “while others argued that the project raised important ethical questions about the development and use of machine learning.”

A petition by the Tech Workers Coalition that circulated in mid-April called not just for Google to pull out of Pentagon contracts, but for Amazon, Microsoft and IBM to refuse to pick up the work of Project Maven. (The petition had attracted 300 signatures at the time of this story.) Silicon Valley's discord over the project surprised many in positions of leadership within the Pentagon.
During the 17th annual C4ISRNET Conference, Justin Poole, the deputy director of the National Geospatial-Intelligence Agency, was asked how the intelligence community can respond to skepticism in the tech world. Poole's answer was to highlight the role of intelligence services in reducing risk to war fighters.

Disagreement between some of the people working for Google and the desire of the company's leadership to continue pursuing Pentagon contracts exacerbated tension in the company throughout the spring. By May, nearly a dozen Google employees had resigned from the company over its involvement with Maven, and an internal petition asking the company to cancel the contract and avoid future military projects garnered thousands of employee signatures. To calm tensions, Google would need to find a way to reconcile the values of its employees with the desire of its leadership to develop further AI projects for a growing range of clients. That list of clients, of course, includes the federal government and the Department of Defense.

While efforts to convince the tech community at large to refuse Pentagon work have stalled, the pressure within Google resulted in multiple tangible changes. First, Google leadership announced the company's plan not to renew the Project Maven contract when it expired in 2019. Then, the company's leaders released principles for AI, saying it would not develop intelligence for weapons or surveillance applications. After outlining how Google intends to build AI in the future, with efforts to mitigate bias, aid safety and be accountable, Google CEO Sundar Pichai set out categories of AI work that the company will not pursue. This means refusing to design or deploy “technologies that cause or are likely to cause overall harm,” including an explicit prohibition on weapons principally designed to harm people, as well as surveillance tech that violates international norms.

Taken together, these principles amount to a hard no only on developing AI specifically intended for weapons. The rest are softer no's, objections that can change with interpretations of international law, norms, and even how a problem set is described. After all, when Poole was asked how to sell collaboration with the intelligence community to technology companies, he framed the task as one about saving the lives of war fighters. The “how” of that lifesaving is ambiguous: It could equally mean better and faster intelligence analysis that gives a unit on patrol the information it needs to avoid an ambush, or it could be the advance info that facilitates an attack on an adversary's encampment when the guard shift is particularly understaffed.

Image processing with AI is so ambiguous a technology, so inherently open to dual use, that the former almost certainly isn't a violation of Google's second objection to AI use, but the latter example absolutely would be. In other words, the long-term surveillance that goes into targeted killing operations above Afghanistan and elsewhere is likely out of bounds. However, the same technology used over Iraq for the fight against ISIS might be permissible. And software built to process drone footage in the latter context would be identical to the software built to process images for the former.

The lines between what this does and doesn't prevent become even murkier when one takes into account that Google built its software for Project Maven on top of TensorFlow, an open-source software library.
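What "building on TensorFlow" means in practice, at its most generic, is assembling and training a model from the library's standard building blocks. The sketch below is a hypothetical example of fine-tuning an off-the-shelf image classifier on labeled frames; it is ordinary transfer learning, not Maven's actual code, and the dataset path, class count and training settings are placeholders.

```python
# Hypothetical sketch of generic transfer learning with TensorFlow:
# fine-tune a pretrained image classifier on labeled frames.
# Dataset path and class count are illustrative placeholders.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 5  # placeholder number of object categories

# Labeled frames organized in class-named subdirectories (hypothetical path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "frames/train", image_size=IMG_SIZE, batch_size=32)

# Start from a network pretrained on ImageNet and keep its weights frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)  # the trained model can then score new frames
```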
Building on an open-source framework makes it much harder to build in proprietary constraints on the code, and it means that once the Pentagon has a trainable algorithm on hand, it can continue to develop and refine its object-recognition AI as it chooses. But the window for Google to be involved in such a project, whether to the joy or dismay of its employees and executive leadership, is likely closing.

In late June, the Pentagon announced the creation of a Joint Artificial Intelligence Center, which among other functions would take over Project Maven from the Algorithmic Warfare Cross-Functional Team. The defense sector is vast, and with Google proving to be a complicated contractor for the Pentagon, new leadership may simply take its AI contracts, worth millions, elsewhere to see if it can get the programming it needs.

And Maven itself still receives accolades within the Pentagon. Gen. Mike Holmes, commander of Air Combat Command, praised Project Maven at a June 28 defense writers group breakfast, saying that the use of learning machines and algorithms will speed up the process by which humans process information and pass on useful insights to decision-makers.

Inasmuch as the Pentagon has a consensus view of explaining tools like Maven, it is about focusing on the role of the human in the process. The software will do the first pass through the imagery collected, and then, as designed, highlight other details for a human to review and act upon. Holmes was adamant that fears of malicious AIs hunting humans, like Skynet from the “Terminator” movies, are beyond premature.

“We're going to have to work through as Americans our comfort level on how technologies are used and how they're applied,” said Holmes. “I'd make the case that our job is to compete with these world-class peer competitors that we have, and by competing and by setting this competition on terms that we can compete without going to conflict, it's better for everybody.”

AI of the tiger

Project Maven, from the start, is a program specifically sold and built for the work of fighting a violent nonstate actor, identifying the weapons and tools of an insurgency that sometimes holds swaths of territory.

“Our responsibility is to help people understand what the intent is with the capability that we are helping to develop. ... Maven is focused on minimizing collateral damage on the battlefield. There's goodness in that,” said Capt. Sean Heritage, acting managing partner of Defense Innovation Unit Experimental (DIUx). “There's always risk in how it will be used down the road, and I guess that's where a small pocket of people at Google's heads were. But, as Mr. Work pointed out during his panel at Defense One, they don't seem to have as challenging of a time contributing to AI capability development in China.”

Google's fight over Project Maven is partly about the present — the state of AI, the role of the United States in pursuing insurgencies abroad. It is also a fight about how the next AI will be built, and who that AI will be built to be used against. And the Pentagon seems to understand this, too.

In the same meeting where Holmes advocated for Maven as a useful tool for now, he argued that it was important for the United States to develop and field tools that can match peer or near-peer rivals in a major conflict. That's a far cry from selling the tool to Silicon Valley as one of immediate concern, meant to protect the people fighting America's wars presently by providing superior real-time information.
“The idea of a technology being built and then used for war, even if that wasn't the original intent,” says author Malka Older, “is what science fiction writers call a ‘classic trope.' ”

Older's novels, set two or three generations in the near future, focus on the ways in which people, governments and corporations handle massive flows of data, and provide one possible vision of a future where the same kinds and volumes of data are collected, but where that data is also held by a government entity and shared transparently.

While radical transparency in data is alien to much of the defense establishment, it's an essential part of the open-source technology community, for security concerns both genuine and sometimes not-so-genuine. Building open source means publishing code and letting outsiders find flaws and vulnerabilities in the algorithm, without looking at any of the sensitive data the algorithm is built to process. And Project Maven is built on top of an open-source framework.

“One of the dangerous concepts that we have of technology is that progress only goes in one direction,” says Older. “There's constantly choices being made of where technology goes and where concepts go and what we are trying to do.”

While it's entirely possible that the Pentagon will be able to continue the work of Project Maven and other AI programs with new contractors, if it wanted to reach out to those skeptical of how the algorithm would interpret images, it could try justifying the mission not just with national security concerns, but with transparency.

“Part of being an American is that Americans have expectations about what their government does and whether the government uses tech and tools to infringe upon their rights or not,” said Holmes. “And, so, we have really high standards as a nation that the things that we bring forward as military tools have to live up to.”

To work with the coders of the future, it may not be enough to say that the code — open source or not — is going to be used in ways consistent with their values. The Pentagon may have to find ways to transparently prove it.

https://www.c4isrnet.com/it-networks/2018/07/27/targeting-the-future-of-the-dods-controversial-project-maven-initiative/