
September 20, 2019 | International, Aerospace

L3Harris awarded nearly $12.8M for Eglin AN/FPS-85 radar work

The radar, located at Eglin Air Force Base in Florida, performs detection, target recognition, acquisition and tracking of many space objects.

Sept. 19 (UPI) -- L3Harris Technologies has been awarded a $12.8 million contract for sustainment support of the Eglin AN/FPS-85 radar in the Air Force Space Command Space Surveillance Network.

The contract, announced Wednesday by the Department of Defense, modifies a previously awarded contract with L3Harris Technologies of Colorado Springs, Colorado, for sustainment support of the radar.

The Eglin AN/FPS-85 radar is a computer-controlled, phased-array radar set operating in the Air Force Space Command Space Surveillance Network that performs detection, target recognition, acquisition and tracking of many space objects.

The radar operates at Site C-6, Eglin Air Force Base, as one of the weapon systems of the 20th Space Control Squadron, which conducts space object identification and intelligence in support of space domain control.

Earlier this year, the 20th Space Control Squadron celebrated the 50th anniversary of the AN/FPS-85 Space Track Radar, which began space operations in February 1969.

Work on the new contract will be performed at Eglin Air Force Base, Fla., where the radar is located, with a completion date of June 30, 2020.

On the same subject

  • What Countries Lead In Developing Next-Gen Combat Aircraft?

    July 30, 2020 | International, Aerospace

    What Countries Lead In Developing Next-Gen Combat Aircraft?

    Tony Osborne | July 29, 2020

    Aviation Week's July 16 webinar on the future of combat aircraft mentioned British, French-German and Japanese fifth- and sixth-generation developments. Are there any others on the radar, such as Turkey or South Korea? Will these quieter players be able to pull the rabbit from the hat as the Turks have done with UAVs in Libya and Syria?

    London Bureau Chief Tony Osborne responds: Had we had more time during the webinar, we would have talked more about developments from Turkey and South Korea—in particular, the Turkish Aerospace Industries TF-X and Korea Aerospace Industries' KF-X. Taiwan and Pakistan are also making investments in fighter technologies, although their progress is not as mature.

    Turkey benefits from having a capable partner in BAE Systems to support the design process, and I believe they could produce a combat aircraft in the next 5-10 years. The Turkish electronics industry is well advanced, and Turkish Aerospace is growing its capabilities fairly rapidly. The biggest question is around development of engine technologies: Turkey wants an indigenous 25,000-30,000-lb. engine to power the TF-X. Although Turkey is not starting from scratch—given its experience on General Electric engines for the F-16—it has a long way to go before it can produce a reliable, locally developed powerplant. Without that, Turkey will have difficulty exporting such an aircraft. Surety of supply for a foreign engine, especially from the U.S., is doubtful given the political strains between the two countries. We are still waiting for metal to be cut.

    In South Korea, it is a slightly different story. Its platform will use a U.S.-supplied engine, and given the close relationship between South Korea and the U.S., there is that surety of supply. Time will tell whether that will change when it comes to exporting the KF-X. With assembly of the first prototype well underway, South Korea appears to be making strong progress.

  • “The Tiger Mark 3 will have no equivalent worldwide”: an interview with Bruno Even, CEO of Airbus Helicopters

    March 8, 2022 | International, Aerospace

    “The Tiger Mark 3 will have no equivalent worldwide”: an interview with Bruno Even, CEO of Airbus Helicopters

    In an interview with La Tribune, Airbus Helicopters CEO Bruno Even discusses the stakes of the Tiger Mark 3 contract recently awarded to Airbus Helicopters and its partners. “This is very good news at the political, industrial and operational level. The launch of this program is important for European defense, particularly from a political standpoint. The Tiger, which has been and remains one of the emblematic programs of European cooperation, clearly matters for a strong European defense and for its industry. The program also supports the major shift now underway, which is the strengthening of the sovereignty of Europe and its member states.”

    He notes that “Airbus Helicopters has nearly two thirds of the workload on the development of this new version of the Tiger. Within our supply chain, Thales (avionics), Safran (sights and the optronics chain) and MBDA (weapons) will be our main suppliers on the Tiger Mark 3.”

    Operationally, the Tiger Mark 3 “will have no equivalent at the European level,” he says. “Worldwide, there is still the Apache, but the Tiger Mark 3, with its future capabilities, will be an attack helicopter that, in high-intensity combat, will have no equivalent worldwide, whether in terms of connectivity (Man Machine Teaming), tactical connectivity and data exchange on the battlefield, or, finally, firepower and weaponry. With Thales we are developing new avionics that will reduce the pilot's workload and let them concentrate on their missions, and with Safran new mission and detection systems (optronics). That is why, operationally and in an uncertain world, this new helicopter will continue to be the guardian angel of our soldiers on the battlefield.” La Tribune, March 8

  • Can the Army perfect an AI strategy for a fast and deadly future?

    October 15, 2019 | International, C4ISR

    Can the Army perfect an AI strategy for a fast and deadly future?

    By: Kelsey D. Atherton

    Military planners spent the first two days of the Association of the United States Army's annual meeting outlining the future of artificial intelligence for the service and tracing back from this imagined future to the needs of the present. This is a world where AI is so seamless and ubiquitous that it factors into everything from rifle sights to logistical management. It is a future where every soldier is a node covered in sensors, and every access point to that network is under constant threat by enemies moving invisibly through the very parts of the electromagnetic spectrum that make networks possible. It is a future where weapons can, on their own, interpret the world, position themselves within it, plot a course of action, and then, in the most extreme situations, follow through. It is a world of rich battlefield data, hyperfast machines and vulnerable humans. And it is discussed as an inevitability.

    “We need AI for the speed at which we believe we will fight future wars,” said Brig. Gen. Matthew Easley, director of the Army AI Task Force. Easley is one of a handful of people with an outsized role shaping how militaries adopt AI.

    The past of data future

    Before the Army can build the AI it needs, the service needs to collect the data that will fuel and train its machines. In the shortest terms, that means the task force's first areas of focus will include preventative maintenance and talent management, where the Army is gathering a wealth of data. Processing what is already collected has the potential for an outsized impact on the logistics and business side of administering the Army. For AI to matter in combat, the Army will need to build a database of what sensor-readable events happen in battle, and then refine that data to ultimately provide useful information to soldiers. And to get there means turning every member of the infantry into a sensor.
    “Soldier lethality is fielding the Integrated Visual Augmentation Systems, or our IVAS soldier goggles that each of our infantry soldiers will be wearing,” Easley said. “In the short term, we are looking at fielding nearly 200,000 of these systems.”

    The IVAS is built on top of Microsoft's HoloLens augmented reality tool. That the equipment has been explicitly tied to not just military use, but military use in combat, led to protests from workers at Microsoft who objected to the product of their labor being used with “intent to harm.”

    And with IVAS in place, Easley imagines a scenario where IVAS sensors plot fields of fire for every soldier in a squad, up through a platoon and beyond. “By the time it gets to [a] battalion commander,” Easley said, “they're able to say where their dead zones are in front of [the] defensive line. They'll know what their soldiers can touch right now, and they'll know what they can't touch right now.”

    Easley compared the overall effect to the data collection done by commercial companies through the sensors on smartphones — devices that build detailed pictures of the individuals carrying them. Fitting sensors to infantry, vehicles or drones can help build the data the Army needs to power AI. Another path involves creating synthetic data. While the Army has largely fought the same type of enemy for the past 18 years, preparing for the future means designing systems that can handle the full range of vehicles and weapons of a professional military. With insurgents unlikely to field tanks or attack helicopters at scale anytime soon, the Army may need to generate synthetic data to train an AI to fight a near-peer adversary.

    Faster, stronger, better, more autonomous

    “I want to proof the threat,” said Bruce Jette, the Army's assistant secretary for acquisition, logistics and technology, while speaking at a C4ISRNET event on artificial intelligence at AUSA.
    Jette then set out the kind of capability he wants AI to provide, starting from the perspective of a tank turret.

    “Flip the switch on, it hunts for targets, it finds targets, it classifies targets. That's a Volkswagen, that's a BTR [Russian-origin armored personnel carrier], that's a BMP [Russian-origin infantry fighting vehicle]. It determines whether a target is a threat or not. The Volkswagen's not a threat, the BTR is probably a threat, the BMP is a threat, and it prioritizes them. BMP is probably more dangerous than the BTR. And then it classifies which one's [an] imminent threat, one's pointing towards you, one's driving away, those type of things, and then it does a firing solution to the target, which one's going to fire first, then it has all the firing solutions and shoots it.”

    Enter Jette's ideal end state for AI: an armed machine that senses the world around it, interprets that data, plots a course of action and then fires a weapon. It is the observe–orient–decide–act cycle without a human in the loop, and Jette was explicit on that point.

    “Did you hear me anywhere in there say ‘man in the loop'?” Jette said. “Of course, I have people throwing their hands up about ‘Terminator.' I did this for a reason. If you break it into little pieces and then try to assemble it, there'll be 1,000 interface problems. I tell you to do it once through, and then I put the interface in for any safety concerns we want. It's much more fluid.”

    In Jette's end state, the AI of the vehicle is designed to be fully lethal and autonomous, and then the safety features are added in later — a precautionary stop, a deliberate calming intrusion into an already complete system. Jette was light on the details of how to get from the present to the thinking tanks of tomorrow's wars. But it is a process that will, by necessity, involve buy-in and collaboration with industry to deliver the tools, whether it comes as a gestalt whole or in a thousand little pieces.
    Learning machines, fighting machines

    Autonomous kill decisions, with or without humans in the loop, are a matter of still-debated international legal and ethical concern. That likely means that Jette's thought-experiment tank is part of a more distant future than a host of other weapons. The existence of small and cheap battlefield robots, however, means that we are likely to see AI used against drones in the more immediate future. Before robots fight people, robots will fight robots. Before that, AI will mostly manage spreadsheets and maintenance requests.

    “There are systems now that can take down a UAS pretty quickly with little collateral damage,” Easley said. “I can imagine those systems becoming much more autonomous in the short term than many of our other systems.”

    Autonomous systems designed to counter other fast, autonomous systems without people on board are already in place. The aptly named Counter Rocket, Artillery, and Mortar, or C-RAM, systems use autonomous sensing and reaction to specifically destroy projectiles pointed at humans. Likewise, autonomy exists on the battlefield in systems like loitering munitions designed to search for and then destroy anti-air radar defense systems.

    Iterating AI will mean finding a new space of what is acceptable risk for machines sent into combat. “From a testing and evaluation perspective, we want a risk knob. I want the commander to be able to go maximum risk, minimum risk,” said Brian Sadler, a senior research scientist at the Army Research Laboratory. “When he's willing to take that risk, that's OK. He knows his current rules of engagement, he knows where he's operating, he knows if he loses some platforms, he's willing to make that sacrifice.”

    In his work at the Vehicle Technology Directorate of the Army Combat Capabilities Development Command, Sadler is tasked with catching up the science of AI to the engineered reality of it. It is not enough to get AI to work; it has to be understood.
    “If people don't trust AI, people won't use it,” Tim Barton, chief technology officer at Leidos, said at the C4ISRNET event. Building that trust is an effort that industry and the Army have to tackle from multiple angles. Part of it involves iterating the design of AI tools with the people in the field who will use them, so that the information analyzed and the product produced have immediate value.

    “AI should be introduced to soldiers as an augmentation system,” said Lt. Col. Chris Lowrance, a project manager in the Army's AI Task Force. “The system needs to enhance capability and reduce cognitive load.”

    Away from but adjacent to the battlefield, Sadler pointed to tools that can provide immediate value even as they're iterated upon. “If it's not a safety of life mission, I can interact with that analyst continuously over time in some kind of spiral development cycle for that product, which I can slowly whittle down to something better and better, and even in the get-go we're helping the analyst quite a bit,” Sadler said. “I think Project Maven is the poster child for this,” he added, referring to the Google-started tool that identifies objects from drone footage.

    Project Maven is the rare intelligence tool that found its way into the public consciousness. It was built on top of open-source tools, and workers at Google circulated a petition objecting to the role of their labor in creating something that could “lead to potentially lethal outcomes.” The worker protest led the Silicon Valley giant to outline new principles for its own use of AI.

    Ultimately, the experience of engineering AI is vastly different from that of the end user, for whom AI fades seamlessly into the background, becoming just an ambient part of modern life. If the future plays out as described, AI will move from a hyped feature, to a normal component of software, to an invisible processor that runs all the time.

    “Once we succeed in AI,” said Danielle Tarraf, a senior information scientist at the think tank Rand, “it will become invisible like control systems, noticed only in failure.”
