April 9, 2024 | International, Land
How Patriot proved itself in Ukraine and secured a fresh future
What might have looked like an aging system not long ago now appears to be a workhorse that could be used for years to come.
April 2, 2020 | International, Aerospace
By: Valerie Insinna
WASHINGTON — Boeing and the Air Force have finalized an agreement to fix the KC-46 aerial refueling tanker's most serious technical problem, Defense News has learned from multiple sources familiar with the matter.
The agreement puts an end to years of negotiations between the Air Force and the aerospace giant over the nature and extent of the redesign work needed to correct the Remote Vision System, the collection of cameras and sensors that provides boom operators the imagery needed to steer the boom into another aircraft and safely transfer fuel.
Perhaps more importantly, the deal paves the way for the service to deploy the KC-46 in combat in the mid-2020s — something Air Force leaders have resisted doing with the tanker in its current form.
The Air Force and Boeing have agreed on a two-phased roadmap to address RVS technical issues, said one source familiar with the agreement.
The first phase allows Boeing to continue providing incremental improvements to software and hardware that will fine-tune the imagery seen by the boom operator, the source said. The second phase — which will take years to complete — involves a comprehensive redesign of the RVS in which its hardware and software will be almost completely replaced with new color cameras, advanced displays and improved computing technology.
Boeing and the Air Force both declined to comment on the matter.
Unlike legacy tankers, where boom operators can look out a window in the back of the aircraft and rely on visual cues to steer the boom, operators in the KC-46 are completely dependent on the imagery provided by the RVS.
Although Air Force operators say the system works in most conditions — and provides a safer way to offload fuel at night or in bad weather — certain lighting conditions can cause the RVS imagery to appear warped and misleading, contributing to cases where the boom accidentally scrapes the surface of another aircraft. Such contact could endanger the pilot of the receiving aircraft, and it could also scrape the stealth coating off a low-observable jet, eroding its ability to evade radar detection.
Under the terms of Boeing's firm fixed-price contract and previous agreements with the service, the company is responsible for the full cost of the redesign effort. Boeing has already exceeded the contract's $4.9 billion ceiling and has paid more than $3.5 billion in cost overruns as technical problems have mounted.
Boeing is the system integrator for the RVS and designs its software, while the system's cameras and sensors are primarily designed by Collins Aerospace.
The Air Force's acquisition executive, Will Roper, is expected to brief congressional staff on the deal this afternoon, sources said. Afterward, the service is expected to release additional details.
Boeing delivered the first KC-46 tanker to McConnell Air Force Base, Kan., in January 2019, but the Air Force has withheld $28 million per aircraft upon delivery due to the RVS issues. So far, the company has delivered 33 tankers to the service.
June 11, 2021 | International, Naval
CHARLOTTESVILLE, Va. — Northrop Grumman Corp. has been awarded a newly expanded role as systems integrator for C5ISR and control systems on the U.S. Coast Guard Offshore Patrol Cutter (OPC), by Eastern Shipbuilding Group (ESG), the prime contractor for the...
January 7, 2019 | International, Aerospace, C4ISR
BY PATRICK TUCKER

An advisory board is drafting guidelines that may help shape worldwide norms for military artificial intelligence — and woo Silicon Valley to defense work.

U.S. defense officials have asked the Defense Innovation Board for a set of ethical principles for the use of artificial intelligence in warfare. The principles are intended to guide a military whose interest in AI is accelerating — witness the new Joint Artificial Intelligence Center — and to reassure potential partners in Silicon Valley about how their AI products will be used.

Today, the primary document laying out what the military can and can't do with AI is a 2012 doctrine that says a human being must have veto power over any action an autonomous system might take in combat. It's brief, just four pages, and doesn't touch on any of the uses of AI for decision support, predictive analytics, etc., where players like Google, Microsoft, Amazon and others are making fast strides in commercial environments.

“AI scientists have expressed concern about how DoD intends to use artificial intelligence. While the DoD has a policy on the role of autonomy in weapons, it currently lacks a broader policy on how it will use artificial intelligence across the broad range of military missions,” said Paul Scharre, the author of Army of None: Autonomous Weapons and the Future of War.

Josh Marcuse, executive director of the Defense Innovation Board, said crafting these principles will help the department “safely and responsibly” employ new technologies. “I think it's important when dealing with a field that's emergent to think through all the ramifications,” he said.

The board, a group of Silicon Valley corporate and thought leaders chaired by former Google and Alphabet chairman Eric Schmidt, will make the list public at its June meeting. Defense Department leaders will then take the principles under consideration.

Marcuse believes the Pentagon can be a leader not just in employing AI but in establishing guidelines for its safe use — just as the military pioneered safety standards for aviation. “The Department of Defense should lead in this area as we have with other technologies in the past. I want to make sure the department is not just leading in developing AI for military purposes but also in developing ethics to use AI in military purposes,” he says.

The effort is, in part, a response to what happened with Project Maven, the Pentagon's flagship AI project with Google as its partner. The effort applied artificial intelligence to the vast store of video and image footage that the Defense Department gathers to guide airstrikes. Defense officials emphasized repeatedly that the AI was intended only to cut down the workload of human analysts, but they also acknowledged that the ultimate goal was to help the military do what it does better, which sometimes means finding and killing humans. An employee revolt ensued at Google: employees resigned en masse, and the company said it would not renew the contract.

Scharre, who leads the Technology and National Security Program at the Center for a New American Security, said, “One of the challenges for things like Project Maven, which uses AI technology to process drone video feeds, is that some scientists expressed concern about where the technology may be heading. A public set of AI principles will help clarify DoD's intentions regarding artificial intelligence.”

Full article: https://www.defenseone.com/technology/2019/01/pentagon-seeks-list-ethical-principles-using-ai-war/153940/