February 12, 2018 | International, Aerospace, C4ISR

US Air Force requests $156.3 billion in FY19, plans to retire B-1, B-2 fleets

By: Aaron Mehta

ROME — The fiscal 2019 budget for the U.S. Air Force plans to grow the service from 55 to 58 combat squadrons over the next five years, while buying dozens of high-end aircraft and preparing to retire the B-1 and B-2 bomber fleets as the military retools for the high-end competition foreseen by the Pentagon.

The National Defense Strategy, released in January, focused on the potential for great power competition between the U.S. and Russia or China. And in any such battle, the U.S. Air Force would play a critical role; hence, the service's request for $156.3 billion for FY19, a 6.6 percent overall increase from the FY18 request.

In FY19, the Air Force is requesting 48 F-35A fighter jets, 15 KC-46A tankers and one more MC-130J aircraft. The service is also investing $2.3 billion in research and development for the B-21 Raider bomber, up from the $2 billion requested in the yet-to-be-enacted FY18 budget.

The latter is notable, as the Air Force has formally announced it will be retiring the B-1 and B-2 bomber fleets once the B-21 — which will be dual-capable for both conventional and nuclear missions — starts to come online in the mid-2020s.

The budget request also calls for investing in new engines for the B-52 fleet to keep that aircraft going through 2050 — making it an almost 100-year-old design.

“If the force structure we have proposed is supported by the Congress, bases that have bombers now will have bombers in the future,” Air Force Secretary Heather Wilson said in a service release. “They will be B-52s and B-21s.”

The budget request also seeks to move forward with a new light-attack aircraft, likely either the Embraer-Sierra Nevada Corp. A-29 Super Tucano or the Textron AT-6, to provide a low-end capability.

Although that program seems at odds with the high-end challenge foreseen by the Defense Department, Susanna Blume of the Center for a New American Security believes it fits in nicely, as such an aircraft would remove the need to fly expensive, high-end aircraft for that mission.

Overall, the budget request calls for buying 258 F-35A fighters through the next five years. And in terms of space, the service is requesting $2 billion to fund five launches of the Evolved Expendable Launch Vehicle.

The service also seeks to increase funding for F-16 modernization to speed upgrades with active electronically scanned array antennas, radar warning systems and Link 16 systems.

Naval warfare reporter David B. Larter contributed to this report from Washington.

https://www.defensenews.com/smr/federal-budget/2018/02/12/air-force-requests-1563-billion-in-fy19-plans-to-retire-b-1-b-2-fleets/

On the same subject

  • Contracts for August 5, 2021

    August 6, 2021 | International, Aerospace, Naval, Land, C4ISR, Security

  • Intelligence Agencies Release AI Ethics Principles

    July 24, 2020 | International, C4ISR, Security

    Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers. By KELSEY ATHERTON

    ALBUQUERQUE — Today, the Office of the Director of National Intelligence released what is the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0, approved by DNI John Ratcliffe last month. They are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

    “These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

    The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google, who called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military. While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex like that put forth by the Department of Defense.

    “We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

    Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved, but the technology is evolving too fast and unpredictably to try to carve commandments in stone for all time.

    Here are the six principles, in the document's own words:

    • Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

    • Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

    • Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

    • Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

    • Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

    • Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

    The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm try to parse that data and summarize it for a human is a relatively recent feature. Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

    “We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think, though, what we can do is talk in general terms about some of the things that we are doing.”

    Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data processing AI accountable to policymakers. For the general public, as it offers comment on intelligence service use of AI, examples will have to come from outside classification and will likely center on examples of AI in the private sector.

    “We think there's a big overlap between what the intelligence community needs and, frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles. He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies.
    Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to. “There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really, really common problems.”

    https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

  • Milrem Robotics Led Consortium Awarded €30.6 Million by the European Commission to Develop a European Standardized Unmanned Ground System

    June 19, 2020 | International, Land

    June 17, 2020 - A consortium led by Milrem Robotics and composed of several major defence, communication and cybersecurity companies and high-technology SMEs was awarded €30.6 million from the European Commission's European Defence Industrial Development Programme (EDIDP) to develop a European standardized unmanned ground system.

    During the project, a modular and scalable architecture for hybrid manned-unmanned systems will be developed to standardize a Europe-wide ecosystem for aerial and ground platforms; command, control and communication equipment; sensors; payloads; and algorithms. The prototype system will utilize an existing unmanned ground vehicle, Milrem Robotics' THeMIS, and a specific list of payloads. The outcome of the project will be demonstrated in operational environments and relevant climatic conditions as part of participating member states' military exercises or at separate testing grounds.

    The total cost of the project, titled iMUGS (integrated Modular Unmanned Ground System), is €32.6 million, of which €30.6 million will be provided by the European Commission.

    “Robotic and autonomous systems will tremendously enhance defence and military capabilities in the coming years all around the world. iMUGS is an excellent example of how Europe can utilize and develop high-end technologies as a joint effort while avoiding scattering activities and resources,” said Kuldar Väärsi, CEO of Milrem Robotics.

    “It is nice to see that the European Defence Fund is efficiently consolidating the requirements of EU member states and the European industry's capabilities to increase defence capabilities and strategic autonomy. The European industry is determined and ready to provide efficient and deployable technologies already over the next three years in the course of this project,” Väärsi added.
    The project is led by Estonia, and its technical requirements have also been agreed with Finland, Latvia, Germany, Belgium, France, and Spain, which are planning to finance the remaining €2 million of the project's budget. During the project, operational know-how will be gathered and concepts for the combined engagement of manned and unmanned assets developed, while considering the ethical aspects applicable to robotics, artificial intelligence, and autonomous systems. State-of-the-art virtual and constructive simulation environments will also be set up.

    iMUGS will be a cooperation between 14 parties: Milrem Robotics (project coordinator), GT Cyber Technologies, Safran Electronics & Defense, NEXTER Systems, Krauss-Maffei Wegmann, Diehl Defence, Bittium Wireless, Insta DefSec, (Un)Manned, dotOcean, Latvijas Mobilais Telefons, GMV Aerospace and Defence, the Estonian Military Academy and the Royal Military Academy of Belgium.

    Background

    The objectives of the EDIDP programme are to contribute to the strategic autonomy of the European Union and to strengthen cooperation between Member States. The priorities include enabling high-end operations of military forces, with special focus on intelligence, secured communications and cyber. Actions include development of next-generation ground combat capabilities and solutions in artificial intelligence, virtual reality and cyber technologies.

    View source version on Milrem Robotics: https://milremrobotics.com/milrem-robotics-led-consortium-awarded-306-meur-by-the-european-commission-to-develop-a-european-standardized-unmanned-ground-system/