
March 6, 2019 | International, Aerospace, Land

UK Army robotics receive £66m boost

The Defence Secretary has committed £66m of defence's new multi-million-pound Transformation Fund to fast-track military robotic projects onto the battlefield this year.

It was announced today at the Autonomous Warrior Exploitation Conference at the Science Museum, Kensington that the British Army will benefit from:

  • New mini-drones, providing troops with an eye-in-the-sky to give them greater awareness to outmanoeuvre enemies on the battlefield.
  • Systems to fit Army fighting vehicles with remote-control capability, so they can be pushed ahead of manned vehicles and used to test the strength of enemy defences.
  • New autonomous logistics vehicles which will deliver vital supplies to troops in warzones, helping remove soldiers from dangerous resupply tasks so they can focus on combat roles.

Defence Secretary Gavin Williamson said:

This announcement is a clear demonstration of how our Armed Forces are reaping the benefits from our new multi-million-pound Transformation Fund. Each of these new technologies will enhance our Army's capabilities whilst reducing the risk to our personnel, and I'm delighted we will be revolutionising frontline technology by the end of the year.

The MOD has always embraced pioneering technology and this fund will ensure the UK stays at the forefront of global military capabilities and ahead of our adversaries.

The injection of funding from the new £160m Transformation Fund will see some of this equipment set to deploy to the likes of Estonia, Afghanistan and Iraq before the end of the year. The Defence Secretary will also look to make a further £340m available as part of the Spending Review.

The investment comes after the Army tested a range of projects as part of the biggest military robot exercise in British history at the end of last year, Exercise Autonomous Warrior.

Yesterday, the Defence Secretary visited 16 Air Assault Brigade in Colchester which will be among the recipients of the new battlefield technologies. He discussed how the new equipment will benefit troops on the ground to help increase their safety and combat effectiveness.

The Brigade is specially trained and equipped to deploy by parachute, helicopter and air-landing. Its core role is to maintain the Air Assault Task Force, a battlegroup held at high readiness to deploy worldwide for a full spectrum of missions.

Chief of the General Staff Sir Mark Carleton-Smith said:

Rapid adaptation is an essential ingredient for success on the battlefield. The fielding of the next generation of armoured fighting vehicles and ground-breaking robotic and autonomous systems will keep the British Army at the cutting edge of battlefield technology, improving our lethality, survivability and competitive advantage.

Assistant Head of Capability Strategy and Force Development, Colonel Peter Rowell said:

Robotic and autonomous systems make our troops more effective; seeing more, understanding more, covering a greater area and being more lethal. They unshackle them from the resupply loop. These are game-changing capabilities; and not just for combat operations. They are equally useful in humanitarian and disaster relief operations.

After securing an extra £1.8bn for defence and overseeing the Modernising Defence Programme, the Defence Secretary has dedicated millions of pounds to transforming defence, arming the British military with innovative technology through fast-tracking new projects.

The MOD is embracing transformation at an ever-faster rate and the Transformation Fund is focused on investments in truly high-tech innovation that will create the armed forces of the future.

https://www.gov.uk/government/news/army-robotics-receive-66m-boost

On the same subject

  • Senior officials shed light on contentious procurement process for new surveillance aircraft

    October 17, 2023 | International, Aerospace


    High-level bureaucrats on Tuesday provided an update on the government’s acquisition of a new fleet to replace its aging CP-140s, which has come under fire amidst rumours the feds are considering directly awarding the contract to Boeing.

  • US Space Force to launch more integrated units to boost efficiency

    February 27, 2024 | International, Aerospace


    Lt. Gen. David Miller says the service is weeks away from announcing plans to expand the construct beyond its initial pilot phase.

  • Intelligence Agencies Release AI Ethics Principles

    July 24, 2020 | International, C4ISR, Security


    Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.

    By KELSEY ATHERTON

    ALBUQUERQUE — Today, the Office of the Director of National Intelligence released the first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0, approved by DNI John Ratcliffe last month.

    The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

    “These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our use of AI provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

    The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google, who called upon the tech giant to withdraw from a contract to build image-processing AI for Project Maven, which sought to identify objects in video recorded by the military.

    While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex like that put forth by the Department of Defense.

    “We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

    Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved, but the technology is evolving too fast and too unpredictably to carve commandments in stone for all time. Here are the six principles, in the document's own words:

      • Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.
      • Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.
      • Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.
      • Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.
      • Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.
      • Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

    The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm parse that data and summarize it for a human is a relatively recent development. Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

    “We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think, though, what we can do is talk in general terms about some of the things that we are doing.”

    Internal legal review, as well as classified assessments from the Inspectors General, will likely be what makes the classified data-processing AI accountable to policymakers. For the general public, as it offers comment on intelligence service use of AI, examples will have to come from outside classification, and will likely center on AI in the private sector.

    “We think there's a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles. He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies.

    Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to. “There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really really common problems.”

    https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/
