17 November 2020 | International, Aerospace
WASHINGTON — The National Reconnaissance Office launched a new intelligence satellite into orbit from Cape Canaveral Air Force Station, Florida, on Nov. 13, marking the agency's fourth successful launch of the year.
“We're excited to be back at CCAFS with another successful launch alongside our partners at ULA [United Launch Alliance], the 45th Space Wing, and the U.S. Space Force Space and Missile Systems Center. The successful launch of NROL-101 is another example of the NRO's commitment to constantly evolving our crucial national security systems to support our defense and intelligence partners,” said Col. Chad Davis, director of NRO's Office of Space Launch.
NROL-101 was launched aboard a United Launch Alliance Atlas V rocket with help from the Space Force's Space and Missile Systems Center's Launch Enterprise. The Atlas family of rockets has been used for 668 successful launches since it was first introduced in 1957.
For this mission, ULA incorporated new Northrop Grumman Graphite Epoxy Motor 63 (GEM 63) solid rocket boosters, which burn solid propellant to increase how much weight the first stage can lift. Each 66-foot booster contributed a maximum of 371,550 pounds of thrust to help lift the rocket and its payload off the ground. The boosters will also be an important component of ULA's future Vulcan Centaur launch vehicles.
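For a rough sense of how that per-booster figure scales, here is a minimal Python sketch. Only the 371,550-pound thrust number comes from the article; the booster counts and the unit-conversion constant are assumptions added for illustration, since the article does not say how many boosters NROL-101 flew with.

# Illustrative only: combined maximum liftoff thrust from GEM 63 boosters.
# The per-booster figure (371,550 lbf) is from the article; the booster
# counts below are hypothetical.
LBF_TO_NEWTONS = 4.44822  # pounds-force to newtons

def total_booster_thrust_newtons(num_boosters, thrust_per_booster_lbf=371_550):
    """Return the combined maximum booster thrust in newtons."""
    return num_boosters * thrust_per_booster_lbf * LBF_TO_NEWTONS

for n in (1, 2, 3, 5):
    meganewtons = total_booster_thrust_newtons(n) / 1e6
    print(f"{n} booster(s): {n * 371_550:,} lbf ≈ {meganewtons:.2f} MN")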
Previously this year, the agency conducted two launches from New Zealand and one from NASA's Wallops Flight Facility in Virginia.
NRO does not usually reveal details of its satellites or their specific functions. In a statement, the agency simply noted that the classified national security payload was built by NRO in support of its overhead reconnaissance mission.
NRO's next mission, NROL-108, is slated to launch from Cape Canaveral Air Force Station in December 2020.
 
 
17 September 2018 | International, C4ISR
By Editorial Board

Google decided after an employee backlash this summer that it no longer wanted to help the U.S. military craft artificial intelligence to help analyze drone footage. Now, the military is inviting companies and researchers across the country to become more involved in machine learning. The firms should accept the invitation.

The Defense Department's Defense Advanced Research Projects Agency will invest up to $2 billion over the next five years in artificial intelligence, a significant increase for the agency, whose goal is promoting innovative research. The influx suggests the United States is preparing to start sprinting in an arms race against China. It gives companies and researchers who want to see a safer world an opportunity not only to contribute to national security but also to ensure a more ethical future for AI.

The DARPA contracts will focus on helping machines operate in complex real-world scenarios. They will also tackle one of the central conundrums in AI: something insiders like to call “explainability.” Right now, what motivates the results that algorithms return and the decisions they make is something of a black box. That's worrying enough when it comes to policing posts on a social media site, but it is far scarier when lives are at stake. Military commanders are more likely to trust artificial intelligence if they know what it is “thinking,” and the better any of us understands technology, the more responsibly we can use it.

There is a strong defense imperative to make AI the best it can be, whether to deter other countries from using their own machine-learning capabilities to target the United States, or to ensure the United States can effectively counter them when they do. Smarter technologies, such as improved target recognition, can save civilian lives, and allowing machines to perform some tasks instead of humans can protect service members.

But patriotism is not the only reason companies should want to participate. They know better than most in government the potential these technologies have to help and to harm, and they can leverage that knowledge to maximize the former and minimize the latter. Because DARPA contracts are public, the work researchers do will be transparent in a way that Project Maven, the program that caused so much controversy at Google, was not. Employees aware of what their companies are working on can exert influence over how those innovations are used, and the public can chime in as well.

DARPA contractors will probably develop products with nonlethal applications, like improved self-driving cars for convoys and autopilot programs for aircraft. But the killer robots that have many people worried are not outside the realm of technological possibility. The future of AI will require outlining principles that explain how what is possible may differ from what is right. If the best minds refuse to contribute, worse ones will.

https://www.washingtonpost.com/opinions/silicon-valley-should-work-with-the-military-on-ai-heres-why/2018/09/12/1085caee-b534-11e8-a7b5-adaaa5b2a57f_story.html
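For readers curious what the “explainability” work the editorial mentions can look like in practice, below is a minimal Python sketch of one widely used technique, permutation feature importance, which probes a black-box model by shuffling each input feature and measuring how much the predictions degrade. The dataset, model, and scikit-learn dependency are all assumptions chosen for the sketch; nothing here is drawn from the editorial or from DARPA's actual programs.

# Illustrative sketch of permutation feature importance with scikit-learn.
# A large accuracy drop when a feature is shuffled suggests the model
# leaned heavily on that feature -- one simple window into a black box.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data and record the mean
# drop in score across repeats.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")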
 
19 July 2021 | International, C4ISR
Washington DC (SPX) Jul 14, 2021 - DARPA has selected four industry and university research teams for the Invisible Headlights program, which seeks to determine whether it is possible for autonomous vehicles to navigate in complete darkness.