September 6, 2018 | International, Naval

355-Ship Navy Will Mean Extending Vessels Past Planned Lifespans: Admiral

By Gina Harkins

Some of the Navy's ships could stay in service well beyond their scheduled lifespan as leaders look for ways to modernize existing vessels as part of a decades-long fleet buildup.

Navy leaders want to have 355 ships by 2030, but that doesn't mean all of them will be new. Officials are studying ways to salvage some of the service's aging vessels as part of that plus-up -- and that doesn't come without challenges.

"[Operating] as an away-game Navy is very expensive, and this requires us to look at the lifespan of everything we own," Vice Adm. William Merz, deputy chief of Naval Operations for Warfare Systems, said Wednesday at a conference hosted by Defense News.

Navy leaders plan to detail the kinds of capabilities they'll need in a 355-ship fleet in an extensive report expected to be released next year. Part of that process, Merz said, will include taking a look at what ships will still be relevant in a future fight.

That's an important factor in determining how much money to invest in refurbishing ships that have already been in service for decades. The Navy recently decided to extend the lives of some cruisers and destroyers, he said, because they're so effective.

Full article: https://www.military.com/daily-news/2018/09/05/355-ship-navy-will-mean-extending-vessels-past-planned-lifespans-admiral.html

On the same subject

  • New defense budget bill foresees US-Israel counter-drone cooperation

    August 14, 2018 | International, Aerospace

    By: Seth J. Frantzman

    JERUSALEM — For the first time, the fiscal 2019 National Defense Authorization Act includes a section on U.S.-Israel cooperation in countering unmanned aerial systems. The cooperation will identify “capability gaps” in the U.S. and Israeli ability to counter UAVs and seek out projects to address those gaps to strengthen U.S. and Israeli security. It envisions funding for research and development efforts and is modeled on previous successful programs on which Israel and the U.S. have collaborated, including missile defense and anti-tunneling initiatives. The two countries have been at the forefront of air defense cooperation for decades.

    U.S. Reps. Charlie Crist and Mike Johnson introduced a bill in February titled the “United States-Israel Joint Drone Detection Cooperation Act.” Parts of the bill were included in the NDAA passed by both houses of Congress in July. “I am honored to have our bill included in the NDAA and to see it signed into law by President [Donald] Trump. This is an important step not only for our strongest ally in the Middle East but for the United States as well,” Johnson said in July. The president signed the NDAA into law on the afternoon of Aug. 13.

    The initiative foresees “joint research and development to counter unmanned aerial vehicles [which] will serve the national security interests of the United States and Israel.” Included as Section 1272 of the final NDAA presented to the president on Aug. 3, the cooperation contains five parts, including identifying the capability gaps that exist, identifying cooperative projects that would address those gaps, assessing the costs of the research and development, and assessing the costs of procuring and fielding the capabilities developed. Reports on the cooperation will be submitted to the congressional defense committees, the Senate Foreign Relations Committee and the House Foreign Affairs Committee.

    The threat of drones has increased in recent years. On Feb. 10 an Iranian-made drone entered Israeli airspace near the northern town of Beit Shean after flying from the T4 air base in Syria. Israel identified and tracked the drone from Syria and sent an Apache helicopter to shoot it down; the drone was revealed to be armed with explosives. Former Mossad chief Danny Yatom said in an interview in April that the drone was sophisticated and “an exact replica of the U.S. drone that fell in their territory,” referring to the American RQ-170 Sentinel, which was downed in Iran in 2011. Iran developed two drones based on the Sentinel: one called Shahed 171 and an armed version dubbed Saeqeh, which debuted in 2016.

    In 2012, Hezbollah used a drone to try to carry out surveillance of the Dimona nuclear reactor in southern Israel. “It's not the first time and it will not be the last,” warned Hezbollah leader Hassan Nasrallah. Conflict Armament Research reported in March 2017 that kamikaze drones using Iranian technology were being used by Houthi rebels in Yemen against Saudi Arabia and the United Arab Emirates. The UAE has sought to bring attention to this threat during the conflict in Yemen, in which a Riyadh-led coalition is fighting the Houthis. Drones were also used by the Islamic State group to attack U.S.-led coalition forces in Syria and Iraq, and Afghan officials reported that an Iranian drone entered their airspace in August 2017.

    In September 2017, Israel used a Patriot missile to down a Hezbollah drone, and it used Patriot missiles twice to down Syrian UAVs near the Golan Heights demilitarized zone in July 2018. The U.S. reportedly used an F-15E Eagle to shoot down an Iranian-made Shahed 129 drone in June 2017 in Syria; the drone was heading for the U.S. base at Tanf, located in Syria near the Jordanian border.

    A systematic examination of the emerging drone threat is in the works. The U.S. Defense Department has been allocating resources to counter UAVs, with U.S. Central Command requesting up to $332 million over the next five years for counter-drone efforts. The U.S. Army has been looking for new missiles to defend against a variety of threats, including drones. This will include the Expanded Mission Area Missile and may include other Israeli missiles such as the Tamir interceptor for use with a multimission launcher.

    https://www.defensenews.com/unmanned/2018/08/13/new-defense-budget-bill-foresees-us-israel-counter-drone-cooperation/

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020 | International, C4ISR

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community. That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there. I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

    We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.

    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
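The human-plus-algorithm result Romine describes is an instance of score-level fusion. As a purely illustrative sketch (the fusion rule, score ranges, and function names here are assumptions, not NIST's published method), combining two independent similarity scores can be as simple as averaging them and thresholding the result:

```python
# Hypothetical sketch of score-level fusion for face verification.
# Assumes both the algorithm and the human examiner produce a
# similarity score normalized to [0, 1]; averaging is one simple
# fusion rule. All numbers below are illustrative only.

def fuse_scores(algo_score: float, human_score: float) -> float:
    """Average two normalized similarity scores into one fused score."""
    return (algo_score + human_score) / 2.0

def is_match(score: float, threshold: float = 0.5) -> bool:
    """Declare a match when the fused score clears the threshold."""
    return score >= threshold

# Example: the algorithm is unsure (0.45) but the trained examiner
# leans toward a match (0.70); the fused score tips the decision.
fused = fuse_scores(0.45, 0.70)
print(f"fused score ≈ {fused:.3f}, match: {is_match(fused)}")
```

In practice, fusion schemes weight the two sources by their measured reliability rather than averaging them equally; the point of the sketch is only that two imperfect, partly independent judges can outperform either one alone.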

  • Open source platforms, flexible airframes for new drones

    April 9, 2020 | International, Aerospace

    Kelsey D. Atherton

    Designing a drone body is about settling on the right compromise. Multirotor drones excel at vertical lift and hover, while fixed-wing drones are great at both distance and wide-open spaces. In February, Auterion Government Solutions and Quantum-Systems announced a two-pronged approach to the rotor- or fixed-wing drone market, with a pair of drones that use the same sensor packages and fuselage to operate as either the Scorpion trirotor or the Vector fixed-wing craft.

    “As we started to develop our tactical UAS Platform, our plan was only to develop a VTOL fixed wing solution (like our Vector),” said Florian Siebel, managing director of Quantum-Systems. “During the development process we decided to build a Tri-Copter Platform as well, as a result of many discussions with law enforcement agencies and Search and Rescue Units.”

    Adapting the fixed-wing fuselage to the tri-copter attachments means the drone can now operate in narrow spaces and harsh conditions. Scorpion, with the rotors, can fly for about 45 minutes at a cruising speed of up to 33 mph. Put the fixed wings back on for Vector, and the flight time rises to two hours at a cruising speed of 33 to 44 mph. The parts snap into place without any need for special tooling, and Auterion recommends the drone for missions in rain or snow.

    Both platforms share a gimbaled EO/IR sensor with 10x optical zoom, 720p EO video, 480p IR video, a laser illuminator and an IR laser rangefinder. Common to both modes is a tactical mapping tool using a 21-megapixel Sony UMC-R10C camera. For the Scorpion, there is also the option of a gimbaled electro-optical camera with a 30x optical zoom. Each drone is designed to fit in a rucksack that a person can carry. While many features are common across Vector and Scorpion, the plan is not to include both rotors and wings in the same kit: once a team packs into the field with a drone on its back, that is the mode the drone can be used in.

    Auterion intends to ship the drones by the fourth quarter of 2020, with preorders available.

    Vector and Scorpion are built on top of open source code. This includes an operating system capable of programmable autopilot, as well as machine-vision collision prevention and obstacle detection and avoidance. Software for the ground station and cloud data management of the drone is also built on open source code. The Pentagon's Defense Innovation Unit awarded Auterion a $2 million contract last year to work on the PX4 software to help drive compatibility standards in the drone industry.

    As militaries across the world look to the enterprise sector for capable drones at a smaller profile than existing military models, transparency in code and flexibility in airframe could become more widely adopted trends. In the meantime, there is Vector, and there is Scorpion.

    https://www.c4isrnet.com/unmanned/2020/03/25/open-source-platforms-flexible-airframes-for-new-drones
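The endurance and speed figures quoted above imply very different reach for the two configurations. As a back-of-the-envelope sketch (assuming steady cruise at the top of each stated speed band, and ignoring payload, wind, and battery condition, which all reduce real endurance):

```python
# Rough maximum-range estimates from the specs quoted in the article:
# Scorpion: ~45 min endurance, cruise up to 33 mph
# Vector:   ~2 h endurance,   cruise up to 44 mph
# Figures are illustrative upper bounds, not manufacturer range claims.

def max_range_miles(endurance_hours: float, cruise_mph: float) -> float:
    """Distance covered flying the whole endurance at cruise speed."""
    return endurance_hours * cruise_mph

scorpion = max_range_miles(0.75, 33.0)  # 45 minutes = 0.75 h
vector = max_range_miles(2.0, 44.0)

print(f"Scorpion: ~{scorpion:.0f} mi, Vector: ~{vector:.0f} mi")
# Scorpion: ~25 mi, Vector: ~88 mi
```

The roughly 3.5x difference in reach is the compromise the article describes: the rotor configuration trades range for the ability to hover and operate in narrow spaces.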
