May 6, 2019 | International, Aerospace, Naval, Land, C4ISR, Security, Other Defence

DARPA: Expediting Software Certification for Military Systems, Platforms

Military systems increasingly rely on software to deliver functionality and new capabilities. Before a new piece of software can be deployed within a system, however, its functional safety and compliance with certain standards must be verified and ultimately certified. As software usage continues to grow rapidly, it is becoming exceedingly difficult to assure that all software considered for military use is coded correctly and then tested, verified, and documented appropriately.

“Software requires a certain level of certification – or approval that it will work as intended with minimal risks – before receiving approval for use within military systems and platforms,” said Dr. Ray Richards, a program manager in DARPA's Information Innovation Office (I2O). “However, the effort required to certify software is an impediment to expeditiously developing and fielding new capabilities within the defense community.”

Today, the software certification process is largely manual and relies on human evaluators combing through piles of documentation, or assurance evidence, to determine whether the software meets certain certification criteria. The process is time consuming, costly, and can result in superficial or incomplete evaluations as reviewers bring their own sets of expertise, experiences, and biases to the process. The lack of a principled means of decomposing evaluations makes it difficult to create a balanced and trustworthy process that applies equally to all software. Further, each subsystem and component must be evaluated independently and re-evaluated before it can be used in a new system. “Just because a subsystem is certified for one system or platform does not mean it is unilaterally certified for all,” noted Richards. This creates additional time delays and review cycles.

To help accelerate and scale the software certification process, DARPA developed the Automated Rapid Certification Of Software (ARCOS) program. The goal of ARCOS is to create tools and a process for the automated assessment of software evidence, providing understandable justification for a software system's level of assurance. Taking advantage of recent advances in model-based design technology, “Big Code” analytics, mathematically rigorous analysis and verification, as well as assurance case languages, ARCOS seeks to develop a capability to automatically evaluate software assurance evidence so that certifiers can rapidly determine that system risk is acceptable.

“This approach to reengineering the software certification process is well timed as it aligns with the DoD Digital Engineering Strategy, which details how the department is looking to move away from document-based engineering processes and towards design models that are to be the authoritative source of truth for systems,” said Richards.

To create this automated capability, ARCOS will explore techniques for automating the evidence generation process for new and legacy software; create a means of curating evidence while maintaining its provenance; and develop technologies for the automated construction of assurance cases, as well as technologies that can validate and assess the confidence of an assurance case argument. The evidence generation, curation, and assessment technologies will form the ARCOS tools and processes, working collectively to provide a scalable means of accelerating the pathway to certification.
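ARCOS itself is still a research program, but the basic shape of the artifacts described above, claims supported by provenance-tracked evidence, with confidence assessed over the whole assurance argument, can be sketched in a few lines. The following Python toy is purely illustrative and is not drawn from ARCOS; the class names, the sample evidence items, and the minimum-based confidence aggregation rule are all the author's assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A single piece of assurance evidence with its provenance."""
    description: str
    source: str          # tool or process that produced it (provenance)
    confidence: float    # 0.0-1.0, weight the evidence carries

@dataclass
class Claim:
    """A node in an assurance case: a claim backed by evidence and sub-claims."""
    statement: str
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

    def assessed_confidence(self) -> float:
        # Naive aggregation: a claim is only as strong as its weakest support.
        supports = [e.confidence for e in self.evidence]
        supports += [c.assessed_confidence() for c in self.subclaims]
        return min(supports) if supports else 0.0

# Toy assurance case for a hypothetical software module
root = Claim(
    "Module X is acceptably safe for deployment",
    evidence=[Evidence("Static analysis report", "analyzer-v2", 0.9)],
    subclaims=[
        Claim("Unit tests cover all safety requirements",
              evidence=[Evidence("Coverage report, 97% of requirements",
                                 "test-harness", 0.8)]),
    ],
)

print(f"Assessed confidence: {root.assessed_confidence():.2f}")
# → Assessed confidence: 0.80
```

In practice, assurance-case notations used in certification work are far richer than this (structured arguments, context, defeaters), and confidence assessment is an open research question; the point here is only that evidence, provenance, and argument structure are machine-checkable data, which is what makes automation of the kind ARCOS pursues plausible.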

Throughout the program's expected three phases, evaluations and assessments will occur to gauge how the research is progressing. ARCOS researchers will tackle progressively more challenging sets of software systems and associated artifacts. The envisioned evaluation progression will move from a single software module to a set of interacting modules and finally to a realistic military software system.

Interested proposers will have an opportunity to learn more during a Proposers Day on May 14, 2019, from 8:30AM to 3:30PM (EST) at the DARPA Conference Center, located at 675 N. Randolph Street, Arlington, Virginia, 22203. The purpose of the Proposers Day is to outline the ARCOS technical goals and challenges, and to promote an understanding of the BAA proposal requirements. For details about the event, including registration requirements, please visit: https://www.fbo.gov/index?s=opportunity&mode=form&id=6a8f03472cf43a3558456b807877f248&tab=core&_cview=0

Additional information will be available in the forthcoming Broad Agency Announcement, which will be posted to www.fbo.gov.

https://www.darpa.mil/news-events/2019-05-03

On the same subject

  • Italy unveils weapons wish list, forecasts defense spending

    October 18, 2023 | International, Land

    Now on the list are 21 High Mobility Artillery Rocket Systems — U.S.-made rocket launchers that have seen success on the battlefield in Ukraine.

  • Panel wants to double federal spending on AI

    April 2, 2020 | International, C4ISR

    Aaron Mehta

    A congressionally mandated panel of technology experts has issued its first set of recommendations for the government, including doubling the amount of money spent on artificial intelligence outside the defense department and elevating a key Pentagon office to report directly to the Secretary of Defense.

    Created by the National Defense Authorization Act in 2018, the National Security Commission on Artificial Intelligence is tasked with reviewing “advances in artificial intelligence, related machine learning developments, and associated technologies,” for the express purpose of addressing “the national and economic security needs of the United States, including economic risk, and any other associated issues.” The commission issued an initial report in November, at the time pledging to slowly roll out its actual policy recommendations over the course of the next year. Today's report represents the first of those conclusions — 43 of them in fact, tied to legislative language that can easily be inserted by Congress during the fiscal year 2021 budget process.

    Bob Work, the former deputy secretary of defense who is the vice chairman of the commission, said the report is tied into a broader effort to move the DoD away from a focus on large platforms. “What you're seeing is a transformation to a digital enterprise, where everyone is intent on making the DoD more like a software company. Because in the future, algorithmic warfare, relying on AI and AI-enabled autonomy, is the thing that will provide us with the greatest military competitive advantage,” he said during a Wednesday call with reporters.

    Among the key recommendations:

    The government should “immediately double non-defense AI R&D funding” to $2 billion for FY21, a quick cash infusion that should strengthen academic centers and national labs working on AI issues. The funding should “increase agency topline levels, not repurpose funds from within existing agency budgets, and be used by agencies to fund new research and initiatives, not to support re-labeled existing efforts.” Work noted that he recommends this R&D funding double again in FY22. The commission leaves open the possibility of recommendations for increasing DoD's AI investments as well, but said it wants to study the issue more before making such a request. In FY21, the department requested roughly $800 million in AI developmental funding and another $1.7 billion in AI-enabled autonomy, which Work said is the right ratio going forward. “We're really focused on non-defense R&D in this first quarter, because that's where we felt we were falling further behind,” he said. “We expect DoD AI R&D spending also to increase” going forward.

    The Director of the Joint Artificial Intelligence Center (JAIC) should report directly to the Secretary of Defense, and should continue to be led by a three-star officer or someone with “significant operational experience.” The first head of the JAIC, Lt. Gen. Jack Shanahan, is retiring this summer; currently the JAIC falls under the office of the Chief Information Officer, who in turn reports to the secretary. Work said the commission views the move as necessary to ensure leadership in the department is “driving” investment in AI, given all the competing budgetary requirements.

    The DoD and the Office of the Director of National Intelligence (ODNI) should establish a steering committee on emerging technology, tri-chaired by the Deputy Secretary of Defense, the Vice Chairman of the Joint Chiefs of Staff, and the Principal Deputy Director of ODNI, in order to “drive action on emerging technologies that otherwise may not be prioritized” across the national security sphere.

    Government microelectronics programs related to AI should be expanded in order to “develop novel and resilient sources for producing, integrating, assembling, and testing AI-enabling microelectronics.” In addition, the commission calls for articulating a national strategy for microelectronics and associated infrastructure. Funding for DARPA's microelectronics program should be increased to $500 million. The commission also recommends the establishment of a $20 million pilot microelectronics program to be run by the Intelligence Advanced Research Projects Activity (IARPA), focused on AI hardware.

    A new office, tentatively called the National Security Point of Contact for AI, should be established, and allied governments encouraged to do the same, in order to strengthen coordination at an international level. The first goal for that office would be to develop an assessment of allied AI research and applications, starting with the Five Eyes nations and then expanding to NATO.

    One issue identified early by the commission is the question of ethical AI. The commission recommends mandatory training on the limits of artificial intelligence for the AI workforce, which should include discussions of ethical issues. The group also calls for the Secretary of Homeland Security and the director of the Federal Bureau of Investigation to “share their ethical and responsible AI training programs with state, local, tribal, and territorial law enforcement officials,” and to track which jurisdictions take advantage of those programs over a five-year period.

    Missing from the report: any mention of the Pentagon's Directive 3000.09, a 2012 order laying out the rules about how AI can be used on the battlefield. Last year C4ISRNet revealed that there was an ongoing debate among AI leaders, including Work, on whether that directive was still relevant.

    While not reflected in the recommendations, Eric Schmidt, the former Google executive who chairs the commission, noted that his team is starting to look at how AI can help with the ongoing COVID-19 coronavirus outbreak, saying, “We're in an extraordinary time... we're all looking forward to working hard to help any way that we can.”

    The full report can be read here. https://www.c4isrnet.com/artificial-intelligence/2020/04/01/panel-wants-to-double-federal-spending-on-ai/

  • US Air Force wants one of its pilots to face off against an aircraft flown by an artificial intelligence

    June 12, 2020 | International, Aerospace

    American researchers specializing in artificial intelligence plan to create an autonomous combat aircraft capable of shooting down a fighter jet flown by a human. The US Air Force is expected to stage such a matchup in July 2021, according to Air Force Magazine. The Air Force Research Laboratory (AFRL) has been working since 2018 on an automated system, based on artificial intelligence techniques, that could gain the upper hand over a human-piloted fighter in air-to-air combat. The project's technology, dubbed “Bigmoon shot,” relies on deep machine learning. Air Force Magazine and L'Usine Nouvelle, June 12
