May 15, 2018 | International, Aerospace

CAE USA awarded contract to provide instructor support services for the United States Navy

CAE announced today that CAE USA has been awarded a contract by the Chief of Naval Air Training (CNATRA) to provide contract instruction services (CIS) in support of the United States Navy's ground-based training program.

Under the five-year CNATRA CIS contract, which was awarded as a base contract with four one-year options, CAE USA will provide classroom and simulator instructors at five Naval Air Stations (NAS) to support the primary, intermediate, and advanced training of future United States Navy pilots.

"We are delighted to have been selected for this highly competitive program supporting the training of United States Navy student pilots," said Ray Duquette, President and General Manager of CAE USA. "Over the past several years, we have demonstrated our capabilities on the Navy's T-44C program as a world-class provider of comprehensive training solutions and services. CAE USA looks forward to expanding its support of primary and advanced pilot training to help ensure the success of the Navy's future naval aviators."

CAE USA will provide classroom and simulator instructors at the following five United States Navy training bases:

  • Naval Air Station Whiting Field, Florida – Primary phase training on the T-6B Texan aircraft;
  • Naval Air Station Corpus Christi, Texas – Primary phase training on the T-6B Texan aircraft;
  • Naval Air Station Meridian, Mississippi – Intermediate and advanced jet training on the T-45C Goshawk aircraft;
  • Naval Air Station Kingsville, Texas – Intermediate and advanced jet training on the T-45C Goshawk aircraft;
  • Naval Air Station Pensacola, Florida – Naval Flight Officer (NFO) training.

The CNATRA CIS program provides classroom and simulator instructor support services for primary naval aviation training, which all prospective Navy pilots must complete. The program also supports intermediate and advanced strike training, the pathway for future fighter or "tailhook" pilots, as well as NFO training, the undergraduate instruction in operating the advanced mission systems aboard naval aircraft. CAE USA already supports intermediate and advanced multi-engine training under the company-owned and operated T-44C Command Aircraft Crew training program at NAS Corpus Christi. Training for naval rotary-wing pilots is supported under a separate program.

"The CNATRA CIS training program is another example of the United States military outsourcing to industry a portion of the services and support required to train its aircrews," said Mr. Duquette, a retired naval aviator and former instructor at NAS Kingsville. "As a company focused on training, we are well positioned to build a successful partnership with our military customers to help train tomorrow's pilots and ensure their success."

https://www.cae.com/fr/nouvelles-et-evenements/communique-de-presse/cae-usa-awarded-contract-to-provide-instructor-support-services-for-united-states-navy/
