24 July 2020 | International, C4ISR, Security

Intelligence Agencies Release AI Ethics Principles

Getting it right doesn't just mean staying within the bounds of the law. It means making sure that the AI delivers reports that are accurate and useful to policymakers.


ALBUQUERQUE — Today, the Office of the Director of National Intelligence released a first take on an evolving set of principles for the ethical use of artificial intelligence. The six principles, ranging from privacy to transparency to cybersecurity, are described as Version 1.0 and were approved by DNI John Ratcliffe last month.

The six principles are pitched as a guide for the nation's many intelligence agencies, especially to help them work with the private companies that will build AI for the government. As such, they provide an explicit complement to the Pentagon's AI principles put forth by Defense Secretary Mark Esper back in February.

“These AI ethics principles don't diminish our ability to achieve our national security mission,” said Ben Huebner, who heads the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides the unbiased, objective and actionable intelligence policymakers require; that is fundamentally our mission.”

The Pentagon's AI ethics principles came at the tail end of a long process set in motion by workers at Google, who called on the tech giant to withdraw from a contract to build image-processing AI for Project Maven, an effort to identify objects in video recorded by the military.

While ODNI's principles come with an accompanying six-page ethics framework, there is no extensive 80-page supporting annex, like that put forth by the Department of Defense.

“We need to spend our time under the framework and the guidelines that we're putting out to make sure that we're staying within the guidelines,” said Dean Souleles, Chief Technology Advisor at ODNI. “This is a fast-moving train with this technology. Within our working groups, we are actively working on many, many different standards and procedures for practitioners to use and begin to adopt these technologies.”

Governing AI as it is developed is a lot like laying out the tracks ahead while the train is in motion. It's a tricky proposition for all involved — but the technology is evolving too fast and unpredictably to carve commandments in stone for all time.

Here are the six principles, in the document's own words:

Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.

Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.

Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.

Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.

Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.

Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.

The accompanying framework offers further questions for people to ask when programming, evaluating, sourcing, using, and interpreting information informed by AI. While bulk processing of data by algorithm is not a new phenomenon for the intelligence agencies, having a learning algorithm try to parse that data and summarize it for a human is a relatively recent feature.
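To make that distinction concrete, here is a minimal, hypothetical sketch of the kind of pipeline the paragraph alludes to: bulk text scored by a simple learned representation, with the highest-scoring items surfaced for a human analyst. The sample documents and the TF-IDF ranking heuristic are illustrative assumptions, not a description of any agency's actual tooling.

```python
# A toy sketch of algorithmic triage of bulk text for a human reader.
# The documents and scoring heuristic are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Shipping traffic at the northern port increased sharply this week.",
    "Routine maintenance was reported at the same facility last month.",
    "Satellite imagery shows new construction adjacent to the pier.",
    "Local weather was mild with no storms reported.",
]

# Score each sentence by the mean weight of its distinctive terms.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
scores = tfidf.mean(axis=1).A.ravel()

# Surface the top-scoring sentences as a draft "summary" for human review.
top_k = 2
for score, sentence in sorted(zip(scores, documents), reverse=True)[:top_k]:
    print(f"{score:.3f}  {sentence}")
```

Even in a toy like this, the framework's questions apply: an analyst still has to ask why these particular sentences were surfaced and what the scoring may have missed.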

Getting it right doesn't just mean staying within the bounds of the law; it means making sure that the data produced by the inquiry is accurate and useful when handed off to the people who use intelligence products to make policy.

“We are absolutely welcoming public comment and feedback on this,” said Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there's going to be aspects of what we do that are and remain classified. I think though, what we can do is talk in general terms about some of the things that we are doing.”

Internal legal review, along with classified assessments from the Inspectors General, will likely be what holds the classified data-processing AI accountable to policymakers. For the general public, which is being invited to comment on the intelligence community's use of AI, examples will have to come from outside classification and will likely center on AI in the private sector.

“We think there's a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” said Souleles.

He specifically pointed to the task of threat identification, using AI to spot malicious actors that seek to cause harm to networks, be they e-commerce giants or three-letter agencies. Depending on one's feelings towards the collection and processing of information by private companies vis-à-vis the government, it is either reassuring or ominous that when it comes to performing public accountability for spy AI, the intelligence community will have business examples to turn to.
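As an illustration of that overlap, the sketch below shows the sort of unsupervised anomaly flagging commonly used for network threat identification in the private sector. The traffic features and values are invented for illustration; this is not any agency's or vendor's actual detection system.

```python
# Hedged sketch: flag anomalous network connections with an unsupervised model.
# All features and numbers below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: [bytes sent, bytes received, duration in seconds]
normal = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 5_000, 10], size=(500, 3))

# A few synthetic outliers standing in for suspicious flows (e.g., exfiltration).
suspicious = np.array([
    [900_000, 1_200, 600],   # huge upload over a long-lived connection
    [750_000, 2_000, 450],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspicious))   # likely [-1 -1] for such extreme values
print(model.predict(normal[:5]))   # mostly 1s
```

Unsupervised approaches like this are popular for the task precisely because labeled examples of novel attacks are scarce, which is as true for a three-letter agency as for an e-commerce giant.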

“There's many areas that I think we're going to be able to talk about going forward, where there's overlap that does not expose our classified sources and methods,” said Souleles, “because many, many, many of these things are really really common problems.”

https://breakingdefense.com/2020/07/intelligence-agencies-release-ai-ethics-principles/

On the same subject

  • Boeing begins involuntary layoffs, but defense biz to remain mostly untouched

    28 May 2020 | International, Aerospace

    Boeing begins involuntary layoffs, but defense biz to remain mostly untouched

    By: Valerie Insinna WASHINGTON — Boeing began making its first round of involuntary layoffs on Wednesday morning, announcing that it will slash the jobs of approximately 6,770 employees across the United States. Boeing's massive commercial business will take the brunt of the cuts, with the company's defense, space and security division expected to shed fewer than 100 employees through involuntary layoffs this week. “While the deeper reductions are in areas that are most exposed to the condition of our commercial customers, the ongoing stability of our defense, space and related services businesses will help us limit overall impact, and we will continue hiring talent to support critical programs and meet our customers' evolving needs,” a Boeing spokesman said in a statement. Boeing plans to reduce its total headcount by 10 percent through natural turnover, voluntary layoffs and involuntary cuts — a measure made necessary by the ongoing impact of the COVID-19 pandemic, which has shaken the travel industry and called into question commercial airlines' ability to pay for Boeing aircraft already on order. So far, about 5,520 U.S.-based employees have been approved for voluntary layoffs, with about 380 of that sum coming from Boeing's defense business. The approximately 6,770 U.S.-based employees that will be involuntarily laid off this week represent the largest portion of layoffs expected by the company. Those workers will receive severance pay, COBRA health care coverage and career transition services, Boeing CEO Dave Calhoun said in a message notifying employees about the cuts. “The several thousand remaining layoffs will come in much smaller additional tranches over the next few months,” a Boeing spokesman said. In his message to Boeing employees, Calhoun hinted that the situation could improve as countries begin reopening businesses and more customers feel comfortable booking air travel. However, it will take years for Boeing to fully recover from the pandemic, he added. “The COVID-19 pandemic's devastating impact on the airline industry means a deep cut in the number of commercial jets and services our customers will need over the next few years, which in turn means fewer jobs on our lines and in our offices. We have done our very best to project the needs of our commercial airline customers over the next several years as they begin their path to recovery,” Calhoun wrote. “I wish there were some other way.” https://www.defensenews.com/industry/2020/05/27/boeing-begins-involuntary-layoffs-but-defense-biz-to-remain-mostly-untouched/

  • U.S. Cyber Command looks to grow its acquisition capacity

    14 September 2018 | International, C4ISR

    U.S. Cyber Command looks to grow its acquisition capacity

    By Lauren C. Williams The Defense Department's newest combatant command is nearly a decade old but still doesn't steer its own acquisitions. That could change in fiscal 2019, however, as U.S. Cyber Command staffs up its contracting office and seeks a bigger acquisition budget. "Acquisition authority is limited at the moment. It's capped at $75 million and has a sunset date, currently, of 2021," said Stephen Schanberger, command acquisition executive for U.S. Cyber Command, during a panel at the Billington Cybersecurity Summit Sept. 6. "So the command is actively pursuing getting that increased on the ceiling amount as well as the sunset date." Cyber Command has only had acquisition authority for two fiscal years, but Congress extended that authority through 2025 in the fiscal year 2019 National Defense Authorization Act. That advances the authority four years from the original sunset date of 2021. Cyber Command awarded only one contract in fiscal 2017, Schanberger said, partly because it lacked a contract writing system and technical personnel to get things done. Things improved this year with $40 million in contract awards and Schanberger expects to reach the $75 million cap sometime in 2019. "We are really hamstrung at the moment in relying on the current [contracting] vehicles out there from others," he said. "And in some cases we've had to adjust our scope to match up to the contract versus waiting for them to put another whole contract vehicle or task order onto a contract." Schanberger seeks to more than triple Cyber Command's acquisition ceiling to $250 million to allow for multi-year contracts. Congressional scrutiny has been the main impediment to securing additional acquisition funds because the command needs to prove its contracting abilities, but Schanberger said increasing staff and getting things right will help. "Congress would like us to show that we actually can use our authority the way it's supposed to be and start to stand on the backbone of what it takes to be a contracting organization," particularly regarding contract types, use of other transaction authorities, competitive bids versus sole source, and partnering with small businesses, he said. Schanberger told FCW he wasn't concerned about additional congressional scrutiny surrounding the Defense Department's use of other transaction authorities because "our efforts are nowhere near the big efforts that they're looking for." But overall, Cyber Command's contracting office is growing. Schanberger now leads a team of about five people, including himself, consisting of a contracting officer, specialist, and supporting contractors. He hopes to double the team's capacity by year's end. "We are in our infancy from an acquisition perspective, we are putting down the foundation of the personnel and the skills," he said, with the goal "to be able to activate, put together solicitation packages, plan our contracting strategy for [multiple] years, and be able to effectively implement and put out RFPs on the street without making a mess out of it." Schanberger said they are looking at capabilities that can benefit all of the service components, such as analytic development. Cyber Command released a request for proposals for an analytic support program dubbed Rainfire on Sept. 4. "Once we get the skills in place, I think we'll be able to demonstrate to everyone around us that we can execute the authorities we have and grow them responsibly," he said. https://fcw.com/articles/2018/09/13/cybercom-aquisition-williams.aspx

  • HOW HACKED WATER HEATERS COULD TRIGGER MASS BLACKOUTS

    14 August 2018 | International, C4ISR

    HOW HACKED WATER HEATERS COULD TRIGGER MASS BLACKOUTS

    WHEN THE CYBERSECURITY industry warns about the nightmare of hackers causing blackouts, the scenario they describe typically entails an elite team of hackers breaking into the inner sanctum of a power utility to start flipping switches. But one group of researchers has imagined how an entire power grid could be taken down by hacking a less centralized and protected class of targets: home air conditioners and water heaters. Lots of them. At the Usenix Security conference this week, a group of Princeton University security researchers will present a study that considers a little-examined question in power grid cybersecurity: What if hackers attacked not the supply side of the power grid, but the demand side? In a series of simulations, the researchers imagined what might happen if hackers controlled a botnet composed of thousands of silently hacked consumer internet of things devices, particularly power-hungry ones like air conditioners, water heaters, and space heaters. Then they ran a series of software simulations to see how many of those devices an attacker would need to simultaneously hijack to disrupt the stability of the power grid. Their answers point to a disturbing, if not quite yet practical scenario: In a power network large enough to serve an area of 38 million people—a population roughly equal to Canada or California—the researchers estimate that just a one percent bump in demand might be enough to take down the majority of the grid. That demand increase could be created by a botnet as small as a few tens of thousands of hacked electric water heaters or a couple hundred thousand air conditioners. "Power grids are stable as long as supply is equal to demand," says Saleh Soltan, a researcher in Princeton's Department of Electrical Engineering, who led the study. "If you have a very large botnet of IoT devices, you can really manipulate the demand, changing it abruptly, any time you want." The result of that botnet-induced imbalance, Soltan says, could be cascading blackouts. When demand in one part of the grid rapidly increases, it can overload the current on certain power lines, damaging them or more likely triggering devices called protective relays, which turn off the power when they sense dangerous conditions. Switching off those lines puts more load on the remaining ones, potentially leading to a chain reaction. "Fewer lines need to carry the same flows and they get overloaded, so then the next one will be disconnected and the next one," says Soltan. "In the worst case, most or all of them are disconnected, and you have a blackout in most of your grid." Power utility engineers, of course, expertly forecast fluctuations in electric demand on a daily basis. They plan for everything from heat waves that predictably cause spikes in air conditioner usage to the moment at the end of British soap opera episodes when hundreds of thousands of viewers all switch on their tea kettles. But the Princeton researchers' study suggests that hackers could make those demand spikes not only unpredictable, but maliciously timed. The researchers don't actually point to any vulnerabilities in specific household devices, or suggest how exactly they might be hacked. Instead, they start from the premise that a large number of those devices could somehow be compromised and silently controlled by a hacker. That's arguably a realistic assumption, given the myriad vulnerabilities other security researchers and hackers have found in the internet of things. 
One talk at the Kaspersky Analyst Summit in 2016 described security flaws in air conditioners that could be used to pull off the sort of grid disturbance that the Princeton researchers describe. And real-world malicious hackers have compromised everything from refrigerators to fish tanks. Given that assumption, the researchers ran simulations in power grid software MATPOWER and PowerWorld to determine what sort of botnet could disrupt what size grid. They ran most of their simulations on models of the Polish power grid from 2004 and 2008, a rare country-sized electrical system whose architecture is described in publicly available records. They found they could cause a cascading blackout of 86 percent of the power lines in the 2008 Poland grid model with just a one percent increase in demand. That would require the equivalent of 210,000 hacked air conditioners, or 42,000 electric water heaters. (A rough back-of-the-envelope check of those figures appears after this article.) The notion of an internet of things botnet large enough to pull off one of those attacks isn't entirely farfetched. The Princeton researchers point to the Mirai botnet of 600,000 hacked IoT devices, including security cameras and home routers. That zombie horde hit DNS provider Dyn with an unprecedented denial of service attack in late 2016, taking down a broad collection of websites. Building a botnet of the same size out of more power-hungry IoT devices is probably impossible today, says Ben Miller, a former cybersecurity engineer at electric utility Constellation Energy and now the director of the threat operations center at industrial security firm Dragos. There simply aren't enough high-power smart devices in homes, he says, especially since the entire botnet would have to be within the geographic area of the target electrical grid, not distributed across the world like the Mirai botnet. But as internet-connected air conditioners, heaters, and the smart thermostats that control them increasingly show up in homes for convenience and efficiency, a demand-based attack like the one the Princeton researchers describe could become more practical than one that targets grid operators. "It's as simple as running a botnet. When a botnet is successful, it can scale by itself. That makes the attack easier," Miller says. "It's really hard to attack all the generation sites on a grid all at once. But with a botnet you could attack all these end user devices at once and have some sort of impact." The Princeton researchers modeled more devious techniques their imaginary IoT botnet might use to mess with power grids, too. They found it was possible to increase demand in one area while decreasing it in another, so that the total load on a system's generators remains constant while the attack overloads certain lines. That could make it even harder for utility operators to figure out the source of the disruption. If a botnet did succeed in taking down a grid, the researchers' models showed it would be even easier to keep it down as operators attempted to bring it back online, triggering smaller scale versions of their attack in the sections or "islands" of the grid that recover first. And smaller scale attacks could force utility operators to pay for expensive backup power supplies, even if they fall short of causing actual blackouts. And the researchers point out that since the source of the demand spikes would be largely hidden from utilities, attackers could simply try them again and again, experimenting until they had the desired effect.
The owners of the actual air conditioners and water heaters might notice that their equipment was suddenly behaving strangely. But that still wouldn't immediately be apparent to the target energy utility. "Where do the consumers report it?" asks Princeton's Soltan. "They don't report it to Con Edison, they report it to the manufacturer of the smart device. But the real impact is on the power system that doesn't have any of this data." That disconnect represents the root of the security vulnerability that utility operators need to fix, Soltan argues. Just as utilities carefully model heat waves and British tea times and keep a stock of energy in reserve to cover those demands, they now need to account for the number of potentially hackable high-powered devices on their grids, too. As high-power smart-home gadgets multiply, the consequences of IoT insecurity could someday be more than just a haywire thermostat, but entire portions of a country going dark. https://www.wired.com/story/water-heaters-power-grid-hack-blackout/
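As a quick sanity check on the scale described above, the sketch below converts a one percent demand bump on a country-sized grid into device counts. The assumed grid size (about 21 GW) and per-device wattages are ballpark assumptions for illustration, not figures from the Princeton study; only the one percent framing and the reported device counts come from the article.

```python
# Back-of-the-envelope check on the scale the researchers describe.
# GRID_DEMAND_MW and the device wattages are assumptions, not study figures.
GRID_DEMAND_MW = 21_000          # assumed country-scale grid load at a given hour
TARGET_BUMP_FRACTION = 0.01      # the ~1 percent demand increase cited in the study

extra_demand_mw = GRID_DEMAND_MW * TARGET_BUMP_FRACTION

DEVICE_DRAW_KW = {
    "electric water heater": 4.5,   # typical resistive element (assumption)
    "air conditioner": 1.0,         # typical room unit (assumption)
}

for device, draw_kw in DEVICE_DRAW_KW.items():
    devices_needed = extra_demand_mw * 1_000 / draw_kw
    print(f"{device}: ~{devices_needed:,.0f} units for a {extra_demand_mw:.0f} MW bump")

# With these assumptions: roughly 47,000 water heaters or 210,000 air conditioners,
# in the same range as the 42,000 / 210,000 figures reported in the article.
```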

All news