September 26, 2018
By Anna Desmarais
Using artificial intelligence at Canada's official points of entry can lead to serious human rights violations, according to a new report.
Released Wednesday by the University of Toronto's International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy, the report says the use of artificial intelligence (AI) at regular points of entry is “quite risky” without appropriate government oversight.
“We know that, in other contexts, AI is not neutral,” report author Petra Molnar told iPolitics. “It's basically like a recipe. If your recipe is biased, then the result that is going to come out of the algorithm is also going to be biased.”
What these technologies could do, according to the report, is decide whether a marriage is genuine, an application is complete, or whether someone entering the country is deemed “a risk” to public safety. If the government doesn't provide more oversight, such decisions could rely on appearance, religion, or travel patterns as “proxies” for more relevant data normally gathered by immigration officials.
This could compromise fundamental human rights for immigrants and refugees at the border, including the right to equality and to protection from discrimination under the law.
The report says AI systems could learn to assess “red flags,” “risks,” and “frauds” from pre-existing biases in some of the immigration and refugee system's current regulations. For example, the report said the Designated Country of Origin list, which classifies which countries are “safe” for refugee claimants, uses an “incomplete” definition of safety that does not take into account specific risks for minority groups, such as women or members of the LGBTQ community.
The use of AI technologies could mean cases are determined solely on the basis of such guidelines, without the discretion and empathy immigration officials bring to reviewing the details of a refugee claim.
“Depending on how an algorithm is designed, it may result in indirect discrimination,” the report found. “The complexity of human migration is not easily reducible to an algorithm.”
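The mechanism the report describes can be made concrete with a small sketch. This is purely illustrative: the feature names, thresholds, and “risky” regions below are invented, and no real border system is being reproduced. The point is that a rule which never mentions a protected characteristic can still produce disparate outcomes when its inputs correlate with one.

```python
# Illustrative sketch only: a toy rule-based screen showing how a facially
# neutral criterion can act as a proxy and yield indirect discrimination.
# All feature names, thresholds, and region codes are hypothetical.

def risk_screen(applicant: dict) -> str:
    """Flag applicants whose travel history matches a 'risky' pattern."""
    # A seemingly neutral rule: several short trips through certain regions.
    if applicant["short_trips"] >= 3 and applicant["transit_region"] in {"R1", "R2"}:
        return "flagged"
    return "cleared"

# Two groups identical under the law, but whose travel patterns happen to
# correlate with origin: the rule never mentions nationality, yet one
# group is flagged every time and the other never is.
group_a = [{"short_trips": 4, "transit_region": "R1"} for _ in range(10)]
group_b = [{"short_trips": 1, "transit_region": "R3"} for _ in range(10)]

flag_rate_a = sum(risk_screen(p) == "flagged" for p in group_a) / len(group_a)
flag_rate_b = sum(risk_screen(p) == "flagged" for p in group_b) / len(group_b)
print(flag_rate_a, flag_rate_b)  # 1.0 vs 0.0: disparate impact from a "neutral" rule
```

A real system would be far more complex, but the same dynamic applies: the discrimination lives in the correlation between inputs and group membership, not in any explicit rule.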
If someone is triaged or flagged for early deportation, it could also affect their ability to apply for a visa, appeal a negative immigration ruling, or continue to move between borders.
AI technologies also raise procedural-rights issues, such as how a potential immigrant or refugee claimant would challenge the outcome of their case at the border.
“When you introduce AI, if you don't agree with the decision, where do you appeal? And what kind of appeal are you crafting?” Molnar said. “These are all new questions we have to ask ourselves.”
The report found that the government has been experimenting with artificial intelligence since 2014. Immigration, Refugees and Citizenship Canada confirmed to the report's authors in June that it was already using an automated system to “triage,” or separate, simple claims from complicated ones that need further review.
This summer, the government sent out an RFI (a preliminary procurement document) seeking an “Artificial Intelligence Solution” to provide legal support for migrants entering at formal points of entry.
These investments fit into the federal government's $125-million Pan-Canadian Artificial Intelligence Strategy to “develop global thought leadership on the economic, ethical, policy and legal implications” of AI research throughout the country.
Molnar said she heard from government officials that their use of AI is “preliminary” at best. What the government is considering, she continued, is using AI technologies only for preliminary screening.
After AI technologies have reviewed a case, Molnar said immigration officers should still be asked to review the decision and make any appropriate changes.
Molnar said it's still too soon to tell what AI could look like at the borders, but noted the technological changes could be vast.
“It can be as simple as an Excel sheet, all the way to totally autonomous robots in other sectors,” she continued. “In immigration, how this could manifest ... could include a triage system where a traveller might be designated a high risk or low risk, or streamed for high risk and low risk.”
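The triage Molnar describes can be sketched in a few lines. This is a hypothetical illustration, not any system IRCC has described: the fields and the 0.3 threshold are assumptions, and the “risk score” stands in for whatever upstream model a real deployment would use.

```python
# Illustrative sketch of a triage system of the kind Molnar describes:
# an automated first pass that streams files into "fast-track" vs
# "needs human review". Fields and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Claim:
    complete: bool     # all required documents present
    risk_score: float  # output of some upstream model, 0.0 to 1.0

def triage(claim: Claim) -> str:
    """Stream a claim: complete, low-risk files skip ahead; everything
    else goes to an officer for discretionary review."""
    if claim.complete and claim.risk_score < 0.3:
        return "fast-track"
    return "officer-review"

print(triage(Claim(complete=True, risk_score=0.1)))  # fast-track
print(triage(Claim(complete=True, risk_score=0.7)))  # officer-review
```

Even in this toy form, the human-rights questions in the report are visible: everything hinges on how the risk score is produced and on whether the “officer-review” stream actually involves meaningful human discretion.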
To address these possible human-rights infringements, the report suggests installing an independent, arm's-length government-oversight body to “engage in all aspects of oversight” before the government continues to develop these technologies.
This recommendation, Molnar said, is in line with the Treasury Board of Canada Secretariat's review into responsible use of AI throughout government offices. Among other recommendations, the board suggests more transparency from government offices about when AI technologies will be used during a discretionary decision-making process. The report notes this suggestion “is promising, from a human-rights perspective,” but the document is non-binding and is still subject to change.
Until that oversight body is created, the report suggests the government freeze “all efforts to procure, develop or adopt” any new automated decision-system technology.