Open Access Publisher and Free Library

HUMAN RIGHTS

Human Rights-Migration-Trafficking-Slavery-History-Memoirs-Philosophy

Posts tagged Canadian immigration system
Deportability, humanitarianism and development: neoliberal deportation and the Global Assistance for Irregular Migrants program

By Corey Robinson

Offering return assistance and financial inducements to migrants and asylum-seekers, assisted voluntary return and reintegration (AVRR) programmes are critical to the management of migration. While AVRR programmes have emerged as an area of study in their own right, little attention has been paid to the role of these schemes in the transnational politics of anti-smuggling policy. Building on insights from border studies, migration studies and security studies, this article examines the Global Assistance for Irregular Migrants (GAIM) programme, an AVRR scheme funded by the Canadian government and implemented by the International Organization for Migration (IOM) that targeted Sri Lankan nationals stranded after the disruption of smuggling ventures in West Africa. The article examines how the GAIM programme framed, rationalised and obscured the practice of neoliberal deportation as a humanitarian gesture in the interests of migrants themselves. It documents and conceptualises the humanitarian claims, narratives and representations mobilised by Canada and the IOM to explain and justify the return of stranded asylum-seekers. It argues that the GAIM programme can be analysed as a form of humanitarian securitisation that obscures the politics of anti-smuggling policy, masks the violence of deportation and legitimises the return of stranded asylum-seekers.

Third World Quarterly, Volume 43, Issue 4 (2022)

The New Jim Crow: Unmasking Racial Bias in AI Facial Recognition Technology within the Canadian Immigration System

By Gideon Christian

Facial recognition technology (FRT) is an artificial intelligence (AI)-based biometric technology that uses computer vision to analyze facial images and identify individuals by their unique facial features. The technology uses computer algorithms to generate a biometric template from a facial image. The template encodes distinctive facial characteristics as a set of data points and can be matched against identical or similar images in a database for identification purposes; it is often likened to a unique facial signature for each individual.
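To make the template-and-matching process described above concrete, the short Python sketch below illustrates, in simplified form, how a biometric template might be compared against a database of enrolled templates. It is a minimal illustration only: the embed_face placeholder, the 128-value template size and the distance threshold are assumptions made for the example, not a description of the systems deployed by Canadian agencies or any particular vendor.

import numpy as np

TEMPLATE_SIZE = 128     # assumed template length; real systems vary by vendor
MATCH_THRESHOLD = 0.6   # assumed cut-off; operational thresholds are policy choices

def embed_face(image: np.ndarray) -> np.ndarray:
    # Placeholder for the model that converts a facial image into a
    # biometric template (a fixed-length vector of facial features).
    # A deployed system would use a trained neural network here.
    raise NotImplementedError("hypothetical embedding model")

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> tuple[str | None, float]:
    # Compare one probe template against every enrolled template and return
    # the closest identity, but only if it falls within the match threshold.
    best_id, best_dist = None, float("inf")
    for identity, template in gallery.items():
        dist = float(np.linalg.norm(probe - template))  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return (best_id, best_dist) if best_dist <= MATCH_THRESHOLD else (None, best_dist)

The point relevant to the discussion that follows is that both the embedding model and the match threshold are design choices trained or tuned on particular datasets, which is one of the ways differences in error rates across demographic groups can enter such a system.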

A significant rise in the deployment of AI-based FRT has occurred in recent years across the public and private sectors of Canadian society. Within the public sector, its application encompasses law enforcement in criminal and immigration contexts, among many others. In the private sector, it has been used for tasks such as exam proctoring in educational settings, fraud prevention in the retail industry, unlocking mobile devices, sorting and tagging of digital photos, and more. The widespread use of AI facial recognition in both the public and private sectors has generated concerns regarding its potential to perpetuate and reflect historical racial biases and injustices. The emergence of terms like “the new Jim Crow” and “the new Jim Code” draws a parallel between the racial inequalities of the post-US Civil War Jim Crow era and the racial biases present in modern AI technologies. These comparisons underscore the need for a critical examination of how AI technologies, including FRT, might replicate or exacerbate systemic racial inequities and injustices of the past.

This research paper examines critical issues arising from the adoption and use of FRT by the public sector, particularly within the framework of immigration enforcement in the Canadian immigration system. It delves into recent Federal Court of Canada litigation relating to the use of the technology in refugee revocation proceedings by agencies of the Canadian government. Through these legal cases, the paper explores the implications of FRT for the fairness and integrity of immigration processes, highlighting the broader ethical and legal issues associated with its use in administrative decision-making.

The paper begins with a concise overview of the Canadian immigration system and the administrative law principles applicable to its decision-making process. This is followed by an examination of the history of integrating AI technologies into the immigration process more broadly. Focusing specifically on AI-based FRT, the paper will then explore the issues of racial bias associated with its use and discuss why addressing these issues is crucial for ensuring fairness in the Canadian immigration process. This discussion will lead to a critical analysis of Federal Court litigation relating to the use of FRT in refugee status revocation, further spotlighting the evidence of racial bias in the technology's deployment within the immigration system.

The paper then develops the parallels between racial bias evident in contemporary AI-based FRT (the “new” Jim Crow) and racial bias of the past (the “old” Jim Crow). By focusing on the Canadian immigration context, the paper seeks to uncover the subtle yet profound ways in which AI-based FRT, despite its purported neutrality and objectivity, can reinforce racial biases of the past. Through a comprehensive analysis of current practices, judicial decisions, and the technology's deployment, this paper aims to contribute to the ongoing dialogue about technology and race. It challenges the assumption that technological advancements are inherently equitable, urging a re-evaluation of how these tools are designed, developed, and deployed, especially in sensitive areas such as refugee status revocation, where the stakes for fairness and equity are particularly high.

69 McGill Law Journal 441 (October 2024)