Open Access Publisher and Free Library

CRIME PREVENTION


Posts tagged facial recognition
“Colorblind” Policing: Facial Recognition Technology’s Interplay in the Fourth Amendment’s Race Problem

By Anne McNamara

During the height of the Civil Rights movement, the Supreme Court in Terry v. Ohio crafted the policing power to stop and search an individual without a warrant and without probable cause, provided the officer possesses a reasonable suspicion of criminal activity. Thirty years later, in Whren v. United States, the Court willfully blinded itself to the subjective motivations of an officer who initiates a Terry stop, requiring only a claim of some lawful reason for the stop to satisfy the Fourth Amendment's protections. Despite overwhelming evidence that the Court's Fourth Amendment jurisprudence disparately affects Black people, the Court continuously asserts that the Equal Protection Clause (EPC)—not the Fourth Amendment—is the proper constitutional avenue for relief from race-motivated policing. Even a defendant who successfully overcomes the EPC's practically insurmountable requirement of proving discriminatory intent is not afforded the exclusionary rule's protection. Ultimately, the Court's use of the EPC as its suggested remedy provides little concrete relief for individuals subjected to pretextual stops. Against this backdrop of racially influenced law enforcement, the advent and development of Facial Recognition Technology (FRT) has fundamentally altered American policing over the past decade. FRT is an algorithmic code, created by private companies, capable of recognizing a person's facial identity by comparing it to other faces located in a centralized database. Some critics of the police's use of FRT warn of its disparate impact on people of color, who already face higher instances of police surveillance.
Further, critics caution that FRT algorithms have higher error rates in identifying people of color, that the databases used are often overly saturated with people of color, and that the police's unregulated, unrestrained use of FRT reinforces preconceived notions of "Black criminality." Historically, federal courts have been reluctant to condemn police implementation of technological advances as violative of the Fourth Amendment. While the police are prohibited from using publicly unavailable technology to surveil the details of an individual's home, technology deployed by law enforcement in a public space often escapes constitutional constraints. In some instances, however, defendants successfully challenge police use of advanced technology for surveillance purposes through the lens of mosaic theory, which assesses police behavior in the aggregate to determine whether prolonged periods of surveillance constitute an invasion of privacy impermissible under the Fourth Amendment. In light of the Court's silence regarding FRT, a handful of cities and states have enacted laws that curb or completely ban police use of FRT. On the federal level, the preceding Congress proposed two bills: one seeking to require probable cause for police to deploy the technology, the other seeking to implement a complete federal ban of FRT and to disincentivize state and local use by withholding certain funding. This Note first surveys the Fourth Amendment jurisprudence that created a legal justice system that is willfully ignorant of an officer's potential racial motivations. Then, this Note discusses the police's implementation of FRT and how it further infringes upon Black people's liberties and dignities under the guise of "neutral" technology. Next, this Note explores the Court's reasoning in evolving technology and surveillance cases—with a particular emphasis on mosaic theory—and discusses state and proposed federal statutory approaches to FRT regulation.
Then, this Note argues that the most dangerous uses of FRT are the least likely to be recognized and curbed by the Supreme Court due to its longstanding refusal to allow the Constitution to check unrestrained police behavior, leaving Black people defenseless against FRT's role in increasing the structural inequalities embedded in our legal system. This Note concludes by calling for a comprehensive federal ban on police use of FRT that adequately incentivizes state and local law enforcement to enact similar bans.

Suffolk University Law Review, Vol. LVI: 731. 26p.

Facial Recognition Technology: Considerations for use in Policing

By Nessa Lynch & Andrew Chen

Embedded facial recognition capabilities are becoming more common across a wide range of technologies, so it is important that Police understand the parameters and potential consequences of the use of this kind of technology.

Dr Nessa Lynch (an Associate Professor at Victoria University of Wellington) and Dr Andrew Chen (a Research Fellow at the University of Auckland) are two of New Zealand's leading experts and academic researchers in the field of facial recognition technology. Over a six-month period, Dr Lynch and Dr Chen were commissioned to explore the current and possible future uses of facial recognition technology and what it means for policing in New Zealand communities.

The scope of their work included:

  • defining facial recognition technology

  • categorising the spectrum of use and its potential effect on individual and collective rights and interests

  • exploring what Police currently does in this space, and what planned and unused capability exists within the organisation

  • providing insights and evidence into international practice and operational advantages for public safety and crime control, as well as Treaty of Waitangi, ethics, privacy and human rights implications

  • producing a paper with advice and recommendations on the safe and appropriate use of facial recognition technology in New Zealand policing.

Wellington, NZ: New Zealand Police, 2021. 84p.

Facial Recognition Technology: Towards a Legal and Ethical Framework

By Liz Campbell, Nessa Lynch, Joe Purshouse, Marcin Betkier

The use of automated facial recognition technology (FRT) is becoming commonplace globally and in New Zealand. FRT, which involves the use of an algorithm to match a facial image to one already stored in a system, is used in automated passport control and other border control measures, as a biometric identifier in the banking, security and access contexts, and on social media platforms and various other consent-based applications.

This report contributes to the understanding of how and when this rapidly emerging technology should be used and how it should be regulated. It is centred in what has been described as the 'second wave' of algorithmic accountability: "While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them." This project seeks to address the regulation gap by ascertaining how FRT can and should be regulated in New Zealand. While the benefits that might be offered by FRT surveillance are increasingly observable, its effect on civil liberties is subtler but no less pernicious. Given the potential for FRT to be used as a key identity and access management tool in the future, there are pertinent questions around how images are being collected and stored now by the private sector. Where are these images being stored? Who has access to this data? What else might the images be used for? Without a detailed appraisal of the benefits of state FRT surveillance, and an understanding of the ethical issues raised by its use, any framework for the regulation of this activity cannot hope to engender public confidence that its use is fair and lawful.

Monash University, 2020. 118p.