Document Type

Article

Abstract

When the pandemic forced schools to shift to remote education, school administrators worried that unsupervised exams would lead to widespread cheating. Many turned to online proctoring technologies that use facial recognition, algorithmic profiling, and invasive surveillance to detect and deter academic misconduct. It was an “epic fail.” Intrusive and unproven remote proctoring systems turned out to be inaccurate, unfair, and often ineffectual. The software did not account for foreseeable student diversity, leading to misidentifications and false flags that disadvantaged test-takers from marginalized communities. Educators implemented proctoring software without sufficient transparency, training, and oversight. As a result, students suffered privacy, academic, reputational, pedagogical, and psychological harms. Online proctoring problems prompted significant public backlash but no systemic reform. Students have little recourse under existing legal frameworks, including current biometric privacy, consumer protection, and antidiscrimination laws. Student privacy laws like the Family Educational Rights and Privacy Act (FERPA) also offer minimal protection against schools’ education technology. However, FERPA’s overlooked rights of review, explanation, and contestation offer a stopgap solution to promote algorithmic accountability and due process. The article recommends a moratorium on online proctoring technologies until companies can demonstrate that they are accurate and fair. It also calls for schools to reject software that relies on prolonged surveillance and pseudoscientific automated profiling. Finally, it recommends technical, institutional, and pedagogical measures to mitigate proctoring problems in the absence of systemic reform.

Erratum

The editors note that the header date of the article is January 2023, which reflects the date on which final editing occurred.
