Date of Award
Department of Computer Science
The rise of toxicity and hate speech on social media has become a cause for concern due to its effects on politics and on the growth of extremist internet communities. The tools currently used to identify and remove harmful content have drawn widespread criticism from both the public and the academic community for their inaccuracies and biases. In our research, we audit the performance of Perspective API, a toxicity detector created by research teams at Google and Jigsaw, on the language of users across a variety of demographic categories. We draw on Crenshaw's framework of intersectionality to discuss the unique harms that result from existing at the intersections of marginalization, and we examine existing computational models of disparate impact and proxy discrimination. In addition, we conduct A/B testing on Amazon's Mechanical Turk, a crowdsourcing platform popular for data annotation within research communities, to identify and discuss biases that arise from human demographic prediction.
Jiang, Jiachen, "A Critical Audit of Accuracy and Demographic Biases within Toxicity Detection Tools" (2020). Dartmouth College Undergraduate Theses. 207.