Date of Award

5-31-2017

Document Type

Thesis (Undergraduate)

Department or Program

Department of Computer Science

First Advisor

Hany Farid

Abstract

Algorithms have recently become prevalent in the criminal justice system. Tools known as recidivism prediction instruments (RPIs) are used across the United States to assess the likelihood that a criminal defendant will reoffend. In June 2016, researchers at ProPublica published an analysis claiming that an RPI called COMPAS was biased against black defendants. This claim sparked a nationwide debate over how the fairness of an algorithm should be measured, and exposed the many ways in which algorithms can be unfair. Algorithms are used in the criminal justice system because they are regarded as more accurate and less biased than human predictions; however, no contemporary comparison of the performance of human and algorithmic recidivism predictions exists. To address this gap, we set out to determine whether COMPAS is more accurate than human prediction, and to compare the racial biases of human recidivism predictions with those of the COMPAS algorithm. After establishing a baseline for human predictive performance, we explore whether incorporating human judgment into algorithms can improve prediction accuracy.
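
For readers unfamiliar with the fairness measures at issue in the ProPublica analysis, the sketch below illustrates one such measure: comparing false positive rates across groups, i.e., how often defendants who do not reoffend are nonetheless labeled high risk. This is a minimal illustration of the general idea, not the thesis's actual analysis; all data, group labels, and function names are hypothetical.

```python
# Minimal sketch of one fairness measure from the COMPAS debate:
# group-wise false positive rates. All data here is hypothetical.

def false_positive_rate(predictions, outcomes):
    """Fraction of non-reoffenders (outcome 0) predicted high risk (1)."""
    negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Hypothetical predictions (1 = predicted to reoffend) and outcomes
# (1 = actually reoffended) for two groups of defendants.
group_a_pred, group_a_true = [1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0]
group_b_pred, group_b_true = [1, 0, 0, 1, 1, 0], [1, 0, 0, 0, 0, 0]

fpr_a = false_positive_rate(group_a_pred, group_a_true)
fpr_b = false_positive_rate(group_b_pred, group_b_true)

# A predictor can have similar overall accuracy for both groups while
# producing unequal false positive rates; this disparity is one sense
# in which an RPI can be judged "unfair".
print(f"FPR group A: {fpr_a:.2f}, FPR group B: {fpr_b:.2f}")
```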

Comments

Originally posted in the Dartmouth College Computer Science Technical Report Series, number TR2017-822.
