Date of Award

6-1-2020

Document Type

Thesis (Undergraduate)

Department or Program

Department of Computer Science

First Advisor

Lorenzo Torresani

Abstract

We propose an algorithm for training neural networks under label noise that up-weights per-example gradients that are more similar to the other gradients in the same minibatch. Our approach makes no assumptions about the amount or type of label noise, requires no held-out validation set of clean examples, adds relatively little computation, and modifies only the minibatch gradient aggregation module of a typical neural network training workflow. On CIFAR-10 classification with varying levels of label noise, our method successfully up-weights clean examples and de-prioritizes noisy ones, showing consistent improvement over a vanilla training baseline. Our results open the door to future work involving per-example gradient comparisons.
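To illustrate the kind of aggregation module the abstract describes, the sketch below (in PyTorch) replaces the usual minibatch gradient average with a similarity-weighted sum of per-example gradients. It is a minimal sketch, not the thesis's exact method: the helper name similarity_weighted_grad is hypothetical, and the specific choices of comparing each per-example gradient to the minibatch mean via cosine similarity and clipping negative similarities to zero are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def similarity_weighted_grad(model, loss_fn, inputs, targets):
    """Aggregate per-example gradients, up-weighting those most similar to
    the rest of the minibatch. Sketch only: cosine similarity against the
    minibatch mean gradient and zero-clipping are assumed details, not
    necessarily the scheme used in the thesis."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Compute one flattened gradient vector per example in the minibatch.
    per_example = []
    for x, y in zip(inputs, targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        per_example.append(torch.cat([g.flatten() for g in grads]))
    G = torch.stack(per_example)                    # shape (B, num_params)

    # Weight each example by its similarity to the minibatch mean gradient;
    # dissimilar (presumably noisy-label) examples get weight near zero.
    mean_g = G.mean(dim=0, keepdim=True)
    sims = nn.functional.cosine_similarity(G, mean_g, dim=1)
    weights = torch.clamp(sims, min=0.0)
    weights = weights / weights.sum().clamp(min=1e-8)

    # Form the weighted aggregate and write it back into .grad fields
    # so a standard optimizer.step() can consume it.
    agg = (weights.unsqueeze(1) * G).sum(dim=0)
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = agg[offset:offset + n].view_as(p).clone()
        offset += n
    return weights
```

In a training loop, one would call this in place of loss.backward() and then take an optimizer step; only the gradient aggregation changes, consistent with the abstract's claim that the method is a drop-in modification of that module.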

Comments

Originally posted in the Dartmouth College Computer Science Technical Report Series, number TR2020-899.
