Date of Award

6-5-1998

Document Type

Thesis (Undergraduate)

Department or Program

Department of Computer Science

First Advisor

Jay Aslam

Abstract

A weak PAC learner takes labeled training examples and produces a classifier that labels test examples more accurately than random guessing. A strong learner (also known as a PAC learner), on the other hand, takes labeled training examples and produces a classifier that labels test examples arbitrarily accurately. Schapire constructively proved that a strong PAC learner can be derived from a weak PAC learner. A performance-boosting algorithm takes a set of training examples and a weak PAC learning algorithm and generates a strong PAC learner. Our research addresses the problem of learning a multi-valued function and then boosting the performance of this learner. We implemented the AdaBoost.M2 boosting algorithm and developed a problem-general weak learning algorithm, capable of running under AdaBoost.M2, for learning a multi-valued function of uniform-length bit sequences. We applied our learning algorithms to the problem of classifying handwritten digits, using the MNIST dataset for training and testing data. Our experiments demonstrate the underlying weak learner's ability to achieve a fairly low error rate on the testing data, as well as the boosting algorithm's ability to reduce the error rate of the weak learner.
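
The abstract does not reproduce implementation details, but the following is a minimal Python sketch of the AdaBoost.M2 weighting scheme (Freund and Schapire's multiclass pseudo-loss formulation) that the thesis builds on. The single-bit "stump" weak learner and all names and interfaces below are illustrative assumptions so the example is self-contained; they are not the problem-general weak learner developed in the thesis.

# Minimal sketch of AdaBoost.M2-style boosting (pseudo-loss formulation).
# The stump weak learner is a hypothetical placeholder, not the thesis's
# weak learning algorithm.
import numpy as np

def stump_weak_learner(X, y, D, n_classes):
    # Illustrative weak learner: choose one bit position and predict class
    # confidences from weighted class frequencies conditioned on that bit.
    m, n_bits = X.shape
    w = D.sum(axis=1)                      # per-example mislabel mass
    best_bit, best_score, best_tables = 0, -1.0, None
    for b in range(n_bits):
        tables = np.zeros((2, n_classes))
        for v in (0, 1):
            mask = X[:, b] == v
            for c in range(n_classes):
                tables[v, c] = w[mask & (y == c)].sum()
            total = tables[v].sum()
            if total > 0:
                tables[v] /= total
        score = tables.max(axis=1).sum()   # prefer bits that separate classes
        if score > best_score:
            best_bit, best_score, best_tables = b, score, tables.copy()
    return lambda x, bit=best_bit, t=best_tables: t[int(x[bit])]

def adaboost_m2(X, y, n_classes, weak_learner, T=20):
    m = len(y)
    # D[i, c] is the weight on mislabel pair (i, c); correct labels get zero.
    D = np.ones((m, n_classes)) / (m * (n_classes - 1))
    D[np.arange(m), y] = 0.0
    hypotheses, betas = [], []
    for _ in range(T):
        h = weak_learner(X, y, D, n_classes)
        H = np.array([h(x) for x in X])    # confidences h_t(x_i, c)
        correct = H[np.arange(m), y]       # h_t(x_i, y_i)
        # Pseudo-loss over mislabel pairs (correct-label entries of D are zero).
        eps = 0.5 * np.sum(D * (1.0 - correct[:, None] + H))
        eps = min(max(eps, 1e-10), 0.5 - 1e-10)   # keep beta well-defined
        beta = eps / (1.0 - eps)
        # Reweight: emphasize mislabels the weak hypothesis handled poorly.
        D *= beta ** (0.5 * (1.0 + correct[:, None] - H))
        D[np.arange(m), y] = 0.0
        D /= D.sum()
        hypotheses.append(h)
        betas.append(beta)
    def classify(x):
        votes = sum(np.log(1.0 / b) * h(x) for h, b in zip(hypotheses, betas))
        return int(np.argmax(votes))
    return classify

The pseudo-loss formulation is what makes the multiclass setting tractable: the weak learner only needs to rank the correct label above incorrect ones slightly better than chance, rather than achieve better-than-half accuracy on a ten-class problem such as digit recognition.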

Comments

Originally posted in the Dartmouth College Computer Science Technical Report Series, number PCS-TR98-341.
