Document Type

Thesis (Undergraduate)


Department of Computer Science

First Advisor

Jay Aslam


Bounds have been proven on both the training and testing error of the boosting algorithm AdaBoost, but in practice neither seems to be particularly tight. In this paper we share some observations of these bounds drawn from empirical results, and then explore some properties of the algorithm with an eye toward finding an improved bound on the performance of AdaBoost. Based on our empirical evidence, the error of a hypothesis that labels examples probabilistically, according to the confidence of the vote of the weak hypotheses, forms a tighter bound on the training error.
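The comparison the abstract describes can be sketched concretely. The snippet below is an illustrative reconstruction, not the thesis's own code: it runs standard AdaBoost with decision stumps on a small synthetic dataset (all names, the dataset, and the stump weak learner are assumptions), then reports three quantities: the training error of the usual majority vote, the classical exponential bound (the product of the per-round normalizers Z_t), and the expected error of a hypothesis that labels an example +1 with probability (1 + f(x))/2, where f(x) in [-1, 1] is the confidence-weighted vote.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Decision stump: predict `sign` where X[:, feat] > thresh, else -sign."""
    return np.where(X[:, feat] > thresh, sign, -sign).astype(float)

def best_stump(X, y, w):
    """Exhaustively pick the stump minimizing weighted training error."""
    best, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                pred = stump_predict(X, feat, thresh, sign)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def adaboost(X, y, T):
    """Standard AdaBoost; returns stumps, their weights alpha, and the
    per-round normalizers Z_t (whose product bounds the training error)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas, Zs = [], [], []
    for _ in range(T):
        (feat, thresh, sign), err = best_stump(X, y, w)
        err = max(err, 1e-10)  # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, feat, thresh, sign)
        unnorm = w * np.exp(-alpha * y * pred)
        Z = unnorm.sum()
        w = unnorm / Z
        stumps.append((feat, thresh, sign))
        alphas.append(alpha)
        Zs.append(Z)
    return stumps, np.array(alphas), Zs

def margins(X, stumps, alphas):
    """Confidence-weighted vote f(x), normalized into [-1, 1]."""
    votes = np.array([stump_predict(X, f, t, s) for f, t, s in stumps])
    return alphas @ votes / alphas.sum()

# Small synthetic problem (an assumption; any +/-1-labeled data works here).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

stumps, alphas, Zs = adaboost(X, y, T=10)
f = margins(X, stumps, alphas)

# Training error of the usual deterministic majority vote sign(f).
det_err = np.mean(np.sign(f) != y)
# Classical bound: training error <= prod_t Z_t.
exp_bound = np.prod(Zs)
# Expected error of the probabilistic hypothesis that outputs +1 with
# probability (1 + f(x)) / 2: it errs with probability (1 - y f(x)) / 2.
prob_err = np.mean((1 - y * f) / 2)
print(det_err, exp_bound, prob_err)
```

The classical guarantee says the majority vote's training error never exceeds the product of the Z_t; the thesis's empirical observation is that the probabilistic hypothesis's expected error tracks the training error more tightly than that exponential bound does.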


Originally posted in the Dartmouth College Computer Science Technical Report Series, number TR2001-394.