Document Type

Technical Report

Publication Date

1-1-1987

Technical Report Number

PCS-TR88-139

Abstract

When we look at a familiar object from a novel viewpoint, we are usually able to recognize it. In this thesis, we address the problem of learning to recognize objects under transformations associated with viewpoint. Our vision model combines a hierarchical representation of shape features with an explicit representation of the transformation. Shape features are represented in a layered, pyramid-shaped subnetwork, while the transformation is explicitly represented in an auxiliary subnetwork. The two connectionist networks are conjunctively combined to allow object-centered shape features to be computed in the upper layers of the network. A simulation of a 2-D translation subnetwork demonstrates the ability to learn to recognize shapes in different locations in an image, such that those same shapes can be recognized in novel locations. Two new learning methods are presented, which provide improved behavior over previous backpropagation methods. Both methods involve competitive interactions among clusters of nodes, and both demonstrate improved learning over the generalized delta rule when applied to a number of network tasks. In the first method, called error modification, competition is based on the error signals computed from the gradient of the output error. The result of this competition is a set of modified error signals representing a contrast-enhanced version of the original errors. The error modification method reduces the occurrence of network configurations that correspond to local error minima. In the second method, called error augmentation, competition is based on the activations of the nodes in the cluster. Network changes resulting from this competition augment those specified by the error gradient computation. This competition is implemented by the trace comparison rule, a new self-organizing mechanism that is effective in developing highly discriminating features within the cluster. The error augmentation method improves learning in the lower network layers when the backpropagated error is weak.
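
The abstract describes the two competitive methods only qualitatively, and the thesis's exact update rules are not reproduced here. The following is a minimal sketch of the error-modification idea, assuming a softmax-style competition over error magnitudes within a cluster; the sharpening rule, the function name modify_errors, and the sharpness parameter are illustrative assumptions, not the thesis's formulation.

    import numpy as np

    def modify_errors(cluster_errors, sharpness=4.0):
        """Contrast-enhance a cluster's error signals via competition.

        cluster_errors : 1-D array of backpropagated error signals for the
                         nodes in one cluster (gradient of the output error)
        sharpness      : assumed gain controlling how strongly the
                         competition favors the largest-magnitude errors
        """
        magnitudes = np.abs(cluster_errors)
        # Softmax over magnitudes: nodes with larger errors win the
        # competition (subtracting the max keeps the exponentials stable).
        weights = np.exp(sharpness * (magnitudes - magnitudes.max()))
        weights /= weights.sum()
        # Redistribute the total error magnitude according to the
        # competition weights, preserving each signal's sign. The result
        # is a contrast-enhanced version of the original errors.
        return np.sign(cluster_errors) * weights * magnitudes.sum()

    if __name__ == "__main__":
        errors = np.array([0.05, -0.40, 0.10, 0.02])
        print(modify_errors(errors))

Under this sketch, the largest-magnitude error in a cluster is amplified while smaller errors are suppressed, which is one plausible reading of the "contrast-enhanced" error signals the abstract describes.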
