Date of Award

1996

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical and Computer Engineering

First Advisor

Subhash C. Kak

Abstract

Perhaps the most popular approach for solving classification problems is the backpropagation method. In this approach, a feedforward network is first constructed from an educated guess about the architecture and the number of hidden nodes and is then trained with an iterative algorithm. The major drawback of this approach is the slow convergence of training. To overcome this difficulty, two different modular approaches are investigated. In the first approach, the classification task is decomposed into sub-tasks, each solved by a sub-network. An algorithm is presented for building and training each sub-network using any of the single-node learning rules, such as the perceptron rule. Simulation results on a digit recognition task show that this approach reduces training time relative to backpropagation while achieving comparable generalization. It has the added benefit of developing the network structure as learning proceeds, whereas in backpropagation the structure is guessed and fixed before learning begins. The second approach builds on a recently developed technique for training feedforward networks called corner classification. Corner classification trains the network in a single step, unlike the computationally intensive iterative training of backpropagation. In this dissertation, modifications are made to the corner classification algorithm to improve its generalization capability. Simulation results on the digit recognition task show that the generalization of the corner classification networks is comparable to that of backpropagation networks, while their learning speed far exceeds that of the iterative methods. However, networks designed by corner classification contain a large number of hidden nodes. A pruning procedure is therefore developed that eliminates some of the redundant hidden nodes; the pruned network generalizes comparably to the unpruned network.
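The first approach trains each sub-network node with a standard single-node rule such as the perceptron rule. Below is a minimal sketch of that rule on 0/1 data; the function name, learning rate, and epoch limit are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal perceptron-rule sketch for a single threshold node (illustrative only).
import numpy as np

def train_perceptron(X, t, lr=1.0, epochs=100):
    """Train one threshold node on 0/1 inputs X (n, d) and 0/1 targets t (n,)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = int(w @ x + b > 0)         # node's current prediction
            w += lr * (target - y) * x     # perceptron weight update
            b += lr * (target - y)         # perceptron bias update
    return w, b
```

The second approach, corner classification, assigns weights in a single pass rather than by iteration. The sketch below follows the general spirit of Kak's corner classification family, in which each training example receives its own hidden node with weights derived directly from its bits; the `radius` parameter and the function names are assumptions, and the exact variant and the pruning procedure developed in the dissertation may differ.

```python
# Corner-classification-style prescriptive training (illustrative sketch).
import numpy as np

def train_corner_classifier(X, Y, radius=0):
    """Single-pass training on 0/1 inputs X (n, d) and 0/1 targets Y (n, k)."""
    # One hidden node per training example: +1 weight where a bit is 1, -1 where it is 0.
    W_hidden = np.where(X == 1, 1, -1)
    # Bias makes a node fire for inputs within `radius` Hamming distance of its example.
    bias = radius - X.sum(axis=1) + 1
    # Output weights drive each output high (+1) or low (-1) per the stored target.
    W_out = np.where(Y == 1, 1, -1)
    return W_hidden, bias, W_out

def predict(X, W_hidden, bias, W_out):
    hidden = (X @ W_hidden.T + bias > 0).astype(int)  # binary hidden activations
    return (hidden @ W_out > 0).astype(int)           # binary network outputs
```

With `radius=0` each hidden node responds only to its own training example, which illustrates why such networks start with many hidden nodes and why the dissertation develops a procedure to prune the redundant ones; the specific pruning criterion is not reproduced here.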

ISBN

9780591204179

Pages

107

DOI

10.31390/gradschool_disstheses.6309
