Date of Award: 1996
Degree: Doctor of Philosophy (PhD)
Department: Electrical and Computer Engineering
Advisor: Subhash C. Kak
Perhaps the most popular approach to solving classification problems is the backpropagation method. In this approach, a feedforward network is first built from an informed guess about the architecture and the number of hidden nodes and then trained with an iterative algorithm. The major drawback of this approach is the slow convergence of training. To overcome this difficulty, two modular approaches have been investigated.

In the first approach, the classification task is decomposed into sub-tasks, each solved by a sub-network. An algorithm is presented for building and training each sub-network using any single-node learning rule, such as the perceptron rule. Simulation results on a digit recognition task show that this approach reduces training time relative to backpropagation while achieving comparable generalization. It has the added benefit of developing the structure of the network as learning proceeds, in contrast to backpropagation, where the structure is guessed and fixed before learning begins.

The second approach investigates a recently developed technique for training feedforward networks called corner classification. Unlike the computationally intensive iterative method of backpropagation, corner classification trains the network in a single step. In this dissertation, modifications are made to the corner classification algorithm to improve its generalization capability. Simulation results on the digit recognition task show that the generalization of corner classification networks is comparable to that of backpropagation networks, while their learning speed is far greater. Designing a network by corner classification, however, produces a large number of hidden nodes.
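The single-step construction can be sketched as follows. This is a minimal illustration of the corner-classification idea in the style of Kak's later CC4 formulation, not the exact algorithm studied in the dissertation; the ±1 weight assignment, the bias of r − s + 1, and the radius-of-generalization parameter `r` are assumptions of this sketch:

```python
import numpy as np

def cc_train(X, y, r=0):
    """One-pass corner-classification construction (CC4-style sketch).

    One hidden node per training sample: weights are +1 for input bits
    that are 1 and -1 for bits that are 0, with bias r - s + 1, where
    s is the number of ones in the sample and r is the radius of
    generalization.  A hidden node then fires exactly when the input
    lies within Hamming distance r of its training sample.
    """
    X = np.asarray(X)
    W_hidden = 2 * X - 1                  # +1 / -1 weights, one row per sample
    bias = r - X.sum(axis=1) + 1          # one bias per hidden node
    w_out = 2 * np.asarray(y) - 1         # +1 for class 1, -1 for class 0
    return W_hidden, bias, w_out

def cc_predict(W_hidden, bias, w_out, u):
    """Binary step activations in both layers; no iterative training."""
    hidden = (W_hidden @ np.asarray(u) + bias > 0).astype(int)
    return int(hidden @ w_out > 0)
```

With `r = 0` each hidden node fires only on its own training vector, so the training set is recalled exactly; increasing `r` makes each node respond to a Hamming-ball around its sample, which is one simple way to trade memorization for generalization.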
In this dissertation, a pruning procedure is developed that eliminates some of the redundant hidden nodes. The pruned network generalizes comparably to the unpruned network.
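One simple form such a procedure can take is sketched below; the dissertation's actual pruning criterion may differ. Here a hidden node is greedily dropped whenever the remaining nodes still classify every training sample correctly; the step-activation network and the node-dropping loop are illustrative assumptions:

```python
import numpy as np

def predict(W, b, w_out, u):
    """Binary step network: a hidden node fires when its net input is positive."""
    hidden = (W @ np.asarray(u) + b > 0).astype(int)
    return int(hidden @ w_out > 0)

def prune(W, b, w_out, X, y):
    """Greedily drop each hidden node the training set can spare.

    Returns the indices of the hidden nodes that survive; a removal is
    kept only if every training sample is still classified correctly.
    """
    keep = np.arange(len(W))
    for j in range(len(W)):
        trial = keep[keep != j]
        if all(predict(W[trial], b[trial], w_out[trial], x) == t
               for x, t in zip(X, y)):
            keep = trial
    return keep
```

Because each candidate removal is validated against the full training set before it is accepted, the pruned network's behavior on the training data is unchanged by construction; only its size shrinks.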
Hashemian, Parvin, "Modular Approaches for Designing Feedforward Neural Networks to Solve Classification Problems." (1996). LSU Historical Dissertations and Theses. 6309.