Date of Award

1994

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Engineering Science (Interdepartmental Program)

First Advisor

Subhash Kak

Abstract

In this dissertation, a feedback neural network model has been proposed. The network uses a second-order method of convergence based on the Newton-Raphson method and has both discrete and continuous versions. When used as an associative memory, the proposed model has been called the polynomial neural network (PNN). The memories of this network can be located anywhere in n-dimensional space rather than being confined to the corners of that space. A single-step method for storing memories has been proposed, in contrast to the computationally intensive iterative methods currently in use. An energy function for the polynomial neural network has been suggested, and issues relating to the error-correcting ability of the network have been addressed. Additionally, the attractor basins of the network's memories have been found to exhibit a fractal topology, suggesting highly complex and often unpredictable dynamics. The use of the second-order neural network as a function optimizer has also been demonstrated. Hardware realization has been addressed only briefly; the basic network would require a large amount of hardware, a requirement that can be obviated by a simplified model that has also been described. The performance of the simplified model is comparable to that of the basic model while requiring much less hardware.
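The abstract does not give the network equations, but the recall principle it describes can be sketched with a hypothetical one-dimensional example: memories are taken to be the roots of a polynomial (so storage is a single-step construction of the coefficients), and recall applies Newton-Raphson iteration, a second-order method, to converge from a noisy probe to the nearest stored value. The dissertation's actual multidimensional formulation, energy function, and simplified model may differ from this illustration.

```python
import numpy as np

# Hypothetical 1-D sketch: store memories as polynomial roots,
# recall them with Newton-Raphson iteration.

memories = np.array([-0.7, 0.2, 0.9])   # desired stable states (illustrative values)
coeffs = np.poly(memories)               # single-step "storage": polynomial with these roots
dcoeffs = np.polyder(coeffs)             # derivative needed for the Newton step

def recall(x0, steps=20, tol=1e-10):
    """Iterate x <- x - p(x)/p'(x), starting from a noisy probe x0."""
    x = x0
    for _ in range(steps):
        px, dpx = np.polyval(coeffs, x), np.polyval(dcoeffs, x)
        if abs(dpx) < tol:               # guard against a near-zero derivative
            break
        step = px / dpx
        x -= step
        if abs(step) < tol:              # converged to a root (a stored memory)
            break
    return x

print(recall(0.25))   # converges to the nearby stored memory 0.2
```

Because Newton-Raphson converges quadratically near a root, recall is fast, but which memory a given probe reaches depends on the intricate (fractal) boundaries between basins of attraction, consistent with the behavior the abstract reports.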

Pages

95

DOI

10.31390/gradschool_disstheses.5721
