ASA 124th Meeting, New Orleans, October 1992

3aSP9. On-line learning without forgetting for language acquisition.

A. Agrawal

R. J. Mammone

Rutgers Univ., CAIP Ctr., Core Bldg., Frelinghuysen Rd., P.O. Box 1390, Piscataway, NJ 08855

In this paper, a new method for on-line training that overcomes interference in neural networks is presented. The language acquisition problem is modeled as a system of linear equations, as first given by Gorin et al. [A. Gorin et al., ``Adaptive Acquisition of Language,'' in Neural Networks: Theory and Applications, edited by R. J. Mammone and Y. Y. Zeevi (Academic, New York), p. 125]. The new method encodes data patterns, or words, that have been learned previously as a set of convex constraints. On-line training is then implemented using the row action projection (RAP) algorithm to readjust the weights subject to these convex constraints. The RAP algorithm converges asymptotically to the pseudoinverse solution in the case of simple linear constraints. The computational complexity of conventional methods of computing the pseudoinverse makes them unsuitable for on-line applications. The previously learned patterns are viewed as long-term memory, while the on-line data are considered to reside in short-term memory. Memory retention is obtained in the new approach by storing in long-term memory the patterns that lie close to the feature-space boundaries. These stored patterns are then used as convex constraints for any subsequent on-line learning. In contrast to previous block-learning strategies, the new approach learns the meanings of words sequentially, which enables the on-line training of a connectionist network. Computer simulations are presented that demonstrate the memory retention capabilities of the new on-line training algorithm.
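A minimal sketch of the row-action idea follows, assuming each stored long-term-memory pattern is kept as a simple linear (hyperplane) constraint a . w = b; the function names, the relaxation parameter, and the equality-constraint handling are illustrative assumptions, not the authors' implementation. Each RAP step projects the weight vector onto the constraint defined by one row of the system, and the stored patterns are re-projected after every on-line update so earlier learning is not overwritten:

    import numpy as np

    def rap_project(w, a, b, lam=1.0):
        # One row action: project w onto the hyperplane a . w = b,
        # relaxed by lam (lam = 1 gives an exact projection).
        return w + lam * (b - a @ w) * a / (a @ a)

    def rap_online(A, t, memory=(), sweeps=100, lam=1.0):
        # Cycle through the new on-line rows (A, t), then re-project
        # onto each stored long-term-memory constraint so previously
        # learned patterns are retained. Started from zero, cyclic row
        # projections on a consistent system converge to the
        # minimum-norm (pseudoinverse) solution.
        w = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for a, b in zip(A, t):
                w = rap_project(w, a, b, lam)
            for a, b in memory:
                w = rap_project(w, a, b, lam)
        return w

    # Hypothetical toy use: two previously learned word patterns kept
    # as long-term-memory constraints, one new on-line pattern.
    memory = [(np.array([1.0, 0.0, 0.0]), 1.0),
              (np.array([0.0, 1.0, 0.0]), 0.0)]
    A_new = np.array([[0.0, 0.0, 1.0]])
    t_new = np.array([1.0])
    w = rap_online(A_new, t_new, memory=memory)

Because each update touches only one row at a time, a new pattern can be absorbed without revisiting the full training set; only the boundary patterns retained in long-term memory need to be re-projected during subsequent on-line learning.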