A hybrid linear/nonlinear training algorithm for feedforward neural networks

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of the nonlinear weights with singular value decomposition (SVD) computation of the linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme over second-order gradient methods. It is particularly effective for the LMN architecture, where the ratio of linear to nonlinear parameters is large.
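To illustrate the idea of alternating a linear least-squares solve with a gradient step on the nonlinear parameters, the following is a minimal sketch for a Gaussian RBF network. It is not the authors' implementation: the paper pairs the SVD-based linear solve with second-order gradient updates, whereas this sketch uses simple finite-difference gradient descent, and all function names, step sizes, and the toy data are illustrative assumptions.

import numpy as np

def rbf_design(X, centres, widths):
    """Gaussian RBF design matrix with a bias column."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * widths ** 2))
    return np.hstack([Phi, np.ones((X.shape[0], 1))])  # bias term

def hybrid_train(X, y, n_centres=8, iters=200, lr=1e-2, seed=0):
    """Alternate an SVD-based linear solve with gradient steps on the
    nonlinear parameters (centres and widths). Illustrative only."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
    widths = np.full(n_centres, X.std())
    for _ in range(iters):
        # Linear step: output weights from an SVD-based least-squares solve.
        Phi = rbf_design(X, centres, widths)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

        # Nonlinear step: finite-difference gradient descent on centres/widths
        # (the paper uses second-order gradient methods here instead).
        def loss(c, s):
            r = rbf_design(X, c, s) @ w - y
            return 0.5 * (r ** 2).mean()

        eps = 1e-5
        base = loss(centres, widths)
        g_c = np.zeros_like(centres)
        g_s = np.zeros_like(widths)
        for j in range(n_centres):
            for k in range(centres.shape[1]):
                cp = centres.copy(); cp[j, k] += eps
                g_c[j, k] = (loss(cp, widths) - base) / eps
            sp = widths.copy(); sp[j] += eps
            g_s[j] = (loss(centres, sp) - base) / eps
        centres -= lr * g_c
        widths = np.maximum(widths - lr * g_s, 1e-3)  # keep widths positive
    return centres, widths, w

# Toy usage: fit y = sin(x) on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
centres, widths, w = hybrid_train(X, y)
pred = rbf_design(X, centres, widths) @ w
print("RMS error:", np.sqrt(((pred - y) ** 2).mean()))

Because the output weights enter the model linearly, they are recomputed exactly at every iteration, so the gradient search only has to explore the (much smaller) nonlinear parameter space; this is why the scheme pays off most when the linear-to-nonlinear parameter ratio is large, as in the LMN.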

Original language: English
Pages (from-to): 669-684
Number of pages: 16
Journal: IEEE Transactions on Neural Networks
Volume: 9
Issue number: 4
DOIs
Publication status: Published - 1998

Keywords

  • Feedforward neural networks
  • Off-line training algorithms
  • Second-order methods
