Abstract

Much work has been undertaken to demonstrate the advantages of analogue VLSI for implementing neural architectures. This paper addresses the issues concerning 'in-situ' learning with analogue VLSI multi-layer perceptron (MLP) networks. In particular, the authors propose that 'chip-in-the-loop' learning is, at the very least, necessary to overcome typical analogue process variations, and they argue that MLPs containing analogue circuits with 8-bit precision can be successfully trained provided they have digital representations of the weights of at least 12 bits. The authors demonstrate that weight perturbation, with careful choice of the perturbation size, gives improved results over backpropagation, at the cost of increased training time. Indeed, they go on to show why weight perturbation is possibly the only sensible way to implement MLP 'on-chip' learning. The authors have designed a set of analogue VLSI chips specifically to test whether their theoretical results on learning hold in practice. Although these chips are experimental, the intention is to use them to solve 'real-world' problems with relatively low input dimensionality, such as speaker identification.
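To make the training scheme concrete, the sketch below shows one weight-perturbation update in Python. It is not the authors' implementation: the `forward` callback, the perturbation size `delta`, the learning rate `eta`, the loss choice, and the 12-bit fixed-point weight grid are all assumptions standing in for a chip-in-the-loop forward pass and the digital weight store described in the abstract.

```python
import numpy as np

def quantize(w, bits=12, w_max=1.0):
    # Map weights onto a signed fixed-point grid of the given bit width,
    # mimicking a 12-bit digital weight store (assumed range [-w_max, w_max]).
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / w_max * levels), -levels, levels) * (w_max / levels)

def weight_perturbation_step(forward, weights, inputs, targets,
                             delta=0.01, eta=0.1, bits=12):
    # One pass of weight perturbation: estimate each weight's gradient from
    # the change in output error when that weight alone is nudged by `delta`.
    # `forward(weights, inputs)` is a hypothetical stand-in for a hardware
    # forward pass; no backpropagated error signals are required.
    def error(w):
        out = forward(w, inputs)
        return float(np.mean((out - targets) ** 2))  # mean-squared error

    base = error(weights)
    updated = weights.copy()
    for i in range(weights.size):
        perturbed = weights.copy()
        perturbed.flat[i] += delta                    # nudge one weight
        grad_est = (error(perturbed) - base) / delta  # finite-difference slope
        updated.flat[i] -= eta * grad_est             # gradient-descent step
    return quantize(updated, bits=bits)               # store at 12-bit resolution

# Toy usage with a single-layer network standing in for the hardware MLP:
rng = np.random.default_rng(0)
w = quantize(rng.uniform(-0.5, 0.5, size=3))
x = rng.uniform(-1.0, 1.0, size=(8, 3))
t = np.tanh(x @ np.array([0.3, -0.2, 0.5]))
for _ in range(100):
    w = weight_perturbation_step(lambda w, x: np.tanh(x @ w), w, x, t)
```

Because each update needs only forward evaluations of the network, the loop maps directly onto measuring a physical chip's output error, which is why the abstract argues this style of learning suits analogue hardware despite the longer training time.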

Original publication

DOI: 10.1109/ICMNN.1994.593184
Type: Conference paper
Publication Date: 01/01/1994
Pages: 67–76