A Model of a Neuron.

[A simple model] As complicated as the biological neuron is, it may be simulated by a very simple model [left]. Each input has a weight that it contributes to the neuron if that input is active. The neuron can have any number of inputs; neurons in the brain can have many thousands of inputs. Each neuron also has a threshold value. If the sum of the weights of all active inputs is greater than the threshold, then the neuron is active. Suppose the top input has a weight of 1 and the bottom input a weight of -1, with a threshold of 0.5. Consider the case where both inputs are active: the sum of the inputs' weights is 0, and since 0 is smaller than 0.5, the neuron is off. The only condition that activates this neuron is the top input being active while the bottom one is inactive. This single neuron and its input weighting perform the logical expression A and not B.
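
To make the arithmetic concrete, here is a minimal sketch of this model in Python. The weight and threshold values (1 and -1 against a threshold of 0.5) and the function name `neuron' are illustrative choices consistent with the example above, not anything prescribed by the model itself.

    def neuron(inputs, weights, threshold):
        # Fire (return 1) if the weighted sum of the active inputs
        # exceeds the threshold; inactive inputs contribute nothing.
        total = sum(w for x, w in zip(inputs, weights) if x)
        return 1 if total > threshold else 0

    # Truth table for "A and not B": only A=1, B=0 fires the neuron.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron((a, b), weights=(1, -1), threshold=0.5))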

[Bias neuron] There is a variation on this model that sets the threshold to 0 on all neurons and adds an extra input that is always active. The extra input is given a weight equal to the negative of the removed threshold [right]. The two models are mathematically identical: a weighted sum exceeding a threshold t is the same as the sum plus a bias of -t exceeding 0. The advantage of the second version is that it simplifies the mathematics of automatic learning and of implementation, since there is only one type of variable (the weights) to keep track of. Both of these simple models accurately simulate the most important aspects of the biological neuron, though they do leave out some features such as temporal summation. A more complicated model could easily account for these, but for most purposes the simple models suffice.
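
The same A and not B neuron, rewritten in the bias style, might look like the following sketch. A bias weight of -0.5 simply stands in for the old threshold of 0.5, and the truth table comes out identical.

    def neuron_with_bias(inputs, weights, bias_weight):
        # The threshold is fixed at 0; the always-active extra input
        # contributes bias_weight to every sum.
        total = sum(w for x, w in zip(inputs, weights) if x) + bias_weight
        return 1 if total > 0 else 0

    # Truth table for "A and not B", unchanged from the threshold version.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron_with_bias((a, b), (1, -1), -0.5))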

The previous example showed that A and not B is solvable with a single neuron. This is a fairly obscure logical construct, and it leads to the question of what else a single neuron is capable of. The easiest way to find out is to play with a neural network computer program. The BrainBox program is a Windows application that allows one to watch and modify neural networks as they execute. It doesn't take long to find that of the 16 two-input logical functions, 14 can be constructed with a single neuron (XOR and XNOR both require two neurons).
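
For readers without the program at hand, the count is easy to confirm by brute force. The following sketch (an independent check, not part of BrainBox) sweeps a small grid of weights and thresholds and records which of the 16 truth tables ever appear; the two that never do are XOR and XNOR.

    from itertools import product

    def fires(inputs, weights, threshold):
        return sum(w for x, w in zip(inputs, weights) if x) > threshold

    # Sweep a small grid of weights and thresholds and record which of
    # the 16 possible two-input truth tables a single neuron produces.
    grid = [x / 2 for x in range(-4, 5)]          # -2.0 to 2.0 in 0.5 steps
    achievable = set()
    for w1, w2, t in product(grid, repeat=3):
        table = tuple(fires((a, b), (w1, w2), t) for a in (0, 1) for b in (0, 1))
        achievable.add(table)

    print(len(achievable), "of 16 functions are single-neuron computable")
    for table in sorted(set(product((False, True), repeat=4)) - achievable):
        print("unreachable truth table:", table)   # XOR and XNOR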

Since a single neuron can compute NAND, neurons are functionally complete: any logical circuit can be built from them, and that includes circuits which store and retrieve data from `memory'. A neural network can store data in two formats. Permanent data (long-term memory) may be designed into the weightings of each neuron; an example of this is the self-teaching network that will be discussed later. [Single bit memory] Temporary data (short-term memory) can be actively circulated in a loop until it is needed again [left]. In this example, briefly activating the top input will activate the neuron. Since the output of the neuron feeds back into itself, there is a self-sustaining loop that keeps the neuron firing even when the top input is no longer active. Activating the lower input suppresses the looped input, and the neuron stops firing. The stored binary bit is continuously accessible by looking at the output. This configuration is called a latch. While it works perfectly in this model, a biological neuron would not behave quite this way: after firing, a biological neuron has to rest for a thousandth of a second before it can fire again, so one would have to link several neurons together in a duty-cycle chain to achieve the same result.
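
A sketch of the latch follows. The specific weights (1 for the set input, 1 for the feedback loop, and -2 for the suppressing reset input, against a threshold of 0.5) are hypothetical values chosen to reproduce the behaviour described above; the figure's actual values may differ.

    def neuron(inputs, weights, threshold):
        return 1 if sum(w for x, w in zip(inputs, weights) if x) > threshold else 0

    # Hypothetical weights: set = 1, feedback = 1, reset = -2; threshold 0.5.
    # The neuron's own output is wired back in as its second input.
    output = 0
    pulses = [(1, 0), (0, 0), (0, 0), (0, 1), (0, 0)]   # (set, reset) per step
    for step, (set_in, reset_in) in enumerate(pulses):
        output = neuron((set_in, output, reset_in), (1, 1, -2), 0.5)
        print("step", step, ": set =", set_in, " reset =", reset_in,
              " output =", output)

Stepping through: the set pulse turns the output on, the feedback weight alone keeps it on for the next two steps, and the reset pulse drives the sum below the threshold and turns it off.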

--------------------------------

Previous: The Biological Neuron.
Next: Medium Independence.

Last modified: September 21, 1998
By: Neil Fraser (neil@vv.carleton.ca)