There had been some theories of what neural networks could do, and the answer was that neural networks are like logic, in the sense that you can make little boxes that compute simple properties of input signals and represent them in their output in some way. For example, one neuron will recognize whether two inputs arrive at approximately the same time, and say "yes, both", and that corresponds to saying that two particular statements are true at the same time, something like that.
So that's an "and". And another kind of neuron will produce an output if either input is excited, and that's called an "or". And then there's a "not", where you have a neuron that will produce outputs unless a signal comes in, which turns it off. And it turns out that with those three kinds of things you can build any more complicated machine you can think of. That was known rather early, by Russell and Whitehead and people like that, around the beginning of the 20th century.
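A minimal sketch may help make this concrete. The following Python is purely illustrative, not anything from the talk itself: it models McCulloch-Pitts-style threshold neurons, where a unit fires when enough excitatory inputs arrive together and an inhibitory input can veto firing, and then builds the three basic units and a more complicated machine out of them.

```python
def neuron(inputs, threshold, inhibitors=()):
    """Fire (1) iff no inhibitory input is active and the
    count of active excitatory inputs meets the threshold."""
    if any(inhibitors):
        return 0
    return 1 if sum(inputs) >= threshold else 0

def AND(a, b):   # fires only when both signals arrive together
    return neuron([a, b], threshold=2)

def OR(a, b):    # fires when either input is excited
    return neuron([a, b], threshold=1)

def NOT(a):      # fires by default, unless the incoming signal turns it off
    return neuron([1], threshold=1, inhibitors=[a])

# From these three, any more complicated logical machine can be
# assembled, e.g. an exclusive-or:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```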
So when I was going to school in the 1950s, a lot had been known about logic and simple machines for fifty years. But then, in the 1930s and '40s, a lot more was learned about machines: by Alan Turing, who invented the modern theory of computers (1936 was his outstanding paper), and by Claude Shannon, who invented a great theory of the amount of information carried by signals, published in 1948; he had also invented logical theories before that. And Shannon became a close friend of ours fairly early, in 1952 in fact.