I also think there's something I'd like to do differently from what people are doing now, and this comes out of work that I did with simulated evolution. Because if you look at, for instance, how that Go program was made, there was some human who fiddled with the parameters of the network: how many layers it had, how many examples it trained on, how much control this part of it had over that part. So it wasn't completely designed by learning; it was designed by a combination of human... I won't call it engineering, but human manipulation, and learning. And for us, that was done by a process of evolution. So I think one of the most interesting things would be to recreate that process of evolution inside the computer, so that it's the computer doing all that fiddling with the parameters and the connections of things and so on. I think that's an incredibly promising area, but what it means is that we could evolve an artificial intelligence without really quite understanding how it works. We already don't really understand in detail how neural networks work. We know in principle, for instance, how they recognise objects, but if you ask how they recognise any particular object, we'd be hard put to explain it.
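[Editor's note: to make the idea concrete, here is a minimal sketch, in Python, of what "letting the computer do the fiddling" can look like: a toy evolutionary loop over network settings. The search space, population size, and the fitness function you would supply (for example, training a small network with those settings and scoring it on held-out data) are all illustrative assumptions, not a description of how any particular Go program was built.]

```python
import random

# Hypothetical search space: the kinds of knobs a human designer would otherwise fiddle with.
SPACE = {
    "layers":        [2, 4, 8, 16],
    "units":         [64, 128, 256, 512],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_genome():
    return {knob: random.choice(values) for knob, values in SPACE.items()}

def mutate(genome):
    child = dict(genome)
    knob = random.choice(list(SPACE))            # perturb one knob at random
    child[knob] = random.choice(SPACE[knob])
    return child

def evolve(fitness, generations=50, pop_size=20):
    """fitness(genome) -> score, e.g. validation accuracy of a network trained
    with those settings; the machine, not a person, does the fiddling."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 4]        # keep the fittest quarter
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)
```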
So for instance, when I did some early work in evolving computer programs that did things, I evolved programs that could sort numbers. The evolutionary process came up with programs that were very, very efficient at sorting numbers, much more efficient than any I could write. And I could see all the lines of code in those programs, but I couldn't explain how they worked. I couldn't tell you a story that explained why this sequence of instructions should sort numbers perfectly. And yet it did. So you can create something without necessarily understanding it. In fact, it may not be understandable in the sense that you can tell a story about it.

And I think we're starting to realise that about neural systems. For example, there's a tiny roundworm, a nematode called C. elegans, and we know its nervous system. It can learn; it has simple behaviour. We know every neuron in its nervous system and how it's connected, and yet we really can't explain how it works. And that has just a few hundred neurons in it. So I'm pretty sure that even if I mapped the human brain and told you exactly every single neuron and exactly how it was connected, you wouldn't be able to explain how the brain worked, even with complete knowledge of its connections. So I think understandability may mostly only work for simple things, or engineered things, or social things, because we understand things by turning them into causes and effects, which is a kind of storytelling technique, and not everything is simple enough to tell an understandable story about.
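[Editor's note: below is a much-simplified sketch of evolving sorting programs, in the spirit of the experiment described above but not a reconstruction of it. Here a program is a fixed-length sequence of compare-exchange instructions, fitness is the fraction of random test lists it sorts correctly, and the selection, crossover, and mutation settings are illustrative assumptions. The point carries over: the winning instruction sequence can sort well without coming with a story about why.]

```python
import random

N = 8            # sort lists of 8 numbers
GENOME_LEN = 30  # number of compare-exchange instructions per program

def random_program():
    # A program is a sequence of (i, j) compare-exchange pairs with i < j.
    return [tuple(sorted(random.sample(range(N), 2))) for _ in range(GENOME_LEN)]

def run(program, data):
    data = list(data)
    for i, j in program:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

def fitness(program, tests):
    # Fraction of test inputs the program sorts perfectly.
    return sum(run(program, t) == sorted(t) for t in tests) / len(tests)

def evolve(generations=200, pop_size=100):
    tests = [[random.randint(0, 99) for _ in range(N)] for _ in range(50)]
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, tests), reverse=True)
        parents = population[: pop_size // 5]          # keep the fittest fifth
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(GENOME_LEN)
            child = a[:cut] + b[cut:]                  # crossover
            if random.random() < 0.3:                  # occasional mutation
                child[random.randrange(GENOME_LEN)] = tuple(
                    sorted(random.sample(range(N), 2)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, tests))
```

[The evolved winner is just a list of compare-exchange pairs; inspecting it, as with the programs described above, gives you the code but not an explanation.]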