So I had this slightly... sort of... not exactly adversarial relationship, but then I really started thinking about it. When I was making the supercomputers, the intelligence people started using them, and I got to know what they were doing. And I started thinking about it. And I said, am I for this or against this? And I realised wars almost always happen because somebody misestimates something, somebody misunderstands something, and that having more information about what people are doing is probably better: we're less likely to get into wars. And so I decided, you know, I'm for intelligence. I don't want to work on weapons systems, but I do want to work on people having a clearer understanding of things. And so I started helping out in various ways with the supercomputing.

And then I got really involved... well, Thinking Machines got involved in providing the computer, but later Applied Minds got really involved in trying to build some of the software. And I got very concerned about this problem of how we put together a picture out of all the information we're getting. I realised we gather lots of information, but it tends not to come together in the right way to make a big picture, and I got to see the whole process of how we gathered the information and how bad it was at putting together the picture. And I started worrying about this, because I realised we might actually have the information but not be able to act on it, and we might fool ourselves into getting scared.
And in fact, I had a slide deck that I used to take to the intelligence agencies, which laid out a scenario where one joker was putting bad information into the system, but it came up in different ways, through different pipes, and came back together so that it looked as though we were getting the information from lots of different sources. In my scenario, we convinced ourselves that Iran was building a nuclear weapon when it really wasn't. That would be a scenario where we could get into a war just because of our inability to trace the way information flowed through the system and combined within it. And so I worked really hard with the intelligence agencies to try to put in a better system for bringing intelligence together, and so on. And it was a frustrating experience.

And I actually built software systems for them. One thing I did, for example: they believed in a sort of Bayesian weighting of evidence, but I was like, 'Yes, you say you believe in it, but the way you weight evidence, the way your analysts do this, is nothing like Bayesian.' And they were like, 'Well, would you give us a tool?' So I actually built them a tool where you could put in the probabilities of individual pieces of evidence and it would tell you the combined probability. And they tried using the tool, and they said, 'Oh, we don't want to use that.' And I was like, 'Why not?' And it's like, 'Well, it gives us the wrong answer.' And I'm like, 'What do you mean, it gives the wrong answer?' And it's like, 'Well, the analysts looked at it and they don't believe the answer it gave.' And I was like, 'Yes, that's because they weren't doing Bayesian reasoning. They don't have good intuitions about how probabilities combine, so the fact that the tool contradicts their intuition is exactly why you need it.' But they didn't use the tool.
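The kind of tool described above can be sketched in a few lines. This is a minimal illustration of Bayesian evidence combination, not the actual system he built; the function names, the numbers, and the shape of the inputs are all assumptions made for the example. The point it shows is the one in the anecdote: several individually weak pieces of evidence can combine into a posterior that contradicts an analyst's intuition.

```python
# Hypothetical sketch of a Bayesian evidence-weighting tool (illustrative
# only -- not the actual software from the interview). For each report the
# analyst supplies two likelihoods: how probable that report would be if
# the hypothesis were true, and if it were false. Bayes' rule then folds
# them into a posterior.

def update(prior, p_given_true, p_given_false):
    """One Bayesian update: returns P(H | report) from P(H) and the two likelihoods."""
    numerator = prior * p_given_true
    return numerator / (numerator + (1 - prior) * p_given_false)

def combine(prior, reports):
    """Fold a sequence of (P(report|H), P(report|not H)) pairs into a posterior."""
    p = prior
    for p_true, p_false in reports:
        p = update(p, p_true, p_false)
    return p

if __name__ == "__main__":
    # Three individually weak reports, each only mildly more likely under H,
    # starting from a sceptical 10% prior. The posterior climbs well above
    # what gut feeling usually suggests -- roughly 0.44 here.
    posterior = combine(0.10, [(0.6, 0.3), (0.7, 0.4), (0.5, 0.25)])
    print(posterior)
```

Note the design choice: the order of the reports doesn't matter, because each update just multiplies in another likelihood ratio, which is exactly the property that makes the combined result hard to eyeball.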
And I made lots of systems for trying to put different kinds of information together so it could be compared, and it was very hard to get these systems adopted, for lots of different reasons.