Well, I don't want to pick these pieces apart in detail, because it would be very, very tedious, but I would like to mention one fact which impressed itself upon me. There is this notion of white noise. White noise is fluctuations à la Laplace in which every fluctuation is independent of the others, and they have a zero mean and a well-defined variance. When you see a graph of white noise, it is just a zigzag of a very tight nature. Now, white noise manifests itself by a spectrum that is flat, or a correlation which is zero except for zero lag. And in the '50s or '60s, I forget exactly which date, when spectral analysis came very widely into fashion, many people did spectral analysis of prices. They found that they were white, a result that was absolutely in contradiction with intuition, because everybody knew there's a great deal of structure in prices, but the spectrum did not reveal this structure.

Well, the fault was with the tool, not with the intuition of the people working with it. They thought the tool meant more than it did. They said, "When the series is Gaussian, whiteness means independence. Well, we know it is not Gaussian, but we probably aren't too far from being Gaussian. Let's assume that whiteness still means independence." That loose thinking is lethal. That's what gets people into binds, and spectral analysis became totally discredited in economics and in finance for a long time, because it was giving such counter-intuitive results. I would say it is an indictment of the profession, because if a tool gives a counter-intuitive result, it's not a matter of abandoning the tool. One must understand what the counter-intuitive character of this result tells us about what we're dealing with. But let's not dwell on that. This blind spot of spectral analysis became very apparent in my work on a third class of 1/f noises, which are completely, extraordinarily dependent in structure, yet are white. Whiteness does not necessarily imply independence. Whiteness is compatible with a very great deal of structure.

So my work in the '60s consisted very systematically in exploring several special but well-defined and, how to say, tractable collections of models representing a great deal of variability of various sorts, variability very far from these kinds of mild fluctuations around a straight line. And what I was always led by is the idea that within this, how should I say, extraordinary arbitrariness of modelling, you could think of any number of models, any number of formulas. Most formulas have only one virtue: they're easy to write; other formulas have the virtue of being, well, I would say, robust. And the overwhelming criterion in my thinking all that time was an emphasis upon phenomena which correspond to fixed points of certain natural transformations. I think it's something that must be discussed at some other time, but this emphasis on fixed points is quite essential. Fixed points, amazingly enough, did provide us with very good pictures of what reality was like.
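The point that whiteness is compatible with a great deal of structure can be made concrete with a minimal sketch. The toy process x[t] = z[t] * z[t-1] below is an illustration chosen for this note, not any specific model from this discussion: neighbouring terms share a factor, so the series is strongly dependent, yet all of its autocorrelations at nonzero lag vanish, so its spectrum is flat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy process (an assumption for illustration only): x[t] = z[t] * z[t-1]
# with z i.i.d. standard normal. Adjacent terms share a factor, so the
# series is dependent, yet every autocorrelation at nonzero lag is zero:
# the spectrum is flat, i.e. the series is "white".
n = 200_000
z = rng.standard_normal(n + 1)
x = z[1:] * z[:-1]

def autocorr(s, lag):
    """Sample autocorrelation of s at a given positive lag."""
    s = s - s.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

for lag in (1, 2, 5):
    # corr(x_t, x_{t+lag}) stays near 0 at every lag: the series looks white.
    # corr(x_t^2, x_{t+lag}^2) is near 0.25 at lag 1: the dependence is
    # real, but the spectrum cannot see it.
    print(lag, autocorr(x, lag), autocorr(x ** 2, lag))
```

Running this, the plain autocorrelations hover near zero at every lag, while the lag-1 autocorrelation of the squares sits near 0.25: exactly the kind of structure that spectral analysis of prices failed to reveal.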
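One standard way to read "fixed point of a natural transformation", offered here as a hedged gloss rather than anything stated above: take the transformation to be aggregation, that is, averaging n i.i.d. copies of a random variable. The stable laws are fixed points of this map, and the Cauchy law gives the cleanest sketch, since averaging leaves it entirely unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# A gloss on "fixed point of a natural transformation" (an interpretive
# example, not a formula from the interview): the transformation is
# aggregation, i.e. averaging n i.i.d. copies. The Cauchy law is a fixed
# point: the mean of n standard Cauchy variables is again standard
# Cauchy, for every n.
n_samples, n_terms = 200_000, 50
c = rng.standard_cauchy(size=(n_samples, n_terms))
means = c.mean(axis=1)

# Quartiles of the aggregate vs. the standard Cauchy quartiles (-1, 0, 1).
print(np.percentile(means, [25, 50, 75]))    # ~ [-1.0, 0.0, 1.0]
print(np.percentile(c[:, 0], [25, 50, 75]))  # same law, no aggregation
```

The aggregated series has the same quartiles as a single Cauchy draw: the distribution is unchanged by the transformation, which is what makes such laws robust models rather than merely easy-to-write formulas.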