A New Kind of Science

by Stephen Wolfram
1280 pages
ISBN: 1579550088


The Queen is Dead, Long Live the Queen!
by Lila Kari

If one were to sum up Stephen Wolfram's twelve-hundred-page magnum opus A New Kind of Science in one sentence, that sentence could be: "Computation is the Queen of Sciences."

This stands in obvious contradiction to the view, held by many since Plato, that mathematics is the queen of sciences and of the universe itself, so it warrants some elaboration. The main aim of mathematics is to capture the laws governing the universe and express them in succinct, simple equations with immense explanatory and predictive power. The whole Platonic idea of studying the abstract notion of a straight line, in lieu of its incarnations as the horizon, the edge of a table, the trajectory of a raindrop, and countless others, rests on the faith that mathematics can achieve a magical data-compression of infinity down to a little equation.

Stephen Wolfram argues differently. His claim is that, while mathematics and its close twin, theoretical physics, have indeed achieved tremendous success historically, this is largely because they have tackled essentially simple phenomena. As an example, Fermat's Last Theorem (c. 1637), which took mathematicians centuries to prove (Andrew Wiles finally did so in 1995), is in fact a quite modest statement: the equation x^n + y^n = z^n has no non-zero integer solutions for x, y, and z when n > 2. Stephen Wolfram argues that, as soon as the situation we aim to model gets anywhere beyond such simple statements, the power of mathematics breaks down and nothing can be predicted anymore. His explanation is that

"[...] if meaningful general predictions are to be possible, it must at some level be the case that the system making the predictions be able to outrun the system it is trying to predict."

However, one of the conclusions of this book is that

" [...] systems one uses to make predictions cannot be expected to do computations that are any more sophisticated than the computations that occur in all sort of systems whose behavior we might try to predict."

In other words, according to Wolfram, when dealing with non-trivial phenomena no mystical shortcuts such as beautiful, simple mathematical equations can be found: the only way to predict the behaviour of the system is to go through roughly as many computation steps as it took the system to evolve.

While this sounds scientifically pessimistic, because it basically says we cannot predict any complex phenomenon by abstracting its main features and finding the mathematics behind it, Wolfram offers a glimmer of hope. One of his main ideas, put forward countless times throughout the book, is that incredibly complex behaviour can be produced by programs with extremely simple rules.

Our mistaken intuition that complex behaviour has to be the result of complex systems is caused, according to Wolfram, by our teleological "engineering" view of science, wherein one starts with the "output of a process and tries to design a predictable system to produce it." This is often very difficult, and the result of such reverse engineering is many times a terribly complex system. However, nature is not under the constraint of a predetermined output and thus has much more freedom to explore the computational abilities of simple programs. As a result, writes Wolfram, if one studies nature, as well as its incarnation as purposeless programs, one notices how complex, random-looking behaviour can arise from systems with just a couple of very simple rules. According to Stephen Wolfram, one need only explore this world of purposeless simple programs (which, in his book, mostly means cellular automata) to find the secrets of the universe. Mathematics, with its implicit nature, will not do. Only explicit computation will provide the answer. In other words, Mathematics is Dead, Long Live Computation!
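The flavour of these simple programs is easy to convey. Wolfram's canonical example is the elementary cellular automaton he numbers Rule 30: each cell in a row of black and white cells is updated from the colours of itself and its two neighbours, according to a table encoded in the bits of the number 30. A minimal sketch in Python follows (the helper name, grid size, and periodic boundary are choices of this sketch, not of the book):

```python
def step(cells, rule=30):
    """One update of an elementary CA: each cell looks at (left, self, right),
    reads the 3-cell neighbourhood as a number 0-7, and takes that bit of `rule`."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

width, generations = 63, 30
row = [0] * width
row[width // 2] = 1                        # a single black cell as the seed
for _ in range(generations):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```

Run it and the printed triangle quickly dissolves into an irregular, seemingly random tangle, despite the rule fitting in a single byte; this is the phenomenon on which the book's entire thesis rests.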

A small parenthesis is in order about complex behaviour as defined in this book. Complexity and randomness are used extensively as terms but never clearly defined. The well-known notion of Kolmogorov complexity defines the complexity of a string as the length of the smallest program able to generate it. However, this definition is summarily brushed aside as inadequate, even though it is clear that adopting it would undermine the entire hypothesis that "simple programs generate complex behaviours" (by definition, if there exists a simple program that generates a sequence, that sequence cannot be complex). Instead, vague notions such as "appears in many respects completely random" or "apparently random" are used. Incidentally, the lack of references to the existing literature is at times quite disconcerting: why not call a spade (in this case, Kolmogorov) a spade?
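Kolmogorov complexity itself is uncomputable, but the length of a compressed encoding gives a crude, computable upper bound on it. A hypothetical illustration (mine, not the book's) of the distinction at stake:

```python
import random
import zlib

def description_length(s: str) -> int:
    """Compressed length in bytes: an upper bound (up to a constant)
    on the Kolmogorov complexity of s."""
    return len(zlib.compress(s.encode()))

regular = "01" * 500                       # obviously patterned
random.seed(0)                             # fixed seed, for reproducibility
noisy = "".join(random.choice("01") for _ in range(1000))  # merely looks random

print(description_length(regular))         # small: the pattern compresses away
print(description_length(noisy))           # much larger: no short description found
```

Note that by this measure the output of Rule 30 is not complex at all: "rule number plus step count" is a very short description, however random the picture looks. That is precisely the definitional tension the review points to.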

Stylistically, a serious distraction is the number of sentences that start with "and", "but" or "so". On page 364, for example, 9 of the 13 sentences begin with one of these words, which corresponds pretty well to the overall impression that more than 75% of the sentences in the book share this trait. But I digress.

After illustrating his point about complexity-generating simple programs, the book goes on to explain the alleged ramifications of this self-described "discovery" in various domains, such as physics, biology, perception and analysis, and the notion of computation. For example, Wolfram presents convincing cellular automata modelling crystal growth and fluid flow (given the blatant lack of references to sources in the previous chapters, one is left wondering which of these, if any, represent an original contribution).
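To give a feel for such growth models, here is a sketch in their spirit (an Ulam-Warburton-style rule on a square grid, chosen here for brevity; it is an illustration of the genre, not necessarily any rule from the book): a cell solidifies when exactly one of its four neighbours is already solid.

```python
def grow(size=33, steps=12):
    """Grow a pattern from a single seed: a cell freezes when exactly
    one of its four neighbours is frozen."""
    grid = [[0] * size for _ in range(size)]
    grid[size // 2][size // 2] = 1         # the seed crystal
    for _ in range(steps):
        freezing = [(i, j)
                    for i in range(1, size - 1)
                    for j in range(1, size - 1)
                    if grid[i][j] == 0
                    and grid[i-1][j] + grid[i+1][j] + grid[i][j-1] + grid[i][j+1] == 1]
        for i, j in freezing:              # update all cells simultaneously
            grid[i][j] = 1
    for line in grid:
        print("".join("#" if c else "." for c in line))

grow()
```

A one-line rule, yet the result is an intricate branching pattern reminiscent of dendritic crystal growth.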

An intriguing shot is taken at evolutionary biology. The main point, which runs contrary to existing theories, is that the role of natural selection in evolution is overrated:

[...] natural selection can achieve little when confronted with complex behaviour. For in effect it is being asked to predict what changes would need to be made in an underlying program in order to produce or enhance a certain form of overall behaviour. Yet one of the main conclusions of this book is that even given a particular program, it can be very difficult to see what the behavior of the program will be. And to go backwards from behaviour to programs is a still much more difficult task.

As a natural conclusion of this argument, Wolfram proposes that

[...] indeed one suspects that in fact the vast majority of features of biological organisms do not correspond to anything close to optimal solutions: rather, they represent solutions that were fairly easy to find, but are good enough not to cause fatal problems for the organism.

A second stab is taken at fundamental physics. Stephen Wolfram argues that many of the fundamental laws of physics could in fact be untrue or, more exactly, true only in very limited contexts. As an example he gives the Second Law of Thermodynamics, which seems to be contradicted by biological organisms, whose entropy decreases. He argues that, instead of trying to find some simple laws on which the universe is based, laws which may not exist, one could explore and try to find the (allegedly very simple) rules of the program that governs the universe. If this program, which could even consist of a single underlying rule, were to be found, then from it all the complexity we see in the universe could emerge, together with the known laws.

Additional arguments are made in favour of the hypothesis that, contrary to prevailing views, space and time are discrete in nature, and therefore best studied by computational means rather than by continuous differential equations and similar tools.

Lastly, the implications that the "discoveries" in the book have for the very notion of computation are addressed. The main point of this analysis is the reiterated thesis that standard mathematical analysis is useful only in simple cases, while only a theory of computation can shed light on the complex ones.

After explaining the notion of a Universal Turing Machine (the difference between a Universal Turing Machine and an ordinary Turing Machine is essentially that between a general-purpose programmable computer and a particular computer program), the author goes on to propose the following: "[...] all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations."
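The parenthetical distinction is worth making concrete. In the minimal sketch below (a hypothetical encoding of mine, not the book's), the simulator plays the role of the general-purpose computer: it runs any Turing machine handed to it as data, while each transition table is one particular program.

```python
def run(table, tape, state="A", pos=0, halt="H", max_steps=10_000):
    """Run a Turing machine given as {(state, symbol): (write, move, new_state)}.
    The tape is sparse: unwritten cells read as the blank symbol 0."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == halt:
            break
        sym = cells.get(pos, 0)
        write, move, state = table[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# One particular "program": unary increment (append a 1, then halt).
inc = {("A", 1): (1, "R", "A"),            # skip over the existing 1s
       ("A", 0): (1, "R", "H")}            # reach the blank: write 1 and halt
print(run(inc, [1, 1, 1]))                 # -> [1, 1, 1, 1]
```

The same `run` function, fed a different table, computes a different function; that interchangeability of machine descriptions as data is the essence of universality.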

While the idea that all processes can be viewed as computations is interesting, it is hardly new, even though Stephen Wolfram makes it sound so. What is interesting in this analysis of the notion of computation is the author's statement that "[...] there is nothing fundamental that requires a computation to have any such definite purpose." In other words, he, among other (unreferenced) voices, wants to free the notion of computation from the rigid, static input/output paradigm.

Stephen Wolfram goes on to propose what he calls the Principle of Computational Equivalence (PCE), claiming that it is a "new Law of Nature" with "vastly richer implications" than "any single collection of laws in science." After this rather strong statement, several formulations of PCE are given, among them that "[...] there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple."

While this is an intriguing idea, several objections to it can be offered. One: its first part, the claim that there is an upper limit to what we call computation, is a well-known statement that goes under the name of the Church-Turing thesis; consequently, to call this a "new Law of Nature" discovered by the author is at best misleading. Two: the argument that universal computational power is achieved by all processes that are not obviously simple seems rich in meaning until one realizes that "not obviously simple" is never actually defined, the author describing such processes only in vague terms such as "non-repetitive and non-nested." Three: the author fails to produce a single example in which this "new law", like the other "discoveries" in the book, has any predictive or explanatory power that would justify calling them "Laws of Nature" rather than mere philosophical musings. Instead, the author makes numerous claims about the vast implications of PCE, but one starts to be skeptical when among them one finds statements such as "Gödel's Incompleteness Theorem is in some sense a consequence of PCE."

Another claimed consequence of PCE is that "[...] among other things this means that universality can be expected to occur not only in many kinds of abstract systems but also in all sorts of systems in nature." This too would be intriguing were it not for the fact that L. Adleman [1] proved as early as 1994 that sophisticated computation can be achieved with DNA molecules, and that L.F. Landweber and L. Kari [2] showed that the DNA recombination processes occurring in unicellular protozoa have universal computational power. In other words, while true, this is hardly news.

Nor is it new that universality is thought to be fairly common. To conclude, the book makes for interesting reading in spite of the lack of references, the continuous self-aggrandizement, and the fact that the same ideas could easily have been expressed in one tenth of the space. As for the idea that time and space are discrete and Computation is the Queen of Sciences, I would not strongly argue against it. But then, I am not without bias: I am a computer scientist myself.

Lila Kari works in the Department of Computer Science at the University of Western Ontario.

1. L. Adleman. Molecular computation of solutions to combinatorial problems. Science, v. 266, Nov. 1994, 1021-1024.
2. L.F. Landweber, L. Kari. Universal molecular computation in ciliates. In Evolution as Computation, L.F. Landweber, E. Winfree, Eds., Springer Verlag, 1999. To appear.
