This will be a somewhat different Saturday Night Science. While most installments discuss a topic in science, technology, or mathematics, often in conjunction with a book, this month’s edition describes a project on which I’ve been working, off and on, since 1988 (mostly off, but with intense bursts of effort from time to time), which has just been re-released in a new, completely overhauled, updated, and extended edition concurrent with the publication of this article. CelLab, or Cellular Automata Laboratory, is a Web resource which allows you to explore a frontier of computing: cellular automata. But what are cellular automata, and why should you explore them?
What are Cellular Automata?
Cellular automata are massively parallel computer systems. For decades, most computers had a single processor or CPU (Central Processing Unit), which executed programs in a serial manner: one instruction at a time. Today, many desktop and notebook computers use chips that implement “multiple cores”, which are just multiple CPUs on a single chip. For example, the notebook machine on which I’m typing this has…hmm…I forget, let’s see…
…right—eight processors. That means it can work on eight streams of instructions at once. Few programs are designed to exploit this capability—it’s difficult to do, although I was writing such software in 1972—but it means that if you’re running a lengthy, computationally intensive task such as, for example, compressing a video file, that won’t slow down other programs running on the same machine, as they can use different processor cores. But still, even if you have a machine with 64 processor cores, like some servers, that’s not really massive, not when compared to the billions of bytes of high-speed memory and trillions of bytes of file storage typically found on such machines. Since the early days of electronic computing, system architects realized that one obvious way to make machines faster would be to deploy a large number of identical processors and get them all working together on the same problem at the same time. The challenge was developing algorithms which could exploit such an architecture, and for decades software technology lagged behind our ability to build hardware. But patient research eventually pays off, and all of today’s most powerful supercomputers are highly parallel machines: the Sunway TaihuLight machine in China has 40,960 processor chips, each of which contains 256 computing cores, for a total of more than ten million processors. Such machines are big, complicated, and expensive to purchase and operate (TaihuLight consumes more than 15 megawatts of electricity—comparable to 12,000 typical U.S. households), and exploiting all that power requires complex software and specialists to create it. Maybe there’s a better way.
In the late 1940s, at the dawn of the age of electronic computers, two pioneers in the field, John von Neumann and Stanislaw Ulam, both mathematicians with a long list of achievements in numerous fields, envisioned a new kind of computer which was inherently parallel—a cellular automaton. Rather than have a single, or a few, or even ten million complicated processors each independently working their way through serial lists of instructions, why not have an extremely large array of very simple processors, all running the same program, working on a modest amount of local data and communicating with their immediate neighbors? Now von Neumann and Ulam weren’t thinking about actually building such a machine with the vacuum tube technology of the time: their interest was theoretical, in particular whether it was possible, in principle, to build a machine which would be capable of replicating itself (von Neumann eventually described such a machine in 1952, and since then much simpler examples have been found, including one we’ll encounter a bit later). Here is a more detailed history of cellular automata and the origins of CelLab from the User Guide. But why should we explore such systems today? Here is my answer from 1989.
Why Cellular Automata? (From 1989)
Physics is local. The two great pillars of Twentieth Century science, general relativity and quantum mechanics, can be viewed as supplanting the mysticism of “action at a distance” and “force fields” with such mundane, self-evident, and intuitive mechanisms as the Riemann curvature tensor, virtual gluons, and the Higgs field. Both of these theories, describing the universe in the large and in the very small (albeit in mathematically incompatible ways), tell us that the complex fabric of events we observe is the consequence of individual particles locally responding to conditions directly affecting them, whether moving along geodesics in curved spacetime or undergoing interactions through particle exchange. Both theories have withstood all experimental tests to date, including many thought impossible when they were originally propounded.
A cellular automaton (CA) is a mechanism for modeling systems with local interactions. A cellular automaton is a regular lattice of cells with local state, which interact with their neighbors subject to a uniform rule that governs all cells. The neighborhood (the set of cells whose state can affect a given cell at one instant) can be classified by the dimensionality of the automaton (most experimentation is done with one- or two-dimensional automata), and by the geometric fashion in which cells are interconnected.
The rule is the “program” that governs the behavior of the system. All cells apply the rule, over and over, and it is the recursive application of the rule that leads to the remarkable behavior exhibited by many cellular automata. When experimenting with cellular automata, one is primarily engaged in defining new rules which lead to interesting or useful behavior. CelLab provides tools for the person engaged in such experiments. It allows you to create a rich set of rules and experiment with their behavior without requiring the purchase of expensive special-purpose hardware.
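To make this concrete, here is a minimal sketch of a one-dimensional, two-state cellular automaton in plain JavaScript. This is illustrative code only, not a CelLab rule file: the lattice is a row of cells, the neighborhood is a cell and its two immediate neighbors, and the rule is an eight-entry lookup table, in this case Stephen Wolfram’s elementary “Rule 30”.

```javascript
// A one-dimensional, two-state cellular automaton. Each cell looks at
// its left neighbor, itself, and its right neighbor; a single rule
// table maps the eight possible neighborhoods to the cell's new state.
// The table below is the binary expansion of 30: Wolfram's "Rule 30".
const RULE30 = [0, 1, 1, 1, 1, 0, 0, 0];  // indexed by (left<<2)|(self<<1)|right

function generation(cells) {
    const n = cells.length;
    // Build the next generation in a separate array so that every cell
    // sees the old states of its neighbors: the update is synchronous.
    return cells.map((self, i) => {
        const left  = cells[(i + n - 1) % n];   // wrap around at the edges
        const right = cells[(i + 1) % n];
        return RULE30[(left << 2) | (self << 1) | right];
    });
}

// Start from a single live cell and watch complexity emerge.
let row = new Array(64).fill(0);
row[32] = 1;
for (let g = 0; g < 24; g++) {
    console.log(row.map(c => (c ? "#" : ".")).join(""));
    row = generation(row);
}
```

Run this with any JavaScript engine and the triangle of seemingly random structure which grows from the single seed cell is entirely the product of that eight-entry table, applied over and over by every cell at once.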
Cellular automata appear to be abstract and devoid of practical applications, much as was said of computer graphics not long ago. If you want to model a universe which seems to be made up of particles which interact locally, there are two basic ways to go about it. The first is to create a huge array of numbers that represents the interacting items, then find the biggest number cruncher you can lay your hands on and set it gnawing away at the problem. The supercomputer boom, fueled by applications of this approach to weather forecasting, computational fluid dynamics in the transonic and hypersonic regimes, plasma dynamics, and an almost endless list of other applications testifies to the effectiveness of this approach.
But maybe there’s another way. Until recently, cellular automata were primarily a theoretical tool. The cost of a cellular automaton with uniform edge size grows as the nth power of that size, where n is the dimensionality of the automaton: make the edge of a two-dimensional automaton ten times longer and you need a hundred times as many cells. This gets out of hand rapidly, even if you’re only working with two-dimensional cellular automata. Therefore, although they may be the way the universe is really assembled and therefore worthy of study, no one would consider building one!
Hardware realizations of cellular automata, such as the CAM-6 board, have been built. The CAM-6 was not a true cellular system; it emulated one by using a fast RAM array and a look-up table, but it permitted exploration of a rich set of cellular automata with performance adequate to study their behavior in detail. The CAM-6 was a highly effective tool but, at $1,500, an expensive one. It was priced out of the reach of many creative people who should be exploring cellular automata. It was the desire to make cellular automata experimentation available at a low price to a large number of people that spurred the development of CelLab.
For cellular automata need only find a concrete, compelling application to a real-world problem to burst into silicon and totally change the way we think about computing. Consider this: inside the computer you’re using now are ranks and ranks of RAM chips. A 256K×1 static RAM chip has a memory cell consisting of four to six transistors, connected in rows and columns to circuitry on the periphery of the chip. Even when the computer is running flat-out, you’re using precisely one cell at a time. This is the classic bottleneck in the von Neumann computer architecture (John von Neumann was very aware of this problem; in fact, he and Stanislaw Ulam invented cellular automata precisely as a tool for modeling complex systems), which has led to proposals such as Backus’ functional programming, neural systems, and many other architectures, such as SIMD machines, which seem to be more effective in generating Ph.D.s than numbers.
If a two-dimensional cellular automaton with 256K cells were realized in silicon, it could compute 262,144 times faster than a serial processor accessing data bit-by-bit from a memory array. Yet, engineered for volume production, made in comparable volumes, and given time to slide down the learning curve, it need cost no more than a RAM chip. This is the potential of cellular automata. The beauty of two-dimensional cellular automata is that they map perfectly into our semiconductor manufacturing technology: they need the things it does best as opposed to, say, neural systems where the number of connections exceeds the capability of two layers of metal.
If there is merit in Edward Fredkin’s suggestion that the fine-grained structure of the universe is really a cellular automaton, then cellular automata machines will play the role of particle accelerators in exploring this level of reality.
Some of the brightest minds of our century have been involved with cellular automata because they comprehended what cellular automata can do. John von Neumann, Stanislaw Ulam, John Horton Conway, Stephen Wolfram, and Edward Fredkin do not spend their time on nonsense. With the Rudy Rucker CelLab, you can begin to explore the potential that attracted those men to create and research this new way to compute. And perhaps you will discover something that will add your name to the list.
There’s plenty to discover. CelLab lets you program 256^65536 (that is, 2^524288) distinct CA rules, a number larger than 10^157,826. These numbers are “effectively infinite”. Roughly 10^17 seconds are thought to have elapsed since the big bang ushered in the universe. If you had been around since then, creating, testing, and evaluating one rule per second, you still wouldn’t have made a dent in this number, any more than a buck makes a dent in a trillion dollars. Take away one dollar and you still have about a trillion. Even with enough time, you’d have a lot of trouble writing down the results of your exhaustive search for rules, as the universe is believed to contain only on the order of 10^80 particles, so you’d run out of places to make notes even if you turned the entire universe into a cosmic all-encompassing Post-it® note. If the still-unconfirmed Grand Unified Theories are correct, by the time 10^40 seconds have passed, more than half of the protons will have evaporated into little poofs of energy and leptons, taking with them the fruits of your labors, and leaving what’s left of you with 10^157,826 bottles of beer still on the wall.
So get started, people! The human mind works a lot better than blind search (one imagines a middle manager reporting, “Nothing much yet—we need more monkeys, more typewriters.”). CelLab unleashes your creativity in a virtually infinite domain, where your discoveries may be not only interesting or rewarding, but may create a whole new world. The challenges in cellular automata are clear: how to realize them in hardware, how to apply them to useful tasks, and how to make money doing it. CelLab is a tool for exploring all three.
A Gallery of Cellular Automata
Now, all of this may seem very abstract and geeky, so let’s see what cellular automata can do. CelLab simulates a system of 64,000 processors, arranged in a two-dimensional 320×200 array. Each processor has eight bits (one byte) of internal storage, and can communicate with its eight immediate neighbors (usually named for the points of the compass: north, northeast, east, southeast, south, southwest, west, northwest). Every processor runs the same program, called the “rule”, and all 64,000 processors update simultaneously in successive “generations”. Rather than doing one thing at a time, chewing through a long list of data, the processors and data are co-located and update all at once. This is how physical systems work, so it is an excellent model for them.
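In code, one generation of such a machine can be expressed by the following schematic sketch in plain JavaScript. (This is a model of the semantics, not CelLab’s actual simulator, which is organized differently for speed.) Two copies of the cell array are kept, so every cell’s new state is computed from the previous generation only, exactly as if all 64,000 processors updated at the same instant.

```javascript
const W = 320, H = 200;   // CelLab's 64,000 cell world

// Compute one synchronous generation: read from cur, write to next,
// so every cell sees only the previous generation's states. The rule
// receives the cell's own state and its eight compass neighbors.
function generation(cur, next, rule) {
    for (let y = 0; y < H; y++) {
        const yn = (y + H - 1) % H, ys = (y + 1) % H;   // wrap at edges
        for (let x = 0; x < W; x++) {
            const xw = (x + W - 1) % W, xe = (x + 1) % W;
            next[y * W + x] = rule(
                cur[y * W + x],                       // self
                cur[yn * W + x], cur[yn * W + xe],    // n, ne
                cur[y * W + xe], cur[ys * W + xe],    // e, se
                cur[ys * W + x], cur[ys * W + xw],    // s, sw
                cur[y * W + xw], cur[yn * W + xw]     // w, nw
            );
        }
    }
}

// Each cell is one byte, as in CelLab; swap the planes each generation.
let cur = new Uint8Array(W * H), next = new Uint8Array(W * H);
// generation(cur, next, someRule); [cur, next] = [next, cur];
```

Conceptually, every rule you’ll see in the gallery below is just a different function plugged into this loop.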
The following videos illustrate simulations, all of which are performed by CelLab’s array of 64,000 processors, each running a simple rule (often just a few lines of program code). You can view the embedded videos on YouTube, but it is much better to click the link beneath each video to run the CelLab simulation on your own machine. The images will be sharper, and in most cases (assuming you have a modern, fast computer and Web browser such as Chrome or Brave) more smoothly updated. Recall, when you run these simulations, you aren’t watching a pre-recorded movie: you’re actually seeing a 64,000-processor computer being simulated right before your eyes within your Web browser. Now, on to the simulations. I won’t say much about them here because the title cards in the videos and simulations explain what’s going on.
Learn more: the Vote rule.
Learn more: the TimeTun rule.
John von Neumann invented the first self-reproducing rule in 1952. It was complicated and used 29 states per cell. The rule shown here was published by Christopher Langton in 1984. It looks at only four neighbors and has just eight states per cell. The “DNA” which circulates inside the ring is copied and passed to the tip, where it controls which cells are put down at the tip’s location. On an infinite substrate, the rule would reproduce without bound, but in our limited-size world with wrap-around at the edges, it will eventually collide with itself and break.
Learn more: the Langton rule.
Learn more: the Perfume rules.
Computational Fluid Dynamics
Note: the following simulation is extremely computationally intensive. If you’re running on a mobile platform or have a slower computer and/or browser, it may run so slowly that it seems your computer has locked up. Just press the “Stop” button and wait a while, then close the window or tab, come back here, and watch the video, which was recorded on a 3 GHz notebook computer using the Chrome browser.
Learn more: the Wind rule. The page includes attribution for the developer of the simulation on which this rule is based.
Learn more: the Bbm rule.
Cellular automata are excellent for ecological and sociological modeling because organisms and individuals mostly react to influences in their immediate vicinity.
Learn more: the Soot rule.
The Belousov-Zhabotinsky reaction is a cyclic nonequilibrium chemical reaction which proceeds in waves of excitation. This is a common phenomenon in nature and appears in a wide variety of cellular automata. It has been seen in systems as far afield from chemistry as the feeding behavior of giant tube worms on the ocean floor.
These rules (with the exception of computational fluid dynamics) and many others are included in the CelLab Demo, which you can run on your computer.
Cellular Automata Programming
Programming a cellular automaton consists of defining the rule which every cell applies in each generation. Given the complex behavior illustrated in the demonstrations you’ve just seen, you might think these rules are complicated and intricate programs. In fact, most are just a few lines long. What gives cellular automata their power and ability to model so many natural processes is that they work the same way nature does. When air flows around the wing of an airplane, each gas molecule knows nothing about the shape of the wing, how fast the plane is moving, or what’s going on a few centimetres away. Its future motion is affected only by its present motion and that of its immediate neighbors, with which it is constantly colliding. Cellular automata programs work the same way: by examining just the cell’s state and that of its neighbors. What you see on the screen as the automaton evolves is the emergent behavior from these myriad individual very simple computations.
Recall how the voting rule demonstrated in the Annealing section above turned a random starting condition into an organic-like pattern of domains. Try running it yourself: press “Start” and the random map will quickly turn into interdigitated red and black regions which fizz around the boundaries and gradually smooth out small structures into larger ones. Press the “Random” button while it’s running: the process will start over from a new random start, producing a completely different large-scale morphology. You’ll never see the same pattern twice: there are 2^64000, or around 8×10^19,265, possible patterns and, as I noted above, there are only about 10^80 particles in the observable universe. Certainly, this must be a large, complicated program, right? Wrong—the entire Vote rule is just a few lines long.
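Here is its logic, expressed as a sketch in plain JavaScript rather than as the actual CelLab rule file (which you’ll find via the Vote link above): each cell counts the ones among itself and its eight neighbors and goes with the majority, with one deliberate twist right at the boundary.

```javascript
// A sketch of the Vote rule's logic. Each cell counts the ones among
// itself and its eight neighbors (a sum from 0 to 9) and goes with the
// majority, except that the two counts nearest the boundary, 4 and 5,
// are deliberately swapped. That small "mistake" is what makes the
// domain boundaries fizz and anneal instead of freezing in place.
function vote(self, n, ne, e, se, s, sw, w, nw) {
    const sum = self + n + ne + e + se + s + sw + w + nw;
    if (sum > 5) return 1;      // clear majority of ones
    if (sum < 4) return 0;      // clear majority of zeroes
    return sum === 4 ? 1 : 0;   // the annealing twist: 4 wins, 5 loses
}
```

That’s it: one comparison chain, applied simultaneously by all 64,000 cells every generation.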
Now take a look at one of the gas diffusion rules. Here particles of gas, initially concentrated in the two flasks, propagate along diagonals and, as they collide with one another and the walls of the containers, become randomized and gradually escape into the open. The entire program for this rule (leaving out programming boilerplate) is comparably brief.
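What follows is a sketch, in plain JavaScript, of how such a lattice gas works, patterned on the classic HPP gas model rather than copied from the CelLab rule itself (CelLab’s gas particles travel along diagonals, but the collide-then-propagate structure is the same). Each cell is either a wall or holds up to four particles, one bit per direction of travel; head-on pairs scatter at right angles, and particles about to enter a wall bounce back.

```javascript
// An HPP-style lattice gas sketch. Each cell packs four particle bits,
// one per direction of travel, plus a wall bit. A grid is a Uint8Array
// of width*height cells: WALL for walls, N/E/S/W bits for particles.
const N = 1, E = 2, S = 4, W = 8, WALL = 16;

// Collision: a head-on pair of particles scatters at right angles;
// walls and all other configurations pass through unchanged.
function collide(cell) {
    if (cell & WALL) return cell;
    const p = cell & (N | E | S | W);
    if (p === (N | S)) return E | W;
    if (p === (E | W)) return N | S;
    return cell;
}

// One generation: after collisions, each cell pulls in the particles
// headed its way from its four neighbors and keeps, reversed, any of
// its own particles blocked by an adjacent wall.
function generation(grid, width, height) {
    const post = grid.map(collide);
    const at = (x, y) => post[((y + height) % height) * width +
                              ((x + width) % width)];
    return grid.map((cell, i) => {
        if (cell & WALL) return cell;              // walls never change
        const x = i % width, y = (i / width) | 0;  // y grows southward
        let out = 0;
        if (at(x, y + 1) & N) out |= N;            // arrives from the south
        if (at(x, y - 1) & S) out |= S;            // ... from the north
        if (at(x - 1, y) & E) out |= E;            // ... from the west
        if (at(x + 1, y) & W) out |= W;            // ... from the east
        const self = at(x, y);                     // reflect off walls
        if ((self & N) && (at(x, y - 1) & WALL)) out |= S;
        if ((self & S) && (at(x, y + 1) & WALL)) out |= N;
        if ((self & E) && (at(x + 1, y) & WALL)) out |= W;
        if ((self & W) && (at(x - 1, y) & WALL)) out |= E;
        return out;
    });
}
```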
I won’t get into the details, but on every generation a cell tests whether it’s part of a wall, in which case it does nothing, and otherwise looks at itself and its four neighbors, detects an impending collision, and updates itself accordingly—that’s it. All of the complex action you see when you run the rule emerges spontaneously from these simple conditions, applied in parallel by every cell.
CelLab was originally developed in 1988 and 1989 by Rudy Rucker, then a professor of mathematics and computer science at San Jose State University in California, and John Walker, who had just escaped from management and returned to programming and new product development at Autodesk, Inc. Rudy had been experimenting with the CAM-6 hardware cellular automata simulator, and had developed MS-DOS-based text mode cellular automata programs. I first met Rudy at the Hackers’ Conference in October 1987, and was extremely impressed by his demos of the CAM-6 board and various cellular automata rules, but found the board’s price (US$1500, which is around twice that in present-day dollars) and the fact that it had to be programmed in the rather arcane Forth language off-putting. I began to wonder whether it might be possible to build a software-only emulator which would run fast enough to conduct interesting experiments with cellular automata. After an extended burst of some of the sneakiest assembly-language programming I have ever done in my life, I was able to build an emulator which could run (with sufficiently clever rule definitions) almost all the programs the CAM-6 board could at a rate better than one generation per second on an IBM PC/AT (a 6 MHz 80286 machine). But by 1988, 80386 machines were becoming widespread, up to ten times faster than the PC/AT, and the 80486 (introduced in 1989) was on the horizon and would run faster still. With these chips, my software simulator could begin to approach the speed of the dedicated CAM-6 hardware. (Note that all of this code was written using the basic 8086 instruction set, and did not exploit any of the features of the newer chips.)
In June 1989, Autodesk shipped what by then was called “Rudy Rucker’s Cellular Automata Laboratory” at a retail price of US$59.95. The program included Rudy’s MS-DOS text mode programs and my graphics mode simulator, and allowed users to develop new rules by writing programs in Pascal, C, or Basic. The program was featured in A. K. Dewdney’s column in Scientific American, and announced in a small advertisement in the same issue. Sales were modest, but repaid the development costs and were consistently profitable thereafter. Rudy and I, along with some enthusiastic users, developed additional rules which worked with the program, and these were distributed on CompuServe and bulletin board systems.
In 1994, Autodesk management decided to “concentrate on its core competency” and closed the Autodesk Research Lab which was responsible for the product, another title in the series already shipping, and two under development. Rights reverted to Rudy and me, and after shopping the product to several publishers without success, we decided to make it available for free on the Internet, which was just beginning to become a thing for the general computer-literate public. In 1995, the renamed CelLab was launched on the Web, and in 1997 a Microsoft Windows version of the simulator was released.
There things stood for twenty years. As is so often the fate of software developed for closed, proprietary platforms, the MS-DOS and 16-bit Windows programs developed for CelLab stopped working on “improved” versions of Microsoft operating systems—death by Microsoft. Updating them to work on current versions of Windows would amount to almost a complete re-write, and that would only be jumping back onto the treadmill of constantly revising them for each successive release from the masters of strategic incompatibility. Further, it would limit the audience for the programs to users of a legacy, technologically inferior platform which is continually losing market share to better alternatives. (If you’d like to visit CelLab Classic, go right ahead—we never delete anything—but you’re unlikely to find a machine which will run it.)
So now it’s finished, again, or at least at large once again, or something. Ideally, CelLab will never be done, not as long as folks continue to use it to explore the world of cellular automata and share their discoveries with other pioneers on this frontier of computing.
Ricochet readers, you’re the first to know. Happy hacking, and be sure to post your discoveries in the comments. This software is 28 years old. Can we keep this comment thread going as long?