

Love this post. Thanks for making it understandable.
When viewing the video on annealing, I thought, it looks like gerrymandered congressional districts. Imagine my delight when later in the post I read this:
This contrary behavior prevents the rule from quickly freezing up into a gerrymandered configuration and, instead, causes it to continue to build large regions of consensus which smooth out irregularities and consolidate, precisely as occurs with crystal structure when annealing metals. What if political elections worked this way? How would the electoral map differ? What would be the optimal strategies for contenders?
❤️ masters of strategic incompatibility ❤️
Thanks for this John. I have been fascinated by Cellular Automata since I discovered them (in Pascal programming class in 1982).
I look forward to playing with this fantastic toy you have produced.
When I was a kid I read about “Life” in Scientific American. I got a big sheet of paper, drew a grid on it, and used checkers to manually calculate generations! It was fascinating but slow. No home computers back then.
It was great to finally see it run automatically when I did get a computer of my own.
I bought the original program after reading the A. K. Dewdney article… I can’t believe it’s been 28 years.
I didn’t know that it moved to the web… thanks for doing that!
So in practical terms, once we get cellular automata figured out, will we go way beyond Moore's law? Moore's law is the idea that computers double in power every two years. Tech people have been predicting for some time that the rate of progress will slow down, but it hasn't yet.
If I understand cellular automata, there is a possibility that Moore’s law will accelerate rather than slow down?
Moore’s law (strictly interpreted) speaks only of the number of transistors you can manufacture in an integrated circuit doubling around every two years. Since the cost of manufacturing an integrated circuit is more or less constant regardless of the number of transistors it incorporates, and the number of transistors is a good proxy for the computing power of the circuit, this leads to the common, but less rigorous, interpretation of Moore’s law that the compute power you can buy for constant money doubles about every two years.
Cellular automata provide a way to use the transistors we can manufacture in a way which may be much more efficient than devoting them to serial execution of instructions. As I noted in the article, imagine if every cell in your RAM chips were a simple computer: it could compute millions or billions of times faster than accessing those cells one at a time the way conventional computers do. Interestingly, some of the algorithms used to accelerate computing on Graphics Processing Units (GPUs) have much in common with cellular automata.
Working with cellular automata is very different than conventional programming. It isn’t so much a matter of figuring out a logical process and writing it down but rather doing botany: finding automata among the almost infinite variety which exist that do things that are useful and then adapting them to your needs. This is something largely foreign to engineering practice, and helps to explain why the community of cellular automata researchers remains so small, and so weird.
Will cellular automata come about before Quantum chips?
Nobody knows. I am an extreme sceptic when it comes to quantum computing. It can certainly be demonstrated (and has been) for very simple cases, but practical applications will require an isolation of the quantum computer from its environment (interactions with which will cause its pure state to decohere before the computation is complete) that will be extremely difficult to achieve, requiring conditions such as super-high vacuum and cryogenic temperatures which are hard to produce outside the laboratory. I have other reasons, based upon the Bekenstein bound, to believe there are theoretical limits on quantum computing which few researchers acknowledge, but these are far less restrictive than the practical constraints.
Cellular automata, on the other hand, are simply a different way to build computers using the technologies with which we’re already familiar and adept. For that matter, you could use quantum computing to build a cellular automaton, and it may be easier than trying to use it to implement a serial algorithm because quantum mechanics is local, just like cellular automata.
If you use an image processing program, you’re already using a number of cellular algorithms, just implemented on a serial computer and hidden from you. So, in that sense, we already have cellular automata.
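As an illustration (my own example, not anything from CelLab), morphological dilation, a staple of image editors, is literally a one-step cellular rule: each pixel becomes the maximum of its 3×3 Moore neighbourhood.

```python
import numpy as np

def dilate(img):
    """One application of 3x3 morphological dilation: each pixel
    takes the maximum over its Moore neighbourhood, a purely local,
    cellular rule (wrap-around boundaries for simplicity)."""
    out = img.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.maximum(out, np.roll(np.roll(img, dy, axis=0),
                                          dx, axis=1))
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255       # a single bright pixel...
img = dilate(img)     # ...grows into a 3x3 block of bright pixels
```

Erosion, edge detection, and blur kernels are all cellular in the same sense: the new value of each pixel depends only on a small fixed neighbourhood of the old image.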
I’ve developed a method of quantum image rendering. It renders all potential images and then collapses them into the image you want when you look at it.
The implementation is proving a bit trickier than I thought, though.
Don’t get discouraged, Damocles. The correct rendering of the image exists in parallel with all of the erroneous ones. Your implementation just SEEMS tricky to you. As soon as you cease to observe the trickiness, it will cease to exist.
Great article!
As a non-mathematician and non-physicist, I read the entire post and ran two of the videos on one of my computers. Even more than standing on a mountaintop high in the Rockies, trying to scratch out an off-the-bottom-of-the-foodchain understanding of the vastness of what I was looking at expanded the boundaries of what I might contemplate. Even in my solo practice of crafting the best outcomes for my clients.
Thank you, John, for the long-lasting tsunami of ideas that is sure to grow from my experience here.
What would life be like if you had nearly infinite computing capacity?
Thanks for another wonderful article. Like a lot of nerd kids, I got my intro to CA through Conway’s Game Of Life. I judged each new computer system or generation of graphics tech by how well it made my home-coded version work.
I never really got into optimizing it. I figured it was technology’s job to make me more efficient.
Now, on the rare occasions when I delve into the assembly output of a modern optimizing compiler, I’m glad I didn’t specialize in writing super-efficient assembly code. My career would have been creatively destroyed a long time ago.
The assembly output actually makes it much easier to deal with embedding assembler. The MS C++ compiler allows that sort of embedding, and if you hard-code an easy example of what you want to do, you can use the assembly output as a base and then replace the hard-coded literals with variables. I used that trick several times.
I have precisely the same reaction. I love the way you’ll see an optimising compiler use something in a register and you think, “where did he load that?”, and discover it’s been sitting there for a couple of hundred instructions since it was computed as an intermediate result from something much earlier in the code. Compilers are just much better at that kind of bookkeeping than human programmers.
As an old-timer, I often find myself hand-optimising common subexpressions, lifting code out of loops, and doing explicit progressive indexing of arrays but, invariably, if I just write it out in the most obvious (and easiest-to-understand) way, the compiler does just as good a job of optimising the code as I would have by hand.
Here, for the amusement of any 80×86 assembly language geeks reading this thread, is the inner loop of the original 1989 MS-DOS CelLab. This is one of the two sneakiest and trickiest pieces of code I have ever written (the other was the low-level scheduler and interrupt handler for a Univac 1108 operating system).
A few remarks about this code. Yes, I use SS, the stack segment register, to point to data: the old cell map. The cell map is 65,044 bytes long (322×202, including toroidal wrap-around cells), so I have enough room after it within the 64K segment for a stack large enough to handle any interrupts which may occur while the code is running. This lets me use BP as a data pointer register. Since addressing through BP implies the stack segment, I don’t need a segment override on the four references through BP in the inner loop. The state of the neighbours is assembled counting on the fact that the state map is rotated one bit to the right, so the low-order bit, which is the one visible to the evaluator, can be shifted into the AH register, where the neighbour bits are accumulated, with a single-byte ROL by 1 instruction. We end up with the neighbour bits in BX in the wrong order, so BX is used to index a look-up table, positioned at the start of the code segment and addressed with a CS: segment override, to look up a value with the bits in the right order. Finally, the entire inner loop is expanded in-line by a rept/endm loop for every cell in the row to avoid the need to decrement a loop counter and perform a conditional jump back to the top. (The rept is right before the comment at the top—I forgot to include it when I clipped the code to show here.)
Here is the original comment describing this code.
I have just posted two new CelLab rules. Click on the name to read a description of the rule in the manual, complete with a link to run it on your computer in WebCA.
This is excellent, John! I learned of cellular automata from Conway’s Game of Life, and I’ve read just a little more of them through Stephen Wolfram, though not in depth. A few years ago on a whim, I programmed a version of Game of Life to run in IRAF, an image processing environment for astrophysics, starting with a telescope image as the starting point. It was clunky and unsatisfying, but I discovered Python very soon afterwards, which is now very popular for astronomy programming, and perhaps I’ll update my implementation of Life.
I’m eager to find an application for CA in my field, but so much of what we do in extragalactic astrophysics (say, merging and formation of galaxies and quasars) is based on gravity, which is a long-range force, rather than a nearest-neighbors’ effect. Well, you could model gravitational wells with it and have dark matter and visible matter move around in the wells to simulate galaxy formation. But I wonder if there could be applications to image processing or classification of galaxy types using CA, which would be of use to me.
For my introductory physics class and my thermodynamics course, I’m sure I’ll use the fluid dynamics code for demonstrations or “labs.” That is fantastic. I’ve been playing with it and really am impressed with how smoothly it runs.
Yikes! I almost had a seizure looking at CelLab’s crazy page header…
Ahhh… A stop button…
Here is another new rule for CelLab. Click on the rule name for a description in the user guide, along with a link you can click to run the rule in the Web simulator.
When started with an all-zero map, the head starts by tracing out a lacy pattern exhibiting symmetries, but then, as the pattern grows, appears to be following a random walk, occasionally adding to the borders of the pattern. After around 10,000 generations, however, the head will begin to create a “highway” which extends in a diagonal direction in a cycle of 104 generations. This is an example of spontaneous emergence of order after a long period of apparently chaotic behavior. If run on an infinite map, the highway would extend without bound, but with a wrap-around map, it will eventually collide with the original random pattern, producing interesting interactions. All starting configurations which have been tested eventually produce a highway, but it has not been proved that every possible configuration does so. It has, however, been proved that the pattern always grows without bound in some manner. Try starting the rule on the square pattern and watch how it evolves into a lattice of ordered highways and burl-like intersections.
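For anyone who wants to poke at the ant outside WebCA, the whole rule fits in a few lines. Here is a minimal Python sketch (my own coordinate conventions, not the CelLab implementation):

```python
def langtons_ant(steps):
    """Run Langton's Ant on an unbounded plane for the given number
    of steps, returning the set of black cells.  The rule: on a white
    cell, turn right, flip the cell, move forward; on a black cell,
    turn left, flip the cell, move forward."""
    black = set()
    x, y = 0, 0
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    d = 0                                      # start facing north
    for _ in range(steps):
        if (x, y) in black:
            d = (d - 1) % 4        # black: turn left
            black.discard((x, y))  # flip to white
        else:
            d = (d + 1) % 4        # white: turn right
            black.add((x, y))      # flip to black
        x, y = x + dirs[d][0], y + dirs[d][1]
    return black
```

Run it for 11,000 or so steps and plot the black set, and you should see the chaotic blob with the diagonal highway described above growing out of it.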
Here is background on Langton’s Ant from Wikipedia and Wolfram MathWorld.
Ever since John von Neumann posed the question in 1949 whether it would be possible to design a machine which could reproduce itself, then proceeded to find a solution in 1952, expressed as a cellular automaton which used 29 states per cell, an ongoing challenge has been to find simpler automata, both in terms of states per cell and the number of cells in the initial structure, which are capable of reproduction. (See the discussion of von Neumann’s automaton in the Origins of CelLab chapter for additional details.)
In 1984, Christopher Langton described a self-reproducing automaton [Langton84] that used just 8 states per cell and an 86 cell initial pattern which replicates itself every 151 generations; see the Langton rule for an implementation. This was further simplified in 1989 by John Byl [Byl89], whose design requires 12 cells in 6 states and reproduces every 25 generations; this is implemented in our Byl rule. A further simplification was discovered by H.-H. Chou, J. A. Reggia, et al. [Reggia et al.93] in 1993, and implemented as the ChouReg rule. Its initial pattern is only 5 cells, with the replicator using 8 states and reproducing itself every 15 generations.
For the purpose of these rules, reproduction is defined as “self-directed replication”: reproducing the initial structure as directed by information contained within the structure itself. This distinguishes such rules from the kind of blind “reproduction” of rules which simply copy cells without bound and is analogous to biological reproduction: an organism replicates itself by using the instructions in its genetic code to assemble a copy of itself, including the genome.
Reproduction, Evolution, Abiogenesis, and Sex
Earlier Cellular Automata Laboratory rules have addressed self-reproduction, but this was always precise replication, not the messy recombination of genetic material of actual biological reproduction. Can a cellular automaton simulate biology and even address the origin of life and the reason sexual reproduction has become the norm in terrestrial biology, even though it is less energy efficient and haphazard (consider the time you’ve spent in singles’ bars!)? ¡Sí, se puede!
Evoloops
Evoloops were invented by Hiroki Sayama [Sayama98] as a generalization of Langton’s self-reproducing automaton (implemented in CelLab as the Langton rule) incorporating death and evolution. When Langton’s rule is run in a finite space, such as our wrap-around world, the reproducing structures eventually collide with one another, breaking the reproduction process. When Evoloops collide, they create a state which cannot occur when the structures are reproducing in open space, and this state is used to trigger dissolution or death of the colliding structure, analogous to death of an organism due to exhaustion of resources (in this case, room to expand). Collision of structures may also result in mutation, as new valid structures (which may or may not be capable of reproduction) are created.
When started from a single self-reproducing structure (a generalization of Langton’s automaton), it replicates until, due to wrap-around, copies begin to collide with one another. Then things get interesting. Some structures become invalid and die, others may be static structures or oscillators which do not reproduce, while yet others, different from the original, may be able to reproduce themselves. Now competition and selection kick in. A smaller self-reproducing structure replicates more quickly, so in a given time (measured in terms of generations of the cellular automaton), it will produce more offspring. Its larger ancestors not only are slower to reproduce, but are larger “targets” for collisions which may disrupt them. As the simulation runs (and you’ll want to let it run for quite a while—interesting things are still happening 50,000 generations after the start), you’ll generally see large progenitors be displaced by smaller descendants which, in turn, are supplanted by even smaller and faster-replicating successors, until the smallest possible viable replicator comes to dominate the world. You’ll also observe structures which do not reproduce or attempt to reproduce but create non-viable offspring: these are eventually replaced by the successful replicators.
Precisely the same phenomenon is observed with bacteria. When in competition for finite resources, the fastest-reproducing organism will usually prevail, simply by outnumbering and starving the others. An extreme example of this is Spiegelman’s Monster where, under extreme laboratory conditions, an organism which originally had a genome of 4,500 bases selected itself down to just 218 bases. There is no randomness in the Evoloop simulation—it is entirely deterministic. From the same start, you will always get the same result.
Another related experiment, EvoloopsAB, demonstrates the process of abiogenesis, or the origin of life from non-living precursors. Here, we define life as a structure capable of self-reproduction. The EvoloopsAB rule is identical to Evoloops, except that it starts with an empty (all zero) map and contains three sites which “seed” the map with a random “DNA” sequence generated by a stochastic state machine. These sequences grow from the seeding sites and turn and interact based upon their random sequences, but are not capable of reproduction. But eventually, as they and the structures they spawn collide and interact, a replicator will be “discovered” which can continue to reproduce independently of the seeds. You’ll often see replicators appear, make one or a few copies, and then be wiped out by collision with a growing seed or some other structures spawned by the seeds. But eventually (since the seeds are random, the results will differ on every run), one or more replicators will become established and come to dominate the map. As before, smaller, faster replicators out-compete their larger cousins and, even if they aren’t the first to appear, will usually become the most common.
Complete details of the definition of Evoloops and an analysis of their behavior are given in Hiroki Sayama’s Ph.D. thesis [PDF].
Sexyloop
Sexyloop [Oros&Nehaniv07] is an extension of Evoloops by Nicholas Oros which adds the ability of replicators to exchange genetic information. F-sexyloop [Oros&Nehaniv09] builds upon the mechanism for genetic transfer of the simpler M1 and M2 variants of Sexyloop by adding a “sex gene”, which must be present in order for the sending organism to transfer its genome to the receiver. This gene is represented by a cell in state 9, and can be transferred to the receiver along with other genetic material. The sex gene thus behaves like the F factor plasmid in bacteria which facilitates the transfer of genetic material during conjugation. If a connection is made by an organism which lacks the F gene, the connection will persist until one or both organisms die, but no gene transfer will occur.
The experiment starts with a single replicator containing a sex gene. As the replicators collide and interact, their genomes recombine. Organisms which end up without a sex gene cannot transfer their genome to others. As with Evoloops, smaller and faster-replicating mutants usually come to dominate the map. You will see many odd things: sterile organisms mating incessantly without issue, networks of mating organisms destroyed by faster-propagating competitors, and mutants with curious behavior, such as extending long tentacles across the map. Over time, the population will become dominated by organisms which carry the sex gene. Our standard 320×200 cell map is somewhat small to demonstrate the rich behavior of this rule. If you have a fast computer and high-resolution screen, run the rule in the double-wide simulator to observe its evolution in a world with four times as many cells.
The requirement for a sex gene to be present seems to constrain the tendency to rapidly select for the smallest and quickest-reproducing organisms—the very smallest replicating patterns are too small to contain a sex gene and hence cannot transfer their genome to others. Compare the SexyloopM2 variant, which performs gene transfer without the need for a sex gene: it converges much more rapidly on a population of the smallest and fastest replicators.
This is a 12 state rule which, like Evoloops, uses the vonn4 user evaluator to examine four bits of state of the cell and its four neighbors. The rule is completely deterministic—there is no random input: from a given start, the result will always be the same.
Run Evoloops evolution experiment
Run Evoloops abiogenesis experiment
Run Sexyloop simulation
Color is used to represent the number of grains in each cell: grey for none, blue for 1, yellow for 2, and red for 3. Since a cell with four grains immediately topples, no cell can contain more than three grains. As the pile grows, you will see how the addition of a single grain can cause cascades of all sizes. While you might expect a smoothly growing structure, in fact the depth of the sand in the pile exhibits a complex fractal pattern that emerges as the pile grows. The edges of the map consume any grains which reach them: they limit the growth of the pile.
If you’re patient and have a high-resolution screen, try running Sand in the double-wide simulator—it will produce intricate mandala patterns. The Sand rule is entirely implemented within the sand user evaluator. This is an interesting mathematical model which has proved useful in analyzing emergent processes in a variety of fields. It does not, however, accurately model the behavior of actual piles of sand.
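The toppling rule is simple enough to sketch in a few lines of Python/NumPy (my own illustration, not the sand evaluator itself): any cell holding four or more grains gives one grain to each of its four neighbours, repeating until every cell is back below four.

```python
import numpy as np

def topple(grid):
    """Relax an abelian sandpile: every cell with 4 or more grains
    gives one grain to each of its four neighbours, repeating until
    all cells hold 0-3 grains.  Grains pushed past the edge of the
    array are simply lost, as at the edges of the CelLab map."""
    grid = grid.copy()
    while (grid >= 4).any():
        over = (grid >= 4).astype(grid.dtype)
        grid -= 4 * over
        # Slicing (rather than wrap-around) lets edge grains fall off.
        grid[1:, :] += over[:-1, :]   # grain falls south
        grid[:-1, :] += over[1:, :]   # grain falls north
        grid[:, 1:] += over[:, :-1]   # grain falls east
        grid[:, :-1] += over[:, 1:]   # grain falls west
    return grid

pile = np.zeros((9, 9), dtype=int)
pile[4, 4] = 4        # drop four grains on the centre cell
pile = topple(pile)   # centre topples: 0 in the middle, 1 on each side
```

Drop grains one at a time at the centre and re-relax after each drop, and the fractal patterns described above emerge; the "abelian" in the name refers to the fact that the final stable configuration does not depend on the order in which unstable cells are toppled.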
Run the Sandpile simulation
Run the Sandpile simulation (double-wide)