Saturday Night Science: Cellular Automata

 

This will be a somewhat different Saturday Night Science. While most installments discuss a topic in science, technology, or mathematics, often in conjunction with a book, this month’s edition describes a project on which I’ve been working, off and on, since 1988 (mostly off, but with intense bursts of effort from time to time), which has just been re-released in a new, completely overhauled, updated, and extended edition concurrent with the publication of this article. CelLab, or Cellular Automata Laboratory, is a Web resource which allows you to explore a frontier of computing: cellular automata. But what are cellular automata, and why should you explore them?

What are Cellular Automata?

Cellular automata are massively parallel computer systems. For decades, most computers had a single processor or CPU (Central Processing Unit), which executed programs in a serial manner: one instruction at a time. Today, many desktop and notebook computers use chips that implement “multiple cores”, which are just multiple CPUs on a single chip. For example, the notebook machine on which I’m typing this has…hmm…I forget, let’s see…

Cpuinfo reports 8 processors…right—eight processors. That means it can work on eight streams of instructions at once. Few programs are designed to exploit this capability—it’s difficult to do, although I was writing such software in 1972—but it means that if you’re running a lengthy computationally-intensive task such as, for example, compressing a video file, that won’t slow down other programs running on the same machine, as they can use different processor cores. But still, even if you have a machine with 64 processor cores, like some servers, that’s not really massive, not when compared to the billions of bytes of high-speed memory and trillions of bytes of file storage typically found on such machines. Since the early days of electronic computing, system architects realized that one obvious way to make machines faster would be to deploy a large number of identical processors and get them all working together on the same problem at the same time. The challenge was developing algorithms which could exploit such an architecture, and for decades software technology lagged behind our ability to build hardware. But patient research eventually pays off, and all of today’s most powerful supercomputers are highly parallel machines: the Sunway TaihuLight machine in China has 40,960 processor chips, each of which contains 256 computing cores, for a total of more than ten million processors. Such machines are big, complicated, and expensive to purchase and operate (TaihuLight consumes more than 15 megawatts of electricity—comparable to 12,000 typical U.S. households), and exploiting all that power requires complex software and specialists to create it. Maybe there’s a better way.

In the late 1940s, at the dawn of the age of electronic computers, two pioneers in the field, John von Neumann and Stanislaw Ulam, both mathematicians with a long list of achievements in numerous fields, envisioned a new kind of computer which was inherently parallel—a cellular automaton. Rather than have a single, or a few, or even ten million complicated processors each independently working their way through serial lists of instructions, why not have an extremely large array of very simple processors, all running the same program, working on a modest amount of local data and communicating with their immediate neighbors? Now von Neumann and Ulam weren’t thinking about actually building such a machine with the vacuum tube technology of the time: their interest was theoretical, in particular whether it was possible, in principle, to build a machine which would be capable of replicating itself (von Neumann eventually described such a machine in 1952, and since then much simpler examples have been found, including one we’ll encounter a bit later). Here is a more detailed history of cellular automata and the origins of CelLab from the User Guide. But why should we explore such systems today? Here is my answer from 1989.

Why Cellular Automata? (From 1989)

Physics is local. The two great pillars of Twentieth Century science, general relativity and quantum mechanics, can be viewed as supplanting the mysticism of “action at a distance” and “force fields” with such mundane, self-evident, and intuitive mechanisms as the Riemann curvature tensor, virtual gluons, and the Higgs field. Both of these theories, describing the universe in the large and the very small (albeit in mathematically incompatible ways), tell us that the complex fabric of events we observe is the consequence of individual particles locally responding to conditions directly affecting them, whether moving along geodesics in curved spacetime or undergoing interactions through particle exchange. Both theories have withstood all experimental tests to date, including many thought impossible when they were originally propounded.

A cellular automaton (CA) is a mechanism for modeling systems with local interactions. A cellular automaton is a regular lattice of cells with local state, which interact with their neighbors subject to a uniform rule that governs all cells. The neighborhood (the set of cells whose state can affect a given cell at one instant) can be classified by the dimensionality of the automaton (most experimentation is done with one- or two-dimensional automata), and by the geometric fashion in which cells are interconnected.

The rule is the “program” that governs the behavior of the system. All cells apply the rule, over and over, and it is the recursive application of the rule that leads to the remarkable behavior exhibited by many cellular automata. When experimenting with cellular automata, one is primarily engaged in defining new rules which lead to interesting or useful behavior. CelLab provides tools for the person engaged in such experiments. It allows you to create a rich set of rules and experiment with their behavior without requiring the purchase of expensive special-purpose hardware.

Cellular automata appear to be abstract and devoid of practical applications, much as was said of computer graphics not long ago. If you want to model a universe which seems to be made up of particles which interact locally, there are two basic ways to go about it. The first is to create a huge array of numbers that represents the interacting items, then find the biggest number cruncher you can lay your hands on and set it gnawing away at the problem. The supercomputer boom, fueled by applications of this approach to weather forecasting, computational fluid dynamics in the transonic and hypersonic regimes, plasma dynamics, and an almost endless list of other applications testifies to the effectiveness of this approach.

But maybe there’s another way. Until recently, cellular automata were primarily a theoretical tool. The cost of building a cellular automaton grows as the nth power of its edge length, where n is the dimensionality of the automaton. This gets out of hand rapidly, even if you’re only working with two-dimensional cellular automata. Therefore, although they may be the way the universe is really assembled and therefore worthy of study, no one would consider building one!

Hardware realizations of cellular automata, such as the CAM-6 board, have been built. The CAM-6 was not a true cellular system; it emulated one by using a fast RAM array and a look-up table, but it permitted exploration of a rich set of cellular automata with performance adequate to study their behavior in detail. The CAM-6 was a highly effective tool but, at $1,500, an expensive one. It was priced out of the reach of many creative people who should be exploring cellular automata. It was the desire to make cellular automata experimentation available at a low price to a large number of people that spurred the development of CelLab.

For cellular automata need only find a concrete, compelling application to a real-world problem to burst into silicon and totally change the way we think about computing. Consider this: inside the computer you’re using now are ranks and ranks of RAM chips. A 256K×1 static RAM chip contains 262,144 memory cells, each consisting of four to six transistors, connected in rows and columns to circuitry on the periphery of the chip. Even when the computer is running flat-out, you’re using precisely one of those cells at a time. This is the classic bottleneck in the von Neumann computer architecture (John von Neumann was very aware of this problem; in fact, he and Stanislaw Ulam invented cellular automata precisely as a tool for modeling complex systems), which has led to proposals such as Backus’ functional programming, neural systems, and many other architectures, such as SIMD machines, which seem to be more effective in generating Ph.D.s than numbers.

If a two-dimensional cellular automaton with 256K cells were realized in silicon, it could compute 262,144 times faster than a serial processor accessing data bit-by-bit from a memory array. Yet, engineered for volume production, made in comparable volumes, and given time to slide down the learning curve, it need cost no more than a RAM chip. This is the potential of cellular automata. The beauty of two-dimensional cellular automata is that they map perfectly into our semiconductor manufacturing technology: they need the things it does best as opposed to, say, neural systems where the number of connections exceeds the capability of two layers of metal.

If there is merit in Edward Fredkin‘s suggestion that the fine-grained structure of the universe is really a cellular automaton, then cellular automata machines will play the role of particle accelerators in exploring this level of reality.

Some of the brightest minds of our century have been involved with cellular automata because they comprehended what cellular automata can do. John von Neumann, Stanislaw Ulam, John Horton Conway, Stephen Wolfram, and Edward Fredkin do not spend their time on nonsense. With the Rudy Rucker CelLab, you can begin to explore the potential that attracted those men to create and research this new way to compute. And perhaps you will discover something that will add your name to the list.

There’s plenty to discover. CelLab lets you program 256^65536 distinct CA rules (a rule is, in effect, a lookup table assigning one of 256 new states to each of the 2^16 = 65,536 possible neighborhood configurations), which is a number larger than 10^157,826. These numbers are “effectively infinite”. Roughly 10^17 seconds are thought to have elapsed since the big bang ushered in the universe. If you had been around since then, creating, testing, and evaluating one rule per second, you still wouldn’t have made a dent in this number, any more than a buck makes a dent in a trillion dollars. Take away one dollar and you still have about a trillion. Even with enough time, you’d have a lot of trouble writing down the results of your exhaustive search for rules, as the universe is believed only to have on the order of 10^80 particles in it, so you’d run out of places to make notes even if you turned the entire universe into a cosmic all-encompassing Post-it® note. If the still-unconfirmed Grand Unified Theories are correct, by the time 10^40 seconds have passed, more than half of the protons will have evaporated into little poofs of energy and leptons, taking with them the fruits of your labors, and leaving what’s left of you with 10^157,826 bottles of beer still on the wall.

So get started, people! The human mind works a lot better than blind search (one imagines a middle manager reporting, “Nothing much yet—we need more monkeys, more typewriters.”). CelLab unleashes your creativity in a virtually infinite domain, where your discoveries may be not only interesting or rewarding, but may create a whole new world. The challenges in cellular automata are clear: how to realize them in hardware, how to apply them to useful tasks, and how to make money doing it. CelLab is a tool for exploring all three.

A Gallery of Cellular Automata

Now, all of this may seem very abstract and geeky, so let’s see what cellular automata can do. CelLab simulates a system of 64,000 processors, arranged in a two-dimensional 320×200 array. Each processor has eight bits (one byte) of internal storage, and can communicate with its eight immediate neighbors (usually named for the points of the compass: north, northeast, east, southeast, south, southwest, west, northwest). Every processor runs the same program, called the “rule”, and all 64,000 processors update simultaneously in successive “generations”. Rather than doing one thing at a time, chewing through a long list of data, the processors and data are co-located and update all at once. This is how physical systems work, so it is an excellent model for them.
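To make that concrete, here is a minimal sketch, in JavaScript (the language WebCA rules themselves are written in), of what one generation of such a machine amounts to. This is not CelLab's internal code; the generation() and rule() names and the array layout are invented for illustration. The point is the discipline: every cell's new state is computed from the old states of itself and its eight neighbors, with wrap-around at the edges, and all 64,000 cells switch over at once.

    // One generation of a 320x200 toroidal automaton with one byte of state per cell.
    const WIDTH = 320, HEIGHT = 200;

    function generation(oldCells, rule) {
        const newCells = new Uint8Array(WIDTH * HEIGHT);
        for (let y = 0; y < HEIGHT; y++) {
            for (let x = 0; x < WIDTH; x++) {
                // Gather the old states of the eight compass-point neighbors,
                // wrapping around at the edges of the map.
                const neighbors = [];
                for (const [dx, dy] of [[0,-1],[1,-1],[1,0],[1,1],[0,1],[-1,1],[-1,0],[-1,-1]]) {
                    const nx = (x + dx + WIDTH) % WIDTH;
                    const ny = (y + dy + HEIGHT) % HEIGHT;
                    neighbors.push(oldCells[ny * WIDTH + nx]);
                }
                // Every cell applies the same rule to itself and its neighborhood.
                newCells[y * WIDTH + x] = rule(oldCells[y * WIDTH + x], neighbors);
            }
        }
        return newCells;    // becomes oldCells for the next generation
    }

CelLab itself runs from compiled rule definitions (lookup tables) rather than calling a function for every cell, but the effect is the same.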

The following videos illustrate simulations, all of which are performed by CelLab‘s array of 64,000 processors, each running a simple rule (often just a few lines of program code). You can view the embedded videos on YouTube, but much better is to click the link beneath the video to run the CelLab simulation on your own machine. The images will be sharper, and in most cases (assuming you have a modern, fast computer and Web browser such as Chrome or Brave) more smoothly updated. Recall, when you run these simulations, you aren’t watching a pre-recorded movie: you’re actually seeing a 64,000 processor computer being simulated right before your eyes within your Web browser. Now, on to the simulations. I won’t say much about them here because the title cards in the videos and simulations explain what’s going on.

Annealing

Run on your computer.

Learn more: the Vote rule.

Reversibility

Run on your computer.

Learn more: the TimeTun rule.

Self-Reproduction

Run on your computer.

John von Neumann first invented a self-reproducing rule in 1952. It was complicated and used 29 states per cell. The rule shown here was published by Christopher Langton in 1984. It looks at only four neighbors and has just eight states per cell. The “DNA” which circulates inside the ring is copied and passed to the tip, where it controls which cells are put down at the tip’s location. On an infinite substrate, the rule would reproduce without bound, but in our limited-size world with wrap-around at the edges, it will eventually collide with itself and break.

Learn more: the Langton rule.

Gas Diffusion

Run on your computer.

Learn more: the Perfume rules.

Computational Fluid Dynamics

Note: the following simulation is extremely computationally-intense. If you’re running on a mobile platform or have a slower computer and/or browser, it may run so slowly it seems like your computer has locked up. Just press the “Stop” button and wait a while, then close the window or tab and come back here and watch the video, which was recorded on a 3 GHz notebook computer using the Chrome browser.

Run on your computer.

Learn more: the Wind rule. Includes attribution information to the developer of the simulation upon which this rule is based.

Billiard-Ball Computing

Run on your computer.

Learn more: the Bbm rule.

Ecological Models

Run on your computer.

Cellular automata are excellent for ecological and sociological modeling because organisms and individuals mostly react to influences in their immediate vicinity.

Learn more: the Wator, Mite, and BraiLife rules.

Accretion

Run on your computer.

Learn more: the Soot rule.

Belousov-Zhabotinsky Reaction

Run on your computer.

The Belousov-Zhabotinsky reaction is a cyclic nonequilibrium chemical reaction which proceeds in waves of excitation. This is a common phenomenon in nature and appears in a wide variety of cellular automata. It has been seen in systems as far afield from chemistry as the feeding behavior of giant tube worms on the ocean floor.

Learn more: the Hodge, Zhabo, and RainZha rules.

Sublimation

Run on your computer.

Learn more: the Sublime and Earthgas rules.

Computing

Run on your computer.

Learn more: the Logic, Banks, and Turmite rules.

These rules (with the exception of computational fluid dynamics) and many others are included in the CelLab Demo, which you can run on your computer.

Cellular Automata Programming

Programming a cellular automaton consists of defining the rule which every cell applies in each generation. Given the complex behavior illustrated in the demonstrations you’ve just seen, you might think these rules are complicated and intricate programs. In fact, most are just a few lines long. What gives cellular automata their power and ability to model so many natural processes is that they work the same way nature does. When air flows around the wing of an airplane, each gas molecule knows nothing about the shape of the wing, how fast the plane is moving, or what’s going on a few centimetres away. Its future motion is affected only by its present motion and that of its immediate neighbors, with which it is constantly colliding. Cellular automata programs work the same way: by examining just the cell’s state and that of its neighbors. What you see on the screen as the automaton evolves is the emergent behavior from these myriad individual very simple computations.

Recall how the voting rule demonstrated in the Annealing section above turned a random starting condition into an organic-like pattern of domains. Try running it yourself: press “Start” and the random map will quickly turn into interdigitated red and black regions which fizz around the boundaries and gradually smooth out small structures into larger ones. Press the “Random” button while it’s running: the process will start over from a new random start, producing a completely different large-scale morphology. You’ll never see the same pattern twice: there are 2^64000, or around 8×10^19,265, possible patterns and, as I noted above, there are only 10^80 particles in the observable universe. Certainly, this must be a large, complicated program, right? Wrong—this is the entire Vote rule:

Cellular Automata Rule: Vote

If you’re unfamiliar with JavaScript programming, don’t be intimidated. What the rule does is count how many cells, among the cell and its eight immediate neighbors, are in state 1. This will be a number between 0 and 9, which we call the NineSum. Now consider this the result of a little election, conducted by each cell in its immediate vicinity. If the ones win by 6–9 votes, the cell goes with the majority and becomes a one. If the zeroes prevail with a count of 0–3, similarly the cell becomes a zero. If the vote is close, however, the cell becomes a little cranky, and goes with the minority: with a vote of 4, it becomes a one, and with a vote of 5 it becomes zero. This contrary behavior prevents the rule from quickly freezing up into a gerrymandered configuration and, instead, causes it to continue to build large regions of consensus which smooth out irregularities and consolidate, precisely as occurs with crystal structure when annealing metals. What if political elections worked this way? How would the electoral map differ? What would be the optimal strategies for contenders?
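For readers who want to see it as ordinary code rather than a screenshot, here is the same voting logic written as a stand-alone JavaScript function, in the shape expected by the generation() sketch shown earlier. It mirrors the behavior described above; it is not the actual CelLab rule program.

    // Vote: count the cell and its eight neighbors in state 1 (the NineSum),
    // go with the majority, except that near-ties of 4 and 5 flip sides.
    function vote(self, neighbors) {
        let nineSum = self & 1;
        for (const n of neighbors) {
            nineSum += n & 1;
        }
        // 6-9 ones -> 1;  0-3 ones -> 0;  4 -> 1 (contrary);  5 -> 0 (contrary)
        return (nineSum >= 6 || nineSum === 4) ? 1 : 0;
    }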

Now take a look at one of the gas diffusion rules. Here particles of gas, initially concentrated in the two flasks, propagate along diagonals and, as they collide with one another and the walls of the containers, become randomized and gradually escape into the open. Here is the entire program for this rule (leaving out programming boilerplate):

Cellular Automata Rule: PerfumeM

I won’t get into the details, but on every generation a cell tests whether it’s part of a wall, in which case it does nothing, and otherwise looks at itself and its four neighbors, detects an impending collision, and updates itself accordingly—that’s it. All of the complex action you see when you run the rule emerges spontaneously from these simple conditions, applied in parallel by every cell.
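The Perfume rules themselves move particles along the diagonals and handle the wall cells, so I won't reproduce them here, but the general lattice-gas idea is easy to sketch. What follows is the even simpler HPP gas, offered only as an illustration and not as the CelLab rule: each cell holds up to four particles, one per compass direction; head-on pairs scatter at right angles; then every particle moves one cell in its direction.

    // HPP-style lattice gas (an illustration only, not the Perfume rule).
    // Each cell holds four bits, one particle slot per direction.
    const N = 1, E = 2, S = 4, W = 8;
    const WIDTH = 320, HEIGHT = 200;

    function gasStep(cells) {
        // Phase 1: collisions, local to each cell.  Head-on pairs scatter 90 degrees.
        const collided = cells.map(c =>
            c === (N | S) ? (E | W) :
            c === (E | W) ? (N | S) : c);

        // Phase 2: streaming.  Each particle moves one cell in its direction,
        // wrapping around at the edges.
        const next = new Array(cells.length).fill(0);
        for (let y = 0; y < HEIGHT; y++) {
            for (let x = 0; x < WIDTH; x++) {
                const c = collided[y * WIDTH + x];
                if (c & N) next[((y - 1 + HEIGHT) % HEIGHT) * WIDTH + x] |= N;
                if (c & S) next[((y + 1) % HEIGHT) * WIDTH + x] |= S;
                if (c & E) next[y * WIDTH + ((x + 1) % WIDTH)] |= E;
                if (c & W) next[y * WIDTH + ((x - 1 + WIDTH) % WIDTH)] |= W;
            }
        }
        return next;
    }

The real Perfume rules add reflecting wall cells and run the particles along diagonals, but the collide-then-move structure is the same.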

CelLab History

(This is a summary based upon the detailed history in the “Origins” chapter of the User Guide.)

CelLab was originally developed in 1988 and 1989 by Rudy Rucker, then a professor of mathematics and computer science at San Jose State University in California, and John Walker, who had just escaped from management and returned to programming and new product development at Autodesk, Inc. Rudy had been experimenting with the CAM-6 hardware cellular automata simulator, and had developed MS-DOS-based text mode cellular automata programs. I first met Rudy at the Hackers’ Conference in October 1987, and was extremely impressed by his demos of the CAM-6 board and various cellular automata rules, but found the board’s price (US$1500, which is around twice that in present-day dollars) and the fact that it had to be programmed in the rather arcane Forth language off-putting. I began to wonder whether it might be possible to build a software-only emulator which would run fast enough to run interesting experiments with cellular automata. After an extended burst of some of the sneakiest assembly-language programming I have ever done in my life, I was able to build an emulator which could run (with sufficiently clever rule definitions) almost all the programs the CAM-6 board could at a rate better than one generation per second on an IBM PC/AT (a 6 MHz 80286 machine). But, by 1988, 80386 machines were becoming widespread, and these were up to ten times faster than the PC/AT, and the 80486 was on the horizon (introduced in 1989) which would run faster still. With these chips, my software simulator could begin to approach the speed of the dedicated CAM-6 hardware. (Note that all of this code was written using the basic 8086 instruction set, and did not exploit any of the features of the newer chips.)

In June 1989, Autodesk shipped what by then was called “Rudy Rucker’s Cellular Automata Laboratory” at a retail price of US$59.95. The program included Rudy’s MS-DOS text mode programs and my graphics mode simulator, and allowed users to develop new rules by writing programs in Pascal, C, or Basic. The program was featured in A. K. Dewdney’s column in Scientific American, and announced in a small advertisement in the same issue. Sales were modest, but repaid the development costs and were consistently profitable thereafter. Rudy and I, along with some enthusiastic users, developed additional rules which worked with the program, and these were distributed on CompuServe and bulletin board systems.

In 1994, Autodesk management decided to “concentrate on its core competency” and closed the Autodesk Research Lab which was responsible for the product, another title in the series already shipping, and two under development. Rights reverted to Rudy and me, and after shopping the product to several publishers without success, we decided to make it available for free on the Internet, which was just beginning to become a thing for the general computer-literate public. In 1995, the renamed CelLab was launched on the Web, and in 1997 a Microsoft Windows version of the simulator was released.

There things stood for twenty years. As is so often the fate of software developed for closed, proprietary platforms, the MS-DOS and 16-bit Windows programs developed for CelLab stopped working on “improved” versions of Microsoft operating systems—death by Microsoft. Updating them to work on current versions of Windows would amount to almost a complete re-write, and that would only be jumping back onto the treadmill of constantly revising them for each successive release from the masters of strategic incompatibility. Further, it would limit the audience for the programs to users of a legacy, technologically inferior platform which is continually losing market share to better alternatives. (If you’d like to visit CelLab Classic, go right ahead—we never delete anything—but you’re unlikely to find a machine which will run it.)

In 2017, I had converted a number of resources on my Fourmilab site to replace applets written in the Java language, which is rapidly being abandoned as a means to develop interactive Web pages, with the native facilities of HTML5 and JavaScript, which are standards supported by most modern Web browsers on both desktop and mobile platforms. By running entirely within the browser, there is no need for the user to download or install any software on their own machine, nor any worry about that software becoming incompatible with future operating systems the user might install. A Web-based application will run on any machine or operating system on which a standards-compliant browser is available, and that’s just about all of them.

The question was, would JavaScript and HTML5 canvas graphics be fast enough to handle the very demanding computation and display which CelLab required? When it was originally written in 1988 and 1989, CelLab required every assembly language trick and tweak in the twisted minds of clever programmers to squeeze out acceptable performance, but in the intervening years, typical personal computers, including mobile platforms, have become around five hundred times faster. That can make up for a lot of inefficiency in programming language implementations. A prototype was built, and the results were remarkable: without resorting to any off-the-deep-end tricks or techniques which might risk incompatibility, JavaScript and HTML5 could deliver stunning performance: up to 100 generations per second for most cellular automata rules on a modest laptop computer with no special graphics hardware.
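For the curious, here is a sketch of the standard HTML5 canvas technique involved: painting a 320×200 map of byte-valued cells through an ImageData buffer, using a 256-entry color palette. WebCA's own renderer differs in detail, and the canvas id and function names here are invented for illustration.

    // Paint a map of byte cells onto <canvas id="map" width="320" height="200">.
    const WIDTH = 320, HEIGHT = 200;
    const ctx = document.getElementById("map").getContext("2d");
    const image = ctx.createImageData(WIDTH, HEIGHT);

    function paint(cells, palette) {        // palette: 256 entries of [red, green, blue]
        for (let i = 0; i < WIDTH * HEIGHT; i++) {
            const [r, g, b] = palette[cells[i]];
            image.data[i * 4]     = r;
            image.data[i * 4 + 1] = g;
            image.data[i * 4 + 2] = b;
            image.data[i * 4 + 3] = 255;    // fully opaque
        }
        ctx.putImageData(image, 0, 0);
    }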

Several months later, WebCA is the result. It runs all of the rules which the original MS-DOS and Windows graphics mode cellular automata simulators ran, and many new rules have been added. It is compatible with compiled rule definitions, patterns, and color palette files used by the earlier programs. Besides running in the browser on a multitude of platforms, the other major innovation in WebCA is that rule definition programs and custom user evaluators can be written in JavaScript, supported directly inside the browser, and do not require users to install a separate development environment for a programming language such as C, Pascal, Basic, or Java. Now, to explore cellular automata on your own, you need nothing more than a rudimentary text editor (such as the “notepad” programs included with all operating systems) and your Web browser.

So now it’s finished, again, or at least at large once again, or something. Ideally, CelLab will never be done, not as long as folks continue to use it to explore the world of cellular automata and share their discoveries with other pioneers on this frontier of computing.

Ricochet readers, you’re the first to know. Happy hacking, and be sure to post your discoveries in the comments. This software is 28 years old. Can we keep this comment thread going as long?

There are 25 comments.

  1. Jules PA Member

    Love this post. Thanks for making it understandable.

    When viewing the video on annealing, I thought, it looks like gerrymandered congressional districts. Imagine my delight when later in the post I read this:

    This contrary behavior prevents the rule from quickly freezing up into a gerrymandered configuration and, instead, causes it to continue to build large regions of consensus which smooth out irregularities and consolidate, precisely as occurs with crystal structure when annealing metals. What if political elections worked this way? How would the electoral map differ? What would be the optimal strategies for contenders?

    • #1
    • June 3, 2017, at 11:21 AM PDT
    • 6 likes
  2. Jules PA Member

    ❤️ masters of strategic incompatibility ❤️

    • #2
    • June 3, 2017, at 11:25 AM PDT
    • 3 likes
  3. MeandurΦ Member

    Thanks for this John. I have been fascinated by Cellular Automata since I discovered them (in Pascal programming class in 1982).

    I look forward to playing with this fantastic toy you have produced.

    • #3
    • June 3, 2017, at 12:46 PM PDT
    • 1 like
  4. Matt Bartle Member

    When I was a kid I read about “Life” in Scientific American. I got a big sheet of paper, drew a grid on it, and used checkers to manually calculate generations! It was fascinating but slow. No home computers back then.

    It was great to finally see it run automatically when I did get a computer of my own.

    • #4
    • June 3, 2017, at 1:06 PM PDT
    • 5 likes
  5. Damocles Inactive

    I bought the original program after reading the A. K. Dewdney article… I can’t believe it’s been 28 years.

    I didn’t know that it moved to the web… thanks for doing that!

    • #5
    • June 3, 2017, at 1:19 PM PDT
    • 3 likes
  6. Henry Castaigne Member

    So in practical terms, once we get cellular automata figured out we will go way beyond Moore’s law? Moore’s law is the idea that computers double in power every two years. Tech people have been predicting that the rate of progress will be slowing down for some time but it hasn’t yet.

    If I understand cellular automata, there is a possibility that Moore’s law will accelerate rather than slow down?

    • #6
    • June 3, 2017, at 2:31 PM PDT
    • Like
  7. John Walker Contributor

    Henry Castaigne (View Comment):
    So in practical terms, once we get cellular automata figured out we will go way beyond Moore’s law? Moore’s law is the idea that computers double in power every two years. Tech people have been predicting that the rate of progress will be slowing down for some time but it hasn’t yet.

    If I understand cellular automata, there is a possibility that Moore’s law will accelerate rather than slow down?

    Moore’s law (strictly interpreted) speaks only of the number of transistors you can manufacture in an integrated circuit doubling around every two years. Since the cost of manufacturing an integrated circuit is more or less constant regardless of the number of transistors it incorporates, and the number of transistors is a good proxy for the computing power of the circuit, this leads to the common, but less rigorous, interpretation of Moore’s law that the compute power you can buy for constant money doubles about every two years.

    Cellular automata provide a way to use the transistors we can manufacture in a way which may be much more efficient than devoting them to serial execution of instructions. As I noted in the article, imagine if every cell in your RAM chips were a simple computer: it could compute millions or billions of times faster than accessing those cells one at a time the way conventional computers do. Interestingly, some of the algorithms used to accelerate computing on Graphics Processing Units (GPUs) have much in common with cellular automata.

    Working with cellular automata is very different than conventional programming. It isn’t so much a matter of figuring out a logical process and writing it down but rather doing botany: finding automata among the almost infinite variety which exist that do things that are useful and then adapting them to your needs. This is something largely foreign to engineering practice, and helps to explain why the community of cellular automata researchers remains so small, and so weird.

    • #7
    • June 3, 2017, at 2:49 PM PDT
    • 4 likes
  8. Henry Castaigne Member

    John Walker (View Comment):
    Working with cellular automata is very different than conventional programming. It isn’t so much a matter of figuring out a logical process and writing it down but rather doing botany: finding automata among the almost infinite variety which exist that do things that are useful and then adapting them to your needs. This is something largely foreign to engineering practice, and helps to explain why the community of cellular automata researchers remains so small, and so weird.

    Will cellular automata come about before Quantum chips?

    • #8
    • June 3, 2017, at 3:09 PM PDT
    • 1 like
  9. John Walker Contributor

    Henry Castaigne (View Comment):

    John Walker (View Comment):
    Working with cellular automata is very different than conventional programming. It isn’t so much a matter of figuring out a logical process and writing it down but rather doing botany: finding automata among the almost infinite variety which exist that do things that are useful and then adapting them to your needs. This is something largely foreign to engineering practice, and helps to explain why the community of cellular automata researchers remains so small, and so weird.

    Will cellular automata come about before Quantum chips?

    Nobody knows. I am an extreme sceptic when it comes to quantum computing. It can certainly be demonstrated (and has been) for very simple cases, but practical applications will require an isolation of the quantum computer from its environment (interactions with which will cause its pure state to decohere before the computation is complete) that will be extremely difficult to achieve, and will require conditions such as super-high vacuum and cryogenic temperatures which are hard to produce outside the laboratory. I have other reasons, based upon the Bekenstein bound, to believe there are theoretical limits on quantum computing which few researchers acknowledge, but these are far less restrictive than the practical constraints.

    Cellular automata, on the other hand, are simply a different way to build computers using the technologies with which we’re already familiar and adept. For that matter, you could use quantum computing to build a cellular automaton, and it may be easier than trying to use it to implement a serial algorithm because quantum mechanics is local, just like cellular automata.

    If you use an image processing program, you’re already using a number of cellular algorithms, just implemented on a serial computer and hidden from you. So, in that sense, we already have cellular automata.

    • #9
    • June 3, 2017, at 3:30 PM PDT
    • Like
  10. Damocles Inactive

    John Walker (View Comment):

    If you use an image processing program, you’re already using a number of cellular algorithms, just implemented on a serial computer and hidden from you. So, in that sense, we already have cellular automata.

    I’ve developed a method of quantum image rendering. It renders all potential images and then collapses them into the image you want when you look at it.

    The implementation is proving a bit trickier than I thought, though.

    • #10
    • June 3, 2017, at 3:47 PM PDT
    • 3 likes
  11. Mark Camp Member

    Damocles (View Comment):

    John Walker (View Comment):

    If you use an image processing program, you’re already using a number of cellular algorithms, just implemented on a serial computer and hidden from you. So, in that sense, we already have cellular automata.

    I’ve developed a method of quantum image rendering. It renders all potential images and then collapses them into the image you want when you look at it.

    The implementation is proving a bit trickier than I thought, though.

    Don’t get discouraged, Damocles. The correct rendering of the image exists in parallel with all of the erroneous ones. Your implementation just SEEMS tricky to you. As soon as you cease to observe the trickiness, it will cease to exist.

    • #11
    • June 3, 2017, at 7:41 PM PDT
    • 6 likes
  12. Mark Camp Member

    Great article!

    • #12
    • June 3, 2017, at 7:44 PM PDT
    • Like
  13. Del Mar Dave Member

    As a non-mathematician and non-physicist, I read the entire post and ran two of the videos on one of my computers. Even more than standing on a mountaintop high in the Rockies, trying to scratch out an off-the-bottom-of-the-foodchain understanding of the vastness of what I was looking at expanded the boundaries of what I might contemplate. Even in my solo practice of crafting the best outcomes for my clients.

    Thank you, John, for the long-lasting tsunami of ideas that is sure to grow from my experience here.

    • #13
    • June 3, 2017, at 10:49 PM PDT
    • Like
  14. Henry Castaigne Member

    Cellular automata, on the other hand, are simply a different way to build computers using the technologies with which we’re already familiar and adept. For that matter, you could use quantum computing to build a cellular automaton, and it may be easier than trying to use it to implement a serial algorithm because quantum mechanics is local, just like cellular automata.

    What would life be like if you had nearly infinite computing capacity?

    • #14
    • June 3, 2017, at 11:30 PM PDT
    • Like
  15. jauchter Member

    Thanks for another wonderful article. Like a lot of nerd kids, I got my intro to CA through Conway’s Game Of Life. I judged each new computer system or generation of graphics tech by how well it made my home-coded version work.

    I never really got into optimizing it. I figured it was technology’s job to make me more efficient.

    Now, on the rare occasions when I delve into the assembly output of a modern optimizing compiler, I’m glad I didn’t specialize in writing super efficient assembly code. My career would have been creatively-destroyed a long time ago.

    • #15
    • June 4, 2017, at 11:09 AM PDT
    • 1 like
  16. Judge Mental, Secret Chimp Member

    jauchter (View Comment):
    Thanks for another wonderful article. Like a lot of nerd kids, I got my intro to CA through Conway’s Game Of Life. I judged each new computer system or generation of graphics tech by how well it made my home-coded version work.

    I never really got into optimizing it. I figured it was technology’s job to make me more efficient.

    Now, on the rare occasions when I delve into the assembly output of a modern optimizing compiler, I’m glad I didn’t specialize in writing super efficient assembly code. My career would have been creatively-destroyed a long time ago.

    The assembly output actually makes it much easier to deal with embedding assembler. The MS C++ compiler allows that sort of embedding, and if you hard code in an easy example of what you want to do, you can use the assembly output as a base, and then replace hard coded literals with variables. I used that trick several times.

    • #16
    • June 4, 2017, at 11:17 AM PDT
    • 1 like
  17. John Walker Contributor

    jauchter (View Comment):
    Now, on the rare occasions when I delve into the assembly output of a modern optimizing compiler, I’m glad I didn’t specialize in writing super efficient assembly code. My career would have been creatively-destroyed a long time ago.

    I have precisely the same reaction. I love the way you’ll see an optimising compiler use something in a register and you think, “where did he load that?” and discover it’s been sitting there for a couple of hundred instructions since it was computed as an intermediate result from something much earlier in the code. Compilers are just much better at that kind of bookkeeping than human programmers.

    As an old-timer, I often find myself hand-optimising common subexpressions, lifting code out of loops, and doing explicit progressive indexing of arrays but, invariably, if I just write it out in the most obvious (and easiest-to-understand) way, the compiler does just as good a job of optimising the code as I would have by hand.

    Here, for the amusement of any 80×86 assembly language geeks reading this thread, is the inner loop of the original 1989 MS-DOS CelLab. This is one of the two sneakiest and trickiest pieces of code I have ever written (the other was the low-level scheduler and interrupt handler for a Univac 1108 operating system).

    Inner loop of MS-DOS CelLab evaluator, 1989

    A few remarks about this code. Yes, I use SS, the stack segment register, to point to data: the old cell map. The cell map is 65,044 bytes long (322×202, including toroidal wrap-around cells), so I have enough room after it within the 64K segment for a stack large enough to handle any interrupts which may occur while the code is running. This lets me use BP as a data pointer register. Since addressing through BP implies the stack segment, I don’t need a segment override on the four references through BP in the inner loop. The states of the neighbours are assembled by counting on the fact that the state map is rotated one bit to the right, so the low-order bit, which is the one visible to the evaluator, can be shifted into the AH register, where the neighbour bits are accumulated, with a single byte ROL by 1 instruction. We end up with the neighbour bits in BX in the wrong order, so BX is used to index a look-up table, positioned at the start of the code segment and addressed with a CS: segment override, to look up a value with the bits in the right order. Finally, the entire inner loop is expanded in-line by a rept/endm loop for every cell in the row to avoid the need to decrement a loop counter and perform a conditional jump back to the top. (The rept is right before the comment at the top—I forgot to include it when I clipped the code to show here.)

    Here is the original comment describing this code.

    BEWARE!!! Objects in memory are subtler than they appear. Many instructions in this routine perform several independent functions and apparently odd coding practices take advantage of very fine points of the machine architecture, such as maximising the overlap of the prefetch buffer and shortening instructions by using variants to reduce memory references in the inner loop. The code between the labels NBYTE and OGDONE is executed 64,000 times per generation of the automaton, and each memory reference costs about 30 milliseconds per generation. Do not attempt to tweak this code unless you first understand how it subtly interacts with the construction of the lookup table and are armed with the CPU hardware manuals and prepared to count clock cycles. Note that some of the tradeoffs in this routine depend on matters such as the fast cache size on machines so equipped. The inline expansion of the main loop, which prevents flushing of the on-chip instruction cracker, should not exceed the machine’s fast cache size. Expansion beyond 32 inline copies yields negligible speed increases. This code is among the trickiest code I have ever written; modify at your own risk.

    In addition, please note that one of the nefarious tricks used herein is addressing the old state table by pointing the stack segment at it, then using BP to implicitly force use of the SS register without a segment override prefix. To allow the code to run without locking out interrupts, this routine assumes that it will be passed a full 65536 byte segment for the state table, of which 65044 bytes are used by the actual state vector. The remaining 492 bytes are used as the runtime stack to service interrupts. The instructions that swap the stack segment and pointer, identified by “VVV” and “^^^”, must remain together. Inserting an instruction between them will result in random machine crashes.

    The lookup table argument is also non-straightforward. To contain a full 65536 states, the lookup table must start at a zero offset address within the segment passed as the LUTAB argument. The C code that calls this routine contains compensating skullduggery to obtain a table with the required addressing.

    • #17
    • June 4, 2017, at 12:19 PM PDT
    • 4 likes
  18. John Walker Contributor

    I have just posted two new CelLab rules. Click on the name to read a description of the rule in the manual, complete with a link to run it on your computer in WebCA.

    • Forest: Simulates the growth of trees in a forest and the propagation of forest fires set by lightning. You can see how infrequent fires lead to a buildup of fuel and an increased risk of cataclysmic conflagrations. Demonstrates the power law scaling of fires: lots of small ones and a few very large ones.
    • Bootperc: Illustrates the statistical mechanics process of bootstrap percolation and how percolation models have a critical density below which percolation almost never occurs and above which it almost always happens. Percolation theory explains why it is almost impossible to win the old Windows Minesweeper game when the density of mines is set above the critical value.
    • #18
    • June 12, 2017, at 1:09 PM PDT
    • 1 like
  19. Tim H. Member

    This is excellent, John! I learned of cellular automata from Conway’s Game of Life, and I’ve read just a little more about them through Stephen Wolfram, though not in depth. A few years ago on a whim, I programmed a version of Game of Life to run in IRAF, an image processing environment for astrophysics, using a telescope image as the starting point. It was clunky and unsatisfying, but I discovered Python very soon afterwards, which is now very popular for astronomy programming, and perhaps I’ll update my implementation of Life.

    I’m eager to find an application for CA in my field, but so much of what we do in extragalactic astrophysics (say, merging and formation of galaxies and quasars) is based on gravity, which is a long-range force, rather than a nearest-neighbors’ effect. Well, you could model gravitational wells with it and have dark matter and visible matter move around in the wells to simulate galaxy formation. But I wonder if there could be applications to image processing or classification of galaxy types using CA, which would be of use to me.

    For my introductory physics class and my thermodynamics course, I’m sure I’ll use the fluid dynamics code for demonstrations or “labs.” That is fantastic. I’ve been playing with it and really am impressed with how smoothly it runs.

    • #19
    • June 13, 2017, at 5:06 AM PDT
    • 2 likes
  20. Owen Findy Member

    Yikes! I almost had a seizure looking at CelLab’s crazy page header…

    • #20
    • June 16, 2017, at 12:31 PM PDT
    • 3 likes
  21. Owen Findy Member

    Owen Findy (View Comment):
    Yikes! I almost had a seizure looking at CelLab’s crazy page header…

    Ahhh… A stop button…

    • #21
    • June 16, 2017, at 12:32 PM PDT
    • 3 likes
  22. John Walker Contributor

    Here is another new rule for CelLab. Click on the rule name for a description in the user guide, along with a link you can click to run the rule in the Web simulator.

    Langton’s Ant

    Langton’s Ant was invented by Christopher Langton in 1986. It is a two-dimensional Turing machine with a head (ant) that moves on a map of cells which can be in one of two states. In each generation, the head moves to an adjacent cell, inverting the state of the cell it departs. The head can move in one of the four directions in the von Neumann neighborhood; the direction it moves is set by the current state of the head. Upon moving to a new cell, the head adjusts its direction by turning clockwise if the cell’s state is zero and counterclockwise if it is one.

    When started with an all-zero map, the head starts by tracing out a lacy pattern exhibiting symmetries, but then, as the pattern grows, appears to be following a random walk, occasionally adding to the borders of the pattern. After around 10,000 generations, however, the head will begin to create a “highway” which extends in a diagonal direction in a cycle of 104 generations. This is an example of spontaneous emergence of order after a long period of apparently chaotic behavior. If run on an infinite map, the highway would extend without bound, but with a wrap-around map, it will eventually collide with the original random pattern, producing interesting interactions. All starting configurations which have been tested eventually produce a highway, but it has not been proved that every possible configuration does so. It has, however, been proved that the pattern always grows without bound in some manner. Try starting the rule on the square pattern and watch how it evolves into a lattice of ordered highways and burl-like intersections.
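    Since the ant’s behavior is completely specified above, here is a minimal stand-alone JavaScript sketch of it, using the standard formulation (examine the current cell, turn, flip it, move forward), which produces the same trajectory. The CelLab rule necessarily encodes the ant’s position and heading in the cell states themselves; this sketch simply tracks the ant with ordinary variables.

        // Langton's Ant on a 320x200 wrap-around map of 0/1 cells.
        const W = 320, H = 200;
        const grid = new Uint8Array(W * H);
        let x = W >> 1, y = H >> 1, dir = 0;            // dir: 0=N, 1=E, 2=S, 3=W
        const dx = [0, 1, 0, -1], dy = [-1, 0, 1, 0];

        function step() {
            const i = y * W + x;
            dir = (dir + (grid[i] === 0 ? 1 : 3)) % 4;  // turn right on 0, left on 1
            grid[i] ^= 1;                               // invert the cell being departed
            x = (x + dx[dir] + W) % W;                  // move one cell forward,
            y = (y + dy[dir] + H) % H;                  // wrapping at the edges
        }

        // After roughly 10,000 steps the ant settles into its 104-step highway cycle.
        for (let n = 0; n < 11000; n++) step();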

    Here is background on Langton’s Ant from Wikipedia and Wolfram MathWorld.

    • #22
    • June 21, 2017, at 4:06 AM PDT
    • 3 likes
  23. John Walker Contributor

    Byl’s self-reproducing cellular automaton

    Another day, another two new cellular automata rules: both self-reproducing automata which are successive simplifications of Langton’s machine, which is, itself, a simplification of von Neumann’s 1952 design.

    Ever since John von Neumann posed the question in 1949 whether it would be possible to design a machine which could reproduce itself, then proceeded to find a solution in 1952, expressed as a cellular automaton which used 29 states per cell, an ongoing challenge has been to find simpler automata, both in terms of states per cell and the number of cells in the initial structure, which are capable of reproduction. (See the discussion of von Neumann’s automaton in the Origins of CelLab chapter for additional details.)

    In 1984, Christopher Langton described a self-reproducing automaton [Langton84] that used just 8 states per cell and an 86 cell initial pattern which replicates itself every 151 generations; see the Langton rule for an implementation. This was further simplified in 1989 by John Byl [Byl89], whose design requires 12 cells in 6 states and reproduces every 25 generations; this is implemented in our Byl rule. A further simplification was discovered by H.-H. Chou, J. A. Reggia, et al. [Reggia et al.93] in 1993, and implemented as the ChouReg rule. Its initial pattern is only 5 cells, with the replicator using 8 states and reproducing itself every 15 generations.

    For the purpose of these rules, reproduction is defined as “self-directed replication”: reproducing the initial structure as directed by information contained within the structure itself. This distinguishes such rules from the kind of blind “reproduction” of rules which simply copy cells without bound and is analogous to biological reproduction: an organism replicates itself by using the instructions in its genetic code to assemble a copy of itself, including the genome.

    • #23
    • June 22, 2017, at 6:04 AM PDT
    • 2 likes
  24. John Walker Contributor

    Reproduction, Evolution, Abiogenesis, and Sex

    Earlier Cellular Automata Laboratory rules have addressed self-reproduction, but this was always precise replication, not the messy recombination of genetic material of actual biological reproduction. Can a cellular automaton simulate biology and even address the origin of life and the reason sexual reproduction has become the norm in terrestrial biology, even though it is less energy efficient and haphazard (consider the time you’ve spent in singles’ bars!)? ¡Si, se puede!

    Evoloops

    Evoloops were invented by Hiroki Sayama [Sayama98] as a generalization of Langton’s self-reproducing automaton (implemented in CelLab as the Langton rule) incorporating death and evolution. When Langton’s rule is run in a finite space, such as our wrap-around world, the reproducing structures eventually collide with one another, breaking the reproduction process. When Evoloops collide, they create a state which cannot occur when the structures are reproducing in open space, and this state is used to trigger dissolution or death of the colliding structure, analogous to death of an organism due to exhaustion of resources (in this case, room to expand). Collision of structures may also result in mutation, as new valid structures (which may or may not be capable of reproduction) are created.

    When started from a single self-reproducing structure (a generalization of Langton’s automaton), it replicates until, due to wrap-around, copies begin to collide with one another. Then things get interesting. Some structures become invalid and die, others may be static structures or oscillators which do not reproduce, while yet others, different from the original, may be able to reproduce themselves. Now competition and selection kick in. A smaller self-reproducing structure replicates more quickly, so in a given time (measured in terms of generations of the cellular automaton), it will produce more offspring. Its larger ancestors not only are slower to reproduce, but are larger “targets” for collisions which may disrupt them. As the simulation runs (and you’ll want to let it run for quite a while—interesting things are still happening 50,000 generations after the start), you’ll generally see large progenitors be displaced by smaller descendants which, in turn, are supplanted by even smaller and faster-replicating successors, until the smallest possible viable replicator comes to dominate the world. You’ll also observe structures which do not reproduce or attempt to reproduce but create non-viable offspring: these are eventually replaced by the successful replicators.

    Precisely the same phenomenon is observed with bacteria. When in competition for finite resources, the fastest-reproducing organism will usually prevail, simply by outnumbering and starving the others. An extreme example of this is Spiegelman’s Monster where, under extreme laboratory conditions, an organism which originally had a genome of 4,500 bases selected itself down to just 218 bases. There is no randomness in the Evoloop simulation—it is entirely deterministic. From the same start, you will always get the same result.

    Another related experiment, EvoloopsAB, demonstrates the process of abiogenesis, or the origin of life from non-living precursors. Here, we define life as a structure capable of self-reproduction. The EvoloopsAB rule is identical to Evoloops, except that it starts with an empty (all zero) map and contains three sites which “seed” the map with a random “DNA” sequence generated by a stochastic state machine. These sequences grow from the seeding sites and turn and interact based upon their random sequences, but are not capable of reproduction. But eventually, as they and the structures they spawn collide and interact, a replicator will be “discovered” which can continue to reproduce independently of the seeds. You’ll often see replicators appear, make one or a few copies, and then be wiped out by collision with a growing seed or some other structures spawned by the seeds. But eventually (since the seeds are random, the results will differ on every run), one or more replicators will become established and come to dominate the map. As before, smaller, faster replicators out-compete their larger cousins and, even if they aren’t the first to appear, will usually become the most common.

    Complete details of the definition of Evoloops and an analysis of their behavior are given in Hiroki Sayama’s Ph.D. thesis [PDF].

    Sexyloop

    Sexyloop [Oros&Nehaniv07] is an extension of Evoloops by Nicholas Oros which adds the ability of replicators to exchange genetic information. F-sexyloop [Oros&Nehaniv09] builds upon the mechanism for genetic transfer of the simpler M1 and M2 variants of Sexyloop by adding a “sex gene”, which must be present in order for the sending organism to transfer its genome to the receiver. This gene is represented by a cell in state 9, and can be transferred to the receiver along with other genetic material. The sex gene thus behaves like the F factor plasmid in bacteria which facilitates the transfer of genetic material during conjugation. If a connection is made by an organism which lacks the F gene, the connection will persist until one or both organisms die, but no gene transfer will occur.

    The experiment starts with a single replicator containing a sex gene. As the replicators collide and interact, their genomes recombine. Organisms which end up without a sex gene cannot transfer their genome to others. As with Evoloops, smaller and faster-replicating mutants usually come to dominate the map. You will see many odd things: sterile organisms mating incessantly without issue, networks of mating organisms destroyed by faster-propagating competitors, and mutants with curious behavior, such as extending long tentacles across the map. Over time, the population will become dominated by organisms which carry the sex gene. Our standard 320×200 cell map is somewhat small to demonstrate the rich behavior of this rule. If you have a fast computer and high-resolution screen, run the rule in the double-wide simulator to observe its evolution in a world with four times as many cells.

    The requirement for a sex gene to be present seems to constrain the tendency to rapidly select for the smallest and quickest-reproducing organisms—the very smallest replicating patterns are too small to contain a sex gene and hence cannot infect others with their genome. Compare the SexyloopM2 variant, which performs gene transfer without the need for a sex gene: it converges much more rapidly on a population of the smallest and fastest replicators.

    This is a 12 state rule which, like Evoloops, uses the vonn4 user evaluator to examine four bits of state of the cell and its four neighbors. The rule is completely deterministic—there is no random input: from a given start, the result will always be the same.

    Run Evoloops evolution experiment
    Run Evoloops abiogenesis experiment
    Run Sexyloop simulation

    • #24
    • June 24, 2017, at 5:16 PM PDT
    • 2 likes
  25. John Walker Contributor

    Sandpile simulation in Cellular Automata Laboratory

    A new rule implements the Bak-Tang-Wiesenfeld sandpile model [BakTang&Wiesenfeld87]. In each generation, a single grain of sand falls on the cell at the center of the map. When the pile of sand in any cell reaches a height of four grains, it becomes unstable and topples, with the four grains it contains distributed to its four von Neumann neighbors. If this process results in one or more of the neighbors containing four grains, they in turn topple and the process continues until no cell contains four or more grains. This was the first model discovered which exhibits the property of self-organized criticality. The system exhibits avalanches whose size follows a power law: many small, local events, and a few rare large ones.
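    The toppling logic is simple enough to sketch in a few lines of stand-alone JavaScript. This is not the sand user evaluator itself; the array layout and function name are invented for illustration, but the avalanche behavior is the same.

        // Bak-Tang-Wiesenfeld sandpile: grains falling off the edges are lost.
        const W = 320, H = 200;
        const pile = new Int32Array(W * H);     // grains per cell; 0..3 between drops

        function dropGrain(x, y) {
            pile[y * W + x]++;
            const unstable = [[x, y]];
            while (unstable.length > 0) {
                const [cx, cy] = unstable.pop();
                const i = cy * W + cx;
                while (pile[i] >= 4) {          // topple: give one grain to each
                    pile[i] -= 4;               // of the four von Neumann neighbors
                    for (const [nx, ny] of [[cx - 1, cy], [cx + 1, cy], [cx, cy - 1], [cx, cy + 1]]) {
                        if (nx >= 0 && nx < W && ny >= 0 && ny < H) {
                            if (++pile[ny * W + nx] === 4) unstable.push([nx, ny]);
                        }
                    }
                }
            }
        }

        // One grain per generation, dropped at the center of the map.
        for (let g = 0; g < 100000; g++) dropGrain(W >> 1, H >> 1);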

    Color is used to represent the number of grains in each cell: grey for none, blue for 1, yellow for 2, and red for 3. Since a cell with four grains immediately topples, no cell can contain more than three grains. As the pile grows, you will see how the addition of a single grain can cause cascades of all sizes. While you might expect a smoothly growing structure, in fact the depth of the sand in the pile exhibits a complex fractal pattern that emerges as the pile grows. The edges of the map consume any grains which reach them: they limit the growth of the pile.

    If you’re patient and have a high-resolution screen, try running Sand in the double-wide simulator—it will produce intricate mandala patterns. The Sand rule is entirely implemented within the sand user evaluator. This is an interesting mathematical model which has proved useful in analyzing emergent processes in a variety of fields. It does not, however, accurately model the behavior of actual piles of sand.

    Run the Sandpile simulation
    Run the Sandpile simulation (double-wide)

    • #25
    • June 26, 2017, at 5:54 AM PDT
    • 4 likes
