Even more explorations with Multiple Neighborhoods Cellular Automata

History

If you are not aware of what Multiple Neighborhoods Cellular Automata (aka MNCA) are, you can refer to this post and this post for some history.

Multiple Neighborhoods Cellular Automata were created by Slackermanz (see his Reddit history or his GitHub repository).

The basic principle of these cellular automata is to use multiple neighborhoods (usually circular and/or toroidal shapes) of different sizes to determine the next state of each cell in the grid. Using these more complicated neighborhoods has led to fascinating examples of cellular automata well beyond the simpler versions I usually see or experiment with.
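To make that concrete, here is a minimal Python sketch of the idea (not Slackermanz’s actual shader code; the neighborhood radii, thresholds and delta values below are made-up placeholders). Each rule sums a ring-shaped neighborhood and nudges the cell value up or down when the neighborhood average falls inside a given interval:

```python
import math

def neighborhood_offsets(r_inner, r_outer):
    """All integer offsets whose distance from the origin lies in [r_inner, r_outer]."""
    offs = []
    for dy in range(-r_outer, r_outer + 1):
        for dx in range(-r_outer, r_outer + 1):
            d = math.sqrt(dx * dx + dy * dy)
            if r_inner <= d <= r_outer and (dx, dy) != (0, 0):
                offs.append((dx, dy))
    return offs

def step(grid, rules):
    """One MNCA update on a toroidal grid of floats in [0, 1].
    rules = list of (offsets, low, high, delta): if the neighborhood
    average falls in [low, high], delta is added to the cell."""
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            for offs, low, high, delta in rules:
                avg = sum(grid[(y + dy) % h][(x + dx) % w]
                          for dx, dy in offs) / len(offs)
                if low <= avg <= high:
                    new[y][x] = min(1.0, max(0.0, new[y][x] + delta))
    return new
```

Real MNCA shaders use several much larger rings with hand-tuned intervals; this just shows the shape of the update loop.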

New Discoveries

Two years have passed since those first two blog posts. I saw Slackermanz was still generating new versions of MNCA. He shared a bunch (over 11,000) of the shaders he has created as he continues to experiment with MNCA. It only took a little coding to write a converter that massaged his shaders into a format Visions of Chaos supports. I spent a few days going through these examples and whittled them down to 162 of the “best of the best” MNCA shaders in my opinion. Here is a sample movie showing some of the newer MNCA results.

The shaders that were used for creating the above movie are included with Visions of Chaos under the “Mode->OpenGL Shading Language->Shader Editor” mode as the GLSL shaders starting with “Cellular automaton”.

Multi-scale Multiple Neighborhoods Cellular Automata

During some of his recent developments Slackermanz started to get results that look similar to Multi-scale Turing Patterns (MSTP). I find these results more interesting, with much finer structures that evolve and change more than MSTP does. MSTP tends to reach a relatively stable state after a while: the small structures stabilize and only the larger shapes pulsate. Compare MSTP to the following example of multi-scale multiple neighborhood cellular automata (MSMNCA?)

The first 3 minutes and 20 seconds are Slackermanz’s original multi-scale shaders. The next 3 minutes and 20 seconds are those same shaders “zoomed in” by multiplying the neighborhood sizes by 4. The last minute shows examples of the very latest experiments using the multi-scale principles.

The shaders that were used for creating the above movie are included with Visions of Chaos under the “Mode->OpenGL Shading Language->Shader Editor” mode as the GLSL shaders starting with “Cellular automaton”.

To see the shader code that generates the multi-scale image thumbnail for the video, click here.

New Shaders Part 1

After that bunch of shaders the ever prolific Slackermanz shared another set of his new shaders with me.

To see an example shader used in the above movie click here. The only difference between the movie parts is the 32 e0 array parameters in the shader at line 141. Otherwise the shader code remains the same for all movie parts.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 2

Click here to see the shader that makes the first part of the above movie. All other parts use the same shader, only altering the 32 float values of the ubvn array at line 156.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 3

I did say Slackermanz was prolific. Here is another set of samples from his latest work.

Click here to see the shader that makes the first part of the above movie. All other parts use the same shader, only altering the 32 float values of the ubvn array at line 138.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 4

Slackermanz is not a slacker man and shared another bunch of new shaders. Here is another sample movie showing some of the latest MNCA results.

Click here to see the shader code that makes these results. The only part of the shader code that changes between the examples is the 32 float values of the ubvn array at line 136.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 5

This next bunch of Slackermanz shaders includes a bit of color shading that helps bring out more of the structures within the blobs and other shapes.

See here to see the shader code that makes these results. Note that this shader code has more commenting than the above shaders so if the earlier ones didn’t make any sense this one may help. The only part of the shader code that changes between the movie examples is the 32 float values of the ubvn array at line 107.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 6

These MNCA shaders continue to be impressively intriguing with new and unique features in each new version.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117. Yes, 52 parameters in these newer shaders compared to the 32 parameters of the above examples.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 7

Another new MNCA shader from Slackermanz.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 8

New MNCA variations.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 9

More absorbing and intriguing (thanks Thesaurus.com) examples.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 10

More compelling and appealing (thanks Thesaurus.com) examples.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 11

More beautiful and astonishing (thanks Thesaurus.com) examples.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 12

The parts in the following movie came from a few different shaders so no specific code this time. If you are curious you can see the shader code within Visions of Chaos when you open the preset/sample MCA files that are listed in the description of the video.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 13

The final sample movie for now. The parts in the following movie came from a few different shaders so no specific code this time. If you are curious you can see the shader code within Visions of Chaos when you open the preset/sample MCA files that are listed in the description of the video.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

Variations of MNCA in Visions of Chaos

The above movies show only a tiny subset of all the MNCA examples Slackermanz has experimented with. There are thousands more variations of MNCA included with Visions of Chaos to explore.

Enough With The MNCA Movies Already!

Yes, there were a lot of Multiple Neighborhoods Cellular Automata movies in this post that I have been uploading to my YouTube channel lately.

Each of the movies, in order of uploading, shows the steps and evolution that Slackermanz went through while creating these cellular automata. Each movie is a selection of samples from one of his MNCA shaders (code links after each movie). They are all unique, or at least I have tried to pick the best unique results from each batch to create the sample movies that show off what each shader can do.

These discoveries deserve to be seen by more people, especially people interested in cellular automata.

The Same As Other CA Types?

Yes, you may see structures that look like Stephan Rafler’s SmoothLife, Kellie Evans’ Larger Than Life and/or Bert Chan’s Lenia (see also here), but the techniques and components found in the construction of MNCA are unique and were developed outside academia, separately from the papers linked above.

Jason.

3D Rule Table Cellular Automata

Origins

This cellular automaton comes from Matthew Guay who describes it in this post on r/cellular_automata on Reddit. There was no name given for these CAs so I have called them “Rule Table Cellular Automata” for my implementation in Visions of Chaos.

The usual behavior for a cellular automaton with more than 2 states is that a living (state 1) cell that dies does not become a dead (state 0) cell immediately. Instead the cell goes through a refractory dying period. In a 5 state automaton such as this one, a dying state 1 cell would pass through states 2, 3 and 4 on successive steps before finally disappearing and leaving a state 0 empty cell. Using the rule table described below this does not always happen. A cell in a refractory period can suddenly turn back into an alive cell or any other state. This opens up a much wider variety of possible results.

For a 2D example of a rule table based CA I have experimented with in the past see the Indexed Totalistic Cellular Automata.

The Rule Table Explained

This CA uses a rule table to determine how the cells update. It is a 5 state CA with rules like the following

3D Rule Table Cellular Automata

The current cell state (5 states, so cell values are between 0 and 4) is shown down the left hand side. The number of state 1 neighbor cells the current cell has determines the new state value.

For example, for that rule table, if a cell has a state of 3 and has 5 neighbor cells that are state 1, then the next state for the cell will be 2. Take the 4th row down for state 3, then go across until you find count 5, then use that column value for the new cell state.

Matthew also uses C and A characters.

“C” means “all states not already shown in this row”. For the state 0 row only count 4 is specified, so the C in the count 0 column means any cell that does not have 4 neighbor cells becomes state 0.

“A” means all possible count values. So with the above table, any cell that is in state 4 becomes state 0.

To make it easier, compare the above rule table to the state lookup arrays below. This is basically what I construct internally for the CA to use as it runs. After playing with the rules for a while, having the GUI be a grid like this would be much easier to use and to understand at a glance. Maybe using C and A to reduce the complexity of displaying the rules actually makes them harder to read? Anyway, I show the rule table as above to match the original.

State 0 [0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
State 1 [2,2,2,2,2,2,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2]
State 2 [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3]
State 3 [4,4,4,2,2,2,2,1,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4]
State 4 [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]

That shows the 27 possible results (0 to 26 neighbors) for each cell state. Count a cell’s 26 neighbors and use that count as an index to the new state.
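Those lookup arrays translate directly into code. A quick sketch using the table values listed above (the helper name is mine):

```python
# New-state lookup per current state, indexed by the count of
# state 1 neighbors (0..26). Rows match the arrays listed above.
RULE = [
    [0, 0, 0, 0, 1] + [0] * 22,           # state 0: only count 4 gives state 1
    [2] * 6 + [1] * 5 + [2] * 16,         # state 1: counts 6-10 stay alive
    [3] * 27,                             # state 2: always fades to 3
    [4, 4, 4, 2, 2, 2, 2, 1, 1] + [4] * 18,  # state 3: some counts revive
    [0] * 27,                             # state 4: always becomes empty
]

def next_state(current_state, state1_neighbor_count):
    """Look up a cell's next state from its state and neighbor count."""
    return RULE[current_state][state1_neighbor_count]
```

For example, `next_state(3, 5)` returns 2, matching the worked example in the text above.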

Results

As with all new CA rules, the first challenge is finding some good examples within the usual vast search space. Seeing as I have not found a way to detect interesting rules automatically, looking for new rules comes down to repeatedly trying random rules and hoping for a visually pleasing result. After experimenting with so many CA types over the years I am used to this random process by now. Put a movie on to watch as I repeatedly try random setups. Save the ones that show potential and then manually tweak them to see if anything really interesting happens.

Here is a compilation of interesting rules I found so far.

Availability

If you want to experiment with these CAs yourself they are now included with Visions of Chaos.

Jason.

5D Cellular Automata

After 4D cellular automata the next logical step was to add another dimension and see what 5D cellular automata can do.

If you are familiar with lower dimensional CAs then 5D just adds an additional value to the cell arrays. 3D uses [X,Y,Z], 4D uses [X,Y,Z,W], and 5D uses [X,Y,Z,W,V]. 5D extends the number of immediate Moore neighbors of each cell to 242 (3^5-1) (4D has 80, 3D has 26).
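The neighbor counts are easy to sanity check in a few lines of Python by enumerating all non-zero offsets:

```python
from itertools import product

def moore_neighbor_offsets(dim):
    """All offsets with each component in {-1, 0, 1}, excluding the origin."""
    return [o for o in product((-1, 0, 1), repeat=dim) if any(o)]

# 3D gives 26, 4D gives 80, 5D gives 242 -- matching 3**dim - 1.
counts = {d: len(moore_neighbor_offsets(d)) for d in (3, 4, 5)}
```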

The settings dialog gets even more checkbox chock-a-block as follows.

5D Cellular Automata Settings Dialog

Looping through the additional dimension makes these 5D CAs much slower to calculate than 4D and lower dimensional CAs. I was able to go up to 50x50x50x50x50 sized arrays, but beyond that was too slow for my patience.

Coloring these CAs uses the same methods as the 4D cellular automata. The only change is to the “4D density” display method. Rather than using the density of the 4th dimension to color a cell, the 5D version uses the density of both the 4th and 5th array dimensions.

I have not found any really interesting 5D CA rules yet. Because they are so much slower and the search space is so vast, trying random rules repeatedly needs a real fluke to find an interesting result. For now here is a simple example starting from a single active cell. Click the image to watch a short animated GIF.

5D Cellular Automaton

5D CAs are now available in Visions of Chaos. If you do happen to find any interesting 5D CA rules, let me know. I asked the same for 3D and 4D and got no responses, but who knows, maybe you reading this now will be the one to find a bunch of new and interesting rules for higher dimension cellular automata. Stay tuned for a YouTube sample movie once I get enough interesting 5D rules.

Jason.

Hexagonal Cellular Automata

Cellular automata using hexagons have been a requested feature in Visions of Chaos for some time now.

If you are ever in need of programming hexagonal based code, I highly recommend this excellent page that seems to cover every possible question you would ever have about hexagons.

2D Hexagonal CAs

Hexagonal CAs work very similarly to square grid CAs. The only difference is that each cell has 6 possible neighbors rather than 8.
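For anyone implementing this, here is a sketch of the six neighbor offsets using the axial coordinate system described on the hexagon page linked above (the helper name is mine):

```python
# Axial-coordinate neighbor offsets for a hexagonal grid.
# Each hex at (q, r) has exactly six adjacent hexes.
HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbors(q, r):
    """Return the axial coordinates of the six cells adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS]
```

With the neighbor list in hand, the survival/birth/states logic is identical to the square grid case, just counting 6 cells instead of 8.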

Here is a selection of a few 2D Hexagonal cellular automata. Rules shown before each part are in the survival/birth/states format.

2D Hexagonal CA was my top performing Twitter post so far. It even got a like from Dan Shiffman, which is a real honor as Dan has been an inspiration to me for years now.

3D Hexagonal CAs

This is the one that got the most requests. Many people seemed to think that moving from cube grids to hexagonal grids would give better, more interesting results, so it was finally time to experiment with 3D hexagonal grids.

3D Hexagonal Cellular Automaton

I adopted the usual Von Neumann and Moore neighborhoods. For 3D hexagonal CA Von Neumann means 8 neighbors per cell (the neighbor cells sharing a face with the current cell, 6 in the XZ plane and 1 above and 1 below). For Moore it is 20 total neighbors (for all face sharing and “edge sharing” neighbor cells).

Rendering

As with other cellular automata in Visions of Chaos, by default I render the CA cells using software OpenGL. This is fast, but has no real support for shadows or ambient occlusion, so the resulting images are more difficult to read, like the following.

3D Hexagonal Cellular Automaton

For better rendering results I have been using the Mitsuba Renderer for years now. Mitsuba has no native support for hexagonal prisms though. First I tried creating a ply file for a hexagonal prism and rendering that in Mitsuba, but I hit a similar bug as described here with the edges rendering black. In the end I used 3 rotated, stretched cubes to create each hexagonal prism. Not the fastest solution, but I do get nicely rendered 3D grids of hexagonal prisms as the following image shows.

3D Hexagonal Cellular Automaton

Then I also reached what seems to be an object limit in Mitsuba. For a while I would get occasional complaints from people reporting that Visions of Chaos failed to render using Mitsuba, resulting in black frames. I now have code that checks the Mitsuba log at the end of each render, looking for “exception” errors. If a Mitsuba exception is detected I notify the user that Mitsuba had a problem. The limit seems to be around 4.5 million cubes from my tests.

The following image is rendered using just the ambient occlusion integrator in Mitsuba.

3D Hexagonal Cellular Automaton

3D Results

This first example movie of the 3D variant is a generations type. Generations cellular automata base cell updates on the count of each cell’s neighbors.

The rules are in survival/birth/states format. All use the Moore neighborhood so each cell has 20 total neighbor cells.

Help Me

Seeing as these hexagonal CAs are relatively new features of Visions of Chaos I don’t have a large number of sample rules to show them off. If you download Visions of Chaos and experiment with the 2D and/or 3D Hexagonal CA modes please send me any interesting new rules you discover.

Jason.

Totalistic Cellular Automata

Totalistic Cellular Automaton

A new (old) CA based on this Twitter post. The original reference is from the December 1986 issue of BYTE magazine which can be read online here.

Totalistic Cellular Automaton

A fairly simple 1D cellular automaton but one I had not implemented or explored before.

Totalistic Cellular Automaton

The original BYTE article uses 4 states per cell (dead and 3 live states).

Cells are updated by totaling the state values of each cell and its left and right neighbors from the previous step, and then using that total as an index into a rule string to get the new cell state.

For example, as Jeremy explains in his tweet, if the rule string is 0010332321, then updates each step would be as follows.

00002120000
00010301000
00003030000
00000200000
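One step of this update can be sketched in a few lines of Python; running it reproduces the rows above (cells outside the row are treated as 0):

```python
def totalistic_step(row, rule):
    """One step of a 1D totalistic CA. rule is a digit string indexed
    by the total of each cell and its left/right neighbors."""
    n = len(row)
    out = []
    for i in range(n):
        total = ((row[i - 1] if i > 0 else 0)
                 + row[i]
                 + (row[i + 1] if i < n - 1 else 0))
        out.append(int(rule[total]))
    return out

row = [int(c) for c in "00002120000"]
row = totalistic_step(row, "0010332321")   # -> 00010301000
```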

Rule 0010332321 gives the following result when starting from a random row of cells.

Four State Cellular Automaton

Totalistic CAs can use any number of states. They can also extend to a range of neighbors beyond just the immediate left and right cells, and the center cell itself can be excluded from the totaling.

1D and 2D Totalistic Cellular Automata are now included with the latest version of Visions of Chaos.

Totalistic Cellular Automaton

Jason.

A New Kind of Science

A New Kind of Science

A New Kind of Science cover

A New Kind of Science (referred to as ANKOS from now on) is an immense tome written by Stephen Wolfram in virtual isolation over a ten year period.

Like most cellular automata enthusiasts I was interested in seeing the book once I first heard about it and I grabbed a copy when it was first available in the local book shop. When I first purchased ANKOS I was hoping to get some ideas for new cellular automata from the book to add to Visions of Chaos. What really happened is that after skimming through it a few times it went on the bookshelf and was never referred to again.

For many more detailed reviews of the book you can refer to the Amazon reviews and this collection. I am not qualified to judge if there is a “new kind of science” contained within the many pages, but the general consensus seems to be that the book does not contain a new kind of science and that someone should never ever write a book of that size without an editor and without peer review. If anything, ANKOS is a good lesson on how not to write about something you have done or discovered.

The other day I took the book off the shelf again and cracked it open, still in pristine condition other than the layer of dust that had gathered over the past 18 years. This time I am looking specifically for ideas that I can program and add to Visions of Chaos. ANKOS is a high quality book in terms of physical production value but some of the diagrams and font sizes (especially in the notes section) can be a strain for my not so perfect eyes. Luckily Wolfram provides the entire book online so I can read (and zoom) that version much more easily. Having the book online is appreciated as I can easily link to specific sections of the book directly. Physical ANKOS went back on the shelf again. Maybe forever this time.

Over the past three days I implemented the following nine cellular automata types from ANKOS that I had not previously included with Visions of Chaos.

1D Cellular Automata

Page 60 – Three Color Totalistic Automata.

ANKOS CA

ANKOS CA

ANKOS CA

ANKOS CA

ANKOS CA

Page 71 – Mobile Automata.

ANKOS CA

ANKOS CA

ANKOS CA

Page 73 – Extended Mobile Automata.

ANKOS CA

ANKOS CA

ANKOS CA

Page 76 – Generalized Mobile Automata.

ANKOS CA

ANKOS CA

ANKOS CA

Page 125.

ANKOS CA

ANKOS CA

Page 156 – Continuous Automata.

ANKOS CA

ANKOS CA

ANKOS CA

Page 460 – Two State Block Cellular Automata.

ANKOS CA

ANKOS CA

ANKOS CA

Page 461 – Three State Block Cellular Automata.

ANKOS CA

ANKOS CA

ANKOS CA

2D Cellular Automata

Page 173.

ANKOS CA

ANKOS CA

ANKOS CA

846 pages later I am done.

See my ANKOS album on Flickr for more images from ANKOS.

Final Summary

I had a bunch of other complaints about the book here that I deleted. Everything I wanted to say has been covered in other reviews. Besides that I do try and keep to the “if you don’t have anything nice to say, then don’t say anything at all” adage in this blog.

Was I glad I went back and opened ANKOS again? Yes, overall I did find some interesting new cellular automata to play with. I have experimented with cellular automata of many types over the years and while I do find them very interesting I doubt that they will be the answer to everything, but hey, what do I know?

I do know that if anyone shows any interest in my copy of the physical book it will be thrust into their eager hands with my insistence that they take it as a gift and not a loan.

Following on from this (a year and a bit later) there was an ad hoc book giveaway in my building. You know how a few books appear, then more, and everyone sort of exchanges the books they do not want over a few days. Everything from novels to cook books to unwanted coffee table books. So I plopped ANKOS on the pile thinking someone else can have it. Over the next week the other books rotated around and new ones came and went, but ANKOS sat there like a deformed puppy at the pound that nobody wanted to adopt. Eventually it went in the bin destined for landfill. I tried to find it a home.

Jason.

Alternating Neighborhoods Cyclic Cellular Automata

Idea

What happens when you combine the Alternating Neighborhoods with Cyclic Cellular Automata? This was an idea emailed to me by Asher (check out his blog and YouTube).

Range 1 Neighborhoods

First, for CCAs that use only neighbor cells within a range of one cell. Rather than checking all 8 neighbor cells each step of the CA, the neighborhood cells alternate between the following neighborhoods each cycle. The green square is the current cell being processed and the red squares are the neighborhood cells counted.

Range 1 Results

Expanding Stripes – Range 1 – Threshold 1 – States 3

Large Spirals – Range 1 – Threshold 1 – States 15

Spirals – Range 1 – Threshold 2 – States 4

Range 2 Neighborhoods

I then extended the neighborhoods to range 2, with the option to use either of the following layouts.

The second set of neighborhoods tends to give more squarish spirals than the first.

The usual CCA rules apply, but the neighborhoods above are used to calculate the neighbor count that is checked against the threshold.
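A minimal Python sketch of this alternating-neighborhood CCA step (not the Visions of Chaos source; wrap-around edges assumed). A cell advances to the next state, cycling back to 0, when enough cells in the currently active neighborhood already hold that next state:

```python
def cca_step(grid, neighborhoods, step_index, states, threshold):
    """One cyclic-CA update on a toroidal grid of ints in [0, states).
    neighborhoods is a list of (dx, dy) offset lists; the active one
    alternates each step, which is the whole idea of this variation."""
    offsets = neighborhoods[step_index % len(neighborhoods)]
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            nxt = (grid[y][x] + 1) % states
            count = sum(1 for dx, dy in offsets
                        if grid[(y + dy) % h][(x + dx) % w] == nxt)
            if count >= threshold:
                new[y][x] = nxt
    return new
```

Calling this in a loop with an incrementing `step_index` makes the neighborhoods alternate each cycle as described above.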

Range 2 Results

Dithered Spirals – Range 2 – Threshold 2 – States 5

Small Spirals Form In Larger Spirals – Range 2 – Threshold 2 – States 4

Stable Blobs – Range 2 – Threshold 7 – States 2

Availability

Alternating Neighborhoods Cyclic Cellular Automata are now included in Visions of Chaos.

Jason.

3D Cellular Automata

This is a post to provide info and answer some questions raised in the comments of the following YouTube movie.

Cellular Automata in 3D

3D Cellular Automata are extensions of the more common 1D Cellular Automata and 2D Cellular Automata into the third dimension. Rather than just checking neighbor cells in the X and Y directions, the Z direction is also included.

Neighborhoods

Neighborhoods in CA refer to which cells around each cell influence its birth, survival and death.

The two most common types of cell neighborhoods used in 2D CA are Moore and Von Neumann.

For 3D Moore extends to 26 possible neighbors (think of a Rubik’s cube with the middle of the cube as the current cell). Or consider a 3x3x3 3D grid of little cubes. The interior cube is the current cell, so the remaining 26 cubes around it are the neighbors of the center cube.

3D Von Neumann uses only the neighbor cells sharing a face with the current cell. This gives the 6 cells in the +/- X, Y and Z axis directions from each cell. Think of a 3D “plus sign” or cross shape.

Rules Explained

Rule 445 is the first rule in the video and is shown as 4/4/5/M. This is fairly standard survival/birth CA syntax.
The first 4 indicates that a living cell survives if it has 4 neighbor cells.
The second 4 indicates that a cell is born in an empty location if it has 4 neighbors.
The 5 means each cell has 5 total states it can be in (a newly born cell starts at state 4, then fades down through the states to 1 and finally to state 0, no cell).
M means a Moore neighborhood.

Another rule is Clouds 1, shown as 13-26/13-14,17-19/2/M.
Living cells with 13 to 26 neighbors survive.
Empty cells with 13, 14, 17, 18 or 19 neighbors have a new cell born at that location.
2 states. Cells are either dead or alive, with no refractory period fading between birth and death.
M means a Moore neighborhood.

More than 2 states can be confusing at first. In a 2 state CA, when a cell dies it goes immediately from living (state 1) to dead (state 0). With more than 2 states, a dying cell does not immediately go to state 0. Instead it fades out to state 0. If there are 5 total states then a live cell at state 4 (4, not 5, as the possible state values are 0, 1, 2, 3 and 4) fades to state 3, then 2, then 1 and finally disappears at state 0.
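Putting the survival/birth/states pieces together, here is a hedged Python sketch of one update step (wrap-around edges, treating the top state as the living state per the 445 description above; a sketch, not the exact Visions of Chaos logic):

```python
from itertools import product

# The 26 Moore neighborhood offsets in 3D.
MOORE = [o for o in product((-1, 0, 1), repeat=3) if any(o)]

def ca3d_step(grid, survival, birth, states):
    """One step of a survival/birth/states 3D CA on an n^3 toroidal grid.
    E.g. 4/4/5/M is survival={4}, birth={4}, states=5. 'Alive' is the
    top state; refractory cells fade down one state per step."""
    n = len(grid)
    alive = states - 1
    new = [[[0] * n for _ in range(n)] for _ in range(n)]
    for x, y, z in product(range(n), repeat=3):
        count = sum(1 for dx, dy, dz in MOORE
                    if grid[(x + dx) % n][(y + dy) % n][(z + dz) % n] == alive)
        c = grid[x][y][z]
        if c == alive:
            new[x][y][z] = alive if count in survival else alive - 1
        elif c == 0:
            new[x][y][z] = alive if count in birth else 0
        else:
            new[x][y][z] = c - 1   # refractory states always fade out
    return new
```

With `survival=set()`, `birth={1}`, `states=2` this reproduces the Single Point Replication rule (/1/2/M) listed below: one live cell dies and all 26 of its neighbors are born.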

Here are all the rules I currently include with Visions of Chaos.

3D Brain (Jason Rampe) /4/2/M
445 (Jason Rampe) 4/4/5/M
Amoeba (Jason Rampe) 9-26/5-7,12-13,15/5/M
Architecture (Jason Rampe) 4-6/3/2/M
Builder 1 (Jason Rampe) 2,6,9/4,6,8-9/10/M
Builder 2 (Jason Rampe) 5-7/1/2/M
Clouds 1 (Jason Rampe) 13-26/13-14,17-19/2/M
Clouds 2 (Jason Rampe) 12-26/13-14/2/M
Construction (Jason Rampe) 0-2,4,6-11,13-17,21-26/9-10,16,23-24/2/M
Coral (Jason Rampe) 5-8/6-7,9,12/4/M
Crystal Growth 1 (Jason Rampe) 0-6/1,3/2/N
Crystal Growth 2 (Jason Rampe) 1-2/1,3/5/N
Diamond Growth (Jason Rampe) 5-6/1-3/7/N
Expanding Shell (Jason Rampe) 6,7-9,11,13,15-16,18/6-10,13-14,16,18-19,22-25/5/M
More Structures (Jason Rampe) 7-26/4/4/M
Pulse Waves (Jason Rampe) 3/1-3/10/M
Pyroclastic (Jason Rampe) 4-7/6-8/10/M
Sample 1 (Jason Rampe) 10-26/5,8-26/4/M
Shells (Jason Rampe) 3,5,7,9,11,15,17,19,21,23-24,26/3,6,8-9,11,14-17,19,24/7/M
Single Point Replication (Jason Rampe) /1/2/M
Slow Decay 1 (Jason Rampe) 13-26/10-26/3/M
Slow Decay 2 (Jason Rampe) 1,4,8,11,13-26/13-26/5/M
Spiky Growth (Jason Rampe) 0-3,7-9,11-13,18,21-22,24,26/13,17,20-26/4/M
Stable Structures (Evan Wallace) 13-26/14-19/2/M
Symmetry (Jason Rampe) /2/10/M
von Neumann Builder (Jason Rampe) 1-3/1,4-5/5/N

Most of the rules that I include with Visions of Chaos were found by trying multiple random rules until something interesting appeared. I am always interested in new rules so if you download Visions of Chaos and discover any new rules let me know.

Cell Coloring

There are various ways you can assign colors to the CA cells;

RGB Cube. Convert the XYZ coordinates to RGB color values.

Color Palette. Map the distance of each cube from the center to a color palette.

White Only. Color all cubes white. This can be useful when you have multiple colored lights.

State Shading. Color cells based on which state they are in. Shaded between yellow and red for the example movie.

Neighborhood Density. Color based on how dense each cell and its nearest neighboring cells are.
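As an example, the RGB Cube method is just a direct normalization of grid coordinates to color channels. A sketch (the function name is mine):

```python
def rgb_cube_color(x, y, z, n):
    """Map a cell's position in an n x n x n grid straight to an RGB triple,
    so each axis of the grid becomes one color channel."""
    return (int(255 * x / (n - 1)),
            int(255 * y / (n - 1)),
            int(255 * z / (n - 1)))
```

A cell at one corner of the grid renders black, the opposite corner white, and everything in between is a smooth gradient through the RGB cube.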

Important Note For Other Coders

You saw my 3D CA video and are making your own 3D CA. Awesome. You have gotten your renders working to display all those little cubes and decide to try a few of the rules in the video. But then you get different results. The rules “almost work” but are not the same. Relax, this is probably my fault and not yours.

When I originally created the 3D CA video for YouTube I was not using the correct cell survival logic. Normally in a (for example) 3 state CA a state 1 cell will survive if it has the required number of neighbors. Cells not in state 1 automatically fade out no matter what their neighbor configuration is. In my original code I had the logic that a cell of any state could survive. That is what causes the slight differences.

The rules I list in this post under the “Rules Explained” section do all work as expected with the correct survival logic. If you want to confirm your code works against my rules you can download Visions of Chaos and compare the results with yours.

When this error was pointed out to me, I had been using the incorrect logic in my 3D and many of my 2D CAs for years. Just one of those bugs that can go undiscovered for the longest time in your code.

Answering Questions and Responding to Comments

Some people refer to the 3D CA as the “Game of Life” or “Brian’s Brain”. This is wrong. “Game of Life” is a specific rule of 2D CAs and it does not have a direct equivalent in 3D. The movie above is a 3D Cellular Automaton, not a “3D Game of Life”. When referring to these CAs call them 3D Cellular Automata, not 3D Life or 3D Brain or whatever else.

The music was a quick composition by me using FL Studio.

I am glad most people seem to like the movie. I have to give a shout out to the Mitsuba Renderer. Mitsuba is responsible for rendering the very nicely shaded little cubes that allow the structures of the CA rules to be seen so clearly.

If you have any other 3D CA questions, leave a comment here or in the YouTube video comments and I will try and address them here in the future. Cellular Automata are a fairly simple concept once you understand the basics of how they work.

Jason.

Automatic Detection of Interesting Cellular Automata

This post has been in a draft state for at least a couple of years now. I revisit it whenever I get inspiration for a new idea. I wasn’t going to bother posting it until I had a better solution to the problem, but maybe these ideas can trigger a working solution in someone else’s mind.

Compared to my other blog posts this one is more rambling as it follows the paths I have gone down so far when trying to solve this problem.




Objective

Cellular automata tend to have huge parameter search spaces in which to find interesting results. The vast majority of rules within this space will be junk rules, with only a small fraction of a percent being interesting. I have spent way too many hours repeatedly trying random rules when looking for new interesting cellular automata. Between the boring rules that die out and the rules that explode into chaos there is a sweet spot of interesting rules. Finding these interesting rules is the problem.

Needle in a haystack

My ideal goal has always been to run random rules repeatedly, hands free, with software that is “clever” enough to tell the difference between interesting and boring results. If the algorithms are good enough at detecting interesting rules then you can come back to the computer hours or days later and have a set of rules in a folder with preview images and/or movies to check out.

I want the detection to be smart enough to work with a variety of CA types beyond the basic 2 state 2D cellular automata. Visions of Chaos contains many varieties of cellular automata with varying maximum cell states, dimensions and neighborhoods, and my ultimate goal is a single “Look for interesting rules” button that works for all of them.




Interesting Defined

Interesting is a very loose term. Maybe a few examples will help define what I mean when I say interesting.

Boring results are when a CA stabilizes to a fixed pattern or a pattern with very minimal change between steps.

Cellular Automaton


Chaotic results are when the CA turns into a screen of static with no real discernible patterns or features like gliders or other CA related structures. For a CA classifier these rules are also boring.

Cellular Automaton


Interesting is anything else. Rules like Game of Life, Brian’s Brain and others that create evolvable structures that survive after multiple cycles of the CA. This is what I want the software to be able to detect.

Cellular Automaton

Conway’s Game of Life – 23/3/2



Cellular Automaton

Brian’s Brain – /2/3



Cellular Automaton

Fireballs – 346/2/4




My Previous Search Methods

1. Random rules. Repeatedly generate random rules hoping to see an interesting result. Tedious to say the least, although the majority of the interesting cellular automata rules I have found over the years have been through repeatedly trying different random rules. While a boring TV show or movie is on I can repeatedly hit F3, F4 and Enter in Visions of Chaos while looking for interesting results. F3 stops the current CA running, F4 shows the settings dialog, Enter clicks the Random Rule button.

2. Brute force all possible rules. Only applicable when the total number of rules is small (possible for some of the simpler 1D CAs). Most 2D CAs have millions or billions of possible rules, so rendering them all and checking them manually is infeasible.

3. Mutating existing interesting rules. If you find an interesting rule, you can mutate it slightly to try alternatives that may behave similarly to, or even better than, the original. Slightly usually means toggling one of the survival/birth checkboxes on or off. This has occasionally helped me find interesting rules or refine a rule into that sweet spot. The problem with CAs is that toggling even one checkbox will usually produce a completely different result. The good results do not tend to “clump” together in the parameter space.
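The single-toggle mutation is simple to sketch in code, assuming a rule is held as lists of survival and birth neighbour counts (this encoding is my illustration, not the Visions of Chaos internals):

```python
import random

def mutate_rule(survival, birth, max_states):
    """Toggle one random survival or birth neighbour count (0..8).

    Sketch of the "mutate slightly" search described above."""
    survival, birth = set(survival), set(birth)
    n = random.randint(0, 8)                  # which neighbour count to toggle
    target = random.choice([survival, birth]) # toggle it in survival or birth
    if n in target:
        target.discard(n)
    else:
        target.add(n)
    return sorted(survival), sorted(birth), max_states
```

For example, `mutate_rule([2, 3], [3], 2)` might return `([2], [3], 2)` (survival on 3 neighbours toggled off) or `([2, 3], [3, 6], 2)` (birth on 6 neighbours toggled on).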

The rest of this blog post contains methods others and myself have tried to classify cellular automata behavior.




Wolfram Classification

Stephen Wolfram

Stephen Wolfram defined a rough set of 4 classifications for CAs.

Class 1: Nearly all initial patterns evolve quickly into a stable, homogeneous state. Any randomness in the initial pattern disappears.

Class 2: Nearly all initial patterns evolve quickly into stable or oscillating structures. Some of the randomness in the initial pattern may filter out, but some remains. Local changes to the initial pattern tend to remain local.

Class 3: Nearly all initial patterns evolve in a pseudo-random or chaotic manner. Any stable structures that appear are quickly destroyed by the surrounding noise. Local changes to the initial pattern tend to spread indefinitely.

Class 4: Nearly all initial patterns evolve into structures that interact in complex and interesting ways, with the formation of local structures that are able to survive for long periods of time.

Classes 1 to 3 would be considered “boring” for anyone trying random rules. Class 4 is that “sweet spot” of CAs that something interesting happens between dying out and chaotic explosions.

You can look at a CA after it has been discovered and put it into one of those 4 categories, but that doesn’t help detect new interesting Class 4 rules in the first place.




Other Methods From Various Papers

Here are some other classification methods in papers I found or saw mentioned elsewhere. The mathematics is beyond me for most of them. I wish papers included a small snippet of source code with them that shows the math. I always find it much easier understanding and implementing some source code rather than try and understand formal equations.

Behavioral Metrics

Search Of Complex Binary Cellular Automata Using Behavioral Metrics.

Entropy

Wolfram’s Universality And Complexity In Cellular Automata discusses “entropy” values that I don’t understand.

Wuensche’s Classifying Cellular Automata Automatically

Lyapunov Exponents

Stability Of Cellular Automata Trajectories Revisited : Branching Walks And Lyapunov Profiles.

Towards The Full Lyapunov Spectrum Of Elementary Cellular Automata.

Kolmogorov–Chaitin Complexity

Asymptotic Behaviour And Ratios Of Complexity In Cellular Automata.

Genetic Algorithms

Searching For Complex CA Rules With GAs.

Evolving Continuous Cellular Automata For Aesthetic Objectives.

Extracting Cellular Automaton Rules Directly From Experimental Data.

Other Papers

Pattern Generation Using Likelihood Inference For Cellular Automata. 1D CAs.




MergeLife

Jeff Heaton uses genetic mutations to evolve cellular automata.




Langton’s Lambda

Chris Langton

Chris Langton defined a single number that can help predict if a CA will fall within the ordered realm. See his paper Computation at the edge of chaos for the mathematical definitions etc.

Langton called this number lambda. According to this page Lambda is calculated by counting the number of cells that have just been “born” that step of the CA and dividing it by the total CA cells. This gives a value between 0 and 1.

L = newlyborn/totalcellcount
L within 0.01 and 0.15 means a good rule to further investigate.

So if the grid is 20×20 in size and there were 50 cells that were newly born that CA cycle, then lambda would be 50/(20×20) = 0.125

I skip the first 100 CA cycles to allow the CA to settle down and then average the lambda value for the next 50 steps.
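As a sketch of that measurement, here is lambda averaged over a toy grid in numpy. Conway’s Life stands in for whichever rule is being scored; the warm-up and averaging follow the steps above:

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Life (23/3) on a toroidal boolean grid.
    A stand-in here for whatever CA rule is being tested."""
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return (n == 3) | (grid & ((n == 2) | (n == 3)))

def langton_lambda(grid, warmup=100, samples=50):
    """Average L = newly born cells / total cells after a warm-up period."""
    for _ in range(warmup):
        grid = life_step(grid)
    values = []
    for _ in range(samples):
        nxt = life_step(grid)
        born = np.count_nonzero(nxt & ~grid)  # cells that turned on this step
        values.append(born / grid.size)
        grid = nxt
    return sum(values) / samples
```

The averaged value is then compared against the 0.01 to 0.15 “good rule” window.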

As stated here there is no single value of lambda that will always give an interesting result. Langton’s paper and example applet are only concerned with 1D CA examples. I really want to find methods to search and classify 2D, 3D (and even 4D) cellular automata.

Rampe’s Lambdas

For lack of a better name, these are the “Rampe’s Lambda” values I experimented with as alternatives to Langton’s Lambda.

R1 = newlyborn/newlydead
R1 within 0.9 and 1.1 means a good rule to further investigate.

R2 = abs(newlyborn-newlydead)/totalcellcount
R2 within 0.001 and 0.005 means a good rule to further investigate.

R3 = (newlyborn+newlydead)/totalcellcount
R3 within 0.01 and 0.8 means a good rule to further investigate.

R4 = ((newlyborn/totalcellcount)+(newlydead/totalcellcount))/2
R4 within 0.01 and 0.23 means a good rule to further investigate.

R5 = % change in Langton’s Lambda between the last and current CA cycle
R5 within 0.01 and 0.1 means a good rule to further investigate.

Again, skip the first 100 cycles of the CA and then use the average lambda from the following 50 cycles.
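Expressed as code, the variants are simple functions of the per-step born/died counts (a sketch; R5 is taken here as the absolute relative change in lambda, and R1 is left as infinity when nothing died that step):

```python
def rampe_lambdas(born, died, total, prev_lambda=None):
    """Compute the R1..R5 variants above from one CA step's cell counts."""
    L = born / total  # Langton's lambda for this step
    r = {
        "R1": born / died if died else float("inf"),
        "R2": abs(born - died) / total,
        "R3": (born + died) / total,
        "R4": ((born / total) + (died / total)) / 2,
    }
    if prev_lambda is not None:
        # R5: relative change in Langton's lambda since the previous step
        r["R5"] = abs(L - prev_lambda) / prev_lambda
    return r
```

For a 20×20 grid with 50 births and 40 deaths this gives R1 = 1.25, R2 = 0.025, R3 = 0.225 and R4 = 0.1125.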

Lambda Results

All of them (both Langton’s and my “Rampe” variations) are next to useless from my tests. I ran a bunch of known good rules and got mixed results. All the lambdas gave enough false positives to not be of any use in searching for interesting new rules. You may as well use a random number generator to classify the rules.

Maybe they can be used to weed out the extreme class 1, 2 and 3 uninteresting dead rules, but they are not useful for classifying if a class 4 like result is interesting or not.




Fractal Dimension

Fractal Dimension CA Search

Another method I tried is finding the fractal dimension of the CA image using box counting. Unlike the usual fixed 1D, 2D and 3D dimensions, the fractal dimension of a 2D image is a floating point value between 0 and 2.

The above screenshot shows the fractal dimension tests on existing sample interesting CA files. The results are all over the place with no “sweet spot” of dimension correlating to interesting. The way it works is that each CA is run for 50 steps, the image is converted to black and white (non black pixels in the image are changed to white) and then the dimension is calculated using the box counting method.

Increasing the range of dimension for “good” detection may allow the known interesting rules to pass the tests, but then a lot of uninteresting rules are also flagged as interesting, meaning you still need to manually sort good from bad.

A fractal dimension between 1.0 and 1.4-1.5 can help weed out obvious “bad” results, but is really not helpful in hands free searching.
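A minimal numpy sketch of the box counting step (assuming the frame has already been thresholded to a boolean black/white array, and that at least one pixel is set at every box size):

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2D boolean image by box counting."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        # Partition the grid into s x s boxes, count boxes holding any set pixel
        trimmed = binary[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # The dimension is the slope of log(count) vs log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Sanity checks behave as expected: a fully filled image measures close to 2 and a single straight line close to 1.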




Compression Based Searching

Another interesting idea for CA searching comes from Hugo Cisneros, Josef Sivic and Tomas Mikolov: using data compression algorithms to rate CAs.

Their paper “Evolving Structures in Complex Systems” available here is an interesting read.

Source code accompanying the paper is provided here.




Neural Networks – Part 1

This was an idea I had for a while. Train a neural network to detect if a CA rule is interesting or not.

I was able to implement a rudimentary neural network system after watching these excellent videos from Dan Shiffman.

I went from almost zero knowledge of the internals of neural networks to much more comfortable and being able to code a working NN system. If you want to learn about the basics of coding a neural network I highly recommend Dan’s playlist.

For a neural network to be able to give you meaningful output (in this case if a CA rule is interesting or not) it needs to be trained with known good and bad data.

I tried creating a neural network with 19 inputs (9 for survival states, 9 for birth states and 1 for number of states) to cover the possible CA settings, i.e.

2D CA Rules

The neural network has 19 inputs, a number of neurons in the hidden layer and a single output neuron that does the interestingness prediction.

Neural Network

I mainly kept the hidden neuron count the same as the inputs, but I did experiment with other counts as the next diagram shows.

Neural Network

The known good and bad rules are fed through the neural network in random order 10 million or more times. You can see how well the network is “learning” by tracking the mean squared error. As you repeatedly feed the network known data the error value should drop, meaning the network is becoming more accurate at predicting the results you train it with.
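A toy numpy version of that 19-input network (my sketch of the idea, not the actual Visions of Chaos code) shows the forward pass and one hand-written backprop step on the squared error:

```python
import numpy as np

rng = np.random.default_rng(1)

# 19 inputs (9 survival flags, 9 birth flags, 1 state count), 19 hidden
# neurons, one sigmoid output predicting "interesting"
W1 = rng.normal(0, 0.5, (19, 19)); b1 = np.zeros(19)
W2 = rng.normal(0, 0.5, (19, 1));  b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(rule):
    """Forward pass: rule settings in, interestingness score (0..1) out."""
    hidden = sigmoid(rule @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)[0]

def train_step(rule, target, lr=0.1):
    """One gradient descent update on the squared error (backprop by hand)."""
    global W1, b1, W2, b2
    h = sigmoid(rule @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - target
    d_out = err * out * (1 - out)         # sigmoid derivative at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to hidden layer
    W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
    W1 -= lr * np.outer(rule, d_h); b1 -= lr * d_h
    return float(err[0] ** 2)
```

Repeatedly calling `train_step` with known good rules as target 1 and known bad rules as target 0 is the training loop; watching the returned squared error fall is the “learning” described above.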

Once the network is trained, you can run random rules and see if the prediction of the network matches your rating of if the CA is interesting or not. You can also repeatedly try random rules until they pass a threshold level of interesting. Every time a prediction is made the human can rate if the detection was correct. These human ratings are added back to the good and bad rule training pool so they can be used the next time the network is trained.

The end result is “just OK”. I used a well trained network (with a mean squared error of around 0.001) and got it to repeatedly try random rules until it found a rule it predicted would be interesting. The results are not always interesting. More interesting than purely sitting there clicking random repeatedly as I have done in the past, but there are still a lot of not interesting rules spat out. If I let the network run for a few hours and got it to save every rule it predicted to be interesting I would still have a tedious process of weeding out the actual interesting rules.

I don’t think the survival and birth rule settings are the best inputs for this. A toggle of a single survival or birth checkbox will usually drastically change the results from interesting to boring or just chaotic. Changing the maximum number of states each cell can have by 1 will also turn well behaved rules into chaotic messes.

One idea I need to try is using a basic NN like this that uses the lambda values above for inputs. Maybe then it can work out which combination of lambdas (and maybe fractal dimension) work together to create good rules. This is worth experimenting with when I get some time.




Neural Networks – Part 2

This time I am trying to get the network to detect interesting CAs by using images from a frame of the CAs. For each of the known good and bad rules I take the 100th frame as an input. I also repeat each of the rules 100 times to get 100 samples of each rule.

If I use a 64×64 sized grayscale image then there will now be 4,096 inputs to the network. Add another 100 hidden nodes and that makes a large and much slower network when training.

Run the CA rules on a 64×64 sized grid, convert the image pixels into the 4,096 inputs and train the network.

So far, no good results. The mean squared error falls very slowly. Maybe it would get better after days of training, but I am not that patient yet.

This online example and this article show how this method (a fully connected neural network) is never as accurate as a convolutional neural network. So, onto Part 3…




Neural Networks – Part 3

My next idea was to try using Convolutional Neural Networks. See here for a nice explanation of convolutional neural networks.

Convolutional Neural Network

CNN’s are made for image processing, feature extraction and detection. If a CNN can be trained to recognize digits and tell if a photo is of a cat or a dog then I should be able to use a CNN to “look at” a frame of a cellular automaton and tell me if it is interesting or not.

After watching a bunch of YouTube university lectures and tutorials on CNNs I decided not to extend my existing neural network code to handle CNNs. For the network sizes I will be training I need a real world library. I chose Google’s TensorFlow.

TensorFlow Logo

TensorFlow supports GPU acceleration with CUDA and is orders of magnitude faster and more reliable than anything I could code.

Once I managed to get Python, TensorFlow, Keras, CUDA and cuDNN installed correctly I was able to execute Python scripts from within Visions of Chaos and successfully run the example TensorFlow CNN MNIST code. That showed I had all the various components working as expected.

Creating Training Data for the CNN

Acquiring clean and accurate training data is vital for a good model. The more data the better.

I used the following steps to create a lot of training images:

1. Take a bunch of CA rules that I had previously ranked as either good or bad.

2. Run all of them over a 128×128 sized grid for 100 steps and save the 100th frame as a grayscale jpg file.

3. Step 2 can be repeated multiple times to increase the amount of training data. CAs starting from a random grid will always give you a unique 100th frame so this is an easy way to generate lots more training data.

4. Copy some of the generated images into a test folder. I usually move 1/10th of the total generated images into a test folder. These will be used to evaluate how accurate the model is at predictions once it has been trained. You want test data that is different to the data used to train and validate the model.
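Steps 1 to 3 can be sketched in numpy. Conway’s Life stands in here for whichever ranked rule is being run, and saving the returned array as a grayscale jpg is left to an imaging library such as PIL:

```python
import numpy as np

def ca_frame(width=128, height=128, steps=100, seed=0):
    """Run a CA from a random grid and return the 100th frame as an 8-bit
    grayscale array, ready to save as a jpg (e.g. PIL's Image.fromarray).
    Conway's Life (23/3) stands in for the rule being sampled."""
    rng = np.random.default_rng(seed)
    grid = rng.random((height, width)) < 0.5  # random starting grid
    for _ in range(steps):
        n = sum(np.roll(np.roll(grid, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        grid = (n == 3) | (grid & ((n == 2) | (n == 3)))
    return (grid * 255).astype(np.uint8)
```

Varying `seed` on each run gives the unique 100th frames mentioned in step 3.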

Good Cellular Automata Training Data

Examples of good CA frames



Bad Cellular Automata Training Data

Examples of bad CA frames


Quantity and Dimensions of Training Data Images

I tried image sizes between 32×32 pixels and 128×128 pixels. I also tried various zoomed in CA images with each cell being 2×2 pixels rather than a single pixel per cell.

For image counts I tried between 10,000 up to 300,000.

After days of generating images and training and testing models I found a good balance between image size and model accuracy was images 128×128 pixels in size with a single pixel per CA cell (so a CA grid of 128×128 too).

I also experimented with blurring the images thinking that may help search for more general patterns, but it did not seem to make any difference in the number found or accuracy of results.

One thing working with neural networks teaches you is patience. Generating the images is the slowest part of these experiments. If anyone is willing to gift me some decent high end CPUs and GPUs I would put them to good use.

Custom Input for CNNs

The best videos I found on using CNNs with custom images were these videos on YouTube by sentdex. Parts 1 to 6 of that playlist got me up and running.

Creating the Training Data for TensorFlow

Once you have your training images they need to be converted into a data format that TensorFlow can be trained with.

Again, I recommend the following sentdex video that covers how to create the training data.

The process to convert the training images into training data is fast and should not take longer than a minute or two.
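Once the frames are loaded, packing them into the arrays Keras expects is straightforward. This sketch assumes the images are already in memory as grayscale arrays; see the sentdex video for the folder-walking and image-loading part:

```python
import numpy as np

def make_training_data(good_frames, bad_frames):
    """Pack lists of grayscale frames into the X, y arrays Keras trains on:
    X shaped (samples, height, width, 1) with pixels scaled to 0..1,
    y holding 1 for good frames and 0 for bad frames."""
    frames = list(good_frames) + list(bad_frames)
    X = np.stack(frames).astype("float32")[..., np.newaxis] / 255.0
    y = np.array([1] * len(good_frames) + [0] * len(bad_frames))
    # Shuffle so good and bad samples are interleaved during training
    order = np.random.default_rng(0).permutation(len(y))
    return X[order], y[order]
```

The resulting `X` and `y` are what the `model.fit(X, y, ...)` calls below consume.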

Model Variations

Time to actually use this training data to train a convolutional neural network (what TensorFlow calls a model).

There are a wide variety of model and layer types to experiment with. For CNNs you basically start with one or more Conv2D layers followed by one or more Dense layers and finally a single output node to predict a probability of the image being good or bad.

Here are some models I tried during testing, gathered from various sources, videos and pages I have seen. Running on an Nvidia 1080 GPU took around 2 hours per model to train (50 epochs each with 100,000 training images), which seemed lightning fast after waiting 30 hours for my training images to generate.


# Version 1
# Original model from sentdex videos
# https://youtu.be/WvoLTXIjBYU

# Imports needed for this snippet
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Conv2D, Dense, Flatten, MaxPooling2D

model = Sequential()

model.add(Conv2D(64, (3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())

model.add(Dense(64))
model.add(Activation("relu"))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

When the 50 epochs finish, you can plot the accuracy and loss vs the validation accuracy and loss.

TensorFlow training graph

Version 1 gave these results;
test loss, test acc: [0.13674676911142003, 0.9789666691422463]
98% accuracy with a loss of 13%
When I test a different unique set of images as test data I get;
14500 good images predicted as good – 301 good images predicted as bad – 97.97% predicted correctly
14653 bad images predicted as bad – 184 bad images predicted as good – 98.76% predicted correctly

One thing the above “Model loss” graph shows is overfitting. The val_loss graph should follow the loss graph and continue to go down. Instead of going down the line starts going up around the 5th epoch. This is an obvious sign of overfitting. Overfitting is bad. We don’t want overfitting. See here for more info on overfitting and how to avoid it.

The second suggestion from here mentions dropouts. Dropouts remove random links between nodes in the model network as it trains. This can help reduce overfitting. So let’s give that a go.


# Version 2
# Original model from sentdex videos
# https://youtu.be/WvoLTXIjBYU
# Adding dropouts to stop overfitting

# Imports needed for this snippet
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D

model = Sequential()

model.add(Conv2D(64, (3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))

model.add(Flatten())

model.add(Dense(64))
model.add(Activation("relu"))
model.add(Dropout(0.4))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

50 epochs finished with this graph.

TensorFlow training graph

Now the validation loss continues to generally go down with the loss graph. This shows overfitting is no longer occurring.

Version 2 gave these results;
test loss, test acc: [0.037338864829847204, 0.9866000044345856]
99% accuracy with a 4% loss
When I test a different unique set of images as test data I get;
14151 good images predicted as good – 68 good images predicted as bad – 99.52% predicted correctly
14326 bad images predicted as bad – 12 bad images predicted as good – 99.92% predicted correctly


# Version 3
# https://towardsdatascience.com/applied-deep-learning-part-4-convolutional-neural-networks-584bc134c1e2

# Imports needed for this snippet
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D

model = Sequential()

model.add(Conv2D(32, (3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dropout(0.5))

model.add(Dense(512))
model.add(Activation("relu"))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

Graphs looked good without any obvious overfitting.

Version 3 gave these results;
test loss, test acc: [0.03628389219510306, 0.9891333370407422]
98% accuracy with 3% loss. Getting better.
When I test a different unique set of images as test data I get;
14669 good images predicted as good – 59 good images predicted as bad – 99.60% predicted correctly
14490 bad images predicted as bad – 62 bad images predicted as good – 99.57% predicted correctly


# Version 4
# http://www.dsimb.inserm.fr/~ghouzam/personal_projects/Simpson_character_recognition.html

# Imports needed for this snippet
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, BatchNormalization, Conv2D, Dense, Dropout, Flatten, MaxPooling2D

model = Sequential()

model.add(Conv2D(32, (3,3), input_shape = X.shape[1:]))
model.add(Conv2D(32, (3,3)))
model.add(Activation("relu"))
# BatchNormalization better than Dropout? https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3,3)))
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
# BatchNormalization better than Dropout? https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(128, (3,3)))
model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
# BatchNormalization better than Dropout? https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(64))
model.add(BatchNormalization())
model.add(Activation("relu"))

model.add(Dense(32))
model.add(BatchNormalization())
model.add(Activation("relu"))

model.add(Dense(16))
model.add(BatchNormalization())
model.add(Activation("relu"))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

For this model I threw in multiple ideas from all previous models and more.

Version 4 gave these results;
test loss, test acc: [0.031484683321298994, 0.9896000043551128]
99% accuracy with a 3% loss. Best result so far.
When I test a different unique set of images as test data I get;
14383 good images predicted as good – 119 good images predicted as bad – 99.18% predicted correctly
14845 bad images predicted as bad – 4 bad images predicted as good – 99.97% predicted correctly

For the rest of my tests I used Version 4 for all training.

Tweaking your CNN models

See the sentdex videos above for a good example of how to tweak models and see how the variations rate. Use TensorBoard to see how they compare and optimize them.

TensorBoard Graphs

TensorBoard has other interesting histograms it will generate from your training like the following. I have no idea what this is telling me yet, but they look cool. Using histograms did seem to slow down the training with extended pauses between epochs, so unless you need them I recommend disabling them.

TensorBoard Histograms

Testing the Trained Model

Now it is finally time to put the model to the test.

Randomly set a CA rule, run it for 100 generations and then use model.predict on the 100th frame. This takes around 6 seconds per random rule.

The model.predict function returns a floating point value between 0 and 1.
Between 0 and 0.2 are classified as bad.
0.2 to 0.95 are classified as unsure.
0.95 to 1 are classified as good.
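Those thresholds map directly to code (how the exact boundary values are bucketed is my assumption):

```python
def classify(score):
    """Map a model.predict output (0..1) to the three buckets above."""
    if score < 0.2:
        return "bad"
    if score < 0.95:
        return "unsure"
    return "good"
```

Rules scoring “bad” are discarded, “good” ones get a preview movie generated, and “unsure” ones can be kept for manual review.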

The prediction accuracy is better than any of the other methods shown previously in this post.

The rules it did detect in the bad category were all bad, so it does a great job there. No interesting rules got incorrectly classified as bad from my tests. I can safely ignore rules classified as bad which speeds up the search time as I don’t have to re-run the rules and create a sample movie.

The detected good rules did have a blend of interesting and boring/chaotic, but there were a lot less of them to check. Roughly 1% of total random rules are classified as good. The rules the model incorrectly predicts as interesting can be moved into the “known bad” folder and can be added to the next trained model (another 40 hours or so of my PC churning away generating images and training a new model).

The rules it predicted in the unsure 0.2 to 0.95 range did have features that were in the range between good and bad. Some of them would have made excellent good samples if only they were not as chaotic and “busy”.

Results

Here are some examples found from overnight convolutional neural network searches.

Cellular Automaton

TF247445 – 4567/2358/5 – Brian’s Brain with islands


Cellular Automaton

TF394435 – 34/256/3 – Another Brian like rule


Cellular Automaton

TF263299 – 3/25/3 – Over excited brain rule


Cellular Automaton

TF174642 – 15678/12678/2 – Solid islands grow amongst static


Cellular Automaton

TF1959733 – 1235678/23478/5 – Solidification


Cellular Automaton

TF2254533 – 0478/2356/5 – Waves with stable pixels


Other CNN Problems and Ideas

One problem is that CNNs seem to only detect shapes/gliders/patterns that are similar to the training data. After days of hands free searching with the CNN models there were no brand new different rules discovered, just rules very similar to existing ones with a few slight tweaks. For example, if a CNN is trained only on examples of Conway’s Game of Life then it is not going to predict Brian’s Brain is interesting if it randomly tries the rule for Brian’s Brain. The CNN needs to have previously seen something like the rule(s) it will detect as interesting. I did see slight variations found and scored as interesting, but for a new CA type without a lot of “good” rules to train on, the CNN is going to have problems finding new and different interesting rules. That is the main reason I want a “search for interesting” function in the first place: for when I have a new type of CA without many known good rules. I want the search to work without needing hundreds or thousands of examples already rated good vs bad. Otherwise I need to sit there trying random rules for hours, manually rating them good or bad, before training a new model specific to that CA type.

Maybe using single frames is not the best idea. Maybe the difference between the 99th and 100th frame? Maybe a blur or average of 3 frames? This is still to be experimented with when I have another week to spend generating images and training and testing new models.

Then I thought maybe I am over training the models. If you train a neural network for too long it will overfit and only be able to recognize the data you trained it with. It effectively memorizes the specific good examples as good and cannot generalize to detect other, different good results. That could cause new interesting CAs to be classified as bad. I did try lowering the training epochs from 50 to 10 to see if that helped detect more generalized interesting CA rules, but it didn’t seem to make any difference. Even lowering it to 5 epochs trained a model that was still accurate at predictions. Plus the difference between random frames of good CAs shows it can detect gliders at different locations within frames.

Rather than train a model for each type of CA, train a model with examples from multiple CA types. Try and make the model more capable of general CA detection. Maybe it could then detect newer shapes/gliders in different new CA rules if it has a good general idea of what interesting CA features are from multiple different CAs. This may work? Another one for the to do list.

Convolutional Neural Networks (and neural networks in general) are not an instant win solution. You do need to do a lot of research about the various settings and do a lot of testing to get a good model which you can then use to predict the “things” you want the model to predict. But once you get a well trained model CNNs can be almost magical in how they can learn and be useful when solving problems.

The more I experiment with and learn about neural networks, the more I want to continue the journey. They really are fascinating. TensorFlow and Keras are a great way to get into the world of neural networks without having to code your own neural network system from scratch. I do recommend at least coding a basic feed forward neural network to get a good grip on the basics. When you jump into Keras the terminology will make more sense. YouTube has lots of good neural network related videos.




Availability to End Users

I have now included the trained (20 epochs Version 4 to hopefully leave a little room for finding more unique results) TensorFlow CNN model with Visions of Chaos. That means the end user does not need to do any image generation or training before using the CNN for searching. Python and TensorFlow need to be installed first, but after that the user can start a hands free search for interesting rules. When TensorFlow is installed and detected a search button appears on the 2D Cellular Automata dialog. Clicking Search starts a hands free random search and classification.

TensorFlow CNN CA Searching

The other search methods above remain hidden as they do not predict interesting rules with high enough accuracy.




The End (for now)

If you managed to get this far, thanks for reading.

If you have some knowledge about any of the above methods that I missed please leave a reply or get in touch and let me know.

Any other ideas for cellular automata searching and classification are also welcome.

I will continue to update this post with any other methods I find in the future.

Jason.

MergeLife Cellular Automata

A new variety of cellular automata from Jeff Heaton. His original paper describing MergeLife is here, but he also made the following video that clearly explains how the rules work.

Jeff’s MergeLife page also has more info and examples you can run online.

MergeLife is now included with Visions of Chaos. I haven’t added the genetic mutations yet, but you can repeatedly click the random and mutate buttons and see what new patterns emerge.

Jason.