At that time I managed to translate some (probably Fortran) LBM source code provided by the now defunct “LB Method” website (here is how LB Method looked around that time). The algorithms worked and gave me some nice results, but there were problems, like a lack of detail and pulsating colors caused by my display routines rescaling the minimum and maximum velocities to a color palette each frame.

Yesterday I was looking around for some new LBM source code and found Daniel Schroeder‘s LBM page here. Daniel graciously shares the source code for his applet so I was able to convert his main LBM algorithms into something I could use in Visions of Chaos. Many thanks Dan!

Using Dan’s code/algorithms was much faster than my older code. It also lets me render much finer detailed fluids without the system blowing out, so I can push the simulation parameters further. Dan’s method of coloring solved the pulsing colors issue my older code had and includes a really nice way of visualizing the “curl” of the flowing fluid. Tracer particles are also used to follow the velocity of the underlying fluid, giving another way of visualizing the fluid flow. Once particles leave the right side of the screen they are buffered up until the buffer fills and they can be reinjected on the left side of the flow. Tracer particles make the vortices easier to see than shading alone.

With lower memory requirements (another plus of Dan’s code) I was able to render some nice 4K resolution LBM flows. This movie must be watched at 4K if possible, as the compression at lower resolutions cannot handle displaying the tracer particles.

The new LBM code is now included with Visions of Chaos.

Jason.


Back in 1994 Karl Sims developed his Evolved Virtual Creatures. More info here and here.

I have always found this sort of simulation fascinating: using the principles of genetics to evolve better solutions to a problem. For years I have wanted to try writing my own evolved creatures, but coding a physics engine to handle the movements and joints was beyond me, so it was yet another entry on my to do list (until now).

__2D Physics__

For my virtual creatures I decided to start with 2D. I needed a general physics engine that takes care of all the individual world parts interacting and moving. Erin Catto’s Box2D has all the physics simulation features that I need to finally start experimenting with a 2D environment. Box2D is great. You only need some relatively simple coding to get the physics set up and then Box2D handles all the collisions etc for you.

__Random Creatures__

The creatures I am using are the simplest structures I could come up with that hopefully lead to some interesting movements. Creatures consist of three segments (rectangles) joined together by rotating (revolute joints in Box2D) joints. The size of the rectangle segments and joint rotation speeds are picked at random. Once the random creature is created it is set free into a virtual world to see what it does.
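As a concrete illustration, here is how such a random creature might be represented. This is a Python sketch of my own; the attribute names and value ranges are assumptions for illustration, not the actual Visions of Chaos code.

```python
import random
from dataclasses import dataclass

@dataclass
class Segment:
    width: float    # rectangle dimensions (ranges below are illustrative)
    height: float

@dataclass
class Joint:
    speed: float    # revolute joint motor speed
    torque: float   # revolute joint motor torque

def random_creature(num_segments=3):
    # rectangle segments chained together by revolute joints,
    # with all sizes and motor settings picked at random
    segments = [Segment(random.uniform(0.1, 2.0), random.uniform(0.1, 2.0))
                for _ in range(num_segments)]
    joints = [Joint(random.uniform(-5.0, 5.0), random.uniform(1.0, 50.0))
              for _ in range(num_segments - 1)]
    return segments, joints
```

Each creature is then handed to the physics engine, which builds the corresponding bodies and joints and runs the simulation.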

Many of them crunch up and go nowhere,

but some setups result in jumping and crawling creatures.

In only a few minutes thousands of random creatures can be set up and simulated. From these thousands, the creatures that perform well are saved.

__Performance Criteria__

Once thousands of random virtual creatures have been created you need a way to pick the best ones. For these creatures I used three different criteria;

1. Distance traveled. The maximum X distance the creature travels in 5,000 time steps.

2. Distance crawled. The maximum X distance the creature travels in 5,000 time steps, but with a height restriction to weed out creatures that jump too high.

3. Height reached. The maximum Y height the creature reaches in 5,000 time steps.

The best picks become a set of creatures in the saved “gene pool”. If you have a large enough random set of creatures (say 10,000) and only take the top 10 performers then you do tend to get a set of creatures that perform the task well.

__Mutations__

Mutation takes a current “good creature” that passed criteria searching and scales segment length, segment width, joint rotation torque and joint rotation speed by a random percentage. The mutated creature is then run through 5,000 time steps and checked if it performs better than the original. If so, it is saved over the original and mutations continue. This process can be left to churn away for hours hands free and when the mutations are stopped you have a new set of best creatures.

For the creatures described here the features I randomly change are the segment widths and heights, the joint motor torques and the joint motor speeds (for 10 total attributes that can be tweaked by mutation). The user specifies a max mutation percentage and then each of the creature values are changed by

```
//random fraction of the maximum mutation percentage
changepercentage:=maxmutationpercentage/100*random;
//scale the change by the attribute's allowed range
amount:=(MaxSegmentWidth-MinSegmentWidth)*changepercentage;
//make the change negative half of the time
if random(2)=0 then amount:=-amount;
segmentwidth:=segmentwidth+amount;
```

The new attribute is clamped to a min and max value so as not to suddenly grow extremely long segments or super fast motors. You can also randomly mutate only 1 of the attributes rather than all 10 each mutation.

Choosing the right mutation amount can be tricky. Too high a random percentage and you may as well be randomly picking creatures. Too low a percentage and you will get very few mutations that beat the current creature. After some experimenting I am now using a mutation rate of 15% and mutating 3 of the attributes (ie a segment length, a motor’s torque, etc) each mutation.
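The clamped mutation step, generalized to a few of the 10 attributes at a time, can be sketched as follows. This is a Python illustration of my own; the attribute names and ranges are assumptions, not the actual program values.

```python
import random

# illustrative attribute ranges (min, max); the names are assumptions
RANGES = {
    "segment1_width": (0.1, 2.0), "segment1_height": (0.1, 2.0),
    "segment2_width": (0.1, 2.0), "segment2_height": (0.1, 2.0),
    "segment3_width": (0.1, 2.0), "segment3_height": (0.1, 2.0),
    "joint1_torque": (1.0, 50.0), "joint2_torque": (1.0, 50.0),
    "joint1_speed": (-5.0, 5.0), "joint2_speed": (-5.0, 5.0),
}

def mutate(creature, max_mutation_pct=15, attributes_per_mutation=3):
    # pick a few attributes and nudge each by a random signed
    # fraction of its allowed range, then clamp to that range
    child = dict(creature)
    for name in random.sample(list(RANGES), attributes_per_mutation):
        lo, hi = RANGES[name]
        changepercentage = max_mutation_pct / 100 * random.random()
        amount = (hi - lo) * changepercentage * random.choice((-1, 1))
        child[name] = min(max(child[name] + amount, lo), hi)
    return child
```

The mutated child is then simulated for 5,000 time steps and kept only if it beats its parent on the chosen fitness criterion.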

Running on an i7-6800K CPU my current code can churn through up to 21 mutation tests per second. This screenshot shows 9 copies of Visions of Chaos running, each looking for mutations of different creature types, ie 3 segment distance, 4 segment height reached, etc.

A mutation test requires the new mutated creature to be run for 5,000 time steps and then compared against its “parent” to see if it is better in the required fitness criteria (distance traveled, distance crawled or height reached).

__Mutation Results__

After mutating the best randomly found creatures for a while, this movie shows the best creature for distance traveled, distance crawled and height reached.

I will have to run the mutation searches overnight or for a few days to see if even better results are evolved.

__4 Segment Creatures__

Here are some results from evolving (mutating) 4 segment creatures. Same criteria of distance, crawl distance and height for best creatures. Note how only the white “arms” collide with each other. The grey “body” segments are set to pass through each other.

__5 Segment Creatures__

And finally, using 5 segments per creature. Only the 2 end arms collide with each other (otherwise the creatures always bunched up in a virtual knot and moved nowhere).

__Availability__

These Virtual Creatures are now included in the latest version of Visions of Chaos. I have also included the Box2D test bed to show some of the extra potential that I can use Box2D for in future creatures.

__To Do__

This is only the beginning. I have plenty of ideas for future improvements and expansions;

1. Using more than just mutations when evolving creatures. With more complex creatures crossover breeding could be experimented with.

2. Use more of the features of Box2D to create more complex creature setups. Arms and legs that “wave” back and forth like a fish tail rather than just spinning around.

3. 3D creatures and environments. I will need to find another physics engine supporting 3D hopefully as easily as Box2D supports 2D.

Jason.


I recently revisited my old strange attractor code in Visions of Chaos to add some new variations. This post will show many of the strange attractor formulas and some 4K resolution sample images they create. The images were created using over 1 billion points each. They have also been oversampled at least 3×3 pixels to reduce aliasing artifacts.

__Clifford Attractor__

Discovered by Clifford A Pickover. I found them explained on Paul Bourke‘s page here.

```
x and y both start at 0.1
xnew=sin(a*y)+c*cos(a*x)
ynew=sin(b*x)+d*cos(b*y)
Variables a, b, c and d are floating point values between -3 and +3
```

A=-1.7 B=1.3 C=-0.1 D=-1.21

A=-1.7 B=1.8 C=-0.9 D=-0.4

A=1.5 B=-1.8 C=1.6 D=2

A=-2.239 B=-2.956 C=1.272 D=1.419

A=-1.7 B=1.8 C=-1.9 D=-0.4
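The iteration loop for any of these maps is just a few lines. Here is a minimal Python sketch for the Clifford map (the real renderer accumulates hit counts per pixel over a billion points; that part is left out):

```python
import math

def clifford_orbit(a, b, c, d, n=100000, x=0.1, y=0.1):
    # iterate the Clifford map and collect the visited points
    points = []
    for _ in range(n):
        x, y = (math.sin(a * y) + c * math.cos(a * x),
                math.sin(b * x) + d * math.cos(b * y))
        points.append((x, y))
    return points
```

A handy property for plotting: |x| can never exceed 1+|c| and |y| can never exceed 1+|d|, so mapping points to pixels is straightforward.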

__Fractal Dream Attractor__

Discovered by Clifford A Pickover and discussed in his book “Chaos In Wonderland”.

```
x and y both start at 0.1
xnew=sin(y*b)+c*sin(x*b)
ynew=sin(x*a)+d*sin(y*a)
Variables a and b are floating point values between -3 and +3
Variables c and d are floating point values between -0.5 and +1.5
```

A=-0.966918 B=2.879879 C=0.765145 D=0.744728

A=-2.9585 B=-2.2965 C=-2.8829 D=-0.1622

A=-2.8276 B=1.2813 C=1.9655 D=0.597

A=-1.1554 B=-2.3419 C=-1.9799 D=2.1828

A=-1.9956 B=-1.4528 C=-2.6206 D=0.8517

__Gumowski-Mira Attractor__

The Gumowski-Mira equation was developed in 1980 at CERN by I. Gumowski and C. Mira to calculate the trajectories of sub-atomic particles. It can also be used to create attractor images.

```
x and y both start at any floating point value between -20 and +20
w starts at a*x+(1-a)*2*x*x/(1+x*x) using the initial x
t=x
xnew=b*y+w
w=a*xnew+(1-a)*2*xnew*xnew/(1+xnew*xnew)
ynew=w-t
The a and b parameters can be any floating point value between -1 and +1.
```

Initial X=0 Initial Y=0.5 A=0.008 B=-0.7

Initial X=-0.723135391715914 Initial Y=-0.327585775405169 A=0.79253300698474 B=0.345703079365194

Initial X=-0.312847771216184 Initial Y=-0.710899183526635 A=0.579161538276821 B=-0.820410779677331

Initial X=-0.325819793157279 Initial Y=0.48573582014069 A=0.062683217227459 B=-0.436713613104075

Initial X=0.78662442881614 Initial Y=0.919355855789036 A=0.900278024375439 B=0.661233567167073

__Hopalong Attractor__

The Hopalong attractor was discovered by Barry Martin.

```
x and y both start at 0
xnew=y-1-sqrt(abs(b*x-1-c))*sign(x-1)
ynew=a-x-1
The parameters a, b and c can be any floating point value between 0 and +10.
```

A=7.16878197155893 B=8.43659746693447 C=2.55983412731439

A=7.7867514709942 B=0.132189802825451 C=8.14610984409228

A=9.74546888144687 B=1.56320227775723 C=7.86818214459345

A=9.8724800767377 B=8.66862616268918 C=8.66950439289212

A=9.7671244922094 B=4.10973468795419 C=3.78332691499963

__Jason Rampe 1__

A variation I discovered while trying random formula changes.

```
x and y both start at 0.1
xnew=cos(y*b)+c*sin(x*b)
ynew=cos(x*a)+d*sin(y*a)
Variables a, b, c and d are floating point values between -3 and +3
```

A=2.6 B=-2.5995 C=-2.9007 D=0.3565

A=1.8285 B=-1.8539 C=0.3816 D=1.9765

A=2.5425 B=2.8358 C=-0.8721 D=2.7044

A=-1.8669 B=1.2768 C=-2.9296 D=-0.4121

A=-2.7918 B=2.1196 C=1.0284 D=0.1384

__Jason Rampe 2__

Another variation I discovered while trying random formula changes.

```
x and y both start at 0.1
xnew=cos(y*b)+c*cos(x*b)
ynew=cos(x*a)+d*cos(y*a)
Variables a, b, c and d are floating point values between -3 and +3
```

A=1.546 B=1.929 C=1.09 D=1.41

A=2.907 B=-1.9472 C=1.2833 D=1.3206

A=0.8875 B=0.7821 C=-2.3262 D=1.5379

A=-2.4121 B=-1.0028 C=-2.2386 D=0.274

A=-2.9581 B=0.927 C=2.7842 D=0.6267

__Jason Rampe 3__

Yet another variation I discovered while trying random formula changes.

```
x and y both start at 0.1
xnew=sin(y*b)+c*cos(x*b)
ynew=cos(x*a)+d*sin(y*a)
Variables a, b, c and d are floating point values between -3 and +3
```

A=2.0246 B=-1.323 C=-2.8151 D=0.2277

A=1.4662 B=-2.3632 C=-0.4167 D=2.4162

A=-2.7564 B=-1.8234 C=2.8514 D=-0.8745

A=-2.218 B=1.4318 C=-0.3346 D=2.4993

A=1.2418 B=-2.4174 C=-0.7112 D=-1.9802

__Johnny Svensson Attractor__

See here.

```
x and y both start at 0.1
xnew=d*sin(x*a)-sin(y*b)
ynew=c*cos(x*a)+cos(y*b)
Variables a, b, c and d are floating point values between -3 and +3
```

A=1.40 B=1.56 C=1.40 D=-6.56

A=-2.538 B=1.362 C=1.315 D=0.513

A=1.913 B=2.796 C=1.468 D=1.01

A=-2.337 B=-2.337 C=0.533 D=1.378

A=-2.722 B=2.574 C=1.284 D=1.043

__Peter DeJong Attractor__

See here.

```
x and y both start at 0.1
xnew=sin(y*a)-cos(x*b)
ynew=sin(x*c)-cos(y*d)
Variables a, b, c and d are floating point values between -3 and +3
```

A=0.970 B=-1.899 C=1.381 D=-1.506

A=1.4 B=-2.3 C=2.4 D=-2.1

A=2.01 B=-2.53 C=1.61 D=-0.33

A=-2.7 B=-0.09 C=-0.86 D=-2.2

A=-0.827 B=-1.637 C=1.659 D=-0.943

A=-2 B=-2 C=-1.2 D=2

A=-0.709 B=1.638 C=0.452 D=1.740

__Symmetric Icon Attractor__

These attractors came from the book “Symmetry in Chaos” by Michael Field and Martin Golubitsky. The attractors they form are symmetric.

```
x and y both start at 0.01
zzbar=sqr(x)+sqr(y)
p=alpha*zzbar+lambda
zreal=x
zimag=y
for i=1 to degree-2 do
begin
za=zreal*x-zimag*y
zb=zimag*x+zreal*y
zreal=za
zimag=zb
end
zn=x*zreal-y*zimag
p=p+beta*zn
xnew=p*x+gamma*zreal-omega*y
ynew=p*y-gamma*zimag+omega*x
x=xnew
y=ynew
The Lambda, Alpha, Beta, Gamma, Omega and Degree parameters can be changed to create new plot shapes.
```
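The pseudocode above translates almost line for line into Python. This is my own sketch of a single iteration of the map:

```python
def symmetric_icon_step(x, y, lam, alpha, beta, gamma, omega, degree):
    # one iteration of the symmetric icon map from "Symmetry in Chaos"
    zzbar = x * x + y * y
    p = alpha * zzbar + lam
    zreal, zimag = x, y
    # build z^(degree-1) by repeated complex multiplication
    for _ in range(degree - 2):
        zreal, zimag = zreal * x - zimag * y, zimag * x + zreal * y
    zn = x * zreal - y * zimag
    p = p + beta * zn
    xnew = p * x + gamma * zreal - omega * y
    ynew = p * y - gamma * zimag + omega * x
    return xnew, ynew
```

Iterating from (0.01, 0.01) and plotting each (x, y) reproduces the symmetric shapes; the Degree parameter sets the rotational symmetry.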

These sample images all come from parameters in the “Symmetry in Chaos” book.

L=-2.5 A=5 B=-1.9 G=1 O=0.188 D=5

L=1.56 A=-1 B=0.1 G=-0.82 O=0.12 D=3

L=-1.806 A=1.806 B=0 G=1 O=0 D=5

L=-2.195 A=10 B=-12 G=1 O=0 D=3

L=2.5 A=-2.5 B=0 G=0.9 O=0 D=3

L=-2.05 A=3 B=-16.79 G=1 O=0 D=9

L=-2.7 A=5 B=1.5 G=1.0 O=0 D=6

L=2.409 A=-2.5 B=0 G=0.9 O=0 D=23

L=-2.08 A=1 B=-0.1 G=0.167 O=0 D=7

L=-2.32 A=2.32 B=0 G=0.75 O=0 D=5

L=2.6 A=-2 B=0 G=-0.5 O=0 D=5

L=-2.34 A=2 B=0.2 G=0.1 O=0 D=5

L=-1.86 A=2 B=0 G=1 O=0.1 D=4

L=1.56 A=-1 B=0.1 G=-0.82 O=0 D=3

L=1.5 A=-1 B=0.1 G=-0.805 O=0 D=3

L=1.455 A=-1 B=0.03 G=-0.8 O=0 D=3

L=2.39 A=-2.5 B=-0.1 G=0.9 O=-0.15 D=16

__3D Alternatives__

Strange Attractors can also be extended into three dimensions. See here and here for my previous experiments with 3D Strange Attractors.

All of the images in this post were created using Visions of Chaos.

Jason.


Daniel Shiffman has been making YouTube movies for some time now. His videos focus on programming and include coding challenges in which he writes code for a target idea from scratch. If you are a coder I recommend Dan’s videos for entertainment and inspiration.

His latest live stream focused on Fractal Spirographs.

If you prefer to watch a shorter edited version, here it is.

He was inspired by the following image from the Benice Equation blog.

Fractal Spirographs (aka Fractal Roulette) are generated by tracking a series (or chain) of circles rotating around each other as shown in the above gif animation. You track the chain of 10 or so circles and plot the path the final smallest circle takes. Changing the number of circles, the size ratio between circles, the speed of angle change, and the constant “k” changes the resulting plots and images.

__How I Coded It__

As I watched Daniel’s video I coded my own version. For my code (Delphi/pascal) I used a dynamic array of records to hold the details of each circle/orbit. This seemed the simplest approach to me for keeping track of a list of the linked circles.

```
type orbit=record
x,y:double; //center of this circle
radius:double;
angle:double; //current angle in degrees
speed:double; //degrees added to angle each step
end;
```

Before the main loop you fill the array;

```
//parent orbit
orbits[0].x:=destimage.width/2;
orbits[0].y:=min(destimage.width,destimage.height)/2;
orbits[0].radius:=orbits[0].y/2.5;
orbits[0].angle:=0;
orbits[0].speed:=0;
rsum:=orbits[0].radius;
//children orbits
for loop:=1 to numorbits-1 do
begin
newr:=orbits[loop-1].radius/orbitsizeratio;
newx:=orbits[loop-1].x+orbits[loop-1].radius+newr;
newy:=orbits[loop-1].y;
orbits[loop].x:=newx;
orbits[loop].y:=newy;
orbits[loop].radius:=newr;
orbits[loop].angle:=orbits[loop-1].angle;
orbits[loop].speed:=power(k,loop-1)/sqr(k*k);
end;
```

Then inside the main loop, you update the orbits;

```
//update orbits
for loop:=1 to numorbits-1 do
begin
orbits[loop].angle:=orbits[loop].angle+orbits[loop].speed;
rsum:=orbits[loop-1].radius+orbits[loop].radius;
orbits[loop].x:=orbits[loop-1].x+rsum*cos(orbits[loop].angle*pi/180);
orbits[loop].y:=orbits[loop-1].y+rsum*sin(orbits[loop].angle*pi/180);
end;
```

and then you use the last orbit positions to plot the line, ie

```
canvas.lineto(round(orbits[numorbits-1].x),round(orbits[numorbits-1].y));
```
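Putting the pieces together, here is a Python sketch of the same chain-of-circles logic (display code omitted; the 2.5 divisor and the speed formula mirror the Delphi above, while the parameter defaults are illustrative):

```python
import math

def spirograph_path(numorbits=10, sizeratio=3.0, k=4.0, steps=2000,
                    width=1000, height=1000):
    # build the chain of circles, largest first
    orbits = [{"x": width / 2, "y": min(width, height) / 2,
               "radius": min(width, height) / 2 / 2.5,
               "angle": 0.0, "speed": 0.0}]
    for i in range(1, numorbits):
        parent = orbits[-1]
        r = parent["radius"] / sizeratio
        orbits.append({"x": parent["x"] + parent["radius"] + r,
                       "y": parent["y"], "radius": r, "angle": 0.0,
                       "speed": k ** (i - 1) / (k * k) ** 2})
    # update the chain each step and record the smallest circle's centre
    path = []
    for _ in range(steps):
        for i in range(1, numorbits):
            o, p = orbits[i], orbits[i - 1]
            o["angle"] += o["speed"]
            rsum = p["radius"] + o["radius"]
            o["x"] = p["x"] + rsum * math.cos(math.radians(o["angle"]))
            o["y"] = p["y"] + rsum * math.sin(math.radians(o["angle"]))
        path.append((orbits[-1]["x"], orbits[-1]["y"]))
    return path, orbits
```

Drawing line segments between consecutive points in `path` gives the spirograph curve.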

__Results__

Once the code was working I rendered the following images and movie. They are all 4K resolution to see the details. Click the images to see them full size.

Here is a 4K movie showing how these curves are built up.

Fractal Spirographs are now included with the latest version of Visions of Chaos.

Finally, here is an 8K Fulldome 8192×8192 pixel resolution image. It must be seen full size to see the finely detailed plot line.

__To Do__

Experiment with more changes in the circle sizes. The original blog post links to another 4 posts here, here, here and here, and even this sumo wrestler.

Plenty of inspiration for future enhancements.

I have already experimented with 3D Spirographs in the past, but those used spheres rotating within other spheres. Plotting spheres rotating around the outside of other spheres should give more new unique results.

Jason.


The basic Mandelbrot Fractal formula is z=z^2+c. The Burning Ship Fractal formula is z=abs(z)^2+c, where abs takes the absolute value of the real and imaginary parts of z separately before the result is squared.
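A minimal escape-time sketch of the iteration (the bailout and iteration limit here are arbitrary choices; the real renders use CPM or TIA coloring rather than a raw iteration count):

```python
def burning_ship(cr, ci, power=2, max_iter=100, bailout=4.0):
    # escape-time iteration count for one point c = cr + ci*i
    zr, zi = 0.0, 0.0
    for i in range(max_iter):
        # take abs of each component, then raise to the power and add c
        z = complex(abs(zr), abs(zi)) ** power + complex(cr, ci)
        zr, zi = z.real, z.imag
        if zr * zr + zi * zi > bailout:
            return i    # escaped after i iterations
    return max_iter     # assumed inside the set
```

Changing `power` from 2 up to 5 gives the higher power variations shown below.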

The following image is the standard power 2 Burning Ship Fractal rendered using CPM smooth coloring.

Zooming in to the right *antenna* part of the fractal shows why it was named the Burning Ship.

The next 3 images change the exponent 2 in z=abs(z)^2+c to 3, 4 and 5.

The same power 2 through power 5 Burning Ships, but this time using Triangle Inequality Average (TIA) coloring.

The next 4K resolution movie shows a series of zooms into Burning Ship Fractals between power 2 and power 5, colored using CPM coloring,

and finally another 4K movie shows more Burning Ship zooms, colored using TIA coloring.

All of the above images and movies were created with Visions of Chaos.

Jason.


The usual Mandelbrot formula is

z=z*z+c

Taking the z*z+c part, replace the z’s with (z*z+c) and replace the c’s with (c*c+z).

After one level of replacement you get

((z*z+c)*(z*z+c)+(c*c+z))

Level 2 is

(((z*z+c)*(z*z+c)+(c*c+z)) * ((z*z+c)*(z*z+c)+(c*c+z)) + ((c*c+z)*(c*c+z)+(z*z+c)))

and Level 3 is

((((z*z+c)*(z*z+c)+(c*c+z))*((z*z+c)*(z*z+c)+(c*c+z))+((c*c+z)*(c*c+z)+(z*z+c)))*(((z*z+c)*(z*z+c)+(c*c+z))*((z*z+c)*(z*z+c)+(c*c+z))+((c*c+z)*(c*c+z)+(z*z+c)))+(((c*c+z)*(c*c+z)+(z*z+c))*((c*c+z)*(c*c+z)+(z*z+c))+((z*z+c)*(z*z+c)+(c*c+z))))

Then you use the level 3 formula and render it as a Julia Set.
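The substitution itself can be automated with a simultaneous text replacement. This small Python helper (my own, not from the forum post) reproduces the expansions above:

```python
def substitute(expr):
    # simultaneously replace every z with (z*z+c) and every c with (c*c+z)
    pieces = {"z": "(z*z+c)", "c": "(c*c+z)"}
    return "".join(pieces.get(ch, ch) for ch in expr)

expr = "z*z+c"
for level in range(1, 4):
    expr = substitute(expr)
    print("Level", level, ":", "(" + expr + ")")
```

The replacement must be simultaneous (character by character); replacing z first and then c in two passes would corrupt the z’s inside the newly inserted (c*c+z) terms.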

Complex C (-0.2,0.0)

Complex C (-0.14 0.0)

Complex C (-0.141 0.0)

The following movie shows the complex C changing slowly from 0 to -0.2 and three zooms into Meta-Mandelbrots. Unfortunately, because these are Julia sets, the shapes deeper in are virtually identical to the original fractal. You don’t get the totally different looking areas that you do with Mandelbrot fractals.

For more information see the original Fractal Forums post here.

The GLSL shader to generate these fractals is now included with Visions of Chaos.


The other day I saw this YouTube video of a Belousov-Zhabotinsky Cellular Automaton (BZ CA) by *John BitSavage*.

After running in an oscillating state for a while it begins to grow cell-like structures. I had never seen this in BZ CAs before, though I have seen similar cell-like growths in Digital Inkblot CAs and in the Yin Yang Fire CA. Seeing John’s results differ from the usual BZ CA is what got me back into researching BZ in more depth.

__The Belousov-Zhabotinsky Reaction__

Belousov-Zhabotinsky Reactions (see here for more info) are examples of chemical reactions that can oscillate between two different states and form interesting patterns when performed in shallow petri dishes.

Here are some sample high res images of real BZ reactions by Stephen Morris. Click for full size.

and some other images from around the Internet

and some sample movies I found on YouTube

__The Hodgepodge Machine Cellular Automaton__

Back in August 1988, Scientific American‘s Computer Recreations section had an article by A. K. Dewdney named “The hodgepodge machine makes waves”. After a fair bit of hunting around I could not find any copies of the article online so I ended up paying $8 USD to get the issue in PDF format. The PDF is a high quality version of the issue, but $8 is still a rip off.

In the article Dewdney describes the “hodgepodge machine” cellular automaton designed by Martin Gerhardt and Heike Schuster of the University of Bielefeld in West Germany. A copy of their original paper can be seen here.

__How the Hodgepodge Machine works__

The individual cells/pixels in the hodgepodge automaton have n+1 states (between 0 and n). Cells at state 0 are considered “healthy” and cells at the maximum state n are said to be “ill”. All cells with states in between 0 and n are “infected”, with larger states representing greater levels of infection.

Each cycle of the cellular automaton, a series of rules is applied to each cell depending on its state.

(a) If the cell is healthy (i.e., in state 0) then its new state is [a/k1] + [b/k2], where a is the number of infected cells among its eight neighbors, b is the number of ill cells among its neighbors, and k1 and k2 are constants. Here “[]” means the integer part of the number enclosed, so that, for example, [7/3] = [2+1/3] = 2.

(b) If the cell is ill (i.e., in state n) then it miraculously becomes healthy (i.e., its state becomes 0).

(c) If the cell is infected (i.e., in a state other than 0 and n) then its new state is [s/(a+b+1)] + g, where a and b are as above, s is the sum of the states of the cell and of its neighbors and g is a constant.

The parameters given for these CA are usually q (the maximum state), k1 and k2 (the constants above) and g (a constant controlling how quickly the infection tends to spread).
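The three rules can be sketched directly in Python. This is a straightforward, unoptimized version of my own with a toroidal (wraparound) Moore neighborhood:

```python
def hodgepodge_step(cells, q, k1, k2, g):
    # one generation; cells is a list of rows of integer states 0..q
    h, w = len(cells), len(cells[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = b = s = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    v = cells[(y + dy) % h][(x + dx) % w]
                    s += v
                    if v == q:
                        b += 1          # ill neighbour
                    elif v > 0:
                        a += 1          # infected neighbour
            state = cells[y][x]
            if state == 0:              # rule (a): healthy
                new[y][x] = min(q, a // k1 + b // k2)
            elif state == q:            # rule (b): ill becomes healthy
                new[y][x] = 0
            else:                       # rule (c): infected worsens
                new[y][x] = min(q, (s + state) // (a + b + 1) + g)
    return new
```

Note that s in rule (c) is the sum of the cell’s own state and its neighbours’ states, and the integer divisions are the “[]” truncation from the article.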

__My previous limited history experimenting with BZ__

Years ago I implemented BZ CA in Visions of Chaos (I have now correctly renamed the mode Hodgepodge Machine) and got the following result. This resolution used to be considered the norm for YouTube and looked OK on lower resolution screens. How times have changed.

The above run used these parameters

q=200

k1=3

k2=3

g=28

__Replicating Gerhardt and Schuster’s results__

Gerhardt and Schuster used fixed values of k1=2 and k2=3. The majority of their experiments used a grid size of q=20 (ie only 20×20 cells) without a wraparound toroidal world. This leaves the single infection-spreading variable g to play with. Their paper states they used values of g between 1 and 10, but I get no spirals with g in that range.

Here are a few samples which are 512×512 sized grids with wraparound edges and many thousands of generations to be sure they had finally settled down. Each cell is 2×2 pixels in size so they are 1024×1024 images.

q=100, k1=2, k2=3, g=5

q=100, k1=2, k2=3, g=20

q=100, k1=2, k2=3, g=25

q=100, k1=2, k2=3, g=30

__Results from other parameters__

q=100, k1=3, k2=3, g=10

q=100, k1=3, k2=3, g=15

q=100, k1=3, k2=3, g=20

__Extending into 3D__

The next logical step was extending it into three dimensions. This blog post from Rudy Rucker shows a 3D BZ CA from Harry Fu back in 2004 for his Master’s degree writing project. I must be a nerd as I whipped up my 3D version over two afternoons. Surprisingly there are no other references to experiments with 3D Hodgepodge that I can find.

The algorithms are almost identical to their 2D counterparts. The Moore neighborhood is extended into three dimensions (so 26 neighbors rather than 8 in the 2D version). It is difficult to see the internal structures as they are hidden from view. Methods I have used to try and see more of the internals are to slice out 1/8th of the cubes and to render only some of the states.

Clicking these sample images will show them in 4K resolution.

q=100, k1=1, k2=18, g=43 (150x150x150 grid)

q=100, k1=1, k2=18, g=43 (150x150x150 grid – same as previous with a 1/8th slice out to see the same patterns are extending through the 3D structure)

q=100, k1=1, k2=18, g=43 (150x150x150 grid – same rules again, but this time with only state 0 to state 50 cells being shown)

q=100, k1=2, k2=3, g=15 (150x150x150 grid)

q=100, k1=3, k2=6, g=31 (150x150x150 grid)

q=100, k1=4, k2=6, g=10 (150x150x150 grid)

q=100, k1=4, k2=6, g=10 (150x150x150 grid – same rules as the previous image – without the 1/8th slice – with only states 70 to 100 visible)

q=100, k1=3, k2=31, g=43 (250x250x250 sized grid – 15,625,000 total cells)

q=100, k1=4, k2=12, g=34 (350x350x350 sized grid – 42,875,000 total cells)

q=100, k1=1, k2=9, g=36 (400x400x400 sized grid – 64,000,000 total cells)

Download Visions of Chaos if you would like to experiment with both 2D and 3D Hodgepodge Machine cellular automata. If you find any interesting rules please let me know in the comments or via email.


__Generating the initial terrain__

There are many ways to generate a terrain height array. For the terrain in this post I am using Perlin noise.

This is the 2D Perlin Noise image…

…that is extruded to the following 3D terrain…

An alternative method is to use 1/f Perlin Noise that creates this type of heightmap…

…and this 3D terrain.

__Simulating erosion__

Rather than try and replicate some of the much more complex simulators out there for wind and rain erosion (see for example here, here and here) I experimented with the simplest version I could come up with.

1. Take a single virtual rain drop and drop it to a random location on the terrain grid. Keep track of a totalsoil amount which starts at 0 when the drop is first dropped onto the terrain.

2. Look at its immediate 8 neighbors and find the lowest neighbor.

3. If no neighbors are lower, deposit the remaining soil carried and stop. Originally I deposited all of the remaining totalsoil here, which led to large spikes because the totalsoil was too much, so I changed the amount dropped to the same fixed depositrate. Technically this removes soil from the system, but the results are more realistic looking terrain.

4. Pick up a bit of the soil from the current spot (lower the terrain array at this point).

```
//slope is the height difference between the current cell and its lowest neighbor
soilamount:=slope*erosionrate;
totalsoil:=totalsoil+soilamount;
heightarray[wx,wy]:=max(heightarray[wx,wy]-soilamount,0);
```

5. Move to the lowest neighbor point.

6. Deposit a bit of the carried soil at this location.

```
deposit:=soilamount*depositrate/slope;
heightarray[lx,ly]:=heightarray[lx,ly]+deposit;
totalsoil:=max(totalsoil-deposit,0);
```

7. Goto 1.

Repeat this for millions of drops.

The erosion and deposit steps (4 and 6 above) simulate the water flowing down hill, picking up and depositing soil as it goes.
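The whole droplet loop can be sketched in Python as follows. This is my own translation of the steps above; the path-length cap is an addition of mine to guarantee termination.

```python
import random

def rain_drop(height, erosionrate=0.1, depositrate=0.1, max_steps=1000):
    # drop one virtual raindrop at a random spot and walk it downhill
    h, w = len(height), len(height[0])
    wx, wy = random.randrange(w), random.randrange(h)
    totalsoil = 0.0
    for _ in range(max_steps):
        # find the lowest of the 8 neighbours
        lx, ly, lowest = wx, wy, height[wy][wx]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = wx + dx, wy + dy
                if (dx or dy) and 0 <= nx < w and 0 <= ny < h \
                        and height[ny][nx] < lowest:
                    lx, ly, lowest = nx, ny, height[ny][nx]
        slope = height[wy][wx] - lowest
        if slope <= 0:
            # no lower neighbour: drop a fixed amount and stop
            height[wy][wx] += min(totalsoil, depositrate)
            return
        # pick up soil proportional to the slope
        soilamount = slope * erosionrate
        totalsoil += soilamount
        height[wy][wx] = max(height[wy][wx] - soilamount, 0.0)
        # move downhill and deposit a bit of the carried soil
        deposit = soilamount * depositrate / slope
        height[ly][lx] += deposit
        totalsoil = max(totalsoil - deposit, 0.0)
        wx, wy = lx, ly
```

Calling this millions of times on the height array carves the drainage channels seen in the images.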

To add some wind erosion you can smooth the height array every few thousand rain drops. I use a simple convolution blur every 5000 rain drops. This smooths the terrain out a little bit and can be thought of as wind smoothing the jagged edges of the terrain down a bit.

__Erosion Movie__

Here is a sample movie of the erosion process. This is a 13,500 frame 4K resolution movie. Each frame covers 10,000 virtual raindrops being dropped. This took days to render. 99% of the time is rendering the mesh with OpenGL. Simulating the raindrops is very fast.

and here are screenshots of every 2000th frame to show the erosion details more clearly.

__Future ideas__

The above is really just a quick experiment and I would like to spend more time on the more serious simulations of terrain generation and erosion. I have the book Texturing and Modeling, A Procedural Approach on my bookshelf and it contains many other terrain ideas.

Jason.


I spent a few hours getting my own version going and doing some high res test runs. The example images in this post are all 4K size. Click the thumbnails to see them full size.

The basic setup includes four “factions” that use the same settings, ie

This sequence started from a block of 4 squares of factions in the middle of the screen and grew outwards from there. None of the cells die once born.

And finally, after approximately 27,000 steps, my hard drive filled up and the program crashed when it could not save any more frames.

Here is the same setup settings started from a random soup of factions with single pixel sized cells after running for a few thousand steps. There is some clumping of factions.

There also seems to be a bias toward later factions in the method used in the source code. The factions are processed in order (first faction 1, then 2, etc). This is OK while each faction is surrounded by empty space, but breaks down once factions meet. For example, faction 1 may fight and take over a faction 4 location, but when the faction 4 cells are processed the same cell may be taken back, negating the original win. Overall this gives the later factions a slight bias to spread faster and further than earlier factions. Or maybe it is just my version of the source code. I need to think about this some more and make sure the moving/fighting is fair.

Here are a few other color methods I experimented with. Firstly, averaging each cell’s color over the factions that have visited it. Cells are 10×10 pixels this time.

And the same, but with histogram equalization to get a wider range of colors.

Different faction colors and 5×5 sized cells.

I have added Cell Conquest as a new mode in the latest version of Visions of Chaos.

Jason.


After my original attempt to replicate Jonathan McCabe‘s Coupled Cellular Automata results I was contacted by Ian McDonald with some questions about how my original algorithms worked, which inspired both of us to make another attempt at getting McCabe-like results.

With some back and forth and some hints from Jonathan we were able to get a little further towards replicating his images.

__How the McCabe algorithms work__

Jonathan has provided some hints to how his Coupled CA work in the past.

Firstly from here;

**Each pixel represents the state of the 4 cells of 4 cellular automata, which are cross coupled and have their individual state transition tables. There is a “history” or “memory” of the previous states which is used as an offset into the state transition tables, resulting in update rules which depend on what has happened at that pixel in previous generations. Different regions end up in a particular state or cycle of states, and act very much like immiscible liquids with surface tension.**

and secondly from here;

**The generative system involves four linked cellular automata – think of them as layers. “Linked” because at each time step, a cell’s state depends both on its neighbours in that layer, and on the states of the corresponding cells in the other three layers. Something like a three-dimensional CA, but not quite; the four layers influence each other through a weighted network of sixteen connections (a bit like a neural net). The pixels in the output image use three of the four CA layers for their red, green and blue values.**

**As in a conventional CA, each cell looks to its neighbours to determine its future state. This is a “totalistic” CA, which means each simply sums the values of its neighbours, then changes its state based on a table of transition rules. Now for the really good part: each cell also uses its recent history as an “offset” into that transition table. In other words, the past states of a cell transform the rules that cell is using. The result is a riot of feedback cycles operating between state sequences and rulesets; stable or periodically oscillating regions form, bounded by membrane-like surfaces where these feedback cycles compete. Structures even form inside the membranes – rule/state networks that can only exist between other zones. **

After emailing him he did provide a few more more clues to how his Coupled CA work which did help me get to the point I am at now.

How the “inner loop” behaves and how the 4 (or more) CA layers are actually processed and combined is still a mystery, but this is a hint;

**“Cross-Coupled” is a matrix multiply to give 4 numbers from 4 numbers. If the matrix was “1.0” diagnally and “0.0” elsewhere you have zero coupling. If the zeros are replaced with other values you get cross-coupling of various amounts. If the cross coupling is very low you have 4 independant systems, if it is very high you get effectively one system, I think there is a sweet spot in between where they influence each other but don’t disappear into the mix.**

And regarding the history:

**The history or memory is an average of previous states at that cell. You can do it a few ways, say as a decaying average, adding the current state and multiplying by 0.99 or some number, so the memory fades. The memory is added to the index to look up the table, so you actually need a bigger table than 9*256.**
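Read literally, h := (h + c) * 0.99 grows well beyond the byte range, so one common interpretation (mine, not Jonathan's exact code) is an exponential moving average, which keeps the memory bounded in the same range as the cell values while still fading old states.

```python
def update_history(h, c, decay=0.99):
    # Exponential moving average: the old memory fades by `decay` each
    # step while the current cell state contributes the remainder.
    return h * decay + c * (1.0 - decay)

h = 0.0
for _ in range(1000):
    h = update_history(h, 255.0)  # feed a constant bright cell
# h creeps toward 255 but never exceeds the input range
```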

__How my version works__

Here are the basic steps to how I created the images within this post. Note that this is not how the McCabe CA works, but does show some similarities visually to his.

The CA is based on a series of CA layers. Each layer/array contains byte values (0-255) and is initialised with random noise. You need a minimum of 4 layers, but any number of layers can be used.

You also need a lookup table for each layer. The lookup tables are 1D arrays containing byte values (0-255), each with 256*10 entries. The arrays are filled by combining multiple sine waves of different frequencies and amplitudes, which are then normalised into the 0-255 range. You want smallish changes between neighbouring lookup table entries.

The lookup table has 256*10 entries because in the McCabe CA the current cell, its 8 neighbours and the history of that cell are used to index it. That gives 10 values between 0 and 255, which are totalled to form the index. For the method I discuss in this blog post you really only need a lookup table with 256 entries, as the index maths never goes beyond 0 to 255, BUT I still use the larger sized array to stretch the sine waves further/more smoothly across the tables. Note you can also change the “*10” size to a larger number to help smooth out the resulting CA displays even more. Experiment with different values.
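A lookup table along those lines might be built like this. This is a sketch: the wave count and the frequency/amplitude ranges are my guesses, not values from the original.

```python
import math
import random

random.seed(2)

TABLE_SIZE = 256 * 10  # oversized so the sine waves stretch smoothly

def make_lookup_table(num_waves=5):
    # Random frequency, amplitude and phase per wave (hypothetical ranges).
    waves = [(random.uniform(0.5, 4.0),
              random.uniform(0.2, 1.0),
              random.uniform(0.0, 2 * math.pi))
             for _ in range(num_waves)]
    raw = [sum(a * math.sin(f * 2 * math.pi * i / TABLE_SIZE + p)
               for f, a, p in waves)
           for i in range(TABLE_SIZE)]
    lo, hi = min(raw), max(raw)
    # Normalise the summed waves into the 0..255 byte range.
    return [round(255 * (v - lo) / (hi - lo)) for v in raw]

table = make_lookup_table()
```

Low frequencies keep the changes between neighbouring entries small, which is what gives the smooth output.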

Now for the main loop.

1. Blur each layer array. Gaussian blur can be used but tends to be slow. Using a faster 2 pass blur as described here works just as well for this purpose. Make sure the blur radius differs by a decent amount between layers, for example 1, 10, 50, 100. These large differences in blur amounts are what give the resulting images their detail-at-many-scales appearance.
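A minimal version of a 2 pass blur (horizontal then vertical box average, edges clamped) looks like this; run it once per layer with that layer's radius, e.g. 1, 10, 50, 100.

```python
def box_blur(grid, radius):
    """Two-pass blur: horizontal box average, then vertical."""
    h, w = len(grid), len(grid[0])
    tmp = [[0.0] * w for _ in range(h)]
    out = [[0.0] * w for _ in range(h)]
    # Pass 1: average each row window, clamping at the edges.
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - radius), min(w - 1, x + radius)
            tmp[y][x] = sum(grid[y][lo:hi + 1]) / (hi - lo + 1)
    # Pass 2: average each column window of the intermediate result.
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, y - radius), min(h - 1, y + radius)
            out[y][x] = sum(tmp[i][x] for i in range(lo, hi + 1)) / (hi - lo + 1)
    return out
```

A real implementation would use a running sum per window to avoid re-summing, but this shows the structure.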

2. Process the layers. This is the *secret* step that Jonathan has not yet released. I tried all sorts of random combinations and equations until I found the following. This is what I use in the images for this post.

`for i:=0 to numlayers-1 do t[i,x,y]:=c[i,x,y]+lookuptable[i,trunc(abs(h[(i+1) mod numlayers,x,y]-c[(i+2) mod numlayers,x,y]))];`

That allows for any number of layers, but to show the combinations more simply, here it is unrolled for 4 layers

```
t[0,x,y]:=c[0,x,y]+lookuptable[0,trunc(abs(h[1,x,y]-c[2,x,y]))];
t[1,x,y]:=c[1,x,y]+lookuptable[1,trunc(abs(h[2,x,y]-c[3,x,y]))];
t[2,x,y]:=c[2,x,y]+lookuptable[2,trunc(abs(h[3,x,y]-c[0,x,y]))];
t[3,x,y]:=c[3,x,y]+lookuptable[3,trunc(abs(h[0,x,y]-c[1,x,y]))];
```

t is a temp array. As in all CAs, you put the results into a temp array until the whole grid is processed, and then copy t back to c

c is the current CA layers array

lookuptable is the array created earlier with the combined sine waves

h is the history array, ie what the c array was last cycle/step
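For readers who don't use Pascal, the inner loop above translates into a Python sketch as follows. This is my translation: I have added a mod 256 to keep results in byte range, where the original may instead rely on Delphi byte wrap-around or clamping.

```python
def update_cell(c, h, lookuptable, x, y, num_layers=4):
    """One cell of the inner loop. c = current layers, h = previous
    step's layers, lookuptable = the per-layer sine-wave tables."""
    t = []
    for i in range(num_layers):
        # Difference between the *next* layer's history and the
        # layer-after-next's current value drives the table lookup.
        diff = abs(h[(i + 1) % num_layers][y][x] - c[(i + 2) % num_layers][y][x])
        # Assumption: byte addition wraps mod 256; clamp if preferred.
        t.append((c[i][y][x] + lookuptable[i][diff]) % 256)
    return t
```

Each layer's new value depends on two other layers, which is what couples the whole stack together.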

An important step here is to blur the arrays again once they have been processed. Using just a radius 1 Gaussian is enough. This helps cut down the noise and speckled nature of the output.

3. Display. So now you have 4 byte values (one for each layer) for each pixel. How can you convert these into a color to display? So far I have used these methods:

Color palette – Average the 4 values and use them as an index into a 256 color palette.

RGB – Use the first 3 layers as RGB components.

YUV – Use the first 3 layers as YUV components and then convert into RGB.

Matrix coloring – This deserves a larger section below.
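The RGB and YUV modes can be sketched as below. The YUV conversion here uses the standard BT.601 coefficients (chroma centred on 128), which may differ slightly from what Visions of Chaos actually uses.

```python
def rgb_display(layers, x, y):
    """First three layers straight into R, G, B."""
    return layers[0][y][x], layers[1][y][x], layers[2][y][x]

def yuv_to_rgb(y, u, v):
    """BT.601 YUV bytes (U, V centred on 128) to clamped RGB bytes."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return tuple(max(0, min(255, round(ch))) for ch in (r, g, b))
```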

__Matrix coloring__

If you use just RGB or YUV conversions the colors tend to be a bit bland and unexciting. When I asked Jonathan how he got the amazing colors in his images he explained that the layers can be multiplied by a matrix. Now, in the past I had never really needed to use matrices except in some OpenGL code (and even then OpenGL handles most of it for you) but a quick read of some online tutorials and I was good to go.

You take the current cell values for 3 layers and multiply them by a 3×3 matrix

```
[m2 m2 m2]
[m1 m1 m1] x [m2 m2 m2] = [m3 m3 m3]
[m2 m2 m2]
```

or 4 layers multiplied by a 4×4 matrix

```
[m2 m2 m2 m2]
[m2 m2 m2 m2]
[m1 m1 m1 m1] x [m2 m2 m2 m2] = [m3 m3 m3 m3]
[m2 m2 m2 m2]
```

In both cases the m1 array variables are the CA layer values for each of the X and Y pixel locations. I use the first 3 layers, last 3 layers or first layer and last 2 layers. If you use a 4×4 matrix use the first 4 etc. Multiply them by the m2 matrix and get the m3 results.

You can use the m3 results as RGB components to color the pixels (after multiplication the RGB values will be outside the normal 0 to 255 range so they will need to be normalised/scaled back into the 0 to 255 range). For the 4×4 matrix you can use the 4th returned value for something too. I use it for a contrast setting but it doesn’t make that much of a difference.
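The whole matrix-coloring pass can be sketched like this: multiply each pixel's three layer values by the matrix, then rescale each result channel over the frame using its observed minimum and maximum. The pixel data here is random placeholder noise, not real CA output.

```python
import random

random.seed(3)

def matrix_color(vec, m):
    """Row vector of 3 layer values times a 3x3 matrix -> 3 raw values."""
    return [sum(vec[j] * m[j][i] for j in range(3)) for i in range(3)]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Random entries between -10 and +10, as used for the images in the post.
m = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(3)]

# Placeholder "pixels": 500 triples of layer bytes.
pixels = [[random.randrange(256) for _ in range(3)] for _ in range(500)]
raw = [matrix_color(p, m) for p in pixels]

# The raw results fall outside 0..255, so normalise each channel
# back into byte range using the frame's min and max.
channels = []
for ch in range(3):
    lo = min(r[ch] for r in raw)
    hi = max(r[ch] for r in raw)
    channels.append([round(255 * (r[ch] - lo) / (hi - lo)) for r in raw])
```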

If the matrix you are multiplying is the identity matrix (ie all 0s with diagonal 1s) the m3 result is the same as the m1 input. If you change the 0s slightly you get a slight change. For the images in this post I have used random matrix values anywhere from -10 to +10 (with and without a forced diagonal of 1s).

Using matrix multiplication to get the wider range of colors is really awesome. I had never seen this method used before and am so grateful for the hint from Jonathan. I am sure it will come in handy in many other areas when I need to convert a set of values to colors.

Another great tip from Jonathan is to do histogram equalisation on the bitmap before display. This acts like an auto-contrast in Photoshop and can really make blander images pop.
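Histogram equalisation for one byte channel can be sketched as below. This is a textbook version, not necessarily the exact variant Jonathan or Visions of Chaos uses.

```python
def equalise(values):
    """Histogram equalisation for a flat list of byte values (0..255)."""
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    # Cumulative distribution of the histogram.
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(values)
    if n == cdf_min:
        return values[:]  # flat image, nothing to stretch
    # Remap so the occupied part of the range stretches to fill 0..255.
    lut = [round(255 * (c - cdf_min) / (n - cdf_min)) for c in cdf]
    return [lut[v] for v in values]
```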

Another thing to try is hue shifting the image. Convert each pixel’s RGB values to HSL, shift the H component and then convert back to RGB.
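A hue-shift sketch using Python's standard `colorsys` module (the post describes HSL; `colorsys` calls the same model HLS):

```python
import colorsys

def hue_shift(r, g, b, shift):
    """Shift a pixel's hue by `shift` (1.0 = once around the colour wheel)."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    h = (h + shift) % 1.0  # rotate the hue, wrapping at 1.0
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```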

Once you put all those options together you might get a dialog with a bunch of settings that looks like the following screen shot.

__Problem__

There is one major issue with this method though. You cannot create a nice smooth evolving animation this way. The cell values fluctuate way too much between steps, causing the movie to flicker wildly. To help reduce flicker you can render only every second frame, but the movie still has wild areas. Jonathan confirmed his method has the same display results. Coupled Cellular Automata using the method described in this post are best used for single frame images and not movies.

__Help me__

If you are a fellow cellular automata enthusiast and have a play with this method, let me know of any advancements you make. Specifically, the main area that can be tweaked is the inner loop algorithm of how the layers are combined and processed. I am *really* interested in any variations that people come up with.

__Explore it yourself__

If you are not a coder and want to experiment with Coupled Cellular Automata they are now included in Visions of Chaos.

To see my complete gallery of Coupled Cellular Automata images click here.

Jason.
