The basic Mandelbrot Fractal formula is z=z^2+c. The Burning Ship Fractal formula is z=abs(z)^2+c, where abs(z) means taking the absolute value of the real and imaginary parts of z separately before squaring.
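As a rough illustration, the escape-time iteration for the Burning Ship can be sketched as follows (a minimal Python sketch, not the CPM/TIA colored renderer used for the images below; the function and parameter names are mine):

```python
def burning_ship(cx, cy, power=2, max_iter=256, bailout=4.0):
    """Escape-time iteration count for the Burning Ship fractal at point c."""
    zx, zy = 0.0, 0.0
    for i in range(max_iter):
        # abs() is applied to the real and imaginary parts separately
        z = complex(abs(zx), abs(zy)) ** power + complex(cx, cy)
        zx, zy = z.real, z.imag
        if zx * zx + zy * zy > bailout:
            return i          # escaped: the iteration count drives the coloring
    return max_iter           # assumed inside the set
```

Changing `power` from 2 to 3, 4 or 5 gives the higher power Burning Ships shown below.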

The following image is the standard power 2 Burning Ship Fractal rendered using CPM smooth coloring.

Zooming in to the right *antenna* part of the fractal shows why it was named the Burning Ship.

The next 3 images change the exponent 2 in z=abs(z)^2+c to 3, 4 and 5.

The same power 2 through power 5 Burning Ships but this time using Triangle Inequality Average (TIA) coloring

The next 4K resolution movie shows a series of zooms into Burning Ship Fractals between power 2 and power 5 colored using CPM coloring

and finally another 4K movie showing more Burning Ship zooms colored using TIA coloring

All of the above images and movies were created with Visions of Chaos.

Jason.

]]>

The usual Mandelbrot formula is

z=z*z+c

Taking the z*z+c part, replace the z’s with (z*z+c) and replace the c’s with (c*c+z)

After one level of replacement you get

((z*z+c)*(z*z+c)+(c*c+z))

Level 2 is

(((z*z+c)*(z*z+c)+(c*c+z)) * ((z*z+c)*(z*z+c)+(c*c+z)) + ((c*c+z)*(c*c+z)+(z*z+c)))

and Level 3 is

((((z*z+c)*(z*z+c)+(c*c+z))*((z*z+c)*(z*z+c)+(c*c+z))+((c*c+z)*(c*c+z)+(z*z+c)))*(((z*z+c)*(z*z+c)+(c*c+z))*((z*z+c)*(z*z+c)+(c*c+z))+((c*c+z)*(c*c+z)+(z*z+c)))+(((c*c+z)*(c*c+z)+(z*z+c))*((c*c+z)*(c*c+z)+(z*z+c))+((z*z+c)*(z*z+c)+(c*c+z))))

Then you use the level 3 formula and render it as a Julia Set.
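The substitution process above can be automated. Here is a small Python sketch (my own naming) that builds the level-n Meta-Mandelbrot formula string by simultaneously replacing z and c:

```python
def meta_formula(level):
    """Build the level-n Meta-Mandelbrot formula string from z*z+c."""
    expr = "z*z+c"
    for _ in range(level):
        # simultaneous substitution: z -> (z*z+c), c -> (c*c+z)
        # (the temporary 'Z' placeholder stops the c substitution's
        # inserted z's from being substituted again this pass)
        expr = (expr.replace("z", "Z")
                    .replace("c", "(c*c+z)")
                    .replace("Z", "(z*z+c)"))
    return "(" + expr + ")"
```

Level 1 reproduces the first replacement shown above, and each further level expands in the same way.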

Complex C (-0.2,0.0)

Complex C (-0.14, 0.0)

Complex C (-0.141, 0.0)

The following movie shows the complex C changing slowly from 0 to -0.2 and three zooms into Meta-Mandelbrots. Unfortunately because these are Julia sets the shapes deeper in are virtually identical to the original fractal. You don’t get those totally different looking areas as you do with Mandelbrot fractals.

For more information see the original Fractal Forums post here.

The GLSL shader to generate these fractals is now included with Visions of Chaos.

]]>

The other day I saw this YouTube video of a Belousov-Zhabotinsky Cellular Automaton (BZ CA) by *John BitSavage*

After a while of running in an oscillating state it begins to grow cell-like structures. I had never seen this in BZ CAs before. I have seen similar cell-like growths in Digital Inkblot CAs and in the Yin Yang Fire CA. Seeing John’s results differ from the usual BZ CA is what got me back into researching BZ in more depth.

__The Belousov-Zhabotinsky Reaction__

Belousov-Zhabotinsky Reactions (see here for more info) are examples of chemical reactions that can oscillate between two different states and form interesting patterns when performed in shallow petri dishes.

Here are some sample high res images of real BZ reactions by Stephen Morris. Click for full size.

and some other images from around the Internet

and some sample movies I found on YouTube

__The Hodgepodge Machine Cellular Automaton__

Back in August 1988, Scientific American‘s Computer Recreations section had an article by A. K. Dewdney named “The hodgepodge machine makes waves”. After a fair bit of hunting around I could not find any copies of the article online so I ended up paying $8 USD to get the issue in PDF format. The PDF is a high quality version of the issue, but $8 is still a rip off.

In the article Dewdney describes the “hodgepodge machine” cellular automaton designed by Martin Gerhardt and Heike Schuster of the University of Bielefeld in West Germany. A copy of their original paper can be seen here.

__How the Hodgepodge Machine works__

The individual cells/pixels in the hodgepodge automaton have n+1 states (between 0 and n). Cells at state 0 are considered “healthy” and cells at the maximum state n are said to be “ill”. All cells with states between 0 and n are “infected”, with larger states representing greater levels of infection.

Each cycle of the cellular automaton, a series of rules is applied to each cell depending on its state.

(a) If the cell is healthy (i.e., in state 0) then its new state is [a/k1] + [b/k2], where a is the number of infected cells among its eight neighbors, b is the number of ill cells among its neighbors, and k1 and k2 are constants. Here “[]” means the integer part of the number enclosed, so that, for example, [7/3] = [2+1/3] = 2.

(b) If the cell is ill (i.e., in state n) then it miraculously becomes healthy (i.e., its state becomes 0).

(c) If the cell is infected (i.e., in a state other than 0 and n) then its new state is [s/(a+b+1)] + g, where a and b are as above, s is the sum of the states of the cell and of its neighbors and g is a constant.

The parameters given for these CA are usually q (the maximum state, called n above), k1 and k2 (the constants above) and g (a constant for how quickly the infection tends to spread).
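Putting rules (a) to (c) together, one cycle of the hodgepodge machine might be sketched like this in Python (plain lists and toroidal edges; clamping infected cells at n is my assumption, and all names are mine):

```python
def hodgepodge_step(grid, n, k1, k2, g):
    """Apply one cycle of the hodgepodge machine rules to a 2D grid of states 0..n."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            state = grid[y][x]
            a = b = 0            # a: infected neighbours, b: ill neighbours
            s = state            # s: sum of the cell and its neighbours
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    v = grid[(y + dy) % h][(x + dx) % w]   # wraparound edges
                    s += v
                    if v == n:
                        b += 1
                    elif v > 0:
                        a += 1
            if state == 0:                      # (a) healthy
                new[y][x] = a // k1 + b // k2
            elif state == n:                    # (b) ill becomes healthy
                new[y][x] = 0
            else:                               # (c) infected
                new[y][x] = min(s // (a + b + 1) + g, n)  # clamp at n (assumption)
    return new
```

Calling this repeatedly on a randomly seeded grid produces the spirals and waves shown below.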

__My previous limited history experimenting with BZ__

Years ago I implemented BZ CA in Visions of Chaos (I have now correctly renamed the mode Hodgepodge Machine) and got the following result. This resolution used to be considered the norm for YouTube and looked OK on lower resolution screens. How times have changed.

The above run used these parameters

q=200

k1=3

k2=3

g=28

__Replicating Gerhardt and Schuster’s results__

Gerhardt and Schuster used fixed values of k1=2 and k2=3. The majority of their experiments used a grid of only 20×20 cells without a wraparound toroidal world. This leaves the single infection spreading g variable to play with. Their paper states they used values of g between 1 and 10, but I get no spirals with g in that range.

Here are a few samples which are 512×512 sized grids with wraparound edges and many thousands of generations to be sure they had finally settled down. Each cell is 2×2 pixels in size so they are 1024×1024 images.

q=100, k1=2, k2=3, g=5

q=100, k1=2, k2=3, g=20

q=100, k1=2, k2=3, g=25

q=100, k1=2, k2=3, g=30

__Results from other parameters__

q=100, k1=3, k2=3, g=10

q=100, k1=3, k2=3, g=15

q=100, k1=3, k2=3, g=20

__Extending into 3D__

The next logical step was extending it into three dimensions. This blog post from Rudy Rucker shows a 3D BZ CA from Harry Fu back in 2004 for his Master’s degree writing project. I must be a nerd as I whipped up my 3D version over two afternoons. Surprisingly there are no other references to experiments with 3D Hodgepodge that I can find.

The algorithms are almost identical to their 2D counterparts. The Moore neighborhood is extended into three dimensions (so 26 neighbors rather than 8 in the 2D version). It is difficult to see the internal structures as they are hidden from view. Methods I have used to try and see more of the internals are to slice out 1/8th of the cubes and to render only some of the states.

Clicking these sample images will show them in 4K resolution.

q=100, k1=1, k2=18, g=43 (150x150x150 grid)

q=100, k1=1, k2=18, g=43 (150x150x150 grid – same as previous with a 1/8th slice out to see the same patterns are extending through the 3D structure)

q=100, k1=1, k2=18, g=43 (150x150x150 grid – same rules again, but this time with only state 0 to state 50 cells being shown)

q=100, k1=2, k2=3, g=15 (150x150x150 grid)

q=100, k1=3, k2=6, g=31 (150x150x150 grid)

q=100, k1=4, k2=6, g=10 (150x150x150 grid)

q=100, k1=4, k2=6, g=10 (150x150x150 grid – same rules as the previous image – without the 1/8th slice – with only states 70 to 100 visible)

q=100, k1=3, k2=31, g=43 (250x250x250 sized grid – 15,625,000 total cells)

q=100, k1=4, k2=12, g=34 (350x350x350 sized grid – 42,875,000 total cells)

q=100, k1=1, k2=9, g=36 (400x400x400 sized grid – 64,000,000 total cells)

Download Visions of Chaos if you would like to experiment with both 2D and 3D Hodgepodge Machine cellular automata. If you find any interesting rules please let me know in the comments or via email.

]]>

__Generating the initial terrain__

There are many ways to generate a terrain height array. For the terrain in this post I am using Perlin noise.

This is the 2D Perlin Noise image…

…that is extruded to the following 3D terrain…

An alternative method is to use 1/f Perlin Noise that creates this type of heightmap…

…and this 3D terrain.

__Simulating erosion__

Rather than try and replicate some of the much more complex simulators out there for wind and rain erosion (see for example here, here and here) I experimented with the simplest version I could come up with.

1. Take a single virtual rain drop and drop it at a random location on the terrain grid. Keep track of a totalsoil amount which starts at 0 when the drop is first dropped onto the terrain.

2. Look at its immediate 8 neighbors and find the lowest neighbor.

3. If no neighbors are lower, deposit the remaining carried soil and stop. Depositing all of the remaining totalsoil here led to large spikes, as the totalsoil was too much. I have since changed the amount dropped to the same fixed depositrate. Technically this removes soil from the system, but the results are more realistic looking terrain.

4. Pick up a bit of the soil from the current spot (lower the terrain array at this point).

`soilamount:=slope*erosionrate;`

`totalsoil:=totalsoil+soilamount;`

`heightarray[wx,wy]:=max(heightarray[wx,wy]-soilamount,0);`

5. Move to the lowest neighbor point.

6. Deposit a bit of the carried soil at this location.

`deposit:=soilamount*depositrate/slope;`

`heightarray[lx,ly]:=heightarray[lx,ly]+deposit;`

`totalsoil:=max(totalsoil-deposit,0);`

7. Goto 1.

Repeat this for millions of drops.
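The seven steps above might be sketched like this in Python (using the post’s formulas; the per-drop step limit is a safety cap of my own, and the deposit cap at the fixed depositrate on stopping reflects the amended step 3; all names are mine):

```python
import random

def erode(height, drops, erosionrate=0.1, depositrate=0.1, seed=0):
    """Drop virtual raindrops on a 2D height grid and erode it in place."""
    rng = random.Random(seed)
    h, w = len(height), len(height[0])
    for _ in range(drops):
        x, y = rng.randrange(w), rng.randrange(h)          # step 1
        totalsoil = 0.0
        for _ in range(w * h):                             # safety cap on path length
            # step 2: find the lowest of the 8 neighbours
            low, lowval = None, height[y][x]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if (dx or dy) and 0 <= nx < w and 0 <= ny < h \
                            and height[ny][nx] < lowval:
                        low, lowval = (nx, ny), height[ny][nx]
            if low is None:                                # step 3: local minimum
                height[y][x] += min(totalsoil, depositrate)
                break
            slope = height[y][x] - lowval
            soilamount = slope * erosionrate               # step 4: pick up soil
            totalsoil += soilamount
            height[y][x] = max(height[y][x] - soilamount, 0)
            x, y = low                                     # step 5: move downhill
            deposit = soilamount * depositrate / slope     # step 6: deposit a little
            height[y][x] += deposit
            totalsoil = max(totalsoil - deposit, 0)
    return height
```

A flat grid is left untouched (no drop ever finds a lower neighbour), while any peak is gradually carried downhill.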

The erosion and deposit steps (4 and 6 above) simulate the water flowing down hill, picking up and depositing soil as it goes.

To add some wind erosion you can smooth the height array every few thousand rain drops. I use a simple convolution blur every 5000 rain drops. This smooths the terrain out a little bit and can be thought of as wind smoothing the jagged edges of the terrain down a bit.
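The wind-smoothing pass might be sketched as a simple neighbourhood average (a plain 3×3 box average rather than the convolution blur used in Visions of Chaos; names are mine):

```python
def smooth(height):
    """Return a smoothed copy of a 2D height grid (3x3 box average)."""
    h, w = len(height), len(height[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:   # clamp at the grid edges
                        total += height[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

Running this every few thousand drops knocks the sharpest spikes off the terrain.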

__Erosion Movie__

Here is a sample movie of the erosion process. This is a 13,500 frame 4K resolution movie. Each frame covers 10,000 virtual raindrops being dropped. This took days to render. 99% of the time is rendering the mesh with OpenGL. Simulating the raindrops is very fast.

and here are screenshots of every 2000th frame to show the erosion details more clearly.

__Future ideas__

The above is really just a quick experiment and I would like to spend more time on the more serious simulations of terrain generation and erosion. I have the book Texturing and Modeling, A Procedural Approach on my bookshelf and it contains many other terrain ideas.

Jason.

]]>

I spent a few hours getting my own version going and doing some high res test runs. The example images in this post are all 4K size. Click the thumbnails to see them full size.

The basic setup includes four “factions” that use the same settings, ie

This sequence started from a block of 4 squares of factions in the middle of the screen and grew outwards from there. None of the cells die once born.

And finally, after approximately 27,000 steps, my hard drive filled up and the program crashed when it could not save any more frames.

Here are the same setup settings started from a random soup of factions with single pixel sized cells after running for a few thousand steps. There is some clumping of factions.

There also seems to be a bias toward later factions in the method used in the source code. The factions are processed in order (first faction 1, then 2, etc). This is OK when each faction is surrounded by empty space, but breaks down once factions meet. This means that (for example) faction 1 may fight and take over a faction 4 location, but then once the faction 4 cells are processed the same cell may be taken back, negating the original win. Overall this gives the later factions a slight bias to spread faster and further than earlier factions. Or maybe it is just my version of the source code. I need to think about this some more and make sure the moving/fighting is fair.

Here are a few other color methods I experimented with. Firstly, average each cell’s color from the factions that have visited it. Cells are 10×10 pixels this time.

And the same, but with histogram equalization to get a wider range of colors.

Different faction colors and 5×5 sized cells.

I have added Cell Conquest as a new mode in the latest version of Visions of Chaos.

Jason.

]]>

After my original attempt to replicate Jonathan McCabe‘s Coupled Cellular Automata results I was contacted by Ian McDonald with some questions about how my original algorithms worked, which inspired both of us to make another attempt at getting McCabe-like results.

With some back and forth and some hints from Jonathan we were able to get a little further towards replicating his images.

__How the McCabe algorithms work__

Jonathan has provided some hints to how his Coupled CA work in the past.

Firstly from here;

**Each pixel represents the state of the 4 cells of 4 cellular automata, which are cross coupled and have their individual state transition tables. There is a “history” or “memory” of the previous states which is used as an offset into the state transition tables, resulting in update rules which depend on what has happened at that pixel in previous generations. Different regions end up in a particular state or cycle of states, and act very much like immiscible liquids with surface tension.**

and secondly from here;

**The generative system involves four linked cellular automata – think of them as layers. “Linked” because at each time step, a cell’s state depends both on its neighbours in that layer, and on the states of the corresponding cells in the other three layers. Something like a three-dimensional CA, but not quite; the four layers influence each other through a weighted network of sixteen connections (a bit like a neural net). The pixels in the output image use three of the four CA layers for their red, green and blue values.**

**As in a conventional CA, each cell looks to its neighbours to determine its future state. This is a “totalistic” CA, which means each simply sums the values of its neighbours, then changes its state based on a table of transition rules. Now for the really good part: each cell also uses its recent history as an “offset” into that transition table. In other words, the past states of a cell transform the rules that cell is using. The result is a riot of feedback cycles operating between state sequences and rulesets; stable or periodically oscillating regions form, bounded by membrane-like surfaces where these feedback cycles compete. Structures even form inside the membranes – rule/state networks that can only exist between other zones. **

After emailing him he did provide a few more clues to how his Coupled CA work which did help me get to the point I am at now.

How the “inner loop” behaves and how the 4 (or more) CA layers are actually processed and combined is still a mystery, but this is a hint;

**“Cross-Coupled” is a matrix multiply to give 4 numbers from 4 numbers. If the matrix was “1.0” diagnally and “0.0” elsewhere you have zero coupling. If the zeros are replaced with other values you get cross-coupling of various amounts. If the cross coupling is very low you have 4 independant systems, if it is very high you get effectively one system, I think there is a sweet spot in between where they influence each other but don’t disappear into the mix.**

And regarding the history;

**The history or memory is an average of previous states at that cell. You can do it a few ways, say as a decaying average, adding the current state and multiplying by 0.99 or some number, so the memory fades. The memory is added to the index to look up the table, so you actually need a bigger table than 9*256.
**

__How my version works__

Here are the basic steps to how I created the images within this post. Note that this is not how the McCabe CA works, but does show some similarities visually to his.

The CA is based on a series of CA layers. Each layer/array contains byte values (0-255) and is initialised with random noise. You need a minimum of 4 layers, but any number of layers can be used.

You also need a set of lookup tables for each layer. The lookup tables are 1D arrays containing byte values (0-255). They each have 256*10 entries. The arrays are filled using multiple sine waves combining different frequencies and amplitudes which are then normalised into the 0-255 range. You want to have smallish changes between the lookup table entries.

The lookup table has 256*10 entries because in the McCabe CA the current cell, its 8 neighbours and the history of that cell are used to index it. That gives 10 values between 0 and 255 which are totalled to index the lookup table. For the method I discuss in this blog post, you really only need a lookup table with 256 entries as the index maths never goes beyond 0 to 255. BUT I still use the larger sized array to stretch the sine waves further/more smoothly across the tables. Note you can also change the “*10” size to a larger number to help smooth out the resulting CA displays even more. Experiment with different values.

Now for the main loop.

1. Blur each layer array. Gaussian blur can be used but tends to be slow. Using a faster 2 pass blur as described here works just as well for this purpose. Make sure the blur radius has a decent amount of difference between layers, for example 1, 10, 50, 100. These large differences in blur amounts are what lead to the resulting images having their detail-at-many-scales appearance.

2. Process the layers. This is the *secret* step that Jonathan has not yet released. I tried all sorts of random combinations and equations until I found the following. This is what I use in the images for this post.

`for i:=0 to numlayers-1 do t[i,x,y]:=c[i,x,y]+lookuptable[i,trunc(abs(h[(i+1) mod numlayers,x,y]-c[(i+2) mod numlayers,x,y]))];`

That allows for any number of layers, but to show the combinations more simply, here it is unrolled for 4 layers

`t[0,x,y]:=c[0,x,y]+lookuptable[0,trunc(abs(h[1,x,y]-c[2,x,y]))];`

`t[1,x,y]:=c[1,x,y]+lookuptable[1,trunc(abs(h[2,x,y]-c[3,x,y]))];`

`t[2,x,y]:=c[2,x,y]+lookuptable[2,trunc(abs(h[3,x,y]-c[0,x,y]))];`

`t[3,x,y]:=c[3,x,y]+lookuptable[3,trunc(abs(h[0,x,y]-c[1,x,y]))];`

t is a temp array. Like all CAs, you put the results into a temp array until the whole grid is processed and then you copy t back to c

c is the current CA layers array

lookuptable is the array created earlier with the combined sine waves

h is the history array, ie what the c array was last cycle/step

An important step here is to blur the arrays again once they have been processed. Using just a radius 1 gaussian is enough. This helps cut down the noise and speckled nature of the output.

3. Display. So now you have 4 byte values (one for each layer) for each pixel. How can you convert these into a color to display? So far I have used these methods;

Color palette – Average the 4 values and use them as an index into a 256 color palette.

RGB – Use the first 3 layers as RGB components.

YUV – Use the first 3 layers as YUV components and then convert into RGB.

Matrix coloring – This deserves a larger section below.

__Matrix coloring__

If you use just RGB or YUV conversions the colors tend to be a bit bland and unexciting. When I asked Jonathan how he got the amazing colors in his images he explained that the layers can be multiplied by a matrix. Now, in the past I had never really needed to use matrices except in some OpenGL code (and even then OpenGL handles most of it for you) but a quick read of some online tutorials and I was good to go.

You take the current cell values for 3 layers and multiply them by a 3×3 matrix

`             [m2 m2 m2]`

`[m1 m1 m1] x [m2 m2 m2] = [m3 m3 m3]`

`             [m2 m2 m2]`

or 4 layers multiplied by a 4×4 matrix

`                [m2 m2 m2 m2]`

`                [m2 m2 m2 m2]`

`[m1 m1 m1 m1] x [m2 m2 m2 m2] = [m3 m3 m3 m3]`

`                [m2 m2 m2 m2]`

In both cases the m1 array variables are the CA layer values for each of the X and Y pixel locations. I use the first 3 layers, last 3 layers or first layer and last 2 layers. If you use a 4×4 matrix use the first 4 etc. Multiply them by the m2 matrix and get the m3 results.

You can use the m3 results as RGB components to color the pixels (after multiplication the RGB values will be outside the normal 0 to 255 range so they will need to be normalised/scaled back into the 0 to 255 range). For the 4×4 matrix you can use the 4th returned value for something too. I use it for a contrast setting but it doesn’t make that much of a difference.

If the matrix you are multiplying is the identity matrix (ie all 0s with diagonal 1s) the m3 result is the same as the m1 input. If you change the 0s slightly you get a slight change. For the images in this post I have used random matrix values anywhere from -10 to +10 (with and without a forced diagonal of 1s).
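A sketch of the 3×3 case in Python (normalising each channel over the whole image afterwards, as described; function and variable names are mine):

```python
def matrix_color(pixels, m):
    """Map per-pixel layer triples through a 3x3 matrix, then rescale to 0-255.

    pixels: list of (v0, v1, v2) layer values; m: 3x3 matrix as a list of rows.
    """
    # m1 x m2 = m3 for every pixel
    raw = [[sum(p[j] * m[i][j] for j in range(3)) for i in range(3)]
           for p in pixels]
    # per-channel min/max over the whole image for normalisation
    lo = [min(r[c] for r in raw) for c in range(3)]
    hi = [max(r[c] for r in raw) for c in range(3)]
    out = []
    for r in raw:
        out.append(tuple(
            int(255 * (r[c] - lo[c]) / (hi[c] - lo[c])) if hi[c] > lo[c] else 0
            for c in range(3)))
    return out
```

With the identity matrix this reduces to a straight per-channel rescale; perturbing the zeros produces the color mixing described above.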

Using matrix multiplication to get the wider range of colors is really awesome. I had never seen this method used before and am so grateful for the hint from Jonathan. I am sure it will come in handy in many other areas when I need to convert a set of values to colors.

Another great tip from Jonathan is to do histogram equalisation on the bitmap before display. This acts like an auto-contrast in Photoshop and can really make blander images pop.
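Histogram equalisation on a single channel can be sketched like this (a simplified cumulative-histogram remap in Python; names are mine):

```python
def equalize(values):
    """Histogram-equalise a list of 0-255 byte values via the cumulative histogram."""
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    n = len(values)
    # remap each value through the normalised cumulative distribution
    return [int(255 * cdf[v] / n) for v in values]
```

Values bunched into a narrow band get spread across the full 0-255 range, which is what gives the auto-contrast effect.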

Another thing to try is hue shifting the image. Convert each pixel’s RGB values to HSL, shift the H component and then convert back to RGB.

Once you put all those options together you might get a dialog with a bunch of settings that looks like the following screenshot.

__Problem__

There is one major issue with this method though. You cannot create a nice smooth evolving animation this way. The cell values fluctuate way too much between steps causing the movie to flicker wildly. To help reduce flicker you can render only every second frame, but the movie still has wild areas. Jonathan confirmed his method also has the same display results. Coupled Cellular Automata using the method described in this post are best used for single frame images and not movies.

__Help me__

If you are a fellow cellular automata enthusiast and have a play with this method, let me know of any advancements you make. Specifically, the main area that can be tweaked is the inner loop algorithm for how the layers are combined and processed. I am *really* interested in any variations that people come up with.

__Explore it yourself__

If you are not a coder and want to experiment with Coupled Cellular Automata they are now included in Visions of Chaos.

To see my complete gallery of Coupled Cellular Automata images click here.

Jason.

]]>

Here is a 4K resolution 60 fps movie with some samples of what the new simulations can do.

This third attempt is fairly close to second version but with a few changes I will explain here.

The main change is being able to order the effects. This was the idea that got me programming version 3. A shuffle button is also provided that randomly orders the effects. Allowing the effect order to be customised gives a lot of new results compared to the first 2 video feedback simulation modes.

Here are some explanations for the various effects.

__HSL Flow__

Takes a pixel’s RGB values, converts them into HSL values and then uses the HSL values in formulas to select a new pixel color. For example if the pixel is red RGB(255,0,0), then this converts to HSL(0,1,0.5) assuming all HSL values range from 0 to 1. The angle formula above is H*360 and the length formula is S*5. So in this case the new pixel value read would be 5 pixels away at the angle 0 degrees. Changing these formulas allows the image to “flow” depending on the colors.
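The offset lookup can be sketched like this (Python, using the standard colorsys module, with the example H*360 and S*5 formulas; names are mine):

```python
import colorsys
import math

def hsl_flow_offset(r, g, b):
    """Return the (dx, dy) pixel offset to read from, based on a pixel's HSL."""
    # colorsys works in 0-1 ranges and returns (h, l, s)
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    angle = math.radians(h * 360.0)   # angle formula: H*360 degrees
    length = s * 5.0                  # length formula: S*5 pixels
    return round(math.cos(angle) * length), round(math.sin(angle) * length)
```

For pure red RGB(255,0,0) this gives HSL(0, 1, 0.5), so the new pixel value is read 5 pixels away at angle 0 degrees, matching the example above.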

__Sharpen__

Sharpens the image by blurring the image twice using different algorithms (in my case I use a QuickBlur (Box Blur) and a weighted convolution kernel). The second blur value is subtracted from the first and then using the following formula the target pixel value is found. newr=trunc(r1+amount*(r1-r2))
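Per channel, the difference-of-blurs sharpen reduces to the formula above (a Python sketch with clamping to the byte range added; the name is mine):

```python
def sharpen_channel(r1, r2, amount):
    """r1: value from the first (light) blur, r2: value from the second (heavy) blur."""
    # newr = trunc(r1 + amount*(r1-r2)), clamped to 0..255
    return max(0, min(255, int(r1 + amount * (r1 - r2))))
```

Where the two blurs differ (i.e., near edges) the difference is amplified, which is what sharpens the image.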

__Blur__

Uses a standard gaussian blur.

__Blend__

Combines the last “frame” and the current frame. Various blend options change how the layers are combined.

__Contrast__

Standard image contrast setting. Can also be set for negative contrasts. Uses the following formula for each of the RGB values r=r+trunc((r-128)*amount/100)

__Brightness__

Standard brightness. Increases or decreases the pixel color RGB values.

__Noise__

Adds a random value to each pixel. Adding a bit of noise can help stop a simulation dying out to a single color.

__Rotate__

Guess what this does?

__Histogram__

Uses a histogram of the image to auto-adjust brightness. Can help stop the image from getting too dark or too light.

__Stretch__

Zooms the image. Various options determine the algorithm used to zoom the image.

__Image Processing__

Allows the various image processing functions in Visions of Chaos to be injected into the mix for even more variety.

As always, you can experiment with the new VF3 mode in the latest version of Visions of Chaos.

I would be interested in seeing any unique results you come up with.

For the next version 4 simulation I would like to chain various GLSL shaders together that perform the blends, blurs etc. That will allow the user to fully customise the simulation and insert new effects that I did not even consider. GLSL would also be much faster. Rendering the above movie frames took days at 4K resolution.

Jason.

]]>

After recently adding the new Custom Formula Editor into Visions of Chaos I decided to go back and re-render some sample Mandelbrot movies at 4K 60fps. Most of these Mandelbrot movie scripts go back to the earliest days of Visions of Chaos even before I first released it on the Internet. The only changes made have been to add more sub frames so they render more smoothly and last longer.

Thanks to the new formula editor using OpenGL shading language and a recent NVidia GPU all the movie parts rendered overnight. Every frame was 2×2 supersampled, the equivalent of rendering an image 4 times as large and then downsampling, to increase the quality. The original movie was a 46 GB XVID encoded AVI before YouTube put it through all of its internal recoding steps.

Jason.

]]>

Rather than write my own formula parser and compiler I am using the OpenGL Shading Language. GLSL gives faster results than any compiler I could code by hand and has an existing well documented syntax. The editor is a full color syntax editor with error highlighting.

As long as your graphics card GPU and drivers support OpenGL v4 and above you are good to go. Make sure that you have your video card drivers up to date. Old drivers can lead to poor performance and/or lack of support for new OpenGL features. In the past I have had outdated drivers produce corrupted display outputs and even hang the PC running GLSL shader code. Always make sure your video drivers are up to date.

For an HD image (1920×1080 resolution) a Mandelbrot fractal zoomed in at 19,000,000,000 x magnification took 36 seconds on CPU (Intel i7-4770) vs 6 seconds on GPU (GTX 750 Ti). A 4K Mandelbrot image (3840×2160 resolution) at 12,000,000,000 x magnification took 2 minutes and 3 seconds on CPU (Intel i7-6800) and 2.5 seconds on GPU (GTX 1080). The ability to quickly render 8K res images for dual monitor 4K displays in seconds is really nice. Zooming into a Mandelbrot with minimal delays for image redraws really increases the fluidity of exploration.

So far I have included the following fractal formulas with the new Visions of Chaos. All of these sample images can be clicked to open full 4K resolution images.

Buffalo Fractal Power 2

Buffalo Fractal Power 3

Buffalo Fractal Power 4

Buffalo Fractal Power 5

Burning Ship Fractal Power 2

Burning Ship Fractal Power 3

Burning Ship Fractal Power 4

Burning Ship Fractal Power 5

Celtic Buffalo Fractal Power 4 Mandelbar

Celtic Buffalo Fractal Power 5 Mandelbar

Celtic Burning Ship Fractal Power 4

Celtic Mandelbar Fractal Power 2

Celtic Mandelbrot Fractal Power 2

Celtic Heart Mandelbrot Fractal Power 2

Heart Mandelbrot Fractal Power 2

Lyapunov Fractals

Magnetic Pendulum

Mandelbar (Tricorn) Fractal Power 2

Mandelbar Fractal Power 3

Mandelbar Fractal Power 3 Diagonal

Mandelbar Fractal Power 4

Mandelbar Fractal Power 5 Horizontal

Mandelbar Fractal Power 5 Vertical

Mandelbrot Fractal Power 2

Mandelbrot Fractal Power 3

Mandelbrot Fractal Power 4

Mandelbrot Fractal Power 5

Partial Buffalo Fractal Power 3 Imaginary

Partial Buffalo Fractal Power 3 Real Celtic

Partial Buffalo Fractal Power 4 Imaginary

Partial Burning Ship Fractal Power 3 Imaginary

Partial Burning Ship Fractal Power 3 Real

Partial Burning Ship Fractal Power 4 Imaginary

Partial Burning Ship Fractal Power 4 Real

Partial Burning Ship Fractal Power 5

Partial Burning Ship Fractal Power 5 Mandelbar

Partial Celtic Buffalo Fractal Power 4 Real

Partial Celtic Buffalo Fractal Power 5

Partial Celtic Burning Ship Fractal Power 4 Imaginary

Partial Celtic Burning Ship Fractal Power 4 Real

Partial Celtic Burning Ship Fractal Power 4 Real Mandelbar

Perpendicular Buffalo Fractal Power 2

Perpendicular Burning Ship Fractal Power 2

Perpendicular Celtic Mandelbar Fractal Power 2

Perpendicular Mandelbrot Fractal Power 2

Quasi Burning Ship Fractal Power 3

Quasi Burning Ship Fractal Power 5 Hybrid

Quasi Celtic Heart Mandelbrot Fractal Power 4 Real

Quasi Celtic Heart Mandelbrot Fractal Power 4 False

Quasi Celtic Perpendicular Mandelbrot Fractal Power 4 False

Quasi Celtic Perpendicular Mandelbrot Fractal Power 4 Real

Quasi Heart Mandelbrot Fractal Power 3

Quasi Heart Mandelbrot Fractal Power 4 Real

Quasi Heart Mandelbrot Fractal Power 4 False

Quasi Heart Mandelbrot Fractal Power 5

Quasi Perpendicular Burning Ship Fractal Power 3

Quasi Perpendicular Burning Ship Fractal Power 4 Real

Quasi Perpendicular Celtic Heart Mandelbrot Fractal Power 4 Imaginary

Quasi Perpendicular Heart Mandelbrot Fractal Power 4 Imaginary

Quasi Perpendicular Mandelbrot Fractal Power 4 False

Quasi Perpendicular Mandelbrot Fractal Power 5

A lot of the above formulas came from these summaries *stardust4ever* created.

All of these new custom fractals are fully zoomable to the limit of double precision floating point variables (around 1,000,000,000,000 magnification in Visions of Chaos). The formula compiler is fully supported for movie scripts so you can make your own movies zooming into these new fractals.

This next sample movie is 4K resolution at 60 fps. Each pixel was supersampled as the average of 4 subpixels. This took only a few hours to render. The resulting Xvid AVI was 30 GB before uploading to YouTube.

So, if you have been waiting to program your own fractal formulas in Visions of Chaos you now can. I look forward to seeing the custom fractal formulas Visions of Chaos users create. If you do not understand the OpenGL shading language you can still have fun with all the default sample fractal formulas above without doing any coding.

Jason.

]]>

All of the images and movies in this post were created with Visions of Chaos.

__Real Life Experiments__

If you put an electrodeposition cell into a copper sulfate solution you can get results like the following images.

__2D Diffusion-limited Aggregation__

DLA is a simple process to simulate in the computer and was one of the first simulations I ever coded. For the simplest example we can consider a 2D scenario that follows these rules;

1. Take a grid of pixels and clear all pixels to white.

2. Make the center pixel black.

3. Pick a random edge point on any of the 4 edges.

4. Create a particle at that edge point.

5. Move the particle randomly 1 pixel in any of the 8 directions (N,S,E,W,NE,NW,SE,SW). This is the “diffusion”.

6. If the particle goes off the screen kill it and go to step 4 to create a new particle.

7. If the particle moves next to the center pixel it gets stuck to it and stays there. This is the “aggregation”. A new particle is then started.

Repeat this process as long as necessary (usually until the DLA structure grows enough to touch the edge of the screen).
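The steps above can be sketched in a few lines of Python. This is a minimal illustration only, not the Visions of Chaos implementation; the function name and parameters are made up for this example.

```python
import random

def dla_2d(size=101, particles=200, max_steps=100000):
    """Minimal 2D DLA following the numbered steps above (a sketch).
    grid[y][x] is True for stuck pixels."""
    grid = [[False] * size for _ in range(size)]
    grid[size // 2][size // 2] = True  # step 2: seed the center pixel
    # the 8 diffusion directions (N, S, E, W, NE, NW, SE, SW)
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    stuck = 0
    while stuck < particles:
        # steps 3-4: create a walker at a random point on one of the 4 edges
        if random.random() < 0.5:
            x, y = random.choice([0, size - 1]), random.randrange(size)
        else:
            x, y = random.randrange(size), random.choice([0, size - 1])
        for _ in range(max_steps):
            # step 7: stick if any of the 8 neighbours is already fixed
            if any(grid[y + dy][x + dx]
                   for dx, dy in moves
                   if 0 <= x + dx < size and 0 <= y + dy < size):
                grid[y][x] = True
                stuck += 1
                break
            # step 5: random 1-pixel walk (the "diffusion")
            dx, dy = random.choice(moves)
            x, y = x + dx, y + dy
            # step 6: kill walkers that leave the screen
            if not (0 <= x < size and 0 <= y < size):
                break
    return grid
```

Run it with a small grid first; almost all the time goes into walkers that wander off the screen before ever touching the structure.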

When I asked some of my “non-nerd” acquaintances what pattern would result from this, they all said a random blob-like pattern. In reality the pattern produced is far from a blob and is more of a coral-like branched dendritic structure. When you think about it for a minute it makes perfect sense. Because the moving particles all come from outside the fixed structure, they have less chance of randomly walking between branches and reaching the inner parts of the DLA structure.

All of the following sample images can be clicked to see them at full 4K resolution.

Here is an image created by using the steps listed above. Starting from a single centered black pixel, random moving pixels are launched from the edge of the screen and randomly move around until they either stick to the existing structure or move off the screen.

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: when they move next to a single stuck particle

The change in this next example is that neighbouring pixels are only checked in the N, S, E and W directions. This leads to horizontally and vertically aligned branching.

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W

Particles stick condition: when they move next to a single stuck particle

There are many alternate ways to tweak the rules when the DLA simulation is running.

Paul Bourke’s DLA page introduces a “stickiness” probability. For this you select a probability of a particle sticking to the existing DLA structure when it touches it.

Decreasing the probability of a particle sticking leads to more clumpy or hairy structures.
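This stickiness variation only changes the stick test. A hedged sketch (the helper name is illustrative, not from any particular implementation):

```python
import random

def sticks(has_stuck_neighbour, probability=0.5):
    """Stickiness test: even when a walker touches the structure it only
    sticks with the given probability; otherwise it keeps walking."""
    return has_stuck_neighbour and random.random() < probability
```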

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: 0.5 chance of sticking when they move next to a single stuck particle

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: 0.1 chance of sticking when they move next to a single stuck particle

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: 0.01 chance of sticking when they move next to a single stuck particle

Another method is to require a set number of hits before a particle sticks. This gives results similar to using a probability.
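The hit-count rule amounts to keeping a small counter per walker. A sketch of the idea with illustrative names:

```python
def hit_count_stick(required_hits=15):
    """Return a per-walker callback: call it each time the walker touches
    the stuck structure; it reports True once enough hits have accumulated."""
    hits = 0
    def on_hit():
        nonlocal hits
        hits += 1
        return hits >= required_hits
    return on_hit
```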

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has hit stuck particles 15 times

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has hit stuck particles 30 times

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has hit stuck particles 100 times

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has hit stuck particles 1000 times

Another change to experiment with is requiring a threshold count of stuck neighbours before a moving particle sticks. For example, in this next image all 8 neighbours were checked and only 1 hit was required to stick, but a moving particle needed at least 2 fixed neighbour particles before it stuck to the existing structure.

The first 50 hits only need 1 neighbour. This builds a small starting structure that all the following particles stick to with 2 or more neighbours.
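The neighbour-threshold test boils down to counting stuck cells among the 8 surrounding pixels and comparing against the threshold. A sketch, assuming a square boolean grid indexed as grid[y][x]:

```python
def stuck_neighbours(grid, x, y):
    """Count fixed pixels in the 8 cells around (x, y); a walker sticks
    only once this count reaches the chosen threshold."""
    size = len(grid)
    return sum(grid[y + dy][x + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dx or dy) and 0 <= x + dx < size and 0 <= y + dy < size)
```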

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 2 or more stuck neighbours

That gives a more clumpy effect without the extra “hairiness” of just requiring more hit counts before sticking.

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 3 or more stuck neighbours

This seems a happy balance: thick branches that are not too fuzzy.

Start configuration: Single centered black pixel

Random pixels launched from: rectangle surrounding existing structure

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 3 or more stuck neighbours and has hit 3 times

The next two examples launch particles from the middle of the screen with the screen edges set as stationary particles.

Start configuration: Edges of screen set to black pixels

Random pixels launched from: middle of the screen

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 1 or more stuck neighbours

Start configuration: Edges of screen set to black pixels

Random pixels launched from: middle of the screen

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 3 or more stuck neighbours

These next three use a circle as the target area. Particles are launched from the center of the screen.

Start configuration: Circle of black pixels

Random pixels launched from: middle of the screen

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 1 or more stuck neighbours

Start configuration: Circle of black pixels

Random pixels launched from: middle of the screen

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 2 or more stuck neighbours

Start configuration: Circle of black pixels

Random pixels launched from: middle of the screen

Neighbours checked for an existing pixel: N, S, E, W, NE, NW, SE, SW

Particles stick condition: once a moving particle has 3 or more stuck neighbours

This next image is a vertical 2D DLA. Particles drop from the top of the screen and attach to existing seed particles along the bottom of the screen. Each sticking particle takes the color of the particle it sticks to.

Start configuration: Bottom of screen is fixed particles

Random pixels launched from: top of the screen

Neighbours checked for an existing pixel: S, SE, SW

Particles stick condition: once a moving particle has 1 or more stuck neighbours

__Dendron Diffusion-limited Aggregation__

Another DLA method is Golan Levin’s Dendron DLA. He kindly provided Java source code so I could experiment with Dendron myself. Here are a few sample images.

Hopefully all of the above examples show the wide variety of 2D DLA results possible and the various methods that produce them. This post was mainly going to be about 3D DLA, but I got carried away trying all the 2D possibilities above.

__3D Diffusion-limited Aggregation__

DLA can be extended into 3 dimensions. Launch the random particles from a sphere rather than a surrounding circle, and check for a hit against the 26 surrounding cells in each 3D grid cell’s 3x3x3 neighbourhood.
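The 3D neighbour check is the 2D one with a third loop, scanning the 26 surrounding cells. A sketch, assuming a cubic boolean grid indexed as cells[z][y][x]:

```python
def stuck_neighbours_3d(cells, x, y, z):
    """Count fixed particles in the 26 cells of the 3x3x3 neighbourhood
    around (x, y, z), excluding the cell itself."""
    n = len(cells)
    return sum(cells[z + dz][y + dy][x + dx]
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dx or dy or dz)
               and 0 <= x + dx < n and 0 <= y + dy < n and 0 <= z + dz < n)
```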

For these movies I used default software-based OpenGL spheres. Nothing fancy. Slow, but still manageable with some overnight renders.

In this first sample movie particles stuck once they had 6 neighbours. In the end there are approximately 4 million total spheres making up the DLA structure.

This second example needed 25 hits before a particle stuck, and a particle needed 5 neighbours to be considered stuck. This results in a much denser, coral-like structure.

__A Quick Note About Ambient Occlusion__

Ambient occlusion is the effect where areas inside nooks and crannies are shaded darker because light has more trouble reaching them. Normally, to get accurate ambient occlusion you raytrace a hemisphere of rays from each surface point and count how many objects the rays intersect. The more hits, the darker the shading. This is a slow process. For the previous 3D DLA movies I cheated and instead counted how many neighbour particles each sphere has. The more neighbours, the darker the sphere is shaded. This gives a good enough fake AO.
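The neighbour-count trick can be sketched as a simple brightness formula. The parameter names and the linear falloff are assumptions for this example, not taken from any particular renderer:

```python
def fake_ao_shade(neighbour_count, max_neighbours=26, min_shade=0.2):
    """Cheap ambient-occlusion approximation: the more neighbour particles
    a sphere has, the darker it is shaded. Returns a brightness factor
    between min_shade (fully occluded) and 1.0 (fully exposed)."""
    occlusion = neighbour_count / max_neighbours
    return 1.0 - occlusion * (1.0 - min_shade)
```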

__Programming Tip__

A speedup some people do not realise applies when launching random particles: use a circle or rectangle only slightly larger than the current DLA structure. There is no need to start random walkers far from the structure.
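As a sketch of this tip (assuming a roughly circular structure with a known bounding radius; the names and the 2-pixel margin are illustrative), a launch point just outside the structure could be chosen like this:

```python
import math
import random

def launch_point(cx, cy, radius, margin=2.0):
    """Pick a random point on a circle just outside the structure's
    bounding radius, instead of far away on the screen edge."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    r = radius + margin
    return cx + r * math.cos(angle), cy + r * math.sin(angle)
```

Grow the tracked bounding radius each time a particle sticks farther out than the current value.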

__Other Diffusion-limited Aggregation Links__

For many years now the ultimate inspiration in DLA for me has been Andy Lomas.

http://www.andylomas.com/aggregationImages.html

His images like this next one are very nice.

Another change Andy uses is to launch the random walker particles in patterns that influence the structure shape as in this next image.

__Future Changes__

1. More particles. The above 2 movies end with between 4 and 5 million particle spheres. Even more particles would give more natural looking structures like the Andy Lomas examples.

2. Better rendering. Using software (non-GPU) OpenGL works but it is slow. Getting a real ray traced result supporting real ambient occlusion and global illumination would be more impressive.

3. Controlled launching of particles to create different structure shapes other than a growing sphere.

Jason.

]]>