Voxel Automata Terrain

Brent Werness (aka @R4_Unit) tweeted about an algorithm he calls “Voxel Automata Terrain” that creates interesting 3D structures. He also provides some Processing source code that I was able to translate, so I had to have a go at creating some of these myself.

These structures are created using a process similar to the diamond-square algorithm, but in 3D. A 3D cube is repeatedly subdivided and iterated to leave these structures behind. I don’t really understand it beyond that, so I cannot give a better explanation at this time.

The code for dividing the cube up is in the first part of Brent’s code. Almost everything after the evaledges routine is display related.

For my images I use the states array to render solid cubes with the Mitsuba renderer. State 1 and state 2 cells can each be shown or hidden and displayed in different colors.

I have also set up a Flickr gallery with more example images.

If you start the process without a random side and instead fill the initial base side with all state 1 or all state 2 cells, you get symmetric structures as a result.

I also tried extending the maximum number of cell states to 4, which requires the rule arrays to become three-dimensional. Some initial quick searches (a few hundred of the possible 4 billion rules) did not turn up any new unique structures, but the extra state does add another color to the mix.

Here is a movie showing some samples with increasing levels of detail/recursion.

Voxel Automata Terrain is now available as part of the latest version of Visions of Chaos.

I have also added the option to export the Voxel Automata Terrain cube grids as OBJ files that can be imported into other 3D modelling programs.
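OBJ is a plain text format, so each solid cell can be written out as eight vertex (v) lines followed by six face (f) lines, where face indices are 1-based and count across the whole file. Here is a single unit cube as a sketch of the idea, not the exact output of the export option:

v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
v 0 0 1
v 1 0 1
v 1 1 1
v 0 1 1
f 1 2 3 4
f 5 8 7 6
f 1 5 6 2
f 4 3 7 8
f 1 4 8 5
f 2 6 7 3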

Jason.

Pushing 3D Diffusion-Limited Aggregation even further

If you don’t know what Diffusion-Limited Aggregation (aka DLA) is, see this post of mine.

Previous Results

Previously, my best 3D DLA movies were these two:

I had two main issues with those movies. Firstly, I used software-based OpenGL, which does not support shadows or any advanced lighting techniques. Secondly, I was launching the moving particles from points on a surrounding sphere that were biased towards the poles, which meant the resulting growth tended to extend more in the “up” and “down” directions rather than forming a nice spherical shape. This is especially noticeable in the first movie above, which grows more in the up direction than in the other directions.

64-bit

Now that Visions of Chaos is 64-bit I wanted to push the limits of the DLA animations even further. In the past the 32-bit executable limited me to around 3 GB of usable memory, which restricted the arrays I could grow DLA structures within to around 750×750×750 cells. With a 64-bit executable Visions of Chaos can now use as much memory as the PC has available.

Bugs in old code

I have been experimenting with 2D and 3D DLA for many years now and my existing code reflected that. When I started expanding the grid sizes and particle counts I was getting strange crashes, hangs and other poor behavior. As any programmer knows, you think “this will be an easy fix”. Well, 3 or 4 days later, after trying to fix these simple bugs and rendering lengthy test runs, I was ready to chuck my PC out the window. Having to wait a few hours to see if a fix works or not really tests the patience. In the end I bit the bullet, woke up one morning and rewrote the code from scratch. It took a few hours and I now have a much more stable and faster version. I also added support for Mitsuba so I get nicely shaded images. Back on track.

Latest results

With the extra memory available from a 64-bit Visions of Chaos executable, and being able to use Mitsuba to render each frame, it was time to let my PC churn away for a few days rendering DLA frames. The few days expanded into a few weeks as I kept tweaking settings and code and re-rendering the DLA movie parts from scratch. But finally, the following movie was complete.

I only have 32 GB of memory in my main PC, so those sample movies run out of RAM at around 10 million particles. That is double the maximum particle count I achieved in the past. I need to look at maxing out my memory to 128 GB so I can create even larger DLA structures.

Something to observe in that movie is how, once the particle count gets beyond around one million, the overall structure remains the same as it continues to grow. This is a great example of the self-similarity properties of fractals. With more memory and more particles the overall structure would not suddenly change beyond 10 million particles. The individual spheres would shrink to sub-pixel sizes, but the overall growing shape would remain self-similar (while render times continued to increase per frame). This is also noticeable in most of the sample images in this post.

RenderMan Blobby Implicit Surfaces

Using RenderMan as a rendering engine allows the use of blobby implicit surfaces, aka metaballs. The metaball process merges the individual spheres into a blobby surface and gives the following result.

I didn’t let these examples run for as long as the previous example movie. This is partly because they were mainly intended as a quick example of blobby implicit surface rendering, partly because they were reaching the level of detail where further steps don’t add any new structural shapes (due to the self-similarity of DLA), and mainly because RenderMan was starting to stretch my patience, taking over 6 minutes to render each frame.

Some DLA coding tips for programmers

If you are not programming 3D DLA then this next bit will be of little use to you and you can feel free to skip it.

When launching particles into the DLA space, use random points on a sphere surrounding the current growth. If you use random points on a cube surrounding the growth it will be biased to grow along the axes. Using a sphere helps maintain a more spherical growth. Set the radius of the sphere slightly larger than the furthest distance the DLA structure has grown, i.e. if the furthest particle is 10 units of length from the origin, then set the sphere radius to 13. Also make sure you use an even distribution of points on the sphere surface. My original code was biased towards the poles, which is why the first sample movie above grows more in the up/north direction than evenly in all directions. Some quick code to evenly distribute random points on a sphere is


// uniformly distributed random point on a sphere of radius launchradius
theta:=2*pi*random;        // longitude, 0..2*pi
phi:=arccos(1-2*random);   // latitude, the arccos avoids bunching at the poles
px:=round(sin(phi)*cos(theta)*launchradius)+centerx;
py:=round(sin(phi)*sin(theta)*launchradius)+centery;
pz:=round(cos(phi)*launchradius)+centerz;

The usual DLA method is to wait until a moving particle goes “off screen” before a new particle is launched. For 3D, “off screen” means outside the array bounds. Waiting for particles to wander off the existing DLA grid can really slow down the process (especially for large grids). Rather than waiting for the edge of the grid, discard particles a few grid units beyond the launch sphere radius. So if the launch sphere has radius 13, then moving particles get discarded if they travel more than 16 units from the origin.
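A minimal sketch of that discard test (killradius and LaunchNewParticle are hypothetical names, and the next tip replaces the sqrt with a lookup):

killradius:=launchradius+3;   // a few grid units beyond the launch sphere
if sqrt(sqr(px-centerx)+sqr(py-centery)+sqr(pz-centerz))>killradius then
  LaunchNewParticle;          // hypothetical routine: discard and respawn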

Calculating distances involves costly sqrt calculations. If you are doing a sqrt every time a particle moves they quickly add up and slow down the simulation. To speed up distance calculations, I fill an array once at the start of the simulation that contains the distance from the origin (grid center) to each array location. This makes it much quicker to find how far a moving particle is from the origin. All it takes is a single array lookup rather than a sqrt distance calculation.
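Filling that table is a one-off cost at startup. A sketch, assuming a cubic grid of gridsize cells per side and a hypothetical distfromorigin array:

// precompute once: distance from the grid center to every cell
for x:=0 to gridsize-1 do
  for y:=0 to gridsize-1 do
    for z:=0 to gridsize-1 do
      distfromorigin[x,y,z]:=sqrt(sqr(x-centerx)+sqr(y-centery)+sqr(z-centerz));
// the earlier discard test then becomes a lookup of distfromorigin[px,py,pz]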

Another thing you want to know for a moving particle is how many neighbor particles it is touching. For example, if the current settings make a particle stick to 3 or more existing neighbors, you would usually do a quick loop over the neighboring cells adding them up. Again, doing this every time a moving particle moves adds up and slows everything down. Instead I use another array that holds the neighbor count for each grid location. When a new particle attaches to the existing DLA structure, you add 1 to the surrounding cells in the neighbors array. Much faster.
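A sketch of the bookkeeping when a particle sticks at grid cell (sx,sy,sz), with neighbors as the hypothetical count array (this assumes the structure never reaches the outermost grid layer):

// bump the neighbor count of the 26 cells around the newly stuck particle
for dx:=-1 to 1 do
  for dy:=-1 to 1 do
    for dz:=-1 to 1 do
      if (dx<>0) or (dy<>0) or (dz<>0) then
        inc(neighbors[sx+dx,sy+dy,sz+dz]);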

If you are rendering a very dense DLA then there will be a lot of particles in the middle that remain hidden. A quick check to see if a particle is completely surrounded (i.e. using the above neighbors array, neighbors=26) means it can be skipped and not sent to Mitsuba for rendering. On the densest DLA structures this cut the number of spheres passed to OpenGL and/or Mitsuba down to only 7% of the total. A huge speed up in terms of time per frame.
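The visibility check then becomes a one-line filter at export time (ExportSphere is a hypothetical output routine):

// a cell with 26 neighbors is completely enclosed and can never be seen
if neighbors[x,y,z]<26 then
  ExportSphere(x,y,z);   // hypothetical: write one sphere to the scene file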

You need to auto-increase the number of newly stuck particles per display update. i.e. if you set the display to update every time 10 particles get stuck to the growing structure it will look fine at the start, but once the structure starts growing you will have a long wait to get to a million particles and beyond. I used a simple formula of making the stuck particles per frame equal to the total number of stuck particles div 40, capped at 25,000 particles per frame (the cap is reached at one million total particles). This allows a nice smooth increase in stuck particles per frame and gets you to the more interesting million+ range quicker.
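As a sketch, with hypothetical variable names:

// particles to stick between display updates grows with the structure
stuckperframe:=totalstuck div 40;
if stuckperframe<10 then stuckperframe:=10;         // sensible floor early on
if stuckperframe>25000 then stuckperframe:=25000;   // cap once past ~1M total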

Using the above tricks you will find the particle movement and sticking part of the code takes seconds rather than minutes per frame. The main slowdown is now in the display code.

Jason.

10 years on YouTube

Today marks 10 years since I started my YouTube channel.

Back then a video with a resolution of 240p (426×240 pixels) like the following was considered OK.

These days I can upload a 4K video (nine times the vertical and horizontal resolution of that 240p video) and once YouTube puts it through their internal conversion routines it will usually come out looking excellent.

Jason.

Visions of Chaos now supports the Mitsuba renderer

Mitsuba Renderer is a free 3D rendering engine created by Wenzel Jakob that creates realistic images like the samples below.

Wenzel is one of the co-authors of the seminal PBRT book, so he knows his stuff. Mitsuba uses an XML file format for the scene files that you can pass to the renderer as a command line parameter. This makes it easy for me to build a compatible XML file each frame and have Mitsuba render it.
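To give a feel for the format, here is a hand-written minimal sketch of a Mitsuba 0.5-style scene using the constant emitter described below. It is only an illustration of the schema, not the exact XML Visions of Chaos generates:

<scene version="0.5.0">
    <integrator type="path"/>
    <sensor type="perspective">
        <transform name="toWorld">
            <lookat origin="4, 4, 4" target="0, 0, 0" up="0, 1, 0"/>
        </transform>
        <film type="ldrfilm">
            <integer name="width" value="3840"/>
            <integer name="height" value="2160"/>
        </film>
    </sensor>
    <!-- the "constant" emitter lights surfaces evenly from all directions -->
    <emitter type="constant">
        <spectrum name="radiance" value="1"/>
    </emitter>
    <!-- one shape per cell/particle; real scenes repeat this millions of times -->
    <shape type="sphere">
        <point name="center" x="0" y="0" z="0"/>
        <float name="radius" value="0.5"/>
        <bsdf type="diffuse">
            <srgb name="reflectance" value="#777788"/>
        </bsdf>
    </shape>
</scene>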

Here are some sample 4K images created with Visions of Chaos and rendered with Mitsuba using the constant lighting model. Constant lighting means light is simulated hitting surfaces evenly from all directions. This means there are no shadows, but crevices within structures and corners are shaded darker because of ambient occlusion.

Mitsuba 3D Cube Divider render

Mitsuba 3D Diffusion-Limited Aggregation render

Mitsuba 3D Cyclic Cellular Automaton render

Mitsuba 3D Ant Automaton render

Mitsuba 3D Cellular Automata renders

Using Mitsuba really gives clean, nicely shaded results, and the examples above use only the most basic Mitsuba lighting/material setups. Mitsuba has handled multi-gigabyte scene files containing millions of spheres and/or cubes with ease. All the end user needs to do is download/unzip Mitsuba and point Visions of Chaos to the main executable.

Jason.

Evolving 3D Virtual Creatures

After my initial experiments with 2D Virtual Creatures the next step I wanted to try was 3D creatures.

Physics Engine

This time I am using the Kraft Physics Engine by Benjamin Rosseaux. As Box2D did for 2D, Kraft handles all the 3D object collisions and joints for me.

Types of starting creature setups

The virtual creatures are based on 3D boxes joined together by motor-driven joints. Nothing more complex is required. No neural nets, no sensors, just a series of connected body parts with joints rotating at various angles and speeds. Even with these few simple parts some interesting behaviour can occur.
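As a rough sketch of how little data such a creature needs (hypothetical Pascal records, not the actual Visions of Chaos types):

type
  TSegment=record
    sizex,sizey,sizez:double;   // box dimensions, randomised per creature
  end;
  TJoint=record
    parent,child:integer;       // indices of the two segments it connects
    axisx,axisy,axisz:double;   // rotation axis
    speed,torque:double;        // motor settings, the values mutated later
  end;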

For my experiments I used the following base creature types.

Snake – A series of 2 or more body segments lying in a row, connected by joints. Segment sizes, joint directions and joint strengths are all random.

Cross – A body with 4 arms linked out in the X and Z directions.

Jack – A body with arms linked out in all 6 XYZ directions. Named because the initial layout looks like the pieces from the old game of jacks.

Building an initial gene pool

To evolve creatures you need a set of existing creatures to mutate and evolve. To create a gene pool I repeatedly generate random creatures (random body segment counts, segment sizes, joint speeds, etc.).

The creatures are then dropped onto a virtual ground and monitored over a set amount of time. If a creature stops moving or explodes (which happens when the physics engine math goes crazy) it is ignored. If it survives for the full testing time it is added to a temporary list. The search can be left to run for hours or overnight. When the search is stopped, the best 10 creatures are saved. The best creatures are those that travel furthest from their starting point across the virtual ground.

Evolving the gene pool

Once you have a set of reasonably performing creatures in the gene pool, evolution can tweak and improve them further.

Each creature has its attributes slightly changed and the new mutated creature is tested. If it goes further then it replaces the old creature.

For these mutations I only mutate the joint speeds, torques and rotation axes, using the following code


     mutationpercentage:=20;   // example value: 20 mutates by up to +/- 10%
     mutationfactor2:=mutationpercentage/100;
     mutationfactor1:=1-mutationfactor2/2;
     JointSpeed:=JointSpeed*(mutationfactor1+random*mutationfactor2);

If mutationpercentage is set to 20%, the joint speed is randomly changed by up to + or – 10% of its current value. The same applies to the torque and to the xyz components of the rotation axis vector.

Results

Nothing too spectacular at this time. Maybe if I let the mutations churn for weeks they might fluke a really interesting setup. The following movie shows some of the creatures that the current methods discovered.

Jason.

64 bit Visions of Chaos now available

The latest version of Visions of Chaos now includes both a 32-bit and a 64-bit version. You will need a 64-bit version of Windows to use the 64-bit version, but if you still run a 32-bit version of Windows then the 32-bit version of Visions of Chaos will continue to work for you. If you are not sure what “bitness” (bititude?) your Windows is, press Windows+Pause and look next to “System type” in the dialog that appears. Both versions are included in the same install exe to avoid confusion, and the 64-bit version is only installed if you are running 64-bit Windows.

The main advantage of the 64-bit version over the 32-bit version is that there is no longer a 3 GB memory limit. As screen sizes have increased, the amount of memory Visions of Chaos requires to render some of its modes at these higher resolutions was hitting the 32-bit application memory limits. 64-bit Visions of Chaos can now use as much memory as you have physically installed in your PC.

Jason.