Primordial Particle Systems

A while back I was playing with Particle Life simulations. Around that time I came across the following video.

Click here to read the paper “How a life-like system emerges from a simple particle motion law” that describes how it works in great detail.

For a simpler overview I recommend this page by Brian H that includes snippets of the source code that helped me get my version working.

My (hopefully) even simpler explanation is as follows:

1. Fill the simulation space with a bunch of particles.
2. Particles have settings for radius, alpha, beta and velocity.
– radius is how far around itself each particle can sense the other particles.
– alpha is the fixed rotation amount. Each particle turns by this amount each step of the simulation.
– beta is the proportional rotation. This is the amount the particle turns depending on its neighbor particles.
– velocity is how far the particles move forward each step.
3. Each particle maintains a heading which is the direction it is facing.
4. Each particle moves by the following steps:
– Count how many neighbor particles are within the radius
– Work out how many of them are to the left and right of the particle
– Turn towards the left or right with the larger count
– Move forward

That’s all there is to it. From those relatively simple, local steps you can get some nice cell-like and amoeba-like structures emerging.
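
For the code minded, here is a minimal sketch of a single particle update in Pascal. It follows the motion law from the paper (heading change = alpha + beta * N * sign(right - left), where N is the neighbor count); the type and variable names are mine, not taken from the actual Visions of Chaos source.

type
  TParticle = record
    x, y, heading: double;
  end;

var
  particles: array of TParticle;
  radius, alpha, beta, velocity: double;

// Sign comes from the Math unit
procedure StepParticle(index: integer);
var
  i, left, right: integer;
  dx, dy: double;
begin
  left := 0;
  right := 0;
  // count neighbors within the sense radius on each side of the heading
  for i := 0 to High(particles) do
    if i <> index then
    begin
      dx := particles[i].x - particles[index].x;
      dy := particles[i].y - particles[index].y;
      if sqr(dx) + sqr(dy) <= sqr(radius) then
        // the sign of the cross product against the heading vector says which
        // side the neighbor is on (swap left/right for y-down screen coordinates)
        if cos(particles[index].heading) * dy - sin(particles[index].heading) * dx > 0 then
          inc(left)
        else
          inc(right);
    end;
  // fixed rotation plus proportional rotation towards the side with more neighbors
  particles[index].heading := particles[index].heading +
    alpha + beta * (left + right) * Sign(right - left);
  // move forward along the new heading
  particles[index].x := particles[index].x + velocity * cos(particles[index].heading);
  particles[index].y := particles[index].y + velocity * sin(particles[index].heading);
end;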

More sample images in this gallery.

The following movie shows some example results created with the latest version of Visions of Chaos.

Jason.

Physarum Simulations

Physarum Polycephalum

Physarum polycephalum, aka slime mold, is made up of a vast number of individual single-celled organisms. These organisms have no brains or intelligence, but complex behaviors emerge when many of them are put together. Depending on their environment, they move like what seems to be a much more complex entity.

Here are some great videos about slime molds with some awesome time lapse footage.

Once you have watched those you should hopefully have a better appreciation for the simple slime mold, and the rest of this post will make more sense.

Here is one final video showing time lapse footage of various Physarum.

Simulating Slime Molds

I have been interested in trying to simulate slime molds for years now, and my interest was once again piqued after seeing Sage Jenson‘s Physarum page here describing his simulations.

Sage was inspired by the paper Characteristics of Pattern Formation and Evolution in Approximations of Physarum Transport Networks.

He gives this simple diagram explaining the steps.

The basic explanation is that a bunch of particles move over an area, turning towards spots with higher concentrations of a pheromone trail. They also leave a trail as they move. These basic steps create interesting patterns and structures.

My method

Following the principles from Sage and the paper, this is how my take on simulating Physarum works.

1. Create a 2D array that tracks the pheromone trail intensity at every pixel location. Initially all spots are set to 0 intensity. I tried starting with various shapes and Perlin noise clouds, but the moving particles quickly erase any starting shapes and create their own paths, so I just start with an empty space. Sage’s examples show interesting patterns and structures when starting with circles or other shapes, so I need to do some more work on start patterns.

2. Create a list of particles with the properties heading (the direction/angle the particle is moving), x,y (position), sense angle (how wide the particle looks to the left and right), sense distance (how far in front the particle looks) and turn angle (how quickly the particle turns towards the sensed areas). I set the number of particles to match the image width multiplied by the image height. That seems to nicely adjust the particle count when changing image sizes.
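
As a rough sketch, the data involved might look like this in Pascal (the field and variable names are mine, not from the actual Visions of Chaos source):

type
  TPhysarumParticle = record
    x, y: double;           // position
    heading: double;        // direction of travel in radians
    senseAngle: double;     // angular offset of the left/right sensors
    senseDistance: double;  // how far ahead the sensors sample the trail
    turnAngle: double;      // how sharply the particle steers
  end;

var
  trail: array of array of double;        // pheromone intensity per pixel
  particles: array of TPhysarumParticle;  // count = image width * height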

3. Main loop

a) Display. For display I scale the minimum and maximum trail values to between 0 and 255 for a grayscale intensity (or to be used as an index into a color palette, but simple grayscale seems to look the best).
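
In code that scaling is just a linear remap, something like the following, where minVal and maxVal are the current minimum and maximum trail values (the small constant guards against division by zero on an empty trail):

// map the trail intensity at (x, y) to a 0..255 gray level
gray := round(255 * (trail[x, y] - minVal) / (maxVal - minVal + 1e-10));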

b) Each particle looks at the 3 locations in front of it, based on the sense angle and distance, and works out which of the left, front and right spots has the highest concentration of the pheromone trail.

c) Turn the particle towards the highest pheromone intensity, i.e. if the left spot is highest then subtract the turn angle from the particle heading, if the front is highest then leave the heading unchanged, and if the right is highest then add the turn angle to the particle heading. You can also reverse this process so the particles turn away from the highest pheromone levels.

d) Move the particle forwards by a specified move amount.
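
Steps b) to d) combined look roughly like the following sketch. SampleTrail is an assumed helper that reads the trail at the sense distance ahead of the particle in a given direction (I wrap at the edges here; clamping would also work), and moveAmount is the move setting:

function SampleTrail(const p: TPhysarumParticle; angle: double): double;
var
  sx, sy: integer;
begin
  // sample the trail senseDistance ahead of the particle, wrapping at the edges
  sx := round(p.x + p.senseDistance * cos(angle)) mod width;
  sy := round(p.y + p.senseDistance * sin(angle)) mod height;
  if sx < 0 then inc(sx, width);
  if sy < 0 then inc(sy, height);
  Result := trail[sx, sy];
end;

procedure SenseTurnMove(var p: TPhysarumParticle);
var
  leftVal, frontVal, rightVal: double;
begin
  // b) sample the three locations ahead of the particle
  leftVal := SampleTrail(p, p.heading - p.senseAngle);
  frontVal := SampleTrail(p, p.heading);
  rightVal := SampleTrail(p, p.heading + p.senseAngle);
  // c) steer towards the strongest sample; if the front is highest, keep going straight
  if (leftVal > frontVal) and (leftVal > rightVal) then
    p.heading := p.heading - p.turnAngle
  else if (rightVal > frontVal) and (rightVal > leftVal) then
    p.heading := p.heading + p.turnAngle;
  // d) move forward
  p.x := p.x + moveAmount * cos(p.heading);
  p.y := p.y + moveAmount * sin(p.heading);
end;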

e) Eat/absorb. I added a setting so that particles can absorb a bit of the pheromone trail at this point.

f) Deposit an amount of pheromone onto the trail to increase it.

g) Blur the trail array. This simulates the pheromones diffusing over the surface. I use this quick blur with an option for a blur radius between 1 and 5.

h) Evaporate the trail by a small amount. This slowly decays the amount of pheromone.
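
Steps f) and h) are one-liners in code. Assuming settings named depositAmount and evaporationRate (names mine):

// f) each particle deposits pheromone at its current pixel
trail[round(p.x), round(p.y)] := trail[round(p.x), round(p.y)] + depositAmount;

// h) after all particles have moved, decay the whole trail slightly
for y := 0 to height - 1 do
  for x := 0 to width - 1 do
    trail[x, y] := trail[x, y] * (1 - evaporationRate);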

Repeat the main loop as long as necessary.

Results

See my Physarum Simulations gallery for more images.

Here is a movie with some example results showing the simulations running. For the display the pheromone trail intensities are mapped to a gray scale palette (brighter = higher intensities).

Multiple Species Physarum Simulations

My next idea was to have multiple Physarum types in the same area. For these cases I used 3 sets of Physarum (3 groups of particles with their own unique settings) as shown in the following settings dialog.

The pheromone trail intensity of each species is then converted to one of the RGB color components.

This works, but the results are just 3 separate simulations that do not interact. The idea is to have each particle type attracted to its own pheromones, but moving away from the other 2 types of pheromones.

The main change is in the pheromone detection and turn code. For the single Physarum simulation the particles look left, forward and right, and then turn and move based on the location with the highest pheromone concentration. For 3 particle types they take their own pheromone concentration into account but subtract the pheromone concentrations of the other 2 types. For example, if the 3 trail/pheromone arrays are called rtrail, gtrail and btrail, then the value a red particle senses is calculated as rtrail[x,y]-gtrail[x,y]-btrail[x,y]. The particle then turns and moves towards whichever of the left, forward and right samples gives the highest value.
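
So the only change to the sensing code above is the value each sample returns. A sketch for the red species, with the array names as in the text:

// the value a red particle senses at (x, y): attracted to its own trail,
// repelled by the trails of the other two species
redValue := rtrail[x, y] - gtrail[x, y] - btrail[x, y];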

More example images can be seen in my Physarum Simulations Gallery.

Here is a sample movie showing some of the multiple species results.

Physarum Image Processing

This was inspired by the following video from Magic Jesus.

A bunch of Physarum particles start on the surface of an image. The particle colors are based on the image color they start on.

After this, let them wander around the image area following the Physarum simulation rules with one slight change: rather than turning left or right based on pheromone trail intensity, they turn towards the pixel that is closest in color to themselves.

This is my result after running Physarum simulations on three colorful paintings. The first and third are from Leonid Afremov and the second by Kandinsky (same painting as in Magic Jesus’ example movie).

These would look great on a large wall in a modern art gallery, played slowly enough that you could only just notice the changing colors (like clouds moving slowly enough that you don’t notice they have changed until you look away and back again). One of those exhibits with the dark rooms you enter and read the little white plaque with a blurb on what it is all about. “The slow interplay of colors represents the human condition and the struggles of how humans still cannot find a peaceful equilibrium of coexistence with themselves and the planet.”

Availability

Both single and multiple species Physarum Simulations and Physarum Pixel Flow are now included with the latest version of Visions of Chaos.

Jason.

Style Transfer GANs (Generative Adversarial Networks)

Style Transfer Generative Adversarial Networks take two images and apply the style of one image to the other. Here are some sample results from here.

For a more technical explanation of how these work, you can refer to the following papers:

Image Style Transfer Using Convolutional Neural Networks
Artistic style transfer for videos
Preserving Color in Neural Artistic Style Transfer

Ever since first seeing this technique I wanted to add it as an image processing option within Visions of Chaos.

If you only want to play around with style transfer or only have a few photos you want to experiment with, then I recommend you use an online service like DeepArt because this can be a tedious process to setup and use on your own PC.

GPU with Cuda support

The methods in this post will run without a graphics processing unit (GPU), but they are very slow using only the CPU (ten minutes for a tiny image with few iterations, hours for larger sizes with many iterations).

For fast results you need an Nvidia graphics card that supports CUDA. Check the list here. On-board GPUs are not supported. AMD Radeon GPUs are not supported. You need Nvidia.

If you do not have a supported Nvidia graphics card you can still get the CPU-only version going if you are very patient and/or a masochist.

Version Numbers Are Important

There are multiple components and steps to get this working, and the software versions are important. The versions mentioned in this post worked for me. Changing any of the software versions may cause any of the parts to fail. If this works for you, great. If you find another combo of Python, TensorFlow and CUDA/cuDNN that works for you, please leave a comment. I had all sorts of hassles installing/uninstalling/testing multiple versions of software until this set worked.

For these steps I made a new C:\STGAN\ folder for all the downloads, so that is what you will see referenced in the steps and screenshots.

Python

Download Python v3.6.4 from here.

Install Python. NOTE: you must check the “Add Python 3.6 to PATH” checkbox on the first Python installer screen.

TensorFlow

TensorFlow is a machine learning platform.

To get TensorFlow CPU support in Python type the following inside a command prompt window


pip3 install --no-cache-dir --ignore-installed --upgrade tensorflow

SciPy


pip3 install --ignore-installed --upgrade scipy

OpenCV

Download opencv_python-3.4.5-cp36-cp36m-win_amd64.whl from https://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv (make sure that is the exact version you download). Save it to the C:\STGAN\ folder.
Back in the command prompt, change into the folder the whl was downloaded to and install it


pip3 install --no-cache-dir --ignore-installed --upgrade opencv_python-3.4.5-cp36-cp36m-win_amd64.whl

Neural-style-tf

Now for the actual Python program that handles the style transfer.

Download and extract neural-style-tf (for this example I used the C:\STGAN\neural-style-tf-master\ folder).

Download this model and put it into the extracted neural-style-tf-master directory.

Change into the neural-style-tf-master folder with the command prompt

Now test each of these import lines (from neural_style.py) one at a time to verify everything is OK.
To do that, just type “python” in your terminal and press enter (restart your terminal first if you haven’t since installing everything). The prompt should now start with “>>>” instead of the directory. Copy and paste the following commands into the Python prompt (you can paste them all at once to save typing them one at a time).

import tensorflow as tf
import numpy as np
import scipy.io
import argparse
import struct
import errno
import time
import cv2
import os

They should all come back without errors.

Test Run

Here we go! If you got to here then it is time to do a quick (slow) CPU test run.

Open a command prompt in the folder containing neural_style.py and run the following command


python neural_style.py --content_img golden_gate.jpg --style_imgs starry-night.jpg --max_size 1000 --max_iterations 100 --print_iterations 1 --original_colors --device /cpu:0 --verbose

You will see various stats and then after some time (on a not so new PC this took 15 minutes) you will see it finish.

You should then see the output under the C:\STGAN\neural-style-tf-master\image_output\ directory.

If you got to here then it is mostly working. The next step is to get GPU support working so the processing times can be much faster.

NVidia CUDA and cuDNN

Waiting 15 minutes per image really tests the patience. If you have a newer GPU then it can be used to speed up the calculations. Firstly you need to download the various support tools and drivers.

You need to download CUDA 9 + Update(s)

https://developer.nvidia.com/cuda-80-ga2-download-archive

NOTE: you need to install the base version first, then the updates.
Select the “exe (local)” versions of the files.
NOTE: by default the Nvidia installer wants to install extra drivers etc; you only need the CUDA related components. Uncheck the rest.

Download cuDNN version v7.2.1 from https://developer.nvidia.com/cudnn

You do have to register, but if you do not want to use your real name and email to register, use a fake name and 10 minute mail to get the verification email.

You want “cuDNN v7.2.1 Library for Windows 10” under “Download cuDNN v7.2.1 (August 7, 2018), for CUDA 9.2”
Yes, I know it says for 9.2 and we only downloaded 9.0, but this is the only version I got working.

Extract the zip to a temp folder and then copy the contents into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\, i.e.:
Copy cudnn64_7.dll into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin\ (which should be in the path now)
Copy cudnn.h into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include\
Copy cudnn.lib into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64\

Now you want to remove the CPU TensorFlow and install the GPU TensorFlow.
Use these two commands (the ==1.12.0 forces version 1.12.0 of TensorFlow, as newer versions did not work for me).


pip3 uninstall tensorflow
pip3 install --no-cache-dir --ignore-installed --upgrade tensorflow-gpu==1.12.0

REBOOT.

Re-run the test command (note that it now specifies the GPU device to use). Also note that the max_size is small here. Larger sizes need more GPU memory and may fail, so it is best to start with a small image as a test.


python neural_style.py --content_img golden_gate.jpg --style_imgs starry-night.jpg --max_size 250 --max_iterations 100 --print_iterations 1 --original_colors --device /gpu:0 --verbose

and you should see it is MUCH faster.

If your GPU is not supported or it does not run, you are stuck with the CPU, so roll back to CPU support:


pip3 uninstall tensorflow-gpu
pip3 install --no-cache-dir --ignore-installed --upgrade tensorflow

Style Transfer in Visions of Chaos

If you made it this far you can now experiment with style transfer GANs in Visions of Chaos. I have added some basic wrapper code that executes the Python command to apply style transfer to any fractal or other image you can create.

Generate any image, then select Image->Image Processing->Style Transfer.

Start with smaller image sizes to get an idea of how long the process will take on your system before going for larger sized images.

You can also select any external image file to apply the style transfer to. So dig out those cat photos and have fun. Note that if you get tired of the limited style images that come with neural-style-tf you can put any image you like under the styles folder and use those. Grab an image of your favorite artist’s works and experiment.

For some examples I used the following photo of Miss Marple.

And applied various style images to it:

MC Escher Plane Filling II
A Mandelbrot fractal
Another Mandelbrot fractal
HR Giger Biomechanical Landscape
Kandinsky Composition VII
Mondrian
Monet
Picasso Les Femmes d’Alger
Picasso Seated Nude
Hokusai The Great Wave off Kanagawa
Munch The Scream
Turner The Wreck of a Transport Ship
van Gogh Starry Night

Troubleshooting

If you get a failed style transfer and an error message, here are a few things to try:
1. A smaller image size. Depending on the RAM in your PC and GPU you may have maxed out.
2. Reboot. This seems to always fix a stubborn error for me. CUDA and/or cuDNN seem to be the main culprits. They get hung or locked somehow and only a reboot will get them working again.

Jason.

Automatic Color Palette Creation

Fractint MAP format palette files

Going back 30 years, Fractint was a fractal generation program for DOS-based systems. For its time it was the fractal program of choice for enthusiasts.

Fractint used a simple text format for its color palettes. These *.MAP files were text files with each color’s RGB values separated by spaces, one color per line. So, for example, if you wanted the first color in your palette to be blue, the first line would be “0 0 255”.
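
For example, the first few lines of a MAP file that fades from black to blue would look like this (continuing on for 256 lines in total):

0 0 0
0 0 51
0 0 102
0 0 153
0 0 204
0 0 255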

When I first started creating Visions of Chaos I adopted the format. The most common map files had 256 colors (you could have palettes with other color counts but I only use 256 color palettes).

The rest of this post covers the palette creation methods that have been included with Visions of Chaos. Although I use these methods specifically to create 256 color MAP files the principles could be applied to any number of colors for different sized palettes.

If you are just looking for a Fractint color palette collection, scroll down to the end of this post and grab the archive provided.

Smoothly blending colors

This is probably the first and most obvious method to use. Take a small number of base colors (I allow up to 16) and blend them into a palette.

How you get the colors to blend can be any of the following:

1. User selects them from the standard color picker dialog.
2. User can use eye dropper functionality to pick them out of a photo.
3. Set them at random.
4. Use the color wheel. Allows selection of complementary colors, tetrads, and other color theory based colors.

5. Extract colors from an image. See this previous blog post explaining how that works.

Once you have the colors there are numerous ways you can blend them:

1. Smooth blend. Smoothly interpolate between the colors (see the code sketch after this list).

2. Fade out blend. Fade each of the colors to black.

3. Fade in blend. Fade each of the colors from black.

4. Neon blend. Fade from black to the colors then back to black.

5. Stripe blend. Alternate each color for the duration of the palette.
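
As mentioned in item 1, here is a minimal sketch of the smooth blend in Pascal. It assumes at least 2 base colors in baseColors and a 256 entry output palette; Min comes from the Math unit and the record type is mine, not from the actual Visions of Chaos source.

type
  TRGB = record
    r, g, b: byte;
  end;

procedure SmoothBlend(const baseColors: array of TRGB; var palette: array of TRGB);
var
  i, seg, segLen: integer;
  t: double;
  c1, c2: TRGB;
begin
  // divide the palette entries evenly between consecutive base colors
  segLen := Length(palette) div (Length(baseColors) - 1);
  for i := 0 to High(palette) do
  begin
    seg := Min(i div segLen, Length(baseColors) - 2);
    t := (i - seg * segLen) / segLen;
    c1 := baseColors[seg];
    c2 := baseColors[seg + 1];
    // linearly interpolate each RGB channel between the two base colors
    palette[i].r := round(c1.r + (c2.r - c1.r) * t);
    palette[i].g := round(c1.g + (c2.g - c1.g) * t);
    palette[i].b := round(c1.b + (c2.b - c1.b) * t);
  end;
end;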

Using curves to create palettes

The idea here is to use various mathematical functions to generate curves for the RGB components of the palette. The following is a list of the various methods I use so far.

Sine. Each RGB color component is its own sine wave. Randomize the wave amplitude, frequency and period.
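
A sketch of the sine method for one channel (the names and exact scaling are mine; the same is repeated for green and blue with their own random values):

procedure SinePalette;
var
  i: integer;
  amp, freq, phase: double;
begin
  // randomize per channel; shown here for red only
  amp := random;           // keeping amp in 0..1 keeps the result inside 0..255
  freq := 1 + random(8);   // whole cycles across the palette
  phase := random * 2 * Pi;
  for i := 0 to 255 do
    palette[i].r := round(127.5 * (1 + amp * sin(freq * 2 * Pi * i / 256 + phase)));
end;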

Multiple Sine. Add multiple sine waves together for each RGB component and then scale down to between 0 and 255.

IQ. Idea from Inigo Quilez.

Perlin. Use repeating noise loops as in this Coding Train video. Map the resulting noise values to each RGB channel. Using a looping noise function is best because it means the palette wraps around smoothly, so using it for fractal zooms does not show a sharp break where the palette ends and restarts. I have only implemented this method over the last few days (at the time of writing this post), but so far it gives some really unique color palettes.
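
The looping trick is to sample 2D noise around a circle in noise space, so entry 255 joins seamlessly back to entry 0. A sketch, assuming a PerlinNoise2D(x, y) function that returns values in the -1..1 range:

procedure PerlinLoopPalette;
var
  i: integer;
  angle, cxr, cyr, cxg, cyg, cxb, cyb: double;
begin
  // random circle centers, one per channel, so R, G and B get independent curves
  cxr := random * 100; cyr := random * 100;
  cxg := random * 100; cyg := random * 100;
  cxb := random * 100; cyb := random * 100;
  for i := 0 to 255 do
  begin
    // walk around a circle in noise space; entry 255 meets entry 0 seamlessly
    angle := 2 * Pi * i / 256;
    palette[i].r := round(127.5 * (1 + PerlinNoise2D(cxr + cos(angle), cyr + sin(angle))));
    palette[i].g := round(127.5 * (1 + PerlinNoise2D(cxg + cos(angle), cyg + sin(angle))));
    palette[i].b := round(127.5 * (1 + PerlinNoise2D(cxb + cos(angle), cyb + sin(angle))));
  end;
end;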

Here are some example palettes created using Perlin noise. Click to see the full sized image.

Simplex. Same as Perlin, but uses Simplex noise.

Simplex + Perlin. Create each RGB value by adding Simplex noise to Perlin noise.

Here are some examples of Simplex and Simplex + Perlin palettes. Click for full size.

Multiple Perlin. Add/subtract multiple Perlin noise curves into RGB amounts.

Random Walk. Random curve for each RGB component between index 0 and 127. Reverse for the rest of the palette. Each step the RGB is changed by +random(5)-2 to randomly go up and/or down.

Terrain Fault. Take 2 random points between 0 and 255. Between the points randomly raise or lower by a small amount. Repeat this a number of times.

HSL to RGB. Random HSL curves converted to RGB.

RGB. Random curves for each RGB component. Use various easing functions to tween curve control points.

YUV to RGB. Random YUV curves converted to RGB.

Combine palettes. Take 2 previously created palettes and combine their RGB components by addition, subtraction or multiplication.

Multiple RGB. Combine multiple RGB curves.

Multiple YUV to RGB. Combine multiple YUV to RGB curves.

Modify an existing palette

Once you have palette files, you can also use various techniques to modify them:

1. Increase or decrease the individual RGB channel amounts
2. Brightness
3. Contrast
4. Increase or decrease the individual YUV channel amounts
5. Wrap. Take the existing palette, halve it, then add the flipped half to itself. This is useful when you want a non-repeating palette to wrap around (see the code sketch after this list).

6. Double. If you have a palette that is too smooth/sparse for the current fractal image, doubling can add more lines/gradients to the palette.

7. Blur. Just like a blur function in image processing. Averages out the palette values with neighbor colors.
8. Sharpen. Just like a sharpen function in image processing.
9. Shift RGB. R->G,G->B,B->R.

10. Invert. R=255-R, G=255-G, B=255-B.
11. Reverse. Flip the order of the palette colors.
12. Histogram equalize palette. Like the auto-levels in Photoshop. My method tends to make the results slightly too bright. Needs fixing when I get a chance.

13. Matrix multiplication. Take a 3×3 matrix and multiply the 1×3 RGB components by the matrix to get new RGB amounts.
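
As an example of these modifications, here is one plausible reading of the Wrap operation from item 5 in code (my interpretation, not the actual Visions of Chaos implementation):

// keep the first half of the palette, then append it again in reverse
// order so the last entry matches the first and the palette loops seamlessly
for i := 0 to 127 do
  palette[128 + i] := palette[127 - i];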

Any other ideas?

If you know of any other ways to generate palettes, or have an idea for ways to create new unique color palettes, let me know.

Availability

The color palette editor shown in this post is included with Visions of Chaos.

Just give me the palettes!

If you are using another program that supports Fractint palette files, you can download the 3371 color palettes I include with Visions of Chaos here. Some were created by me, others were found on various Internet sites over the years, and some were converted from gradient packs. There is no copyright on them, so do with them as you wish.

If you do have any other sets of MAP palettes you would like to share, send me an email. You can never have enough colors when creating fractal images.

Jason.

Vorticity Confinement for Eulerian Fluid Simulations

Eulerian fluid simulations simulate the flow of fluids by tracking fluid velocity and density over a set of individual (discrete) evenly spaced grid locations. One downside to this approach is that the finer details in the fluid can be smoothed out, so you lose those little swirls and vortices.

A simple fix for this is to add Vorticity Confinement. If you read the Wikipedia page on Vorticity Confinement you may be no wiser on what it is or how to add it into your fluid simulations.

My explanation of vorticity confinement is that it looks for curls (vortices) in the fluid and adds in velocity to help boost the swirling motion. Adding vorticity confinement can also give more turbulent looking fluid simulations, which tend to be more aesthetically pleasing (unless you are a member of team laminar flow).

The code for implementing vorticity confinement is relatively simple. For 2D I used the snippet provided by Iam0x539 in this video.


function Curl(x, y: integer): double;
begin
     // discrete curl of the velocity field at grid cell (x, y)
     Curl := xvelocity[x, y + 1] - xvelocity[x, y - 1] + yvelocity[x - 1, y] - yvelocity[x + 1, y];
end;

procedure VorticityConfinement(vorticity: double);
var dx, dy, len: double;
    x, y: integer;
begin
     for y := 2 to _h - 3 do
     begin
          for x := 2 to _w - 3 do
          begin
               // gradient of the absolute curl points towards the strongest nearby swirl
               dx := abs(Curl(x, y - 1)) - abs(Curl(x, y + 1));
               dy := abs(Curl(x + 1, y)) - abs(Curl(x - 1, y));
               // normalize the gradient (the 1e-5 avoids division by zero)
               len := sqrt(sqr(dx) + sqr(dy)) + 1e-5;
               dx := vorticity / len * dx;
               dy := vorticity / len * dy;
               // nudge the velocity to reinforce the local rotation
               xvelocity[x, y] := xvelocity[x, y] + timestep * Curl(x, y) * dx;
               yvelocity[x, y] := yvelocity[x, y] + timestep * Curl(x, y) * dy;
          end;
     end;
end;

The VorticityConfinement procedure is called once per simulation step. It looks for local curl at each fluid grid point and then increases the local x and y velocities using the curl. This is what helps preserve the little vortices and helps reduce the smoothing out of the fluid.

To demonstrate how vorticity confinement changes a fluid simulation, the images within this post and the following movie add vorticity confinement to my previous Eulerian MAC Fluid Simulations code.

Eulerian MAC Fluid Simulations with Vorticity Confinement is now included in the latest version of Visions of Chaos.

Jason.

Eulerian Marker-and-Cell Fluids

Benedikt Bitterli has a set of YouTube videos that have been an inspiration for years.

He generously shares the source code to a series of programs on his Incremental Fluids GitHub that cover implementing a 2D fluid simulation. His code is based on Robert Bridson’s book, “Fluid Simulation for Computer Graphics”. I have seen that book mentioned all over the place and almost bought a copy, but reviews say it is focused more on the math (not so helpful to me) and less on the code (which I can follow much more easily than math formulas).

So far, I have converted Benedikt’s first and second programs for inclusion in Visions of Chaos. Calculations at 4K resolution were originally taking up to 10 minutes per frame, but with some multi-threading and code optimizations I got it down to around 10 seconds per 4K resolution frame on a relatively modern i7 CPU.

The results so far are really nice. The resulting flows show highly detailed vortices and fluid behavior.

Here is a sample 4K resolution movie showing these fluids in motion.

Eulerian Marker-and-Cell Fluid Simulations are now available in the latest version of Visions of Chaos.

Jason.

Jos Stam’s Fluid Simulations in 3D

I am a huge fan of anything related to fluid simulations. This interest was once again sparked when Daniel Shiffman covered fluid simulation in his latest Coding Train video.

As always, I recommend Coding Train to any developer. Dan has a unique way of making his programming topics both interesting and entertaining.

The code converted during the Coding Train video (Mike Ash’s Fluid Simulation For Dummies) is based on Jos Stam’s Stable Fluids code.

Jos Stam

Jos Stam wrote his seminal paper Real-Time Fluid Dynamics for Games back in March 2003. I have lost count of the times I have seen the paper cited or linked to. It has had a huge influence on fluid simulation.

The original source code from the paper is provided here and here.

Going back 11 years, one of my first YouTube videos was this super low resolution example of 2D fluid simulation using Stam’s methods.

3D Fluid

My main objective when revisiting Stam’s stable fluids was to get a 3D version going.

A quick Google search led me to Blain Maguire’s implementation which I was able to translate into a working 3D fluid simulation.

By default Stam’s fluid method generates a fairly smoothed fluid/smoke flow. To make things more interesting you want to add a bit of turbulence. The key term here is “vorticity confinement”. Vorticity confinement was first described in the paper Visual Simulation of Smoke by Fedkiw et al.

I found some 3D source code for adding vorticity confinement to Stam’s stable fluids here. See the fluid.cpp file inside the fire32.tar.gz archive.

At this stage I had 3D fluid working. Now comes the fun part, how to display the fluid.

Displaying 3D Fluid

Once you get the code working, the main issue becomes how to display the fluid.

Stam’s fluid uses what is known as an Eulerian approach to fluid simulation. Rather than tracking individual fluid particles (as in SPH simulations), the fluid is simulated by tracking velocity and density at fixed grid-based locations in 3D space. This means that as the simulation runs you have a 3D grid of fluid properties that needs to be displayed.

The method I use is to hide the cells that contain a fluid velocity lower than a threshold value (0.01 seems to work OK for my tests), then render the remaining cells as spheres or cubes, color shaded based on fluid velocity.
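
The culling test is a one-liner inside the render loop (array names assumed, extending the 2D velocity arrays from the vorticity confinement code to 3D):

// skip cells whose velocity magnitude is below the display threshold
if sqrt(sqr(xvelocity[x, y, z]) + sqr(yvelocity[x, y, z]) + sqr(zvelocity[x, y, z])) < 0.01 then
   continue; // cell is effectively still, do not render it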

For a fake volume rendering style approach you can render the fluid using translucent billboard quads. If you run a pre-render pass over the array and strip any cell that is surrounded by other cells, only the “shell” of the fluid is rendered. Rendering these shells with the billboard particles gives a nice result.

Movie results

Availability

3D Jos Stam Stable Fluids are now included in Visions of Chaos.

Jason.