Automatic Detection of Interesting Cellular Automata

This post has been in a draft state for at least a couple of years now. I revisit it whenever I get inspiration for a new idea. I wasn’t going to bother posting it until I had a better solution to the problem, but maybe these ideas can trigger a working solution in someone else’s mind.

Compared to my other blog posts this one is more rambling as it follows the paths I have gone down so far when trying to solve this problem.




Objective

Cellular automata tend to have huge parameter search spaces within which to find interesting results. The vast majority of rules in this space are junk, with only a small fraction of a percent being interesting. I have spent way too many hours repeatedly trying random rules when looking for new interesting cellular automata. Between the boring rules that die out and the rules that explode into chaos there is that sweet spot of interesting rules. Finding these interesting rules is the problem.

Needle in a haystack

My ideal goal has always been to be able to run random rules repeatedly, hands free, and have software that is “clever” enough to tell interesting results from boring ones. If the algorithms are good enough at detecting interestingness then you can come back to the computer hours or days later and have a set of rules in a folder with preview images and/or movies to check out.

I want the smarts to be clever enough to work with a variety of CA types beyond the basic 2 state 2D cellular automata. Visions of Chaos contains many varieties of cellular automata with varying maximum cell states, dimensions and neighborhoods, and for all of them I would ultimately like to be able to click a “Look for interesting rules” button.




Interesting Defined

Interesting is a very loose term. Maybe a few examples will help define what I mean when I say interesting.

Boring results are when a CA stabilizes to a fixed pattern or a pattern with very minimal change between steps.


Chaotic results are when the CA turns into a screen of static with no real discernible patterns or features like gliders or other CA related structures. For a CA classifier these rules are also boring.


Interesting is anything else. Rules like Game of Life, Brian’s Brain and others that create evolving structures that survive over multiple cycles of the CA. This is what I want the software to be able to detect.

Cellular Automaton

Conway’s Game of Life – 23/3/2



Cellular Automaton

Brian’s Brain – /2/3



Cellular Automaton

Fireballs – 346/2/4




My Previous Search Methods

1. Random rules. Repeatedly generate random rules hoping to see an interesting result. Tedious to say the least, although the majority of the interesting cellular automata rules I have found over the years have been through repeatedly trying different random rules. While a boring TV show or movie is on I can repeatedly hit F3, F4 and Enter in Visions of Chaos while looking for interesting results. F3 stops the current CA running, F4 shows the settings dialog, Enter clicks the Random Rule button.

2. Brute force all possible rules. Only applicable for when the total number of rules is small (possible for some of the simpler 1D CAs). Most 2D CAs have millions or billions of possible rules and brute force rendering them all and then checking manually is impossible.

3. Mutating existing interesting rules. If you get an interesting rule, you can mutate it slightly to try alternatives that may behave similarly to, yet better than, the original rule. Slightly usually means toggling one of the survival/birth checkboxes on or off. This has occasionally helped me find interesting rules or refine a rule to that sweet spot. The problem with CAs is that even changing one checkbox will usually produce a completely different result. The good results do not tend to “clump” together in the parameter space.

The rest of this blog post contains methods others and myself have tried to classify cellular automata behavior.




Wolfram Classification

Stephen Wolfram

Stephen Wolfram defined a rough set of 4 classifications for CAs.

Class 1: Nearly all initial patterns evolve quickly into a stable, homogeneous state. Any randomness in the initial pattern disappears.

Class 2: Nearly all initial patterns evolve quickly into stable or oscillating structures. Some of the randomness in the initial pattern may filter out, but some remains. Local changes to the initial pattern tend to remain local.

Class 3: Nearly all initial patterns evolve in a pseudo-random or chaotic manner. Any stable structures that appear are quickly destroyed by the surrounding noise. Local changes to the initial pattern tend to spread indefinitely.

Class 4: Nearly all initial patterns evolve into structures that interact in complex and interesting ways, with the formation of local structures that are able to survive for long periods of time.

Classes 1 to 3 would be considered “boring” for anyone trying random rules. Class 4 is that “sweet spot” of CAs that something interesting happens between dying out and chaotic explosions.

You can look at a CA after it has been discovered and put it into one of those 4 categories, but that doesn’t help with detecting new interesting Class 4 rules in the first place.




Other Methods From Various Papers

Here are some other classification methods from papers I found or saw mentioned elsewhere. The mathematics is beyond me for most of them. I wish papers included a small snippet of source code that shows the math. I always find it much easier to understand and implement some source code than to decipher formal equations.

Behavioral Metrics

Search Of Complex Binary Cellular Automata Using Behavioral Metrics.

Entropy

Wolfram’s Universality And Complexity In Cellular Automata discusses “entropy” values that I don’t understand.

Lyapunov Exponents

Stability Of Cellular Automata Trajectories Revisited: Branching Walks And Lyapunov Profiles.

Towards The Full Lyapunov Spectrum Of Elementary Cellular Automata.

Kolmogorov–Chaitin Complexity

Asymptotic Behaviour And Ratios Of Complexity In Cellular Automata.

Genetic Algorithms

Searching For Complex CA Rules With GAs.

Evolving Continuous Cellular Automata For Aesthetic Objectives.

Extracting Cellular Automaton Rules Directly From Experimental Data.

Other Papers

Pattern Generation Using Likelihood Inference For Cellular Automata. 1D CAs.




MergeLife

Jeff Heaton uses genetic mutations to evolve cellular automata.




Langton’s Lambda

Chris Langton

Chris Langton defined a single number that can help predict if a CA will fall within the ordered realm. See his paper Computation at the edge of chaos for the mathematical definitions etc.

Langton called this number lambda. According to this page Lambda is calculated by counting the number of cells that have just been “born” that step of the CA and dividing it by the total CA cells. This gives a value between 0 and 1.

L = newlyborn/totalcellcount
L within 0.01 and 0.15 means a good rule to further investigate.

So if the grid is 20×20 in size and there were 50 cells that were newly born that CA cycle, then lambda would be 50/(20×20)=0.125

I skip the first 100 CA cycles to allow the CA to settle down and then average the lambda value for the next 50 steps.
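
As a concrete sketch of that measurement in Python (step_ca() here is a hypothetical helper that advances the grid one cycle and returns the new grid plus the count of newly born cells; it is not an actual Visions of Chaos routine):

def average_lambda(grid, skip=100, average=50):
    # Langton's Lambda averaged over 50 cycles after a 100 cycle settling period
    total_cells = len(grid) * len(grid[0])
    for _ in range(skip):
        grid, _ = step_ca(grid)
    total = 0.0
    for _ in range(average):
        grid, newly_born = step_ca(grid)      # newly_born = cells that turned on this cycle
        total += newly_born / total_cells     # lambda for this cycle
    return total / average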

As stated here there is no single value of lambda that will always give an interesting result. Langton’s paper and example applet are only concerned with 1D CA examples. I really want to find methods to search and classify 2D, 3D (and even 4D) cellular automata.

Rampe’s Lambdas

For lack of a better name, these are the “Rampe’s Lambda” values I experimented with as alternatives to Langton’s Lambda.

R1 = newlyborn/newlydead
R1 within 0.9 and 1.1 means a good rule to further investigate.

R2 = abs(newlyborn-newlydead)/totalcellcount
R2 within 0.001 and 0.005 means a good rule to further investigate.

R3 = (newlyborn+newlydead)/totalcellcount
R3 within 0.01 and 0.8 means a good rule to further investigate.

R4 = ((newlyborn/totalcellcount)+(newlydead/totalcellcount))/2
R4 within 0.01 and 0.23 means a good rule to further investigate.

R5 = % change in Langton’s Lambda between the last and current CA cycle
R5 within 0.01 and 0.1 means a good rule to further investigate.

Again, skip the first 100 cycles of the CA and then use the average lambda from the following 50 cycles.

Lambda Results

All of them (both Langton’s and my “Rampe” variations) are next to useless from my tests. I ran a bunch of known good rules and got mixed results. All the lambdas gave enough false positives to not be of any use in searching for interesting new rules. You may as well use a random number generator to classify the rules.

Maybe they can be used to weed out the extreme class 1, 2 and 3 uninteresting dead rules, but they are not useful for classifying if a class 4 like result is interesting or not.




Fractal Dimension

Fractal Dimension CA Search

Another method I tried is finding the fractal dimension of the CA image using box counting. Fractal dimensions are unlike the usual fixed 1D, 2D and 3D dimensions; for a 2D image the fractal dimension is a floating point value between 0 and 2.

The above screenshot shows the fractal dimension tests on existing sample interesting CA files. The results are all over the place with no “sweet spot” of dimension correlating to interesting. The way it works is that each CA is run for 50 steps, the image is converted to black and white (non black pixels in the image are changed to white) and then the dimension is calculated using the box counting method.
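
For reference, here is a rough box counting sketch in Python. It counts how many boxes of each size contain part of the pattern and fits the slope of log(count) against log(1/size). This is just the general idea, not the Visions of Chaos code.

import numpy as np

def box_counting_dimension(image):
    # image is a 2D boolean numpy array, True = white (non black) pixel
    # assumes the image is not empty so every count is above zero
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for size in sizes:
        count = 0
        for y in range(0, image.shape[0], size):
            for x in range(0, image.shape[1], size):
                if image[y:y+size, x:x+size].any():   # box touches the pattern
                    count += 1
        counts.append(count)
    # the slope of log(count) vs log(1/size) estimates the dimension
    slope, intercept = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope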

Increasing the range of dimensions accepted as “good” may allow the known interesting rules to pass the tests, but then a lot of uninteresting rules get classified as interesting too, meaning you still need to manually sort good from bad.

A fractal dimension between 1.0 and 1.4-1.5 can help weed out obvious “bad” results, but is really not helpful in hands free searching.




Neural Networks – Part 1

This was an idea I had for a while. Train a neural network to detect if a CA rule is interesting or not.

I was able to implement a rudimentary neural network system after watching these excellent videos from Dan Shiffman.

I went from almost zero knowledge of the internals of neural networks to being much more comfortable and able to code a working NN system. If you want to learn the basics of coding a neural network I highly recommend Dan’s playlist.

For a neural network to be able to give you meaningful output (in this case if a CA rule is interesting or not) it needs to be trained with known good and bad data.

I tried creating a neural network with 19 inputs (9 for survival states, 9 for birth states and 1 for number of states) to cover the possible CA settings, ie

2D CA Rules

The neural network has 19 inputs, a number of neurons in the hidden layer and a single output neuron that does the interestingness prediction.
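
My implementation was hand rolled from scratch after those videos, but for illustration the same topology in Keras would look something like this (a sketch, not my actual code):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# 19 inputs: 9 survival checkboxes, 9 birth checkboxes, 1 state count
model = Sequential()
model.add(Dense(19, activation="sigmoid", input_shape=(19,)))   # hidden layer
model.add(Dense(1, activation="sigmoid"))                       # interestingness prediction
model.compile(loss="mean_squared_error", optimizer="adam")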

Neural Network

I mainly kept the hidden neuron count the same as the inputs, but I did experiment with other counts as the next diagram shows.

Neural Network

The known good and bad rules are fed through the neural network in random order 10 million or more times. You can see how well the network is “learning” by tracking the mean squared error. As you repeatedly feed the network known data the error value should drop, meaning the network is becoming more accurate at predicting the results you train it with.

Once the network is trained, you can run random rules and see if the prediction of the network matches your rating of whether the CA is interesting or not. You can also repeatedly try random rules until one passes a threshold level of interestingness. Every time a prediction is made the human can rate whether the detection was correct. These human ratings are added back to the good and bad rule training pool so they can be used the next time the network is trained.

The end result is “just OK”. I used a well trained network (with a mean squared error of around 0.001) and got it to repeatedly try random rules until it found a rule it predicted would be interesting. The results are not always interesting. It beats purely sitting there clicking random repeatedly as I have done in the past, but there are still a lot of uninteresting rules spat out. If I let the network run for a few hours and got it to save every rule it predicted to be interesting I would still have a tedious process of weeding out the actually interesting rules.

I don’t think inputs from survival and birth rules are the best way of doing this, because toggling a single survival or birth checkbox will usually drastically change the results from interesting to boring or just chaotic. Changing the maximum states each cell can have by 1 will also turn well behaved rules into a chaotic mess.

One idea I need to try is using a basic NN like this that uses the lambda values above for inputs. Maybe then it can work out which combination of lambdas (and maybe fractal dimension) work together to create good rules. This is worth experimenting with when I get some time.




Neural Networks – Part 2

This time I am trying to get the network to detect interesting CAs by using images from a frame of the CAs. For each of the known good and bad rules I take the 100th frame as an input. I also repeat each of the rules 100 times to get 100 samples of each rule.

If I use a 64×64 sized grayscale image then there will now be 4,096 inputs to the network. Add another 100 hidden nodes and that makes a large and much slower network when training.

Run the CA rules on a 64×64 sized grid, convert the image pixels into the 4,096 inputs and train the network.
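
Feeding a frame in is then just a rescale and flatten, along these lines:

import numpy as np

def frame_to_inputs(frame):
    # frame is a 64x64 grayscale array with values 0-255
    # returns 4,096 input values scaled to the 0..1 range
    return (np.asarray(frame, dtype=np.float32) / 255.0).reshape(4096)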

So far, no good results. The mean squared error falls very slowly. Maybe it would get better after days of training, but I am not that patient yet.

This online example and this article show how this method (a fully connected neural network) is never as accurate as a convolutional neural network. So, onto Part 3…




Neural Networks – Part 3

My next idea was to try using Convolutional Neural Networks. See here for a nice explanation of convolutional neural networks.

Convolutional Neural Network

CNNs are made for image processing, feature extraction and detection. If a CNN can be trained to recognize digits and tell if a photo is of a cat or a dog then I should be able to use a CNN to “look at” a frame of a cellular automaton and tell me if it is interesting or not.

After watching a bunch of YouTube university lectures and tutorials on CNNs I decided not to extend my existing neural network code to handle CNNs. For the network sizes I will be training I need a real world library. I chose Google’s TensorFlow.

TensorFlow Logo

TensorFlow supports GPU acceleration with CUDA and is orders of magnitude faster and more reliable than anything I could code.

Once I managed to get Python, TensorFlow, Keras, CUDA and cuDNN installed correctly I was able to execute Python scripts from within Visions of Chaos and successfully run the example TensorFlow CNN MNIST code. That showed I had all the various components working as expected.

Creating Training Data for the CNN

Acquiring clean and accurate training data is vital for a good model. The more data the better.

I used the following steps to create a lot of training images (a rough code sketch follows the list);

1. Take a bunch of CA rules that I had previously ranked as either good or bad.

2. Run all of them over a 128×128 sized grid for 100 steps and save the 100th frame as a grayscale jpg file.

3. Step 2 can be repeated multiple times to increase the amount of training data. CAs starting from a random grid will always give you a unique 100th frame so this is an easy way to generate lots more training data.

4. Copy some of the generated images into a test folder. I usually move 1/10th of the total generated images into a test folder. These will be used to evaluate how accurate the model is at predictions once it has been trained. You want test data that is different to the data used to train and validate the model.
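
In rough Python (run_ca() stands in for the Visions of Chaos CA engine and good_rules/bad_rules are the previously ranked rule lists, so this is a sketch rather than working code):

import os
import random
import cv2

for label, rules in (("good", good_rules), ("bad", bad_rules)):
    for rule in rules:
        for sample in range(100):                 # rerun each rule from a new random grid
            frame = run_ca(rule, grid_size=128, steps=100)          # 100th frame, grayscale
            folder = "test" if random.random() < 0.1 else "train"   # hold roughly 1/10th out
            cv2.imwrite(os.path.join(folder, label, f"{rule}_{sample}.jpg"), frame)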

Good Cellular Automata Training Data

Examples of good CA frames



Bad Cellular Automata Training Data

Examples of bad CA frames


Quantity and Dimensions of Training Data Images

I tried image sizes between 32×32 pixels and 128×128 pixels. I also tried various zoomed in CA images with each cell being 2×2 pixels rather than a single pixel per cell.

For image counts I tried between 10,000 up to 300,000.

After days of generating images and training and testing models I found a good balance between image size and model accuracy was images 128×128 pixels in size with a single pixel per CA cell (so a CA grid of 128×128 too).

I also experimented with blurring the images thinking that may help search for more general patterns, but it did not seem to make any difference in the number found or accuracy of results.

One thing working with neural networks teaches you is patience. Generating the images is the slowest part of these experiments. If anyone is willing to gift me some decent high end CPUs and GPUs I would put them to good use.

Custom Input for CNNs

The best videos I found on using CNNs with custom images were these videos on YouTube by sentdex. Parts 1 to 6 of that playlist got me up and running.

Creating the Training Data for TensorFlow

Once you have your training images they need to be converted into a data format that TensorFlow can be trained with.

Again, I recommend the following sentdex video that covers how to create the training data.

The process to convert the training images into training data is fast and should not take longer than a minute or two.
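
The gist of it (following the sentdex approach) is to load each image, label it by the folder it came from, shuffle, and pack everything into numpy arrays ready for training:

import os
import random
import cv2
import numpy as np

IMG_SIZE = 128
training_data = []
for label, category in enumerate(["bad", "good"]):    # bad = 0, good = 1
    folder = os.path.join("train", category)
    for filename in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, filename), cv2.IMREAD_GRAYSCALE)
        training_data.append((img, label))

random.shuffle(training_data)
X = np.array([img for img, _ in training_data]).reshape(-1, IMG_SIZE, IMG_SIZE, 1) / 255.0
y = np.array([label for _, label in training_data])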

Model Variations

Time to actually use this training data to train a convolutional neural network (what TensorFlow calls a model).

There are a wide variety of model and layer types to experiment with. For CNNs you basically start with one or more Conv2D layers followed by one or more Dense layers and finally a single output node to predict a probability of the image being good or bad.

Here are some models I tried during testing. From various sources and videos and pages I have seen. Running on an Nvidia 1080 GPU took around 2 hours per model to train (50 epochs each with 100,000 training images), which seemed lightning fast after waiting 30 hours for my training images to generate.


# Version 1
# Original model from sentdex videos
# https://youtu.be/WvoLTXIjBYU

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

# X and y are the training images and labels prepared earlier
model = Sequential()

model.add(Conv2D(64, (3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())

model.add(Dense(64))
model.add(Activation("relu"))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

When the 50 epochs finish, you can plot the accuracy and loss vs the validation accuracy and loss.

TensorFlow training graph

Version 1 gave these results;
test loss, test acc: [0.13674676911142003, 0.9789666691422463]
98% accuracy with a loss of 13%
When I test a different unique set of images as test data I get;
14500 good images predicted as good – 301 good images predicted as bad – 97.97% predicted correctly
14653 bad images predicted as bad – 184 bad images predicted as good – 98.76% predicted correctly

One thing the above “Model loss” graph shows is overfitting. The val_loss graph should follow the loss graph and continue to go down. Instead of going down the line starts going up around the 5th epoch. This is an obvious sign of overfitting. Overfitting is bad. We don’t want overfitting. See here for more info on overfitting and how to avoid it.

The second suggestion from here mentions dropouts. Dropouts remove random links between nodes in the model network as it trains. This can help reduce overfitting. So let’s give that a go.


# Version 2
# Original model from sentdex videos
# https://youtu.be/WvoLTXIjBYU
# Adding dropouts to stop overfitting

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()

model.add(Conv2D(64, (3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))

model.add(Flatten())

model.add(Dense(64))
model.add(Activation("relu"))
model.add(Dropout(0.4))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

50 epochs finished with this graph.

TensorFlow training graph

Now the validation loss continues to generally go down with the loss graph. This shows overfitting is no longer occurring.

Version 2 gave these results;
test loss, test acc: [0.037338864829847204, 0.9866000044345856]
98.7% accuracy with a 3.7% loss
When I test a different unique set of images as test data I get;
14151 good images predicted as good – 68 good images predicted as bad – 99.52% predicted correctly
14326 bad images predicted as bad – 12 bad images predicted as good – 99.92% predicted correctly


# Version 3
# https://towardsdatascience.com/applied-deep-learning-part-4-convolutional-neural-networks-584bc134c1e2

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()

model.add(Conv2D(32, (3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dropout(0.5))

model.add(Dense(512))
model.add(Activation("relu"))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

Graphs looked good without any obvious overfitting.

Version 3 gave these results;
test loss, test acc: [0.03628389219510306, 0.9891333370407422]
98.9% accuracy with a 3.6% loss. Getting better.
When I test a different unique set of images as test data I get;
14669 good images predicted as good – 59 good images predicted as bad – 99.60% predicted correctly
14490 bad images predicted as bad – 62 bad images predicted as good – 99.57% predicted correctly


# Version 4
# http://www.dsimb.inserm.fr/~ghouzam/personal_projects/Simpson_character_recognition.html

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization

model = Sequential()

model.add(Conv2D(32, (3,3), input_shape = X.shape[1:]))
model.add(Conv2D(32, (3,3)))
model.add(Activation("relu"))
# BatchNormalization better than Dropout? https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3,3)))
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
# BatchNormalization better than Dropout? https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(128, (3,3)))
model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
# BatchNormalization better than Dropout? https://www.kdnuggets.com/2018/09/dropout-convolutional-networks.html
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(64))
model.add(BatchNormalization())
model.add(Activation("relu"))

model.add(Dense(32))
model.add(BatchNormalization())
model.add(Activation("relu"))

model.add(Dense(16))
model.add(BatchNormalization())
model.add(Activation("relu"))

model.add(Dense(1))
model.add(Activation("sigmoid"))

model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X, y, batch_size=50, epochs=50, validation_split=0.3)

For this model I threw in multiple ideas from all previous models and more.

Version 4 gave these results;
test loss, test acc: [0.031484683321298994, 0.9896000043551128]
99% accuracy with a 3% loss. Best result so far.
When I test a different unique set of images as test data I get;
14383 good images predicted as good – 119 good images predicted as bad – 99.18% predicted correctly
14845 bad images predicted as bad – 4 bad images predicted as good – 99.97% predicted correctly

For the rest of my tests I used Version 4 for all training.

Tweaking your CNN models

See the sentdex videos above for a good example of how to tweak models and see how the variations rate. Use TensorBoard to see how they compare and optimize them.

TensorBoard Graphs

TensorBoard has other interesting histograms it will generate from your training like the following. I have no idea what this is telling me yet, but they look cool. Using histograms did seem to slow down the training with extended pauses between epochs, so unless you need them I recommend disabling them.

TensorBoard Histograms

Testing the Trained Model

Now it is finally time to put the model to the test.

Randomly set a CA rule, run it for 100 generations and then use model.predict on the 100th frame. This takes around 6 seconds per random rule.

The model.predict function returns a floating point value between 0 and 1.
Between 0 and 0.2 are classified as bad.
0.2 to 0.95 are classified as unsure.
0.95 to 1 are classified as good.
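
In code the classification is just a couple of threshold checks on that value (a sketch, assuming the same 128×128 grayscale frame format used for training):

# frame is the 100th CA frame as a 128x128 grayscale numpy array
prediction = float(model.predict(frame.reshape(1, 128, 128, 1) / 255.0))

if prediction < 0.2:
    rating = "bad"          # safe to discard
elif prediction < 0.95:
    rating = "unsure"
else:
    rating = "good"         # save the rule with a preview image and/or movie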

The prediction accuracy is better than any of the other methods shown previously in this post.

The rules it did detect in the bad category were all bad, so it does a great job there. No interesting rules got incorrectly classified as bad from my tests. I can safely ignore rules classified as bad which speeds up the search time as I don’t have to re-run the rules and create a sample movie.

The detected good rules did have a blend of interesting and boring/chaotic, but there were a lot less of them to check. Roughly 1% of total random rules are classified as good. The rules the model incorrectly predicts as interesting can be moved into the “known bad” folder and can be added to the next trained model (another 40 hours or so of my PC churning away generating images and training a new model).

The rules it predicted in the unsure 0.2 to 0.95 range did have features that were in the range between good and bad. Some of them would have made excellent good samples if only they were not as chaotic and “busy”.

Results

Here are some examples found from overnight convolutional neural network searches.

Cellular Automaton

TF247764 – 4567/234568/3 – Brian’s Brain with islands


Cellular Automaton

TF394435 – 34/256/3 – Another Brian like rule


Cellular Automaton

TF459651 – 345/345/3 – Blobby balance between life and death


Cellular Automaton

TF174642 – 15678/12678/2 – Solid islands grow amongst static


Cellular Automaton

TF1965283 – 1235678/012567/3 – Amoebas living in chaos


Cellular Automaton

TF3392484 – 014567/34567/4 – Amoebas on a stable background


Other CNN Problems and Ideas

One problem is that CNNs seem to only detect shapes/gliders/patterns that are similar to the training data. After days of testing self searching with the CNN models there were no brand new different rules discovered, just a bunch of rules very similar to existing ones with maybe a few slight tweaks. For example, if a CNN is trained using only examples of Conway’s Game of Life then it is not going to predict Brian’s Brain is interesting if it randomly tries that rule. The CNN needs to have previously seen the rule(s) it will detect as interesting.

I did see slight variations found and scored as interesting, but for a new CA type without a lot of “good” rules to train on the CNN is going to have problems finding new/different interesting rules. The main reason I want a “search for interesting” function is for when I have a new type of CA without a lot of known good rules. I want the search to be able to work without needing hundreds or thousands of examples already rated good vs bad. Otherwise I need to sit there trying random rules for hours and manually rate them good or bad before training a new model specific to that CA type.

Maybe using single frames is not the best idea. Maybe the difference between the 99th and 100th frame? Maybe a blur or average of 3 frames? This is still to be experimented with when I have another week to spend generating images and training and testing new models.

Then I thought maybe I am overtraining the models. If you train a neural network for too long it will overfit and then only be able to recognize the data you trained it with. It is as if it memorizes only the good data you gave it as good and cannot generalize to detect other different good results as good. This means new interesting CAs could be classified as bad. I did try lowering the training epochs from 50 to 10 to see if that helped detect more generalized interesting CA rules but it didn’t seem to make any difference. Even lowering it to 5 epochs trained a model that was still accurate at predictions. Plus, testing different random frames of good CAs shows the model can detect gliders at different locations within frames.

Rather than train a model for each type of CA, train a model with examples from multiple CA types. Try and make the model more capable of general CA detection. Maybe it could then detect newer shapes/gliders in different new CA rules if it has a good general idea of what interesting CA features are from multiple different CAs. This may work? Another one for the to do list.

Convolutional Neural Networks (and neural networks in general) are not an instant win solution. You do need to do a lot of research about the various settings and do a lot of testing to get a good model which you can then use to predict the “things” you want the model to predict. But once you get a well trained model CNNs can be almost magical in how they can learn and be useful when solving problems.

The more I experiment with and learn about neural networks, the more I want to continue the journey. They really are fascinating. TensorFlow and Keras are a great way to get into the world of neural networks without having to code your own neural network system from scratch. I do recommend at least coding a basic feed forward neural network to get a good grip on the basics. When you jump into Keras the terminology will make more sense. YouTube has lots of good neural network related videos.




Availability to End Users

I have now included the trained TensorFlow CNN model (Version 4 trained for 20 epochs, to hopefully leave a little room for finding more unique results) with Visions of Chaos. That means the end user does not need to do any image generation or training before using the CNN for searching. Python and TensorFlow need to be installed first, but after that the user can start a hands free search for interesting rules. When TensorFlow is installed and detected, a search button appears on the 2D Cellular Automata dialog. Clicking Search starts a hands free random search and classification.

TensorFlow CNN CA Searching

The other search methods above are still hidden as they do not predict interesting with a high enough accuracy.




The End (for now)

If you managed to get this far, thanks for reading.

If you have some knowledge about any of the above methods that I missed please leave a reply or get in touch and let me know.

Any other ideas for cellular automata searching and classification are also welcome.

I will continue to update this post with any other methods I find in the future.

Jason.

Species

Overview

This is a relatively simple simulation of multiple species fighting for survival.

Simulation World

A terrain is generated using Perlin noise. The noise values are blurred to smooth out the edges a little.

Creatures

Multiple types of creatures are created that inhabit the world. They each have properties like the following (a rough code sketch follows the list);
X and Y position – where the creature is in the world
Radius – how large the creature is
Direction – what direction the creature is facing
Speed – how far the creature moves each step of the simulation
Color – what color it is so creature types can easily be distinguished from one another
Sides – creatures are shown as polygons with between 3 and 8 sides
Age – how many simulation steps has the creature lived for
Maximum Age – if a creature reaches this age it dies of old age
Minimum and Maximum Breed Ages – a range of ages that the creature can reproduce
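
A minimal sketch of such a creature record in Python (field names and default values are mine for illustration, not necessarily what Visions of Chaos uses internally):

from dataclasses import dataclass

@dataclass
class Creature:
    x: float                  # position in the world
    y: float
    radius: float             # how large the creature is
    direction: float          # angle the creature is facing
    speed: float              # distance moved each simulation step
    color: tuple              # RGB used to tell creature types apart
    sides: int                # polygon sides, 3 to 8
    age: int = 0              # simulation steps lived so far
    max_age: int = 1000       # dies of old age at this point (arbitrary default)
    min_breed_age: int = 100  # reproduction allowed between these two ages
    max_breed_age: int = 500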

The simulation is started by creating a bunch of random creatures in the world. They all move according to their properties.

Fighting

When 2 creatures come into contact with each other they fight for survival. At this stage I have 3 possible fight methods to determine which creature wins;
1. Random – one of the creatures in the fight is randomly chosen to die
2. Attacker wins – whichever creature first moves and hits another creature kills the creature it hits
3. Strongest wins – Creature strength goes up from birth to middle age then down again as the creature ages. This is so “babies” and “elderly” creatures are not as strong in battle against middle age creatures.

Reproduction

Creatures have a chance to duplicate themselves if they are between a minimum and maximum breed age and if there is room near them for the child creature to be born into. There is an option for the child properties to be mutated slightly (or not so slightly).

Results

Here is a sample movie showing a full run that lasts until one of the species manages to kill all others. No mutations in this example.

Species is now available as a mode within Visions of Chaos.

Jason.

Primordial Particle Systems

Primordial Particle Systems

A while back I was playing with Particle Life simulations. At that time, another video I came across was the following

Click here to read the paper “How a life-like system emerges from a simple particle motion law” that describes how it works in great detail.

Primordial Particle Systems

For a simpler overview I recommend this page by Brian H that includes snippets of the source code that helped me get my version working.

Primordial Particle Systems

My even (hopefully) simpler explanation is as follows;

1. Fill the simulation space with a bunch of particles.
2. Particles have settings for radius, alpha, beta and velocity.
– radius is how far around itself each particle can sense the other particles.
– alpha is the fixed rotation amount. Each particle turns by this amount each step of the simulation.
– beta is the proportional rotation. This is the amount the particle turns depending on its neighbor particles.
– velocity is how far the particles move forward each step.
3. Each particle maintains a heading which is the direction it is facing.
4. Each of the particles move by the following steps
– Count how many neighbor particles are within the radius
– Work out how many of them are to the left and right of the particle
– Turn towards the left or right with the larger count
– Move forward

That’s all there is. From those relatively local and simple steps you can get some nice cell like and amoeba like structures emerging.
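
As a rough Python sketch (assuming a simple Particle class with x, y and heading fields; the turn each step is the fixed alpha plus beta times the neighbor count, towards the side with more neighbors):

import math

def sign(v):
    return (v > 0) - (v < 0)

def step(particles, radius, alpha, beta, velocity):
    for p in particles:
        left = right = 0
        for q in particles:
            if q is p:
                continue
            dx, dy = q.x - p.x, q.y - p.y
            if dx * dx + dy * dy < radius * radius:    # q is within the sense radius
                # which side of p's heading is q on (convention depends on y axis direction)
                if math.sin(math.atan2(dy, dx) - p.heading) > 0:
                    left += 1
                else:
                    right += 1
        n = left + right
        p.heading += alpha + beta * n * sign(left - right)   # turn towards the busier side
        p.x += velocity * math.cos(p.heading)                # move forward
        p.y += velocity * math.sin(p.heading)
    # (a stricter version would compute all the turns first, then move every particle)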

Primordial Particle Systems

More sample images in this gallery.

The following movie shows some example results created with the latest version of Visions of Chaos.

Jason.

Physarum Simulations

Physarum Polycephalum

Physarum Polycephalum aka slime mold is made up of a vast number of individual single cell organisms. These organisms have no brains or intelligence, but complex behaviors emerge when many of them are put together. Depending on their environment they move like what seems to be a much more complex entity.

Here are some great videos about slime molds with some awesome time lapse footage.

Once you have watched those you should hopefully have a better appreciation for the simple slime mold and the rest of this post will make more sense.

Here is one final video showing time lapse footage of various Physarum

Simulating Slime Molds

I have been interested in trying to simulate slime molds for years now and my interest was once again piqued after seeing Sage Jenson’s Physarum page here describing his simulations.

Sage was inspired by the paper Characteristics of Pattern Formation and Evolution in Approximations of Physarum Transport Networks.

He gives this simple diagram explaining the steps.

The basic explanation is a bunch of particles move over an area turning towards spots with higher concentrations of a pheromone trail. They also leave a trail as they move. These basic steps create interesting patterns and structures.

My method

Physarum Simulation

Following the principles from Sage and the paper, this is how my take on simulating Physarum works.

1. Create a 2D array that tracks the pheromone trail intensity at every pixel location. Initially all spots are set to 0 intensity. I tried setting various shapes and Perlin noise clouds to start, but the moving particles quickly erase any starting shapes and create their own paths so I just start with an empty space. Sage’s examples show interesting patterns and structures when starting with circles or other shapes, so I need to do some more work on start patterns.

2. Create a list of particles with properties heading (direction/angle the particle is moving), x,y (position), sense angle (how wide the particle looks to the left and right), sense distance (how far in front the particle looks) and turn angle (how quickly the particle turns towards the sensed areas). I set the number of particles to match the image width multiplied by the image height. That seems to nicely adjust the particle count when changing image sizes.

3. Main loop

a) Display. For display I scale the minimum and maximum trail values to between 0 and 255 for a gray scale intensity (or to be used as an index into a color palette, but simple gray scale seems to look the best).

b) Each particle looks at the 3 locations in front of it based on the sense angle and distance. You then work out which of the left, front and right spots has the highest concentration of the pheromone trail.

c) Turn the particle towards the highest pheromone intensity, ie if the left spot is highest then subtract turn angle from the particle heading, if the front is highest make no change to the particle heading, and if the right is highest add turn angle to the particle heading. You can also reverse this process so the particles turn away from the highest pheromone levels.

d) Move the particle forwards by a specified move amount.

e) Eat/absorb. I added a setting so that particles can absorb a bit of the pheromone trail at this point.

f) Deposit an amount of pheromone onto the trail to increase it.

g) Blur the trail array. This simulates the pheromones diffusing over the surface. I use this quick blur with an option for a blur radius between 1 and 5.

h) Evaporate the trail by a small amount. This slowly decays the amount of pheromone.

Repeat the main loop as long as necessary.
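
Pulling steps a) to h) together, the core of each main loop iteration looks roughly like this. A sketch only: trail is the 2D pheromone array, the particle fields match step 2 above, blur() stands in for the quick blur linked above, and the constants are example settings.

import math

WIDTH, HEIGHT = 512, 512                        # trail array dimensions
ABSORB, DEPOSIT, EVAPORATE = 0.1, 5.0, 0.02     # example amounts only

def sense(trail, p, offset):
    angle = p.heading + offset
    x = int(p.x + math.cos(angle) * p.sense_distance) % WIDTH    # wrap at the edges
    y = int(p.y + math.sin(angle) * p.sense_distance) % HEIGHT
    return trail[y][x]

def main_loop(trail, particles):
    for p in particles:
        # b) sample the trail at the three sensed spots
        left = sense(trail, p, -p.sense_angle)
        front = sense(trail, p, 0.0)
        right = sense(trail, p, +p.sense_angle)
        # c) turn towards the highest pheromone concentration
        if left > front and left > right:
            p.heading -= p.turn_angle
        elif right > front and right > left:
            p.heading += p.turn_angle
        # d) move forward
        p.x = (p.x + math.cos(p.heading) * p.move_amount) % WIDTH
        p.y = (p.y + math.sin(p.heading) * p.move_amount) % HEIGHT
        x, y = int(p.x), int(p.y)
        trail[y][x] = max(0.0, trail[y][x] - ABSORB)    # e) eat/absorb a little of the trail
        trail[y][x] += DEPOSIT                          # f) deposit pheromone
    blur(trail, radius=1)                               # g) diffuse the pheromones
    for row in trail:                                   # h) evaporate a small amount
        for x in range(WIDTH):
            row[x] *= (1.0 - EVAPORATE)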

Results

See my Physarum Simulations gallery for more images.

Here is a movie with some example results showing the simulations running. For the display the pheromone trail intensities are mapped to a gray scale palette (brighter = higher intensities).

Multiple Species Physarum Simulations

Physarum Simulation

My next idea was to have multiple Physarum types in the same area. For these cases I used 3 sets of Physarum (3 groups of particles with their own unique settings) as shown in the following settings dialog.

Physarum Simulation

Each of the pheromone trail intensities are then converted to RGB color components.

Physarum Simulation

This works but the results are just 3 separate simulations that do not interact. The idea is to have each of the particle types attract to their pheromones, but move away from the other 2 types of pheromones.

Physarum Simulation

The main change is in the pheromone detection and turn code. For the single Physarum simulation the particles look left, forward and right and then turn and move based on the location with the highest pheromone concentration. For 3 particle types each particle takes its own pheromone concentration and subtracts the pheromone concentrations of the other 2 types. For example, if the 3 trail/pheromone arrays are called rtrail, gtrail and btrail, then the red particles’ sensed value is calculated as rtrail[x,y]-gtrail[x,y]-btrail[x,y]. The particle then turns and moves towards whichever of left, forward and right has the highest resulting value.

Physarum Simulation

More example images can be seen in my Physarum Simulations Gallery.

Here is a sample movie showing some of the multiple species results.

Physarum Image Processing

This was inspired after seeing the following video from Magic Jesus.

A bunch of Physarum particles start on the surface of an image. The particle colors are based on the image color they start on.

After this let them wander around the image area following Physarum simulation rules with a slight change. In this case rather than turning left or right based on a pheromone trail intensity, they turn towards the pixel that is closest in color to themselves.

This is my result after running Physarum simulations on three colorful paintings. The first and third are from Leonid Afremov and the second by Kandinsky (same painting as in Magic Jesus’ example movie).

These would look great on a large wall in a modern art gallery. Playing slowly enough so you could just notice the changing colors (like clouds moving slow enough you don’t notice they change until you look away and back again). The exhibits with those dark rooms you enter and read the little white plaque with a blurb on what it is all about. “The slow interplay of colors represents the human condition and the struggles of how humans still cannot find a peaceful equilibrium of coexistence with themselves and the planet.”

Availability

Both single and multiple species Physarum Simulations and Physarum Pixel Flow are now included with the latest version of Visions of Chaos.

Jason.

Style Transfer GANs (Generative Adversarial Networks)

Style Transfer Generative Adversarial Networks take two images and apply the style from one image to the other image. Here are some sample results from here.

Style Transfer GAN examples

For a more technical explanation of how these work, you can refer to the following papers;

Image Style Transfer Using Convolutional Neural Networks
Artistic style transfer for videos
Preserving Color in Neural Artistic Style Transfer

Ever since first seeing this technique I wanted to add it as an image processing option within Visions of Chaos.

If you only want to play around with style transfer or only have a few photos you want to experiment with, then I recommend you use an online service like DeepArt because this can be a tedious process to set up and use on your own PC.

GPU with Cuda support

The methods in this blog will run without a graphics card processor (GPU) but are very slow using only the CPU (ten minutes for a tiny image with few iterations, hours for larger sizes with many iterations).

For fast results you need a Nvidia graphics card that supports Cuda. Check the list here. On board GPUs are not supported. AMD Radeon GPUs are not supported. You need Nvidia.

If you do not have a supported Nvidia graphics card you can continue to get the CPU supported version going if you are very patient and/or a masochist.

For these steps I made a new C:\STGAN\ folder for all the downloads, so that is what you will see referenced in the steps and screenshots.

Python

Python logo

Download the latest version of Python from here.

Install Python. NOTE: you must check the “Add Python to PATH” checkbox on the first Python installer screen.

TensorFlow

TensorFlow Logo

TensorFlow is a machine learning platform developed by Google.

To get TensorFlow support in Python type the following inside a command prompt window


pip install --no-cache-dir --ignore-installed --upgrade --force-reinstall tensorflow
pip install --no-cache-dir --ignore-installed --upgrade --force-reinstall tensorflow-gpu

Those commands add support for both the CPU and GPU versions of TensorFlow.

SciPy


pip install --no-cache-dir --ignore-installed --upgrade --force-reinstall scipy

OpenCV


pip install --no-cache-dir --ignore-installed --upgrade --force-reinstall opencv-python

Neural-style-tf

Now for the actual Python program that handles the style transfer.

Download and extract neural-style-tf (for this example I used the C:\STGAN\neural-style-tf-master\ folder )

Download this model and put it into the extracted neural-style-tf-master directory.

Change into the neural-style-tf-master folder with the command prompt

Now test each of these import lines (from neural_style.py) one at a time to verify everything is OK.
To do that, type “python” in your terminal and press enter (restart your terminal first if you haven’t since installing everything). The prompt should now start with “>>>” instead of the directory. Copy and paste the following commands (you can paste them all at once to save typing them one at a time) into the Python prompt.

import tensorflow as tf
import numpy as np
import scipy.io
import argparse
import struct
import errno
import time
import cv2
import os

They should come back without errors, ie

Python test

Test Run

Here we go! If you got to here then it is time to do a quick (slow) CPU test run.

Open a command prompt in the folder containing neural_style.py and run the following command


python neural_style.py --content_img golden_gate.jpg --style_imgs starry-night.jpg --max_size 1000 --max_iterations 100 --print_iterations 1 --original_colors --device /cpu:0 --verbose

You will see various stats and then after some time (on a not so new PC this took 15 minutes) you will see it finish.

You should then see the output under the C:\STGAN\neural-style-tf-master\image_output\ directory.

If you got to here then it is mostly working. The next step is to get GPU support working so the processing times can be much faster.

NVidia CUDA and cuDNN

Waiting 15 minutes per image really tests the patience. If you have a newer GPU then it can be used to speed up the calculations. Firstly you need to download the various support tools and drivers.

Nvidia logo

Download the latest version of CUDA from here and install it.

NOTE: by default the Nvidia installer wants to install extra drivers etc, you only need the libraries option checked, ie

CUDA

Download the latest cuDNN from here.

You do have to register, but if you do not want to use your real name and email to register, use a fake name and 10 minute mail to get the verification email.

Extract the cuDNN zip to a temp folder and then copy the cudnn64_6.dll (or possibly cudnn64_7.dll) into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\

REBOOT.

Re-run the test command (note that it now specifies GPU device to use). Also note that the max_size is small here. Larger sizes need more GPU memory and power and may fail, so best to start with a small sized image as a test.


python neural_style.py --content_img golden_gate.jpg --style_imgs starry-night.jpg --max_size 250 --max_iterations 100 --print_iterations 1 --original_colors --device /gpu:0 --verbose

and you should see it is MUCH faster.

If your GPU is not supported or it does not run, you are stuck with CPU, so roll back to CPU only support;


pip3 uninstall tensorflow-gpu
pip3 install --no-cache-dir --ignore-installed --upgrade tensorflow

Not all errors mean you cannot get GPU support. Read the output messages and look for any hints on what went wrong. Google error messages. If at all possible you do want to have GPU support for Style Transfer.

Style Transfer in Visions of Chaos

If you made it this far you can now experiment with style transfer GANs in Visions of Chaos. I have added some basic wrapper code that executes the python command to apply style transfer to any fractal or other image you can create.

Generate any image, then select Image->Image Processing->Style Transfer.

Visions of Chaos Style Transfer GAN

Start with smaller image sizes to get an idea of how long the process will take on your system before going for larger sized images.

You can also select any external image file to apply the style transfer to. So dig out those cat photos and have fun. Note that if you get tired of the limited style images that come with neural-style-tf you can put any image you like under the styles folder and use those. Grab an image of your favorite artist’s works and experiment.

For some examples I used the following photo of Miss Marple.

Miss Marple

And applied some various transfer style images.

MC Escher Plane Filling II

Miss Marple Style Transfer GAN

A Mandelbrot fractal

Miss Marple Style Transfer GAN

Another Mandelbrot fractal

Miss Marple Style Transfer GAN

HR Giger Biomechanical Landscape

Miss Marple Style Transfer GAN

Kandinsky Composition VII

Miss Marple Style Transfer GAN

Mondrian

Miss Marple Style Transfer GAN

Monet

Miss Marple Style Transfer GAN

Picasso Les Femmes d’Alger

Miss Marple Style Transfer GAN

Picasso Seated Nude

Miss Marple Style Transfer GAN

Hokusai The Great Wave off Kanagawa

Miss Marple Style Transfer GAN

Munch The Scream

Miss Marple Style Transfer GAN

Turner The Wreck of a Transport Ship

Miss Marple Style Transfer GAN

van Gogh Starry Night

Miss Marple Style Transfer GAN

Troubleshooting

If you get a failed style transfer and an error message, here are a few things to try;
1. Smaller image size. Depending on the RAM in your PC and GPU you may have maxed out.
2. Reboot. Seems to always fix a stubborn error for me. The Cuda and/or cuDNN seem to be the main culprit. They get hung or locked or something and only a reboot will get them working again.

Jason.

Automatic Color Palette Creation

Fractint MAP format palette files

Going back 30 years, Fractint was a fractal generation program for DOS based systems. For its time it was the fractal program of choice for enthusiasts.

Fractint used a simple text format for its color palettes. These *.MAP files were text files with each color’s RGB values separated by spaces each on a new line. So, for example if you wanted the first color in your palette to be blue the first line would be “0 0 255”.
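
Reading and writing the format takes only a few lines. For example, saving a 256 color palette (a list of (r, g, b) tuples) to a MAP file:

def save_map(filename, palette):
    # palette is a list of 256 (r, g, b) tuples with components 0-255
    with open(filename, "w") as f:
        for r, g, b in palette:
            f.write(f"{r} {g} {b}\n")    # e.g. "0 0 255" for blue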

When I first started creating Visions of Chaos I adopted the format. The most common map files had 256 colors (you could have palettes with other color counts but I only use 256 color palettes).

The rest of this post covers the palette creation methods that have been included with Visions of Chaos. Although I use these methods specifically to create 256 color MAP files the principles could be applied to any number of colors for different sized palettes.

If you are just looking for a Fractint color palette collection, scroll down to the end of this post and grab the archive provided.

Smoothly blending colors

Visions of Chaos Color Palette Editor

This is probably the first and most obvious method to use. Take a small number of base colors (I allow up to 16) and blend them into a palette.

How you get the colors to blend can be;

1. User selects them from the standard color picker dialog.
2. User can use eye dropper functionality to pick them out of a photo.
3. Set them at random.
4. Use the color wheel. Allows selection of complementary colors, tetrads, and other color theory based colors.

Visions of Chaos Color Palette Editor

5. Extract colors from an image. See this previous blog post explaining how that works.

Visions of Chaos Color Palette Editor

Once you have the colors there are numerous ways you can blend them;

1. Smooth blend. Smoothly interpolate the colors (see the code sketch after this list).

Visions of Chaos Color Palette Editor

2. Fade out blend. Fade each of the colors to black.

Visions of Chaos Color Palette Editor

3. Fade in blend. Fade each of the colors from black.

Visions of Chaos Color Palette Editor

4. Neon blend. Fade from black to the colors then back to black.

Visions of Chaos Color Palette Editor

5. Stripe blend. Alternate each color for the duration of the palette.

Visions of Chaos Color Palette Editor
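
As an example of the first method, the smooth blend is simple linear interpolation between successive base colors stretched across the 256 palette entries, roughly:

def smooth_blend(base_colors, size=256):
    # base_colors is a list of at least two (r, g, b) tuples to interpolate between
    palette = []
    segments = len(base_colors) - 1
    for i in range(size):
        t = i / (size - 1) * segments            # position along the color list
        a = min(int(t), segments - 1)            # index of the segment we are in
        frac = t - a                             # 0..1 position within that segment
        c1, c2 = base_colors[a], base_colors[a + 1]
        palette.append(tuple(round(c1[k] + (c2[k] - c1[k]) * frac) for k in range(3)))
    return palette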

Using curves to create palettes

The idea here is to use various mathematical functions to generate curves for the RGB components of the palette. The following is a list of the various methods I use so far.

Sine. Each RGB color component is its own sine wave. Randomize the wave amplitude, frequency and period.

Visions of Chaos Color Palette Editor

Multiple Sine. Add multiple sine waves together for each RGB component and then scale down to between 0 and 255.

Visions of Chaos Color Palette Editor

IQ. Idea from Inigo Quilez.
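His palette trick builds each channel from a cosine wave, color(t) = a + b·cos(2π(c·t + d)). A quick sketch:

import math

def iq_palette(a, b, c, d, size=256):
    # a, b, c, d are (r, g, b) triples controlling the offset, amplitude,
    # frequency and phase of each channel's cosine wave
    palette = []
    for i in range(size):
        t = i / (size - 1)
        palette.append(tuple(
            max(0, min(255, round(255 * (a[k] + b[k] * math.cos(2 * math.pi * (c[k] * t + d[k]))))))
            for k in range(3)))
    return palette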

Visions of Chaos Color Palette Editor

Perlin. Use repeating noise loops as in this coding train video. Map the resulting noise values to each RGB channel. Using a looping noise function is best because it means the palette wraps around smoothly and using it for fractal zooms does not show a sharp break when the palette ends and restarts. I have only implemented this method over the last few days (at the time of writing this post), but so far it gives some really unique color palettes.

Visions of Chaos Color Palette Editor

Here are some example palettes created using Perlin noise. Click to see the full sized image.

Visions of Chaos Color Palette Editor

Simplex. Same as Perlin, but uses Simplex noise.

Visions of Chaos Color Palette Editor

Simplex + Perlin. Create each RGB value by adding Simplex noise to Perlin noise.

Visions of Chaos Color Palette Editor

Here are some examples of Simplex and Simplex + Perlin palettes. Click for full size.

Visions of Chaos Color Palette Editor

Multiple Perlin – Add/subtract multiple Perlin Noise curves into RGB amounts.

Visions of Chaos Color Palette Editor

Random Walk. Random curve for each RGB component between index 0 and 127. Reverse for the rest of the palette. Each step the RGB is changed by +random(5)-2 to randomly go up and/or down.

Visions of Chaos Color Palette Editor

Terrain Fault. Take 2 random points between 0 and 255. Between the points randomly raise or lower by a small amount. Repeat this a number of times.

Visions of Chaos Color Palette Editor

HSL to RGB. Random HSL curves converted to RGB.

Visions of Chaos Color Palette Editor

RGB. Random curves for each RGB component. Use various easing functions to tween curve control points.

Visions of Chaos Color Palette Editor

YUV to RGB. Random YUV curves converted to RGB.

Visions of Chaos Color Palette Editor

Combine palettes. Take 2 previously created palettes and combine their RGB components by addition, subtraction or multiplication.

Visions of Chaos Color Palette Editor

Multiple RGB. Combine multiple RGB curves.

Visions of Chaos Color Palette Editor

Multiple YUV to RGB. Combine multiple YUV to RGB curves.

Visions of Chaos Color Palette Editor

Modify an existing palette

Once you have palette files, you can also use various techniques to modify them;

1. Increase or decrease the individual RGB channel amounts
2. Brightness
3. Contrast
4. Increase or decrease the individual YUV channel amounts
5. Wrap. Take the existing palette, halve it, then add the flipped half to itself. This is useful when you want a non repeating palette to wrap around.

Visions of Chaos Color Palette Editor

Visions of Chaos Color Palette Editor

6. Double. If you have a palette that is too smooth/sparse for the current fractal image, doubling can add more lines/gradients to the palette.

Visions of Chaos Color Palette Editor

Visions of Chaos Color Palette Editor

7. Blur. Just like a blur function in image processing. Averages out the palette values with neighbor colors.
8. Sharpen. Just like a sharpen function in image processing.
9. Shift RGB. R->G,G->B,B->R.

Visions of Chaos Color Palette Editor

Visions of Chaos Color Palette Editor

Visions of Chaos Color Palette Editor

10. Invert. R=255-R, G=255-G, B=255-B.
11. Reverse. Flip the order of the palette colors.
12. Histogram equalize palette. Like the auto-levels in Photoshop. My method tends to make the results slightly too bright. Needs fixing when I get a chance.

Visions of Chaos Color Palette Editor

Visions of Chaos Color Palette Editor

13. Matrix multiplication. Take a 3×3 matrix and multiply the 1×3 RGB components by the matrix to get new RGB amounts.

Visions of Chaos Color Palette Editor

Any other ideas?

If you know of any other ways to generate palettes, or have an idea for ways to create new unique color palettes, let me know.

Availability

The color palette editor shown in this post is included with Visions of Chaos.

Just give me the palettes!

If you are using another program that uses Fractint palette files you can download the 3371 color palettes I include with Visions of Chaos here. Some created by me, others found on various Internet sites over the years, some converted from gradient packs. No copyright on them so do with them as you wish.

If you do have any other sets of MAP palettes you would like to share, send me an email. You can never have enough colors when creating fractal images.

Jason.

Vorticity Confinement for Eulerian Fluid Simulations

Eulerian MAC Fluid Simulation with Vorticity Confinement

Eulerian fluid simulations simulate the flow of fluids by tracking fluid velocity and density over a set of individual (discrete) evenly spaced grid locations. One downside to this approach is that the finer details in the fluid can be smoothed out, so you lose those little swirls and vortices.

Eulerian MAC Fluid Simulation with Vorticity Confinement

A simple fix for this is to add Vorticity Confinement. If you read the Wikipedia page on Vorticity Confinement you may be no wiser on what it is or how to add it into your fluid simulations.

Eulerian MAC Fluid Simulation with Vorticity Confinement

My explanation of vorticity confinement is that it looks for curls (vortices) in the fluid and adds in velocity to help boost the swirling motion of the fluid. Adding vorticity confinement can also give more turbulent looking fluid simulations which tend to be more aesthetically pleasing in simulations (unless you are a member of team laminar flow).

Eulerian MAC Fluid Simulation with Vorticity Confinement

The code for implementing vorticity confinement is relatively simple. For 2D I used the snippet provided by Iam0x539 in this video.


function Curl(x,y:integer):double;
begin
     // local curl at this cell, from the difference of neighboring velocities
     Curl:=xvelocity[x,y+1]-xvelocity[x,y-1] + yvelocity[x-1,y]-yvelocity[x+1, y];
end;

procedure VorticityConfinement(vorticity:double);
var dx,dy,len:double;
    x,y:integer;
begin
     for y:=2 to _h-3 do
     begin
          for x:=2 to _w-3 do
          begin
               // gradient of the absolute curl, pointing towards stronger rotation
               dx:=abs(curl(x + 0, y - 1)) - abs(curl(x + 0, y + 1));
               dy:=abs(curl(x + 1, y + 0)) - abs(curl(x - 1, y + 0));
               // normalize (1e-5 avoids a divide by zero) and scale by the vorticity setting
               len:=sqrt(sqr(dx)+sqr(dy))+1e-5;
               dx:=vorticity/len*dx;
               dy:=vorticity/len*dy;
               // nudge the velocities to reinforce the local swirling motion
               xvelocity[x,y]:=xvelocity[x,y]+timestep*curl(x,y)*dx;
               yvelocity[x,y]:=yvelocity[x,y]+timestep*curl(x,y)*dy;
          end;
     end;
end;

Eulerian MAC Fluid Simulation with Vorticity Confinement

The VorticityConfinement procedure is called once per simulation step. It looks for local curl at each fluid grid point and then increases the local x and y velocities using the curl. This is what helps preserve the little vortices and helps reduce the smoothing out of the fluid.

Eulerian MAC Fluid Simulation with Vorticity Confinement

To demonstrate how vorticity confinement changes a fluid simulation, the images within this post and the following movie add vorticity confinement to my previous Eulerian MAC Fluid Simulations code.

Eulerian MAC Fluid Simulations with Vorticity Confinement is now included in the latest version of Visions of Chaos.

Jason.