Super Resolution

The Dream

For years now TV shows like CSI and movies like Blade Runner have shown the “enhance” functionality: software that reveals details in images that are only a blur or a few pixels in size. In Blade Runner, Deckard’s system even allowed him to look around corners.

The Reality

I have recently been testing machine learning “enhance” (aka super resolution) models. These neural networks resize an image while trying to preserve or even enhance detail, or at least lose far less detail than if the image was enlarged in an image editing tool using bilinear or bicubic interpolation.
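
For comparison, the baseline these models are judged against is a plain interpolated resize, which is a one-liner. A minimal sketch using the Pillow library (the file names here are just placeholders):

from PIL import Image

img = Image.open("test_image.png")  # placeholder input file
w, h = img.size
# plain bicubic x4 upscale - the "dumb" zoom the super resolution models try to beat
img.resize((w * 4, h * 4), Image.BICUBIC).save("test_image_bicubic_x4.png")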

Some of my results with these models follow. I am using the following test image from here.

Unprocessed Test Image

To best see the differences between the algorithms I recommend you open the x4 zoomed images in new tabs and switch between them.

SRCNN – Super-Resolution Convolutional Neural Network

To see the original paper on SRCNN, click here.
I am using the PyTorch script by Mirwaisse Djanbaz here.

SRCNN x4

SRCNN x4

SRRESNET

To see the original paper on SRRESNET, click here.
I am using the PyTorch script by Sagar Vinodababu here.

SRRESNET x4

SRRESNET x4

SRGAN – Super Resolution Generative Adversarial Network

To see the original paper on SRGAN, click here.
I am using the PyTorch script by Sagar Vinodababu here.

SRGAN x4

SRGAN x4

ESRGAN – Enhanced Super Resolution Generative Adversarial Network

I am using the PyTorch script by Xintao Wang et al here.

ESRGAN x4

ESRGAN x4

PSNR

I am using the PyTorch script by Xintao Wang et al here.

PSNR x4

PSNR x4

Differences

Each of the algorithms gives different results. For an unknown source image it would probably be best to run it through them all and then see which gives you the best result. These are not the Hollywood or TV “enhance” magic fix just yet.

If you know of any other PyTorch implementations of super resolution I missed, let me know.

Availability

You can follow the links to the original GitHub repositories to get the software, but I have also added a simple GUI front end for these scripts in Visions of Chaos, which allows you to try the above algorithms on any image you like.

Jason.

Text-to-Image Machine Learning

Text-to-Image

Input a short phrase or sentence into a neural network and see what image it creates.

I am using DeepDaze and BigSleep from Phil Wang (@lucidrains).

Phil used the code/models from Ryan Murdock (@advadnoun). Ryan has a blog post explaining the basics of how all the parts connect up here.

The simplest explanation is that BigGAN generates images that try to satisfy CLIP, which rates how closely each image matches the input phrase. BigGAN creates an image and CLIP looks at it and says “sorry, that does not look like a cat to me, try again”. With each iteration BigGAN gets better at generating an image that matches the desired phrase text.
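
Under the hood that loop is gradient descent on BigGAN’s latent input. Here is a toy sketch of the shape of that loop only; the tanh call and random target below are stand-ins for the real BigGAN render and CLIP embeddings:

import torch

latent = torch.randn(128, requires_grad=True)  # stand-in for BigGAN's latent vector
target = torch.randn(128)                      # stand-in for CLIP's embedding of the phrase
opt = torch.optim.Adam([latent], lr=0.05)

for step in range(100):
    image_embedding = latent.tanh()  # stand-in for rendering with BigGAN then encoding with CLIP
    loss = -torch.cosine_similarity(image_embedding, target, dim=0)  # "how unlike a cat is this?"
    opt.zero_grad()
    loss.backward()
    opt.step()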

BigSleep Examples

BigSleep seems to generate clearer images more quickly than DeepDaze does, so I have concentrated on BigSleep.
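
If you want to run BigSleep yourself outside Visions of Chaos, the package exposes a simple interface. This sketch is from my reading of the big-sleep README, so parameter names may differ between versions:

from big_sleep import Imagine

dream = Imagine(
    text = "Gandalf and the Balrog",  # the prompt phrase
    lr = 5e-2,
    save_every = 25,      # write a progress image every 25 iterations
    save_progress = True
)
dream()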

BigSleep takes a seed number, which means you can have thousands or millions of different outputs for the same input phrase. Note there is an issue with the seed not always reproducing the same image though. From my testing, even setting the torch_deterministic flag to True and setting the CUDA environmental variable does not help. Every time BigSleep is called it will generate a different image from the same seed, so you will never be able to reproduce the same output in the future.
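
For the record, these are the kinds of determinism settings I tried, with no luck (the standard PyTorch reproducibility knobs; the exact flags BigSleep exposes may differ):

import os
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # the CUDA environment variable mentioned above
torch.manual_seed(42)                              # seed PyTorch's random number generator
torch.backends.cudnn.deterministic = True          # ask cuDNN for deterministic kernels
torch.backends.cudnn.benchmark = False
# even with all of the above set, repeated runs still produce different images from the same seed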

These images are 512×512 pixels square (the largest resolution BigSleep supports) and took 4 minutes each to generate on an RTX 3090 GPU. The same code takes 6 minutes 45 seconds per image on an older 2080 Super GPU.

Also note that these are “cherry picked” best results. BigSleep is not going to create awesome art every time. For these examples, or when experimenting with new phrases, I usually run a batch of multiple images and then manually select the best 4 or 8 to show off (4 or 8 because that fills one or two tweets).

To start, these next four images were created from the prompt phrase “Gandalf and the Balrog”

BigSleep - Gandalf and the Balrog

BigSleep - Gandalf and the Balrog

BigSleep - Gandalf and the Balrog

BigSleep - Gandalf and the Balrog

Here are results from “disturbing flesh”. These are like early David Cronenberg nightmare visuals.

BigSleep - Disturbing Flesh

BigSleep - Disturbing Flesh

BigSleep - Disturbing Flesh

BigSleep - Disturbing Flesh

A suggestion from @MatthewKafker on Twitter “spatially ambiguous water lillies painting”

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

BigSleep - Spatially Ambiguous Water Lillies Painting

“stormy seascape”

BigSleep - Stormy Seascape

BigSleep - Stormy Seascape

BigSleep - Stormy Seascape

BigSleep - Stormy Seascape

After experimenting with acrylic pour painting in the past I wanted to see what BigSleep could generate from “acrylic pour painting”

BigSleep - Acrylic Pour Painting

BigSleep - Acrylic Pour Painting

BigSleep - Acrylic Pour Painting

BigSleep - Acrylic Pour Painting

“beautiful sunset”

BigSleep - Beautiful Sunset

BigSleep - Beautiful Sunset

BigSleep - Beautiful Sunset

BigSleep - Beautiful Sunset

I have always enjoyed David Lynch movies, so let’s see what “david lynch visuals” results in. This one held a lot of surprises and worked great. These images really capture the feeling of a Lynchian cinematic look. A lot of them came out fairly dark so I have tweaked the exposure in GIMP.

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

More from “david lynch visuals” but these are more portraits. The famous hair comes through.

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

BigSleep - David Lynch Visuals

“H.R.Giger”

BigSleep - H.R.Giger

BigSleep - H.R.Giger

BigSleep - H.R.Giger

BigSleep - H.R.Giger

BigSleep - H.R.Giger

BigSleep - H.R.Giger

BigSleep - H.R.Giger

BigSleep - H.R.Giger

“metropolis”

BigSleep - Metropolis

BigSleep - Metropolis

BigSleep - Metropolis

BigSleep - Metropolis

“surrealism”

BigSleep - Surrealism

BigSleep - Surrealism

BigSleep - Surrealism

BigSleep - Surrealism

Availability

I have now added a simple GUI front end for DeepDaze and BigSleep into Visions of Chaos, so once you have installed all the pre-requisites you can run these models on any prompt phrase you feed into them. The following image shows BigSleep in the process of generating an image for the prompt text “cyberpunk aesthetic”.

Text-To-Image GUI

After spending a lot of time experimenting with BigSleep over the last few days, I highly encourage anyone with a decent GPU to try these. The results are truly fascinating. This page says at least a 2070 8GB or greater is required, but Martin in the comments managed to generate 128×128 images on a 1060 6GB GPU at 26 (!!) minutes per image.

Jason.

Adding PyTorch support to Visions of Chaos

TensorFlow 2

Recently, after getting a new 3090 GPU, I needed to update TensorFlow to version 2. Going from TensorFlow version 1 to version 2 involved way too many code-breaking changes for me. Looking at other GitHub examples of TensorFlow 2 code (eg an updated Style Transfer script) gave me all sorts of errors. Not just one repo either; lots of supposed TensorFlow 2 code would not work for me. If it is a pain for me it is going to be a bigger annoyance for my users. I already get enough emails saying “I followed your TensorFlow instructions exactly, but it doesn’t work”. I am in no way an expert in Python, TensorFlow or PyTorch, so I need something that most of the time “just works”.

I did manage to get the current TensorFlow 1 scripts in Visions of Chaos running under TensorFlow 2, so at least the existing TensorFlow functionality will still work.

PyTorch

After having a look around and watching some YouTube videos I wanted to give PyTorch a go.

The install is one pip command that their home page builds for you after you select OS, CUDA version, etc. So for my current TensorFlow tutorial (maybe I now need to rename it the “Machine Learning Tutorial”) all I needed to do was add one more line to the pip install section.


pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
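
A quick sanity check that the install worked and PyTorch can see the GPU (standard PyTorch calls, nothing Visions of Chaos specific):

import torch

print(torch.__version__)              # should report 1.8.1+cu111
print(torch.cuda.is_available())      # True if the GPU and CUDA are detected
print(torch.cuda.get_device_name(0))  # eg "GeForce RTX 3090"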

PyTorch Style Transfer

The first Google hit is the PyTorch tutorial here. After spending most of a day banging my head against the wall with TensorFlow 2 errors, that single self-contained Python script using PyTorch “just worked”! The settings do seem harder to tweak for good looking output compared to the TensorFlow Style Transfer script I used previously, so after making the following examples I may need to look for another PyTorch Style Transfer script.

Here are some example results using Biscuit as the source image.

Biscuit

Biscuit Style Transfer

Biscuit Style Transfer

Biscuit Style Transfer

Biscuit Style Transfer

PyTorch DeepDream

Next up was ProGamerGov’s PyTorch DeepDream implementation. Again, it worked fine. I have used ProGamerGov’s TensorFlow DeepDream code in the past and the PyTorch version worked just as well this time. It also provides a bunch of other models to use, so more varied DeepDream outputs are now available in Visions of Chaos.

Biscuit DeepDream

Biscuit DeepDream

Biscuit DeepDream

Biscuit DeepDream

PyTorch StyleGAN2 ADA

Using NVIDIA’s official PyTorch implementation from here. Also easy to get working. You can quickly generate images from existing models.
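
For anyone wanting to try this outside Visions of Chaos, generating from a pre-trained model is a single command along these lines (my recollection of the repository’s generate script; check the repo README for the exact flags):

python generate.py --outdir=out --seeds=0-35 --network=metfaces.pkl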

StyleGAN2 ADA

Metropolitan Museum of Art Faces – NVIDIA – metfaces.pkl

StyleGAN2 ADA

Trypophobia – Sid Black – trypophobia.pkl

StyleGAN2 ADA

Alfred E Neuman – Unknown – AlfredENeuman24_ADA.pkl

StyleGAN2 ADA

Trypophobia – Sid Black – trypophobia.pkl

I include the option to train your own models from a bunch of images. Pro tip: if you do not want to have nightmares do not experiment with training a model based on a bunch of naked women photos.

Going Forward

After these early experiments with PyTorch, I am going to use PyTorch from now on wherever possible.

Jason.

TensorFlow 2 and RTX 3090 Performance

A Pleasant Surprise

Just in time for when 3090 GPUs started to become available again in Sydney I was very generously gifted the funds to finally purchase a new GeForce RTX 3090 for my main development PC. After including a 1000 Watt power supply the total cost came to around $4000 AUD ($3000 USD). Such a rip off.

GeForce RTX™ 3090 GAMING X TRIO 24G

The card itself is really heavy and solid. They include a support bracket to help it not sag over time, which is a nice touch. Like all recent hardware parts it lights up in various RGB colors and shades. These RGB rigs are going to look so out of date once this fad goes away. After upgrading my PC parts over the last few years I now have PCs that flash and blink more than my Christmas tree does when fully set up and lit.

Who needs a Christmas tree?

Not So Fast

I naively assumed that a quick GPU swap would give me the boost in performance that previous GPU upgrades did (like when I upgraded to the 1080 and then to the 2080 Super). Not this time. I ran a few quick machine learning TensorFlow (version 1) tests from Visions of Chaos and either the Python scripts ran extremely slowly (around 10x to 30x SLOWER) or they just crashed. So much for a simple upgrade for more power.

Turns out the Ampere architecture the 3090 GPUs use is only supported by CUDA 11.0 or higher. After updating CUDA, cuDNN, all the various Python libraries and the Python scripts I was back to where I was before the upgrades. If you have been through the tedious process of installing TensorFlow before for Visions of Chaos, you will need to follow my new instructions to get TensorFlow version 2 support. Updating TensorFlow v1 code to TensorFlow v2 code is a pain. From now on I am going to use PyTorch scripts for all my machine learning related needs.

High Temperatures

These newer GPUs can run hot. Under 100% load (when I was doing a Style Transfer movie with repeated frames being calculated one after the other) the 3090 peaks around 80 degrees C (176 F). I do have reasonable cooling in the case, but the air being blown out is noticeably hot. The 2080 running the same test peaks around 75 degrees.

The 2080 and my older 1080 push all the hot exhaust air out the rear vents of the card, but the 3090 has no rear exhaust, so all the hot air goes directly into the case. I can only assume the card cannot channel all that heat “through” itself and out the back, so it vents the heat wherever it can. When I touched the side of the case next to the GPU it was very warm.

Apparently 80 degrees and under is perfectly fine and safe for a GPU, but they would say that, wouldn’t they? They would be bragging about low temps if they could manufacture cooler running cards.

After some experimenting with Afterburner I lowered the temp limit from the GPU default of 83 degrees down to 75 degrees. This resulted in more throttling but only a slight performance hit (style transfer took 1 minute 21 seconds rather than 1 minute 14 seconds). The case was noticeably cooler and the average temp was now down to a much more chilly 65 degrees. Afterburner allows tweaking (overclocking/underclocking) of your GPU, but the most useful feature is its graphing capabilities to see what is really going on. You can monitor temperatures and throttling as you run complex GPU operations.

Extra Cooling

I wanted to see if more case fans would help, so I removed the current 3 case fans and installed 6 of these fans (2 sucking in at the front, 3 blowing out at the top, and 1 blowing out at the rear of the case). My silent PC is no longer silent. I set the GPU back to its default profile with a temp limit of 83 degrees and started another Style Transfer movie to keep the GPU pegged as close to 100% usage as possible for an extended period of time. Watching the temp graph in Afterburner shows peaks still up to 76 degrees, but much less throttling with the core clock graph around 95% to 100% maximum possible MHz when running so that results in a better overall performance.

After a week the extra noise annoyed me too much though so I replaced the Gamdias fans with Corsair fans. 6 of these fans and one of these controllers. Setting the fans to the default “quiet” profile gets the noise back down to near silent sound levels. When I start a machine learning batch run the temp sensors detect the increased heat in the case and ramp up the fans to compensate. Watching Afterburner graphs shows they may even be slightly better at cooling than the Gamdias fans. The problem with the auto-adjust speed control is that there is this noticeable ramping up and down of the fan speeds as they compensate for temp changes. That was more annoying than always on full speed fans. After some adjustments and tests with the excellent Corsair software I settled on a fixed 1000 RPM for all fans. The noise is slightly noticeable (certainly not silent) but the sound level is constant. No GPU throttling under load and the max internal case temp is 36 degrees C (97 F).

Power Usage

Using one of those cheap inline watt meters shows the PC pulls 480 watts when the GPU is at 100% usage. Afterburner reports the card using around 290 watts under full load.

I have basically been using the 3090 24 hours a day training and testing machine learning setups since I bought it. Three weeks of 3090 usage made my latest quarterly electricity bill go up from $284 to $313. That works out to roughly $1.40 a day to power the GPU full time. If you can afford the GPU you should be able to afford the cost of powering it.

Final Opinion

Was it worth spending roughly three times the cost of the 2080 on the 3090? No, definitely not. GPUs at these inflated prices are not worth the money, but if you need or want one you have to pay the going rate. If prices were not so artificially inflated and cards sold at their initial recommended retail prices it would be more reasonable (still expensive, but not ridiculously so).

After testing the various GPU related modes in Visions of Chaos, the 3090 is only between 10% and 70% faster than the 2080 Super depending on what GPU calculations are being made, and more often at the lower end of that scale. GLSL shader performance shows a fairly consistent speed boost of between 10% and 15%.

The main reason I wanted the 3090 was for the big jump in VRAM from 8GB to 24GB so I am now able to train and run larger machine learning models without the dreaded out of memory errors. StyleGAN2 ADA models are the first things I have now successfully been able to train. Previously the 2080 would fail instantly with out of memory errors.

StyleGAN2 ADA - Alfred E Neuman

Upgrading the 1080 in my older PC to the 2080 Super was a big jump in performance and lets me run less VRAM intensive sessions on that machine. Can you tell I am trying to convince myself this was a good purchase? I just expected more. Cue the “Ha ha, you idiot! Serves you right for not researching first.” comments.

Jason.

GPU accelerated Root-Finding Fractals

See this post for my previous explorations of Root-Finding Fractals.

A while back I implemented a Custom Formula Editor in Visions of Chaos. The Custom Formula Editor allows users of Visions of Chaos to program their own fractal formulas, which run on the GPU for much faster performance than CPU calculations.

One of my hopes for coding the custom editor was that users of Visions of Chaos would make their own formulas and share them with me to include in future releases.

Recently I was contacted by Dr Bernd Frassek, who was working with Root-Finding Fractals. See here for more of Dr Frassek’s information about these root-finding methods (for an English version see this Google translated version). Being much more of a mathematician than I am, he was able to implement many root-finding methods that I had never experimented with or even heard of. The total count of methods for finding roots went from 4 in my previous version to 23 in the new mode.
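
If you have not met these fractals before, the classic example is Newton's method: iterate z -> z - f(z)/f'(z) for every pixel of the complex plane and color each pixel by which root it converges to. A minimal Python sketch of that one method (illustration only; the new mode itself runs as a GLSL shader and covers many more methods):

import numpy as np

# Newton's method fractal for f(z) = z^3 - 1
f = lambda z: z**3 - 1.0
df = lambda z: 3.0 * z**2
roots = np.exp(2j * np.pi * np.arange(3) / 3)  # the three cube roots of unity

y, x = np.mgrid[-1.5:1.5:600j, -1.5:1.5:600j]
z = x + 1j * y  # one complex number per pixel
for _ in range(30):
    z = z - f(z) / df(z)  # Newton's update applied to every pixel at once
basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)  # nearest root gives the pixel color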

With some more testing and a bit of back-and-forth emailing we got the code working, and the end result is a new GPU accelerated Root-Finding Fractals mode in Visions of Chaos. You can find this new mode under “Mode->Fractals->Root-Finding Fractals 2”. All of the formulas, methods, coloring and orbit trap settings from the older Root-Finding Fractals mode have been migrated to the new mode.

If you are interested in coding these sort of fractals yourself you can see the latest GLSL shader source code (as of the 8th of March, 2021) by clicking here.

You can also check the “Show shader code” checkbox in the Root-Finding Settings dialog which will cause Visions of Chaos to show you the generated shader code before rendering the fractal image.

Here is a sample movie showing some results from the new mode.

This next movie uses orbit traps.

If you have any other formulas or root-finding methods that you would like to see implemented let me know. The structure of the shader allows new formulas and root-finding methods to be added relatively easily.

Jason.

Even more explorations with Multiple Neighborhoods Cellular Automata

History

If you are not aware what Multiple Neighborhoods Cellular Automata (aka MNCA) are, you can refer to this post and this post for some history.

Multiple Neighborhoods Cellular Automata were created by Slackermanz (see his Reddit history or his GitHub repository).

The basic principle of these cellular automata is to use multiple neighborhoods (usually circular and/or toroidal shapes) of different sizes to determine the next state of each cell in the grid. Using these more complicated neighborhoods has led to fascinating examples of cellular automata beyond the usual simpler versions I tend to see or experiment with.
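
To make that concrete, here is a minimal sketch of the idea in Python. The two neighborhoods, their radii and the survival bands are made-up illustrative values, not Slackermanz's actual rules:

import numpy as np
from scipy.signal import convolve2d

def ring(r_outer, r_inner):
    # binary mask of a ring shaped neighborhood (r_inner = 0 gives a disc, minus the center cell)
    y, x = np.ogrid[-r_outer:r_outer + 1, -r_outer:r_outer + 1]
    d = np.sqrt(x * x + y * y)
    k = ((d <= r_outer) & (d > r_inner)).astype(float)
    return k / k.sum()

grid = (np.random.rand(256, 256) < 0.5).astype(float)
k1, k2 = ring(4, 0), ring(9, 4)  # two neighborhoods of different sizes

for _ in range(200):
    n1 = convolve2d(grid, k1, mode="same", boundary="wrap")  # average state inside each neighborhood
    n2 = convolve2d(grid, k2, mode="same", boundary="wrap")
    # each neighborhood contributes its own condition to the cell's next state
    grid = ((n1 > 0.20) & (n1 < 0.55) & (n2 > 0.25) & (n2 < 0.60)).astype(float)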

New Discoveries

Two years have passed since those first two blog posts and Slackermanz is still generating new versions of MNCA. He shared a bunch (over 11,000!) of the shaders he created as he continued to experiment. It only took a little coding to write a converter that massaged his shaders into a format Visions of Chaos supports. I spent a few days going through these examples and whittled them down to the 162 “best of the best” MNCA shaders in my opinion. Here is a sample movie showing some of the newer MNCA results.

The shaders that were used for creating the above movie are included with Visions of Chaos under the “Mode->OpenGL Shading Language->Shader Editor” mode as the GLSL shaders starting with “Cellular automaton”.

Multi-scale Multiple Neighborhoods Cellular Automata

During some of his recent developments Slackermanz started to get results that look similar to Multi-scale Turing Patterns (MSTP). I find these results more interesting, with much finer structures that evolve and change more than MSTP does. MSTP tend to reach a relatively stable state after a while; the small structures stabilize and only the larger shapes pulsate. Compare MSTP to the following example of multi-scale multiple neighborhood cellular automata (MSMNCA?).

The first 3 minutes and 20 seconds are Slackermanz’s original multi-scale shaders. The next 3 minutes and 20 seconds are those same shaders “zoomed in” by multiplying the neighborhood sizes by 4. The last minute shows examples of the very latest experiments using the multi-scale principles.

The shaders that were used for creating the above movie are included with Visions of Chaos under the “Mode->OpenGL Shading Language->Shader Editor” mode as the GLSL shaders starting with “Cellular automaton”.

To see the shader code that generates the multi-scale image thumbnail for the video, click here.

New Shaders Part 1

After that bunch of shaders, the ever prolific Slackermanz shared another set of his new shaders with me.

To see an example shader used in the above movie click here. The only difference between the movie parts is the 32 e0 array parameters in the shader at line 141. Otherwise the shader code remains the same for all movie parts.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 2

Click here to see the shader that makes the first part of the above movie. All other parts use the same shader, only altering the 32 float values of the ubvn array at line 156.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 3

I did say Slackermanz was prolific. Here is another set of samples from his latest work.

Click here to see the shader that makes the first part of the above movie. All other parts use the same shader, only altering the 32 float values of the ubvn array at line 138.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 4

Slackermanz is not a slacker man and shared another bunch of new shaders. Here is another sample movie showing some of the latest MNCA results.

Click here to see the shader code that makes these results. The only part of the shader code that changes between the examples is the 32 float values of the ubvn array at line 136.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 5

This next bunch of Slackermanz shaders includes a bit of color shading that helps bring out more of the structures within the blobs and other shapes.

See here to see the shader code that makes these results. Note that this shader code has more comments than the above shaders, so if the earlier ones didn’t make any sense this one may help. The only part of the shader code that changes between the movie examples is the 32 float values of the ubvn array at line 107.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 6

These MNCA shaders continue to be impressively intriguing with new and unique features in each new version.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117. Yes, 52 parameters in these newer shaders compared to the 32 parameters of the above examples.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 7

Another new MNCA shader from Slackermanz.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 8

New MNCA variations.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 9

More absorbing and intriguing (thanks Thesaurus.com) examples.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 10

More compelling and appealing (thanks Thesaurus.com) examples.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 11

More beautiful and astonishing (thanks Thesaurus.com) examples.

See here to see the shader code that makes these results. The only part of the shader code that changes between the movie examples is the 52 float values of the ubvn array at line 117.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 12

The parts in the following movie came from a few different shaders so no specific code this time. If you are curious you can see the shader code within Visions of Chaos when you open the preset/sample MCA files that are listed in the description of the video.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

New Shaders Part 13

The final sample movie for now. The parts in the following movie came from a few different shaders so no specific code this time. If you are curious you can see the shader code within Visions of Chaos when you open the preset/sample MCA files that are listed in the description of the video.

These MNCA are included with Visions of Chaos under the “Mode->Cellular Automata->2D->Multiple Neighborhoods Cellular Automata 2” mode.

Variations of MNCA in Visions of Chaos

The above movies show only a tiny subset of all the MNCA examples Slackermanz has experimented with. There are thousands more variations of MNCA included with Visions of Chaos to explore.

Enough With The MNCA Movies Already!

Yes, there were a lot of Multiple Neighborhoods Cellular Automata movies in this post that I have been uploading to my YouTube channel lately.

The movies, in order of upload, show the steps and evolution Slackermanz went through while creating these cellular automata. Each movie is a selection of samples from one of his MNCA shaders (code links after each movie). They are all unique, or at least I have tried to pick the best unique results from each batch to create the sample movies that show off what each shader can do.

These discoveries deserve to be seen by more people, especially people interested in cellular automata.

The Same As Other CA Types?

Yes, you may see structures that look like Stephan Rafler’s SmoothLife, Kellie Evans’ Larger Than Life and/or Bert Chan’s Lenia (see also here), but the techniques and components found in the construction of MNCA are unique and were developed outside academia, independently of the papers linked above.

The Future

Slackermanz shows no sign of stopping his explorations and discoveries any time soon, so expect more MNCA or other new CA types in the future. I look forward to exploring them and including them in future updates of Visions of Chaos.

Jason.

3D Rule Table Cellular Automata

Origins

This cellular automaton comes from Matthew Guay who describes it in this post on r/cellular_automata on Reddit. There was no name given for these CAs so I have called them “Rule Table Cellular Automata” for my implementation in Visions of Chaos.

The usual behavior for a cellular automaton with more than 2 states is that a living (state 1) cell that dies does not become a dead (state 0) cell immediately. Instead the cell goes through a refractory dying period. In a 5 state automaton such as this one, a dying state 1 cell would step through states 2 to 4 before finally disappearing and leaving a state 0 empty cell. With the rule table described below this does not always happen. A cell in a refractory period can suddenly turn back into an alive cell or any other state. This opens up a much wider variety of possible results.

For a 2D example of a rule table based CA I have experimented with in the past see the Indexed Totalistic Cellular Automata.

The Rule Table Explained

This CA uses a rule table to determine how the cells update. It is a 5 state CA with rules like the following

3D Rule Table Cellular Automata

The current cell state (5 states, so cell values are between 0 and 4) is shown down the left hand side. The number of state 1 neighbor cells the current cell has determines the new state value.

For example, with that rule table, if a cell has a state of 3 and has 5 neighbor cells that are state 1, then the next state for the cell will be 2. Take the 4th row down for state 3, go across until you find count 5, then use that column's value for the new cell state.

Matthew also uses C and A characters.

“C” is a catch-all meaning “all counts not already shown in this row”. For the state 0 row only count 4 is specified explicitly, so the C means any state 0 cell that does not have exactly 4 state 1 neighbors stays state 0.

“A” means all possible count values. So with the above table, any cell that is in state 4 becomes state 0.

To make it easier, compare the above rule table to the state lookup arrays below. This is basically what I construct internally for the CA to use as it runs. After playing with the rules for a while, I think a GUI that presents the rule as a grid like this would be much easier to use and understand at a glance. Maybe the idea of using C and A to reduce the complexity of displaying the rules actually makes them harder to read? Anyway, I show the rule table as above to match the original.

State 0 [0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
State 1 [2,2,2,2,2,2,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2]
State 2 [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3]
State 3 [4,4,4,2,2,2,2,1,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4]
State 4 [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]

That shows the 27 possible results (0 to 26 neighbors) for each cell state. Count a cell’s state 1 neighbors among its 26 Moore neighbors and use that count as an index into the new state array.
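
In code, one update step is then just a neighbor count and a table lookup. A minimal sketch using the arrays above (grid is an integer array of states 0 to 4; wrapping edges assumed):

import numpy as np

# the 5 lookup arrays above as one 5 x 27 table: rule[state][count] -> new state
rule = np.array([
    [0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
    [2,2,2,2,2,2,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2],
    [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3],
    [4,4,4,2,2,2,2,1,1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]])

def step(grid):
    alive = (grid == 1).astype(np.int64)  # only state 1 cells are counted
    count = np.zeros_like(alive)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx or dy or dz:  # skip the cell itself, leaving the 26 Moore neighbors
                    count += np.roll(alive, (dx, dy, dz), axis=(0, 1, 2))
    return rule[grid, count]  # new state = rule[current state][state 1 neighbor count]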

Results

As with all new CA rules, the first challenge is finding good examples within the usual vast search space. Seeing as I have not found a way to automatically detect interesting rules, looking for new rules comes down to repeatedly trying random rules and hoping for a visually pleasing result. After experimenting with so many CA types over the years I am used to this random process by now. Put a movie on to watch as I repeatedly try random setups, save the ones that show potential, then manually tweak them to see if anything really interesting happens.

Here is a compilation of interesting rules I found so far.

Availability

If you want to experiment with these CAs yourself they are now included with Visions of Chaos.

Jason.

5D Cellular Automata

After 4D cellular automata the next logical step was to add another dimension and see what 5D cellular automata can do.

If you are familiar with lower dimension CAs then 5D just means an additional index in the cell arrays. 3D uses [X,Y,Z], 4D uses [X,Y,Z,W], and 5D uses [X,Y,Z,W,V]. 5D extends the number of immediate Moore neighbors of each cell to 242 (3^5-1), compared to 80 for 4D and 26 for 3D.
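
Generating those neighbor offsets is a one-liner if you let Python's itertools do the work:

from itertools import product

# every offset in {-1,0,1}^5 except the all-zero offset: 3^5 - 1 = 242 neighbors
offsets = [o for o in product((-1, 0, 1), repeat=5) if any(o)]
print(len(offsets))  # 242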

The settings dialog gets even more checkbox chock-a-block as follows.

5D Cellular Automata Settings Dialog

Looping through the additional dimension makes these 5D CAs much slower to process than 4D and lower dimension CAs. I was able to go up to 50x50x50x50x50 sized arrays, but beyond that it was too slow for my patience.

Coloring these CAs uses the same methods as the 4D cellular automata. The only change is the “4D density” display method. Rather than using the density of the 4th dimension for the cell color, the 5D version uses the density of both the 4th and 5th array dimensions.

I have not found any really interesting 5D CA rules yet. Because they are so much slower and the search space is so vast, trying random rules over and over really needs a fluke to find an interesting result. For now here is a simple example starting from a single active cell. Click the image to watch a short animated GIF.

5D Cellular Automaton

5D CAs are now available in Visions of Chaos. If you do happen to find any interesting 5D CA rules, let me know. I asked the same for 3D and 4D and got no responses, but who knows, maybe you reading this now will be the one to find a bunch of new and interesting rules for higher dimension cellular automata. Stay tuned for a YouTube sample movie once I get enough interesting 5D rules.

Jason.

GPT-2 Text Generation

What is it?

GPT-2 (Generative Pre-trained Transformer 2) is a machine learning model created by OpenAI. Its basic purpose is to predict what word comes next after a prompt of some seed text. The model was trained on over 40 GB of Internet text. That is an enormous amount of data; being text only, without any images, means a lot more text fits in that space. Estimates on the Internet give approximately 680,000 pages of text per GB, so the 40 GB of text GPT-2 was trained on equates to roughly 27.2 million pages of text!

Originally OpenAI was worried about releasing the models publicly because they feared they could be used to auto-generate copious amounts of fake news and spam. Since then they have generously released all their models (even the largest, with 1.5 billion neural network parameters) for anyone to experiment with.

If you want to use GPT-2 outside Visions of Chaos you can download the code at their GitHub here.

Visions of Chaos front end GUI for GPT-2

I have wrapped all the GPT-2 text generation behind a simple GUI dialog in Visions of Chaos, as long as you have all the pre-requisite programs and libraries installed. See my TensorFlow Tutorial for the steps needed to get this and other machine learning systems working in Visions of Chaos.

You give the model a sentence and after a minute it spits out what it thinks the continued text should be after that prompt. Each time you run the model you get a new unique result.

There is an option for which model to use, as my 2080 Super with 8GB VRAM cannot handle the largest 1.5 billion parameter model without getting out of memory errors. The 774 million parameter model works fine.
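
If you want to script GPT-2 yourself, the easiest route these days is probably the Hugging Face transformers port rather than the original OpenAI TensorFlow code that Visions of Chaos wraps. A sketch (model name and sampling settings are illustrative):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")  # the 774 million parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

input_ids = tokenizer.encode("The future for the human race", return_tensors="pt")
# sampling means each run gives a new unique continuation
output = model.generate(input_ids, max_length=120, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))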

Some example results

What does AI need to do to get rid of us

GPT-2 Text Generation

A nightmare

GPT-2 Text Generation

The future for the human race

GPT-2 Text Generation

How to be happy

GPT-2 Text Generation

These early test results are really interesting. At first I thought the model was just assembling sentences of text it found online, but if you take random chunks of the generated text and do a Google search (in quotes, so it searches for the complete sentence) you get no results. The model really is assembling these mostly grammatically correct sentences and paragraphs by itself.

It can be accurate in answering “what is” questions, but it can just as easily spit out grammatically correct nonsense, so don’t take anything it says as truth.

More to come

A future use I have in mind for GPT-2 is a basic chat bot you can talk with. OpenAI’s MuseNet is also very promising for generating music and gives much better results than my previous best LSTM results.

OpenAI have also since released GPT-3 with limited access. I hope they release the model to the general public like they did with GPT-2. There are some very impressive results I have seen using GPT-3. GPT-3’s largest model has 175 billion parameters, compared to 1.5 billion for GPT-2, although if my 8GB GPU cannot handle the 1.5 billion parameter GPT-2 model it will have no hope with the 175 billion parameter model.

Jason.

Creating GLSL Animated GIF loops

Animated GIF Loop 010

This post is about creating animated GIF loops with GLSL and Visions of Chaos. It covers the basics of the OpenGL Shading Language and gives some simple examples of how to start creating animated GIFs. I am nowhere near an expert at coding shaders, so while this will hopefully be helpful to someone starting out, it is mainly to help myself get better at coding GLSL. There is no better way to learn something than by trying to explain it to someone else.

You have probably seen examples of animated GIFs that loop: those short, few second videos that repeat seamlessly, usually showing some sort of interesting graphical display. There are endless possibilities.

Getting Started

To begin, start Visions of Chaos and then select Mode->OpenGL Shading Language->Shader Editor. If the menu is disabled you probably need to update your video card drivers so they support GLSL.

Note: All of the shaders in this tutorial are included with Visions of Chaos. They have the same names as under the preview GIF animations. You can load them and play with the code as you go along.


Creating A New Shader

Click the New button on the GLSL Shader Editor dialog and give your shader a name. I called mine “Animated GIF Loop 001” but you can use any name you like. The name you specify here is the name of the GL shader file, so pick something memorable.

You will then see some simple starter shader code created for you.


#version 120

uniform float time;
uniform vec2 resolution;

void main(void)
{
    gl_FragColor = vec4(1.0,1.0,1.0,1.0);
}

The first line is a version declaration telling the compiler the code uses version 1.20 of the GLSL language (the GLSL version that shipped with OpenGL 2.1). It is fairly old now, so most graphics cards support it.

The two uniform lines are values that are passed to the shader from Visions of Chaos when it is run. The time variable is how many seconds the shader has been running for. Resolution contains the X and Y pixel dimensions of the image.

Next up is the main function. This is where every shader begins running. The code within the main function is run for every pixel of the image simultaneously, which is why shaders can run so fast. You are not looping through each pixel one at a time; the GPU calculates multiple pixels at once.

gl_FragColor sets the pixel color. The vec4 is a four component vector of values representing the red, green, blue and alpha intensities. These values should be between 0 and 1. Alpha should always be 1 for our purposes. In the example code, setting all 4 values to 1 results in a white pixel.

When you click the OK button on the GLSL Shader Editor dialog the shader will run and you will see a (boring) plain white image.


Changes Over Time

The most important factor in animation is changing “things” over time. For this we use the uniform time variable. Every frame the shader is displayed, Visions of Chaos passes in how much time has passed in seconds since the shader was first started. This allows you to use that variable for animation.

For a first example, let’s use the time to animate the background boring white color by changing the gl_FragColor line.


#version 120

uniform float time;
uniform vec2 resolution;

void main(void)
{
    gl_FragColor = vec4(time,0.0,0.0,1.0);
}

Now when you run the shader you will see the image fade in from black to pure red over a period of 1 second. This is because time starts at 0 and increases as the shader runs. Once time gets past 1 the red component is clamped to 1, so the color stays at full red intensity.

If you want the fade in to take longer (say 5 seconds), change the time in gl_FragColor to time/5.0 so it takes 5 times as long to reach full intensity red.

We can also use the fract command, which returns the fractional part of a number. So fract(1.34) returns 0.34 and fract(543.678) returns 0.678. If we use fract on the time we get a repeating 0 to 1 value, which causes the black to red fade to repeat every 2 seconds.


#version 120

uniform float time;
uniform vec2 resolution;

void main(void)
{
    gl_FragColor = vec4(fract(time/2.0),0.0,0.0,1.0);
}

If this shader was saved as an animated GIF it would be fairly boring: an increase in color from black to red, a sudden jump back to black, then another increase to red. For animated GIFs we want a smooth loop so the restart is not noticeable when the GIF repeats.


Sine Looping

Using a sine wave for animation makes it simple to create nice repeating animations.

There is some sine math explanations here that helped.

The GLSL shader code now becomes


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 
    gl_FragColor = vec4(sineVal,0.0,0.0,1.0);
}

The animationSeconds variable allows us to change how long the animation runs before repeating. In this case it is 2 seconds.

Sine Wave

If you look at the above diagram, the part of the sine wave marked “one wave cycle” (from one lowest trough to the next) is the shape the sineVal code calculates, scaled to between 0 and 1. So the value starts at 0, curves up to 1 and then back down to 0.

When we run this new shader code we get the screen smoothly fading from black to red and back to black every 2 seconds.

We can modify the gl_FragColor line to also fade the blue color component in and out. The blue component is specified as 1.0-sineVal, which means when red is at its highest intensity blue is at its lowest.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 
    gl_FragColor = vec4(sineVal,0.0,1.0-sineVal,1.0);
}

The above code results in the following GIF animation.

Animated GIF Loop 001

Animated GIF Loop 001

A quick note here. Browsers do not seem to show animated GIFs at the correct frame rate (or maybe they just do not like 60 fps GIFs). To see the GIFs at the correct frame rate you may need to download them and open them locally.


Setting Individual Pixel Colors

OK, so far we have used the time uniform value to animate the entire image changing color. Now let’s cover how to change each pixel within the image.

This is the new shader code


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 

    //shade pixels across the image depending on their X and Y coordinates - animated using sineVal
    gl_FragColor = vec4(gl_FragCoord.x/resolution.x,sineVal,1.0-gl_FragCoord.y/resolution.y,1.0);
}

The uv calculation gets the current pixel X and Y coordinates scaled to between -1 and +1. So the top left corner of the image is vec2(-1.0,-1.0) and the lower right corner of the image is vec2(1.0,1.0). Note that the aspect ratio is corrected here too, so when the shader is run on a non-square image shapes are not distorted. This will be more apparent in the next circle part: the circle remains a true circle no matter what size the image is stretched to.

Changing the gl_FragColor line scales the red component along the X axis and the blue component along the Y axis. Using sineVal for the green component makes the green of each pixel fade in and out. Running this shader gives a shaded image with interpolations of color between black, red, blue and purple tones mixing with the fading green intensity. gl_FragCoord.x/resolution.x divides the pixel coordinate by the image size in pixels, so no matter what the image dimensions the red component goes from 0 to 1 across the image.

Animated GIF Loop 002

Animated GIF Loop 002

Drawing Circles And Squares

Next up, drawing primitive circle and square shapes.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 

    float circleRadius = 0.5; //radius of circle - 0.5 = 25% of texture size (2.0) so circle will fill 50% of image
    vec2 circleCenter = vec2(-0.8,0.0); //position the circle on the left of the image
    
    float squareRadius = 0.5; //radius of square - 0.5 = 25% of texture size (2.0) so square will fill 50% of image
    vec2 squareCenter = vec2(0.8,0.0); //position the square on the right of the image
    
    vec4 color = vec4(0.0); //init color variable to black

    //test if pixel is within the circle
    if (length(uv-circleCenter)<circleRadius)
    {
        color = vec4(1.0,1.0,1.0,1.0);
    } 
    //test if pixel is within the square
    else if ((abs(uv.x-squareCenter.x)<squareRadius)&&(abs(uv.y-squareCenter.y)<squareRadius))
    {
        color = vec4(1.0,1.0,1.0,1.0);
    } 
    else {
    //else pixel is the pulsating colored background
        color = vec4(uv.x,sineVal,1.0-uv.y,1.0); 
    }
   
    gl_FragColor = color;
}

The code is starting to get longer now, but is still relatively simple.

The position and size for a circle and square are specified. The circle is centered to the left of the image and the square is centered to the right of the image.

With a few if-else statements we can test whether the current pixel being calculated is within the circle or the square. length is a built-in GLSL function that returns the length of the vector passed in. This is the same as using sqrt((x2-x1)^2+(y2-y1)^2) to find the distance between 2 points. For checking whether a point is within a square, simpler absolute value math can be used. If the pixel does not fall within either shape it is shaded with the background color.

Animated GIF Loop 003

Animated GIF Loop 003

Note that the color distortion where the lower right corners of the square and circle flare out is a side effect of the limited 256 colors available in GIF animations. Something to be aware of if you are using a wide gamut of colors like this example.


Animating Shapes Size And Position

Now let’s get the shapes moving and changing size.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 

    float circleRadius = 0.2+(1.0-sineVal)*0.4; //radius of circle - animated by sineVal
    vec2 circleCenter = vec2(-0.8+sineVal*1.6,0.0); //circle position - moves across the image and back
    
    float squareRadius = 0.2+sineVal*0.2; //radius of square - animated by sineVal
    vec2 squareCenter = vec2(0.8,0.0); //square stays centered on the right of the image
    
    vec4 color = vec4(0.0); //init color variable to black

    //test if pixel is within the circle
    if (length(uv-circleCenter)<circleRadius)
    {
        color = vec4(1.0,1.0,1.0,1.0);
    } 
    //test if pixel is within the square
    else if ((abs(uv.x-squareCenter.x)<squareRadius)&&(abs(uv.y-squareCenter.y)<squareRadius))
    {
        color = vec4(1.0,1.0,1.0,1.0);
    } 
    else {
    //else pixel is black
        color = vec4(0.0,0.0,0.0,1.0); 
    }
   
    gl_FragColor = color;
}

For this example I have changed the background just to black.

The main change from the last shader code is in the radius and center declarations. We now use sineVal to animate the sizes and positions.

Animated GIF Loop 004

Animated GIF Loop 004

Blending Colors

This next example shows how to have 3 red, green and blue circles’ colors blend as they overlap.

Calculating the positions of the circles is done using the mix function. mix takes a from value and a to value and returns a blend between the two: mix(a, b, t) returns a + t*(b-a), where t is the fraction of the distance between them. In this case we use the sineVal variable that goes from 0 to 1 and back again.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 

    float circle1Radius = 0.2+(1.0-sineVal)*0.2; //radius of circle
    vec2 circle1Center = mix(vec2(-0.8,0.0),vec2(0.8,0.0),sineVal);
    
    float circle2Radius = 0.2+(sineVal)*0.2; //radius of circle
    vec2 circle2Center = mix(vec2(0.0,-0.8),vec2(0.0,0.8),sineVal);
    
    float circle3Radius = 0.2+(1.0-sineVal)*0.2; //radius of circle
    vec2 circle3Center = mix(vec2(-0.8,-0.55),vec2(0.8,0.8),sineVal);
    
    vec4 color = vec4(0.0); //init color variable to black

    //default pixel color is black
    color = vec4(0.0,0.0,0.0,1.0); 
    //test if pixel is within the circle
    if (length(uv-circle1Center)<circle1Radius)
    {
        color += vec4(1.0,0.0,0.0,1.0);
    } 
    if (length(uv-circle2Center)<circle2Radius)
    {
        color += vec4(0.0,0.0,1.0,1.0);
    } 
    if (length(uv-circle3Center)<circle3Radius)
    {
        color += vec4(0.0,1.0,0.0,1.0);
    } 
   
    gl_FragColor = color;
}

Animated GIF Loop 005

Animated GIF Loop 005

Smoothing Out Rough Edges

To help reduce aliased, jagged edges you can use the smoothstep function. smoothstep(edge0, edge1, x) returns 0 when x is below edge0, 1 when x is above edge1, and a smooth Hermite interpolation in between, similar in spirit to the mix function.

For this shader the circles are changed into torus shapes.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 

    float torus1Radius = 0.2+(1.0-sineVal)*0.4;
    vec2 torus1Center = mix(vec2(-0.8,0.0),vec2(0.8,0.0),sineVal);
    
    float torus2Radius = 0.2+(sineVal)*0.4;
    vec2 torus2Center = mix(vec2(0.8,0.0),vec2(-0.8,0.0),sineVal);
    
    float torus3Radius = 0.2+(1.0-sineVal)*0.2;
    vec2 torus3Center = vec2(0.0,0.0);
    
    float torusWidth = 0.1;
    float torusSmoothsize = 0.03;

    vec4 color = vec4(0.0); //init color variable to black

    //default pixel color is black
    color = vec4(0.0,0.0,0.0,1.0); 
    float c;
    c = smoothstep(torusWidth,torusWidth-torusSmoothsize,(abs(length(uv-torus1Center)-torus1Radius)));        
    color += vec4(c,0.0,0.0,1.0);
    c = smoothstep(torusWidth,torusWidth-torusSmoothsize,(abs(length(uv-torus2Center)-torus2Radius)));        
    color += vec4(0.0,c,0.0,1.0);
    c = smoothstep(torusWidth,torusWidth-torusSmoothsize,(abs(length(uv-torus3Center)-torus3Radius)));        
    color += vec4(0.0,0.0,c,1.0);
   
    gl_FragColor = color;
}

Animated GIF Loop 006

Animated GIF Loop 006

Checkerboard

Thanks to this post on StackOverflow we can get a simple function to calculate whether a pixel should be black or white in a checkerboard pattern.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

vec2 rotate(vec2 v, float a) {
	float angleInRadians = radians(a);
	float s = sin(angleInRadians);
	float c = cos(angleInRadians);
	mat2 m = mat2(c, -s, s, c);
	return m * v;
}

vec3 checker(in float u, in float v, in float checksPerUnit)
{
  float fmodResult = mod(floor(checksPerUnit * u) + floor(checksPerUnit * v), 2.0);
  float col = max(sign(fmodResult), 0.0);
  return vec3(col, col, col);
}

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0 then increases to 1.0 when time is half of animationSeconds and then back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.75)/animationSeconds)/2.0+0.5; 

    //rotate the uv coordinates through 180 degrees over the animationSeconds time length
    vec2 rotated_uv = rotate(uv,-time/animationSeconds*180.0);

    //get the pixel checker color by passing the rotated coordinate into the checker function
    vec4 color = vec4(checker(rotated_uv.x, rotated_uv.y, 5.0 * sineVal),1.0);

    gl_FragColor = color;
}
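To see how the checker function works, take a pixel at (u, v) = (0.3, 0.7) with checksPerUnit = 5.0: floor(1.5) + floor(3.5) = 1 + 3 = 4, and mod(4.0, 2.0) = 0.0, so that pixel is black. GLSL's mod always returns a non-negative result when the second argument is positive, so the pattern also holds for the negative coordinates in our -1 to +1 range.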

Using sineVal as before makes the checker size grow and shrink (checksPerUnit swings from 0 up to 5 and back) for a clean loop.

For the rotation, time divided by animationSeconds is used so the checkerboard turns continuously in one direction. Because one loop covers exactly 180 degrees and a checkerboard looks identical after a 180 degree turn, the last frame lines up with the first for a seamless loop.

Animated GIF Loop 007

Multiple RGB Checkerboards

Now let’s overlap 3 checkerboards in red, green and blue.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

vec2 rotate(vec2 v, float a) {
    float angleInRadians = radians(a);
    float s = sin(angleInRadians);
    float c = cos(angleInRadians);
    mat2 m = mat2(c, -s, s, c);
    return m * v;
}

float checker(in float u, in float v, in float checksPerUnit)
{
  float fmodResult = mod(floor(checksPerUnit * u) + floor(checksPerUnit * v), 2.0);
  float col = max(sign(fmodResult), 0.0);
  return col;
}

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0, rises to 1.0 when time is half of animationSeconds, then falls back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.25*animationSeconds)/animationSeconds)/2.0+0.5; //quarter-period offset so the value starts at 0

    //rotate the uv coordinates through 180 degrees over the animationSeconds time length
    vec2 rotated_uv = rotate(uv,time/animationSeconds*180.0);

    //each RGB component is a different sized checkerboard built from the rotated coordinates
    vec4 color = vec4(checker(rotated_uv.x, rotated_uv.y, 5.0 * sineVal),
                      checker(rotated_uv.x, rotated_uv.y, 3.0 * sineVal),
                      checker(rotated_uv.x, rotated_uv.y, 4.0 * sineVal),
                      1.0);

    gl_FragColor = color;
}

The checker function is changed to return a single float value: 0.0 for black or 1.0 for white.

Color is now calculated by giving each of the RGB components a different sized checkerboard. Where two or more channels are white at the same pixel the colors mix, so yellow, magenta, cyan and white squares also appear.

Animated GIF Loop 008

Multiple RGB Checkerboards Tweaked

Same as previous, but now the red checkerboard does not rotate while the green and blue checkerboards rotate in opposite directions.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

vec2 rotate(vec2 v, float a) {
    float angleInRadians = radians(a);
    float s = sin(angleInRadians);
    float c = cos(angleInRadians);
    mat2 m = mat2(c, -s, s, c);
    return m * v;
}

float checker(in float u, in float v, in float checksPerUnit)
{
  float fmodResult = mod(floor(checksPerUnit * u) + floor(checksPerUnit * v), 2.0);
  float col = max(sign(fmodResult), 0.0);
  return col;
}

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0, rises to 1.0 when time is half of animationSeconds, then falls back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.25*animationSeconds)/animationSeconds)/2.0+0.5; //quarter-period offset so the value starts at 0

    //rotate the uv coordinates through 180 degrees over the loop, in opposite directions
    vec2 rotated_uv = rotate(uv,time/animationSeconds*180.0);
    vec2 rotated_uv2 = rotate(uv,-time/animationSeconds*180.0);

    //red uses the unrotated coordinates; green and blue use the two opposite rotations
    vec4 color = vec4(checker(uv.x, uv.y, 5.0 * sineVal),
                      checker(rotated_uv.x, rotated_uv.y, 3.0 * sineVal),
                      checker(rotated_uv2.x, rotated_uv2.y, 4.0 * sineVal),
                      1.0);

    gl_FragColor = color;
}

This time the red channel samples the unrotated coordinates while the green and blue channels use coordinates rotated in opposite directions.

Animated GIF Loop 009

Multiple RGB Checkerboards Within Checkerboards

Same as previous, but now the “white” checkerboard squares are further divided into smaller checkerboards.


#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

vec2 rotate(vec2 v, float a) {
    float angleInRadians = radians(a);
    float s = sin(angleInRadians);
    float c = cos(angleInRadians);
    mat2 m = mat2(c, -s, s, c);
    return m * v;
}

float checker(in float u, in float v, in float checksPerUnit)
{
  float fmodResult = mod(floor(checksPerUnit * u) + floor(checksPerUnit * v), 2.0);
  //if the outer square is white, repeat the parity test at 4x the frequency to subdivide it into a smaller checkerboard
  if (fmodResult > 0.0) { fmodResult = mod(floor(checksPerUnit * u * 4.0) + floor(checksPerUnit * v * 4.0), 2.0); }
  float col = max(sign(fmodResult), 0.0);
  return col;
}

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;
	
    // sineVal is a floating point value between 0 and 1
    // starts at 0 when time = 0, rises to 1.0 when time is half of animationSeconds, then falls back to 0 when time equals animationSeconds
    float sineVal = sin(piTimes2*(time-0.25*animationSeconds)/animationSeconds)/2.0+0.5; //quarter-period offset so the value starts at 0

    //slide the coordinates horizontally back and forth over the loop
    uv.x += sineVal*2.0-1.0;
    //rotate the uv coordinates through 180 degrees over the loop, in opposite directions
    vec2 rotated_uv = rotate(uv,time/animationSeconds*180.0);
    vec2 rotated_uv2 = rotate(uv,-time/animationSeconds*180.0);

    //red uses the unrotated (but slid) coordinates; green and blue use the two opposite rotations
    vec4 color = vec4(checker(uv.x, uv.y, 5.0 * sineVal),
                      checker(rotated_uv.x, rotated_uv.y, 4.0 * sineVal),
                      checker(rotated_uv2.x, rotated_uv2.y, 6.0 * sineVal),
                      1.0);

    gl_FragColor = color;
}
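Two small changes drive this one: inside checker, a white outer square repeats the parity test at 4 times the frequency, carving each white square into its own smaller checkerboard, and the uv.x += sineVal*2.0-1.0 line slides the whole pattern horizontally back and forth across the loop.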

Animated GIF Loop 010

How To Generate These GIFs Using Visions of Chaos

1. Start Visions of Chaos.

2. Select Mode->OpenGL Shading Language->Shader Editor

3. You can load any of the above “Animated GIF Loop” samples, or click the New button and start creating your own (a minimal looping template is shown after this list).

4. Once you have a loop you like, check the “Create movie frames” checkbox.

5. Click OK.

6. Once the “Create Frame Settings” dialog appears, change the “stop after” value to 120 frames (with the 2 second loops above, 120 frames covers one full loop if frames are rendered at 60 frames per second).

7. Click OK to start running the shader and generating the frames. It will auto-stop after 120 frames.

8. The Build Movie Settings dialog will then appear.

9. Change the Movie Format dropdown to Animated GIF and then click Build to generate the GIF file.

10. Post your awesome GIF loop on Twitter etc to be the envy of your fellow nerds.
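
If you start from a blank shader at step 3, here is a minimal seamless-loop template distilled from the shaders above. The pulsing red fill is only a placeholder to swap for your own drawing code:

#version 120

uniform float time;
uniform vec2 resolution;

float animationSeconds = 2.0; // how long do we want the animation to last before looping
float piTimes2 = 3.1415926536*2.0;

void main(void)
{
    //uv is pixel coordinates between -1 and +1 in the X and Y axes with aspect ratio correction
    vec2 uv = (2.0*gl_FragCoord.xy-resolution.xy)/resolution.y;

    //phase runs from 0 to 1 once per loop; any sin or cos of piTimes2*phase repeats seamlessly
    float phase = mod(time, animationSeconds)/animationSeconds;
    float sineVal = sin(piTimes2*phase)/2.0+0.5;

    gl_FragColor = vec4(sineVal, 0.0, 0.0, 1.0); //pulsing red placeholder
}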


The Future

This post has only covered the very basics of 2D GLSL shaders for looped animation purposes. I would like to cover 3D in the future.

If this helped you get some nice animated GIF loops going let me know.

For inspiration, check out the most awesome @beesandbombs Twitter account. Dave has loads of seamlessly looping GIF animations that show what real talent can produce.

Jason.