Video Feedback Simulation Version 3

Once again I have delved into simulating video feedback.

Here is a 4K resolution 60 fps movie with some samples of what the new simulations can do.

This third attempt is fairly close to the second version, but with a few changes I will explain here.

Visions of Chaos Video Feedback 3 Settings

The main change is being able to order the effects. This was the idea that got me programming version 3. A shuffle button is also provided that randomly orders the effects. Allowing the effect order to be customised gives a lot of new results compared to the first 2 video feedback simulation modes.

Since the above screenshot, the settings have continued to grow.

Visions of Chaos Video Feedback 3 Settings

Here are some explanations for the various effects.

HSL Flow

Takes a pixel’s RGB values, converts them into HSL values and then uses the HSL values in formulas to select a new pixel color. For example, if the pixel is red RGB(255,0,0), then this converts to HSL(0,1,0.5), assuming all HSL values range from 0 to 1. The angle formula above is H*360 and the length formula is S*5. So in this case the new pixel value is read from 5 pixels away at an angle of 0 degrees. Changing these formulas allows the image to “flow” depending on the colors.
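The per-pixel lookup can be sketched like this (a minimal Python sketch; the function name and default lambdas are mine, with the angle and length formulas taken from the example above):

```python
import colorsys
import math

def flow_offset(r, g, b,
                angle_formula=lambda h, s, l: h * 360.0,
                length_formula=lambda h, s, l: s * 5.0):
    """Map a pixel's colour to a (dx, dy) sampling offset.
    The two lambdas stand in for the user-editable formulas in the
    dialog; h, s and l are all in the 0..1 range."""
    # colorsys returns (h, l, s), not (h, s, l)
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    angle = math.radians(angle_formula(h, s, l))
    length = length_formula(h, s, l)
    return round(math.cos(angle) * length), round(math.sin(angle) * length)

dx, dy = flow_offset(255, 0, 0)  # pure red -> HSL(0, 1, 0.5)
print(dx, dy)                    # prints "5 0": 5 pixels away at 0 degrees
```

The new colour for each pixel is then read from the neighbouring pixel at that offset.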


Sharpen

Sharpens the image by blurring it twice using different algorithms (in my case a QuickBlur (Box Blur) and a weighted convolution kernel). The second blurred value is subtracted from the first, and the target pixel value is then found using the formula newr = trunc(r1 + amount*(r1-r2))
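The difference-of-two-blurs sharpen can be sketched per channel like this (r1 is the box-blurred value, r2 the convolution-blurred value, both per the text; the clamp to 0..255 is my addition):

```python
import math

def sharpen_channel(r1, r2, amount):
    """Sharpen one colour channel from two blurred copies of the image.
    The difference r1 - r2 acts as an edge signal that is scaled by
    amount and added back onto the image."""
    new_r = math.trunc(r1 + amount * (r1 - r2))
    return max(0, min(255, new_r))  # clamp to valid 8-bit range

print(sharpen_channel(120, 100, 1.5))  # prints 150: 120 + 1.5*20
```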


Blur

Uses a standard Gaussian blur.


Blend

Combines the last “frame” and the current frame. Various blend options change how the layers are combined.
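A few common blend modes can be sketched per channel like this (the mode names are illustrative stand-ins, not necessarily the program's exact list):

```python
def blend(prev, cur, mode="average"):
    """Combine one channel of the previous frame with the current frame."""
    if mode == "average":
        return (prev + cur) // 2
    if mode == "multiply":
        return prev * cur // 255
    if mode == "screen":
        return 255 - (255 - prev) * (255 - cur) // 255
    if mode == "lighten":
        return max(prev, cur)
    raise ValueError(f"unknown blend mode: {mode}")

print(blend(100, 200, "average"))  # prints 150
```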


Contrast

Standard image contrast setting. Can also be set to negative values for reduced contrast. Uses the following formula for each of the RGB values: r = r + trunc((r-128)*amount/100)
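In Python the contrast formula looks like this (the clamp to 0..255 is my addition):

```python
import math

def adjust_contrast(r, amount):
    """Apply the contrast formula from the text to one channel.
    Values above mid-grey (128) move further above it and values below
    move further below, so amount > 0 raises contrast and amount < 0
    pulls everything toward grey."""
    r = r + math.trunc((r - 128) * amount / 100)
    return max(0, min(255, r))

print(adjust_contrast(200, 50))   # prints 236: 200 + trunc(72 * 0.5)
print(adjust_contrast(200, -50))  # prints 164: pulled toward grey
```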


Brightness

Standard brightness. Increases or decreases the pixel RGB values.


Noise

Adds a random value to each pixel. Adding a bit of noise can help stop a simulation from dying out to a single color.


Rotate

Guess what this does?


Equalise

Uses a histogram of the image to auto-adjust brightness. Can help stop the image from getting too dark or too light.
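One possible histogram-based auto-brightness is a levels stretch, sketched below under my own assumptions (the clip fraction and method are illustrative; the program's exact algorithm may differ):

```python
def auto_brightness(pixels, clip=0.01):
    """Stretch one channel so the darkest and brightest `clip` fraction
    of pixels map to 0 and 255 respectively."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    target = len(pixels) * clip
    # find the low and high cut points from the histogram
    lo, acc = 0, 0
    while lo < 255 and acc + hist[lo] < target:
        acc += hist[lo]
        lo += 1
    hi, acc = 255, 0
    while hi > lo and acc + hist[hi] < target:
        acc += hist[hi]
        hi -= 1
    scale = 255 / max(hi - lo, 1)
    return [max(0, min(255, round((p - lo) * scale))) for p in pixels]

print(auto_brightness([50, 100, 150, 200]))  # stretched across the full 0..255 range
```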


Zoom

Zooms the image. Various options determine the algorithm used to zoom the image.

Image Processing

Allows the various image processing functions in Visions of Chaos to be injected into the mix for even more variety.

Less Is Sometimes More

Another feature I added recently was to randomly remove some of the effects in the “order of effects” list. This means less processing is done each frame, but running only a handful of effects (3 or so) has given many new unique output patterns and results. So rather than throwing every possible image processing step you can think of at it, pick a smaller subset of them.

Here is a more recent sample with some of the newer simulation settings I found while trying endless random settings.

Even More Enhancements

The next step I tried was breeding existing parameters to make new simulations. This is done by loading one of the vf3 parameter files and then another two. When the second two files are loaded, each of their settings has a 33% chance of being applied. This results in roughly one third of the settings coming from each of the three files.
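The breeding step can be sketched like this (a Python sketch with hypothetical setting names; the real files are vf3 parameter files):

```python
import random

def breed_settings(base, parent_b, parent_c, seed=None):
    """Breed three parameter dictionaries: start from the first loaded
    file, then give each setting a 33% chance of being overwritten by
    each of the other two parents, so roughly a third of the settings
    come from each file."""
    rng = random.Random(seed)
    child = dict(base)
    for parent in (parent_b, parent_c):
        for key, value in parent.items():
            if rng.random() < 1 / 3:
                child[key] = value
    return child

# hypothetical settings, purely for illustration
a = {"zoom": 1.02, "blur": 3, "contrast": 20}
b = {"zoom": 0.98, "blur": 5, "contrast": -10}
c = {"zoom": 1.10, "blur": 1, "contrast": 40}
print(breed_settings(a, b, c, seed=1))
```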

Another change is to allow effects in the effects list to occur more than once. So, for example, rather than an effects list of blur, sharpen and blend, the list could now be blur, sharpen, blur, blend, sharpen. This also leads to new results.

With the new settings, the dialog has been expanded as follows.

Visions of Chaos Video Feedback 3 Settings

Here is another sample movie showing results from crossbreeding the existing sample files into new results.


The following tutorial explains the Video Feedback 3 mode in more detail.

The End – For Now

As always, you can experiment with the new VF3 mode in the latest version of Visions of Chaos.

I would be interested in seeing any unique results you come up with.

For the next version 4 simulation I would like to chain various GLSL shaders together that perform the blends, blurs, etc. That would allow the user to fully customise the simulation and insert new effects that I did not even consider. GLSL would also give a big boost in speed; rendering the above movie frames took days at 4K resolution.


Video Feedback Simulation Take 2

I have been interested in video feedback and simulating video feedback on and off for years.

I recently stumbled across this amazing 4K demo. The exe file that generates the following movie in realtime is less than 4096 bytes in size!!

Awesome result. The makers kindly wrote up an explanation page that describes how the video feedback like effects were created.

So based on those principles I used a bunch of (slower) non-shader, software-based image processing routines and got the following results.

These are much faster to generate than my previous experiments with simulating video feedback.

Generation Steps

1. Create 2 bitmaps for a main layer and a temp layer
2. Fill them both with random static

Main loop
1. Each pixel's RGB in the main bitmap is converted to HSL. The HSL values are used as an angle and a distance, which point to a new pixel location on the temp bitmap. The main bitmap pixel is then colored using the temp bitmap pixel color.
2. Sharpen the main bitmap.
3. Blur the main bitmap.
4. Display the main bitmap.
5. Blend the main bitmap with the temp bitmap.
6. Rotate the temp bitmap.
7. Histogram equalise the temp bitmap. This is similar to how Photoshop does auto-contrast.
8. Zoom the temp bitmap.
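The main loop above can be sketched in Python with the image operations left as stubs (only the loop structure is meant to match the steps; each stub stands in for the real routine):

```python
import random

# Stub operations: each stands in for the real image-processing routine
# named in the steps above.
def hsl_flow(main, temp): return main
def sharpen(img): return img
def blur(img): return img
def blend(a, b): return a
def rotate(img): return img
def equalise(img): return img
def zoom(img): return img

def random_static(w, h, rng):
    """Step 2: fill a bitmap (list of rows) with random static."""
    return [[rng.randrange(256) for _ in range(w)] for _ in range(h)]

def run(frames=3, w=4, h=4, seed=0):
    """Skeleton of the main loop; the stubs keep it runnable without
    reimplementing every effect."""
    rng = random.Random(seed)
    main = random_static(w, h, rng)
    temp = random_static(w, h, rng)
    for _ in range(frames):
        main = hsl_flow(main, temp)  # 1. colour-driven pixel flow
        main = sharpen(main)         # 2.
        main = blur(main)            # 3.
        # 4. display the main bitmap (omitted here)
        temp = blend(main, temp)     # 5.
        temp = rotate(temp)          # 6.
        temp = equalise(temp)        # 7.
        temp = zoom(temp)            # 8.
    return main

print(len(run()))  # prints 4 (four rows in the tiny test bitmap)
```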

Here is a full 1080p HD sample with 10 minutes of footage showing the types of results this new algorithm creates.

If you are not a coder and want to play with this, download Visions of Chaos.


Video Feedback

Video feedback is an interesting phenomenon. Take a video camera and plug it into a TV so the TV shows what the camera is filming. Then point the camera at the screen. You will see repeating, fractal-like patterns. This also works with a webcam on a PC if you point the webcam at the screen area that shows what it is filming, but using a real video camera and TV gives much more interesting results.

Here are a few examples of real video feedback. Depending on the brand of camera and TV you will always get unique results.

I encourage everyone to try real video feedback if you have the equipment. In the past I have shown the process to anyone with a camera and the right cables to connect the camera to their TV, and they have all had a wow moment when seeing it for the first time.

If you are interested in trying it for real, here are a few tips:
1. Do it in a dark room or with minimal lighting. Trying it during daylight seems to cause a “white out” as there is too much background light that bleeds into the image and is amplified repeatedly.
2. Turn the camera upside down (i.e. rotated 180 degrees relative to the TV image it is filming) as this usually gives better patterns.
3. If your camera has effects for inverting the image, use it. Try all the various filters the camera provides. Adjust the contrast and brightness. Play with all the available controls. Also adjust the TV brightness and contrast.
4. Try slight camera rotations rather than big movements. A nice pattern can be killed with rapid movements of the camera. Same for zooming.
5. If the image dies out, flicking the room lights on and off momentarily can get it working again. Alternatively putting a lighter or candle flame between the camera and TV can get a dead display running again.

A few years back I got into trying to simulate video feedback in software. The results so far are promising. Here is a sample movie of the results.

For more info about the simulation techniques used, see my Video Feedback page.