Neural style transfer takes two images and applies the style from one image to the content of the other. Here are some sample results from here.
For a more technical explanation of how this works, you can refer to the following papers:
Image Style Transfer Using Convolutional Neural Networks
Artistic style transfer for videos
Preserving Color in Neural Artistic Style Transfer
Ever since first seeing this technique I wanted to add it as an image processing option within Visions of Chaos.
If you only want to play around with style transfer, or only have a few photos you want to experiment with, then I recommend you use an online service like DeepArt, because this can be a tedious process to set up and use on your own PC.
How It Works
Behind the scenes the style transfer processing uses Cameron Smith‘s excellent Python script from here. After trying various style transfer related scripts, that one gives the sharpest and most interesting results. See that link if you want to run this sort of style transfer yourself from the command line outside Visions of Chaos.
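For the curious, the core trick behind this family of style transfer scripts (following the Gatys et al. paper linked above) is matching Gram matrices of convolutional feature maps: the Gram matrix captures texture statistics while discarding spatial layout. The script linked above does this inside TensorFlow with VGG features; the sketch below is just the bare mathematical idea in NumPy, with made-up feature arrays standing in for real network activations.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map: channel-by-channel correlations.
    features: array of shape (height, width, channels)."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # one row per pixel position
    return flat.T @ flat / (h * w)      # (channels, channels) matrix

def style_loss(generated_features, style_features):
    """Mean squared difference between Gram matrices. This penalises
    differences in texture statistics, not pixel positions, which is
    why the output keeps the style's 'look' without copying it."""
    g_gen = gram_matrix(generated_features)
    g_sty = gram_matrix(style_features)
    return float(np.mean((g_gen - g_sty) ** 2))

# Identical textures give zero style loss regardless of spatial layout.
rng = np.random.default_rng(0)
feats = rng.random((8, 8, 3))
print(style_loss(feats, feats))  # -> 0.0
```

In the real script this loss (summed over several VGG layers, plus a content loss) is minimised by gradient descent on the output image's pixels, which is why larger images take so much longer.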
Installing Style Transfer Prerequisites
If you want to use style transfer from within Visions of Chaos you need to follow these steps to get Python, Python Libraries, CUDA and CuDNN installed.
Style Transfer in Visions of Chaos
Generate any image, then select Image->Image Processing->Style Transfer.
The first time you select Style Transfer it will download the 500 MB neural network model that is used for all the style transfer magic.
Start with smaller image sizes to get an idea of how long the process will take on your system before going for larger sized images.
You can also select any external image file to apply the style transfer to. So dig out those cat photos and have fun. Note that if you get tired of the limited style images that come with Visions of Chaos, you can put any image you like under the Style Transfer folder (by default this will be C:\Users\\AppData\Roaming\Visions of Chaos\Examples\TensorFlow\Style Transfer\) and use those. Grab an image of your favorite artist’s works and experiment.
For these next examples I used the following photo of Miss Marple.
And applied various style images to it.
A Mandelbrot fractal
Another Mandelbrot fractal
HR Giger Biomechanical Landscape
Hokusai The Great Wave off Kanagawa
Turner The Wreck of a Transport Ship
If you get a failed style transfer and an error message, here are a few things to try:
1. Smaller image size. Depending on the RAM in your PC and GPU you may have maxed out.
2. Wait 30 seconds and try again. This seems to help sometimes.
3. Reboot, if all else fails. This seems to always fix a stubborn error for me. CUDA and/or cuDNN seem to be the main culprits. They get hung or locked somehow and only a reboot gets them working again.
Style Transfer Movies
I have now added options to create style transfer movies. This works by starting with an image of random RGB noise. Style transfer is applied, then the resulting image is slightly stretched to create a zooming-in effect. The style transfer and stretch are repeated for each frame of the movie.
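The frame loop described above can be sketched roughly as follows. This is not the actual Visions of Chaos code: `apply_style` stands in for the real style transfer step, the zoom factor is a guess, and the resize uses simple nearest-neighbour indexing so the sketch stays dependency-free.

```python
import numpy as np

def stretch_center(img, zoom=1.02):
    """Crop the centre slightly and scale it back to full size.
    Repeated every frame, this produces a continuous zoom-in.
    Nearest-neighbour resize keeps the example self-contained."""
    h, w, _ = img.shape
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h   # map output rows to crop rows
    cols = np.arange(w) * cw // w   # map output cols to crop cols
    return crop[rows][:, cols]

def make_zoom_frames(apply_style, n_frames, size=(64, 64, 3)):
    """Start from random RGB noise, then alternate style transfer
    and a slight centre stretch -- one output image per movie frame."""
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=size, dtype=np.uint8)
    frames = []
    for _ in range(n_frames):
        img = apply_style(img)   # placeholder for the real transfer
        img = stretch_center(img)
        frames.append(img)
    return frames

# With an identity "style transfer" this degenerates to a plain zoom.
frames = make_zoom_frames(lambda im: im, n_frames=5)
```

Because each frame feeds the previous output back into the style transfer, small features get amplified over time, which is what produces the strange emergent imagery described below.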
I created a movie using this technique that used a tasteful nude image as the style image. After repeatedly iterating the style transfer over the previous output it started to create some disturbing imagery. I thought this was an interesting result (although it did make me go “ewwwww” while looking at it) so I uploaded it to my YouTube channel.
Within a few minutes the video had been removed by one of the YouTube bots and flagged as pornography. “Pornographic or sexually explicit content that is meant to be sexually gratifying is not allowed on YouTube.” Now, I don’t know about you, but the following pictures do not cause any arousal or sexual gratification in me.
If you think you see any intimate lady parts in the above images they were not in the style transfer source image. All the evil looking sores, boils, and other nasty looking pareidolia features all came from the depths of the neural network. The only thing that seems to be a direct copy from the source style image are the flesh tones.
I decided to lodge an appeal explaining how the imagery was the result of a neural network style transfer and not pornography (all in the limited 300 characters they give you to plead your case). I was hoping a real person would take a look at the movie, decide whether or not it was porn, and respond to my appeal. All that happened was that, shortly after the appeal was lodged, the movie info showed “Appeal rejected. No further action can be taken on your part.” So, after nearly 13 years on YouTube I now have my first warning and no way of talking with a real person to discuss what happened.
I understand that from YouTube’s perspective even if they employed a million dedicated staff to watch and manually review videos that get flagged it would still probably not be enough, but relying on a neural network and/or “AI” as the decider without any human intervention is not the answer. Maybe they should at least have manual reviews for abuse claims/detection on channels that have been active for over 5 or 10 years without any prior warnings or strikes. I am not sure there is a workable solution to the problem.
Anyway, I joined BitChute, so you can now watch the movie in all its controversial glory here.
I also created the following tutorial that covers style transfer within Visions of Chaos in much more detail.