Super Resolution

The Dream

For years now you will have seen scenes in TV shows like CSI or movies like Blade Runner where the “enhance” functionality of some piece of software allows details to be recovered from images that are only a blur or a few pixels in size. In Blade Runner, Deckard’s system even allowed him to look around corners.

The Reality

I have recently been testing machine learning neural network enhancer (aka super resolution) models. They upscale an image while trying to maintain or even enhance detail, losing far less detail than if the image was enlarged with an image editing tool using bilinear or bicubic interpolation.
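For comparison, here is a minimal sketch (using Pillow, with placeholder filenames) of the conventional upscaling that these models are judged against: a plain bicubic or bilinear 4x enlargement, which is exactly the kind of zoom that smears fine detail.

# Minimal sketch of the conventional baseline the super resolution models
# compete against: a plain 4x enlargement with Pillow. Filenames are placeholders.
from PIL import Image

img = Image.open("test_image.png")
width, height = img.size

# Bicubic and bilinear interpolation - the "dumb" zooms that smear fine detail
# and that the neural network models try to improve on.
img.resize((width * 4, height * 4), Image.BICUBIC).save("test_image_bicubic_x4.png")
img.resize((width * 4, height * 4), Image.BILINEAR).save("test_image_bilinear_x4.png")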

Some of my results with these models follow. I am using the following test image from here.

Unprocessed Test Image

To best see the differences between the algorithms I recommend you open the x4 zoomed images in new tabs and switch between them.

SRCNN – Super-Resolution Convolutional Neural Network

To see the original paper on SRCNN, click here.
I am using the PyTorch script by Mirwaisse Djanbaz here.

SRCNN x4

SRCNN x4

SRRESNET

To see the original paper on SRRESNET, click here.
I am using the PyTorch script by Sagar Vinodababu here.

SRRESNET x4

SRRESNET x4

SRGAN – Super Resolution Generative Adversarial Network

To see the original paper on SRGAN, click here.
I am using the PyTorch script by Sagar Vinodababu here.

SRGAN x4

SRGAN x4

ESRGAN – Enhanced Super Resolution Generative Adversarial Network

I am using the PyTorch script by Xintao Wang et al here.

ESRGAN x4

ESRGAN x4

PSNR

I am using the PyTorch script by Xintao Wang et al here.

PSNR x4

PSNR x4

Real-ESRGAN

This is the best of the super resolution models here. I am using the executable by Xintao Wang et al here.
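From memory the portable executable is run from the command line along these lines (the exact flags and model names may differ between releases, so check the included readme):

realesrgan-ncnn-vulkan.exe -i input.png -o output_x4.png -n realesrgan-x4plus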

Real-ESRGAN x4

Real-ESRGAN x4

Real-ESRNET

I am using the executable by Xintao Wang et al here.

Real-ESRNET x4

Real-ESRNET x4

SwinIR

Very nice results. May be equal to or better than Real-ESRGAN depending on the input image. I am using the code from this colab.

SwinIR x4

SwinIR x4

SPSR

Another method from here.

SPSR x4

SPSR x4

ruDALL-E Real-ESRGAN

From here.

ruDALL-E Real-ESRGAN x4

ruDALL-E Real-ESRGAN x4

Differences

Each of the algorithms gives different results. For an unknown source image it would probably be best to run it through them all and then pick whichever gives you the best result. These are not the Hollywood or TV “enhance” magic fix just yet.

If you know of any other PyTorch implementations of super resolution I missed, let me know.

Availability

You can follow the links to the original GitHub repositories to get the software, but I have also added a simple GUI front end for these scripts in Visions of Chaos. That allows you to try the above algorithms on single images or batch process a directory of images.

Jason.

Text-to-Image Machine Learning

NOTE: Make sure you also see this post that has a summary of all the Text-to-Image scripts supported by Visions of Chaos with example images.

Text-to-Image

Input a short phrase or sentence into a neural network and see what image it creates.

I am using Big Sleep from Phil Wang (@lucidrains).

Phil used the code/models from Ryan Murdock (@advadnoun). Ryan has a blog post explaining the basics of how all the parts connect up here. Ryan has some newer Text-to-Image experiments but they are behind a Patreon paywall, so I have not played with them. Hopefully he (or anyone) releases the colabs publicly sometime in the future. I don’t want to experiment with a Text-to-Image system that I cannot share with everyone, otherwise it is just a tease.

The simplest explanation is that BigGAN generates images and CLIP rates how closely each image matches the input phrase. BigGAN creates an image and CLIP looks at it and says “sorry, that does not look like a cat to me, try again”. With each repeated iteration BigGAN gets better at generating an image that matches the desired prompt phrase.
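For the curious, that feedback loop can be sketched in PyTorch roughly as follows. This is not the actual Big Sleep code, just a simplified illustration using the openai/CLIP and pytorch-pretrained-BigGAN packages; the prompt, step count and learning rate are arbitrary, and CLIP’s image normalisation is omitted for brevity.

# Simplified sketch of the Big Sleep idea: optimise a BigGAN latent so that
# CLIP rates the generated image as a good match for the prompt text.
import torch
import clip
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
gan = BigGAN.from_pretrained("biggan-deep-512").to(device).eval()

text = clip.tokenize(["a photo of a cat"]).to(device)
with torch.no_grad():
    text_features = clip_model.encode_text(text)

# The latent and class vectors are the only parameters being optimised.
latent = torch.randn(1, 128, device=device, requires_grad=True)
classes = torch.zeros(1, 1000, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent, classes], lr=0.05)

for step in range(500):
    image = gan(latent, torch.softmax(classes, dim=-1), truncation=1.0)
    # CLIP expects 224x224 input (proper normalisation omitted here).
    small = torch.nn.functional.interpolate(image, size=224, mode="bilinear")
    image_features = clip_model.encode_image(small)
    # "Sorry, that does not look like a cat" = low cosine similarity.
    loss = -torch.cosine_similarity(image_features, text_features).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()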

Big Sleep Examples

Big Sleep uses a seed number, which means you can have thousands or millions of different outputs for the same input phrase. Note there is an issue with the seed not always reproducing the same images though. From my testing, even setting the torch_deterministic flag to True and setting the CUDA environment variable does not help. Every time Big Sleep is called it will generate a different image for the same seed, which means you will never be able to reproduce the same output in the future.
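For reference, the determinism settings I tried amount to something like the following (on PyTorch 1.8); even with all of these set, Big Sleep still produced different images for the same seed.

# Seeding and determinism settings that still did not make Big Sleep reproducible.
import os
import random
import numpy as np
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # the CUDA environment variable

seed = 12345
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)  # errors out on known non-deterministic ops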

These images are 512×512 pixels (the largest resolution Big Sleep supports) and took 4 minutes each to generate on an RTX 3090 GPU. The same code takes 6 minutes 45 seconds per image on an older 2080 Super GPU.

Also note that these are the “cherry picked” best results. Big Sleep is not going to create awesome art every time. For these examples or when experimenting with new phrases I usually run a batch of multiple images and then manually select the best 4 or 8 to show off (4 or 8 because that satisfies one or two tweets).

To start, these next four images were created from the prompt phrase “Gandalf and the Balrog”

Big Sleep - Gandalf and the Balrog

Big Sleep - Gandalf and the Balrog

Big Sleep - Gandalf and the Balrog

Big Sleep - Gandalf and the Balrog

Here are results from “disturbing flesh”. These are like early David Cronenberg nightmare visuals.

Big Sleep - Disturbing Flesh

Big Sleep - Disturbing Flesh

Big Sleep - Disturbing Flesh

Big Sleep - Disturbing Flesh

A suggestion from @MatthewKafker on Twitter “spatially ambiguous water lillies painting”

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

Big Sleep - Spatially Ambiguous Water Lillies Painting

“stormy seascape”

Big Sleep - Stormy Seascape

Big Sleep - Stormy Seascape

Big Sleep - Stormy Seascape

Big Sleep - Stormy Seascape

After experimenting with acrylic pour painting in the past I wanted to see what Big Sleep could generate from “acrylic pour painting”

Big Sleep - Acrylic Pour Painting

Big Sleep - Acrylic Pour Painting

Big Sleep - Acrylic Pour Painting

Big Sleep - Acrylic Pour Painting

I have always enjoyed David Lynch movies, so let’s see what “david lynch visuals” results in. This one produced a lot of surprises and worked great. These images really capture the feeling of a Lynchian cinematic look. A lot of them came out fairly dark so I have tweaked the exposure in GIMP.

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

More from “david lynch visuals” but these are more portraits. The famous hair comes through.

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

Big Sleep - David Lynch Visuals

“H.R.Giger”

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

Big Sleep - H.R.Giger

“metropolis”

Big Sleep - Metropolis

Big Sleep - Metropolis

Big Sleep - Metropolis

Big Sleep - Metropolis

“surrealism”

Big Sleep - Surrealism

Big Sleep - Surrealism

Big Sleep - Surrealism

Big Sleep - Surrealism

“colorful surrealism”

Big Sleep - Colorful Surrealism

Big Sleep - Colorful Surrealism

Big Sleep - Colorful Surrealism

Big Sleep - Colorful Surrealism

Availability

I have now added a simple GUI front end for Big Sleep into Visions of Chaos, so once you have installed all the pre-requisites you can run these models on any prompt phrase you feed into them. The following image shows Big Sleep in the process of generating an image for the prompt text “cyberpunk aesthetic”.

Text-to-Image GUI

After spending a lot of time experimenting with Big Sleep over the last few days, I highly encourage anyone with a decent GPU to try these. The results are truly fascinating. This page says at least a 2070 8GB or greater is required, but Martin in the comments managed to generate a 128×128 image on a 1060 6GB GPU after 26 (!!) minutes.

Jason.

Adding PyTorch support to Visions of Chaos

TensorFlow 2

Recently, after getting a new 3090 GPU, I needed to update TensorFlow to version 2. Going from TensorFlow version 1 to TensorFlow version 2 involved way too many code breaking changes for me. Looking at other GitHub examples of TensorFlow 2 code (eg an updated Style Transfer script) gave me all sorts of errors. Not just one git repo either; lots of supposedly TensorFlow 2 code would not work for me. If it is a pain for me it is going to be a bigger annoyance for my users. I already get enough emails saying “I followed your TensorFlow instructions exactly, but it doesn’t work”. I am in no way an expert in Python, TensorFlow or PyTorch, so I need something that most of the time “just works”.

I did manage to get the current TensorFlow 1 scripts in Visions of Chaos running under TensorFlow 2, so at least the existing TensorFlow functionality will still work.

PyTorch

After having a look around and watching some YouTube videos I wanted to give PyTorch a go.

The install is one pip command that they build for you on their home page after you select OS, CUDA version, etc. So for my current TensorFlow tutorial (maybe I now need to rename that to “Machine Learning Tutorial”) all I needed to do was add one more line to the pip install section.


pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
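A quick sanity check after the install confirms that PyTorch can actually see the GPU:

# Confirm the pip install worked and the GPU is visible to PyTorch.
import torch

print(torch.__version__)          # should report 1.8.1+cu111
print(torch.version.cuda)         # CUDA version PyTorch was built against
print(torch.cuda.is_available())  # True if the GPU and drivers are detected
print(torch.cuda.get_device_name(0))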

PyTorch Style Transfer

The first Google hit is the PyTorch tutorial here. After spending most of a day banging my head against the wall with TensorFlow 2 errors, that single self-contained Python script using PyTorch “just worked”! The settings do seem harder to tweak to get a good looking output compared to the previous TensorFlow Style Transfer script I used. After making the following examples I may need to look for another PyTorch Style Transfer script.
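The core idea behind the tutorial script can be condensed to the sketch below: optimise the pixels of an image so its VGG19 content features match the content photo while its Gram matrices match the style image. This is a heavily simplified illustration rather than the tutorial code itself; image loading, normalisation, the layer choices and the loss weights are all glossed over or arbitrary.

# Condensed sketch of neural style transfer with PyTorch and VGG19.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(0, 5, 10, 19, 28)):
    # Collect activations from a few convolutional layers.
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(x):
    # Gram matrix of one feature map (batch size 1 assumed).
    b, c, h, w = x.shape
    f = x.view(c, h * w)
    return f @ f.t() / (c * h * w)

# content_img and style_img should be 1x3xHxW tensors normalised for VGG;
# random placeholders are used here.
content_img = torch.rand(1, 3, 512, 512, device=device)
style_img = torch.rand(1, 3, 512, 512, device=device)

content_feats = features(content_img)
style_grams = [gram(f) for f in features(style_img)]

result = content_img.clone().requires_grad_(True)
optimizer = torch.optim.LBFGS([result])

for step in range(50):
    def closure():
        optimizer.zero_grad()
        feats = features(result)
        content_loss = F.mse_loss(feats[2], content_feats[2])
        style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
        loss = content_loss + 1e6 * style_loss  # style weight is arbitrary
        loss.backward()
        return loss
    optimizer.step(closure)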

Here are some example results using Biscuit as the source image.

Biscuit

Biscuit Style Transfer

Biscuit Style Transfer

Biscuit Style Transfer

Biscuit Style Transfer

PyTorch DeepDream

Next up was ProGamerGov’s PyTorch DeepDream implementation. Again, it worked fine. I have used ProGamerGov’s TensorFlow DeepDream code in the past and the PyTorch version worked just as well this time. It comes with a bunch of other models to use too, so more varied DeepDream outputs are now available in Visions of Chaos.
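The basic DeepDream trick, stripped of all of ProGamerGov’s refinements (octaves, tiling, jitter and so on), is just gradient ascent on the input image to boost the activations of a chosen layer. A minimal sketch, with the layer, step size and iteration count picked arbitrarily:

# Bare-bones DeepDream: push the input image in the direction that increases
# the activations of one layer of a pretrained network.
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
net = models.googlenet(pretrained=True).to(device).eval()
for p in net.parameters():
    p.requires_grad_(False)

activation = {}
def hook(module, inp, out):
    activation["target"] = out

net.inception4c.register_forward_hook(hook)  # arbitrary layer choice

img = torch.rand(1, 3, 512, 512, device=device, requires_grad=True)  # placeholder image

for step in range(100):
    net(img)
    loss = activation["target"].norm()  # boost whatever this layer responds to
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()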

Biscuit DeepDream

Biscuit DeepDream

Biscuit DeepDream

Biscuit DeepDream

PyTorch StyleGAN2 ADA

Using NVIDIA’s official PyTorch implementation from here. Also easy to get working. You can quickly generate images from existing models.
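As I recall, generating images from an existing model with the official repo is a one-liner along these lines (check the repo README for the exact arguments; the pkl path and seeds here are just examples):

python generate.py --outdir=out --trunc=0.7 --seeds=0-35 --network=metfaces.pkl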

StyleGAN2 ADA

Metropolitan Museum of Art Faces – NVIDIA – metfaces.pkl

StyleGAN2 ADA

Trypophobia – Sid Black – trypophobia.pkl

StyleGAN2 ADA

Alfred E Neuman – Unknown – AlfredENeuman24_ADA.pkl

StyleGAN2 ADA

Trypophobia – Sid Black – trypophobia.pkl

I include the option to train your own models from a collection of images. Pro tip: if you do not want to have nightmares, do not experiment with training a model on a bunch of naked women photos.
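Training with the official repo follows a similar two step pattern, roughly as below (paths are placeholders and the arguments may differ between repo versions): convert your folder of images into a dataset, then start the training run.

python dataset_tool.py --source=my_images_folder --dest=my_dataset.zip
python train.py --outdir=training-runs --data=my_dataset.zip --gpus=1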

Going Forward

After these early experiments with PyTorch, I am going to use PyTorch from now on wherever possible.

Jason.

TensorFlow 2 and RTX 3090 Performance

A Pleasant Surprise

Just in time for when 3090 GPUs started to become available again in Sydney, I was very generously gifted the funds to finally purchase a new GeForce RTX 3090 for my main development PC. After including a 1000 watt power supply the total cost came to around $4000 AUD ($3000 USD). Such a rip-off.

GeForce RTX™ 3090 GAMING X TRIO 24G

The card itself is really heavy and solid. They include a bracket to support it so it does not sag over time, which is a nice touch. Like all recent hardware parts it lights up in various RGB colors and shades. These RGB rigs are going to look so out of date once this fad goes away. After upgrading my PC parts over the last few years I now have PCs that flash and blink more than my Christmas tree does when it is fully set up and lit.

Who needs a Christmas tree?

Not So Fast

I naively assumed that a quick GPU swap would give me the boost in performance that previous GPU upgrades did (like when I upgraded to the 1080 and then to the 2080 Super). Not this time. I ran a few quick machine learning TensorFlow (version 1) tests from Visions of Chaos and the Python scripts either ran extremely slowly (around 10x to 30x SLOWER) or just crashed. So much for a simple upgrade for more power.

It turns out the Ampere architecture the 3090 GPUs use is only supported by CUDA 11.0 or higher. After updating CUDA, cuDNN, the various Python libraries and the Python scripts, I was back to where I was before the upgrade. If you have been through the tedious process of installing TensorFlow for Visions of Chaos before, you will need to follow my new instructions to get TensorFlow version 2 support. Updating TensorFlow v1 code to TensorFlow v2 code is a pain. From now on I am going to use PyTorch scripts for all my machine learning related needs.
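A quick way to confirm the updated TensorFlow 2 install can actually see the 3090 after the CUDA/cuDNN updates:

# Confirm TensorFlow 2 was built with CUDA support and detects the GPU.
import tensorflow as tf

print(tf.__version__)
print(tf.test.is_built_with_cuda())
print(tf.config.list_physical_devices("GPU"))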

High Temperatures

These newer GPUs can run hot. Under 100% load (when I was doing a Style Transfer movie with repeated frames being calculated one after the other) the 3090 peaks around 80 degrees C (176 F). I do have reasonable cooling in the case, but the air being blown out is noticeably hot. The 2080 running the same test peaks around 75 degrees.

The 2080 and my older 1080 seem to push all the hot exhaust air out the rear vents of the card, but the 3090 has no rear exhaust, so all the hot air goes directly into the case. I can only assume this is because the card cannot push all of that heat “through” itself and out the back, so it has to vent it anywhere it can. This means that when the card is running hot a lot of hot air goes straight into the case. When I touched the side of the case next to the GPU it was very warm.

Apparently 80 degrees and under is perfectly fine and safe for a GPU, but they would say that, wouldn’t they? They would be bragging about low temps if they could manufacture cooler running cards.

After some experimenting with Afterburner I lowered the temperature limit from the GPU default of 83 degrees down to 75 degrees. This resulted in more throttling but only a slight performance hit (Style Transfer took 1 minute 21 seconds rather than 1 minute 14 seconds). The case was noticeably cooler and the average temperature was now down to a much more chilly 65 degrees. Afterburner allows tweaking (overclocking/underclocking) of your GPU, but its most useful feature is the graphing that shows what is really going on. You can monitor temperatures and throttling as you run complex GPU operations.

Extra Cooling

I wanted to see if more case fans would help, so I removed the 3 stock case fans and installed 6 of these fans (2 sucking air in at the front, 3 blowing out at the top, and 1 blowing out at the rear of the case). My silent PC is no longer silent. I set the GPU back to its default profile with a temperature limit of 83 degrees and started another Style Transfer movie to keep the GPU pegged as close to 100% usage as possible for an extended period of time. Watching the temperature graph in Afterburner still shows peaks up to 76 degrees, but much less throttling, with the core clock sitting at 95% to 100% of its maximum possible MHz, which results in better overall performance.

After a week the extra noise annoyed me too much, so I replaced the Gamdias fans with Corsair fans: 6 of these fans and one of these controllers. Setting the fans to the default “quiet” profile gets the noise back down to near silent levels. When I start a machine learning batch run the temperature sensors detect the increased heat in the case and ramp up the fans to compensate. Watching the Afterburner graphs shows they may even be slightly better at cooling than the Gamdias fans. The problem with the auto-adjust speed control is that there is a noticeable ramping up and down of the fan speeds as they compensate for temperature changes all the time (not just when the GPU is at 100%). That was more annoying than fans running at full speed all the time. After some adjustments and tests with the excellent Corsair software I settled on a custom temperature curve that only ramps up when I start full 100% GPU load processing. Once the GPU usage drops back to normal the fans ramp down and are silent again.

Power Usage

Using one of those cheap inline watt meters shows the PC pulls 480 watts when the GPU is at 100% usage. Afterburner reports the card using around 290 watts under full load.

I have basically been using the 3090 24 hours a day for training and testing machine learning setups since I bought it. 3 weeks of that usage made my latest quarterly electricity bill go up from $284 to $313. That works out to roughly $1.40 a day to power the GPU full time. If you can afford the GPU you should be able to afford the cost of powering it.
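The cost per day is just the bill increase spread over those three weeks of near constant use:

# Rough cost-per-day estimate from the quarterly bill increase.
increase = 313 - 284    # extra dollars on the bill
days = 3 * 7            # roughly three weeks of 24/7 usage
print(increase / days)  # about 1.38 dollars a day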

Final Advice

When experimenting with one of these newer GPUs, use all the monitoring you can to make sure the card is not overheating and throttling performance. Afterburner is great for setting up a graph showing GPU temperature, usage, clock speed and power draw. Monitor these levels while the GPU is under 100% load over an extended period of time.
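If you prefer a log you can keep, nvidia-smi can poll the same sensors from the command line; something like the following samples temperature, usage, clock speed and power draw every 5 seconds (query field names can vary slightly between driver versions):

nvidia-smi --query-gpu=temperature.gpu,utilization.gpu,clocks.sm,power.draw --format=csv -l 5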

Temperature controlled fans like the Corsair Commander Pro setup can work as a set and forget cooling solution once you tweak a custom temp curve that suits your usage and hardware.

Final Opinion

Was it worth spending roughly three times the cost of the 2080 on the 3090? No, definitely not. GPUs at these current inflated prices are not worth the money. But if you need or want one you have to pay the price. If prices were not so artificially inflated and cards sold at their initial recommended retail prices it would be a more reasonable purchase (still expensive, but not ridiculously so).

After testing the various GPU related modes in Visions of Chaos, the 3090 is only between 10% and 70% faster than the 2080 Super depending on what GPU calculations are being made, and more often at the slower end of that scale. GLSL shader performance shows a fairly consistent speed boost of between 10% and 15%.

The main reason I wanted the 3090 was for the big jump in VRAM from 8GB to 24GB so I am now able to train and run larger machine learning models without the dreaded out of memory errors. StyleGAN2 ADA models are the first things I have now successfully been able to train.

StyleGAN2 ADA - Alfred E Neuman

Upgrading the 1080 in my older PC to the 2080 Super is a big jump in performance and allows me to run less VRAM intensive sessions. Can you tell I am trying to convince myself this was a good purchase? I just expected more. Cue the “Ha ha, you idiot! Serves you right for not researching first.” comments.

Jason.