DeepDream – Part 3

This is the third part in a series of posts. See Part 1 and Part 2.

ProGamerGov Code

The script from Part 2 supports rendering 59 layers of the Inception model. Each of those 59 DeepDream layers has multiple channels, which allow many more unique patterns and outputs.

I found this out thanks to ProGamerGov’s script here.
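
To give an idea of what per-channel dreaming means in code, here is a hypothetical sketch that targets a single channel of a layer, assuming the same model and recursive_optimize interface as the Part 1 script further down this page. ProGamerGov's actual script may do this differently.

# Hypothetical sketch only: optimize a single channel of a layer rather
# than the whole layer. Assumes the same "model" and "recursive_optimize"
# interface as the Part 1 script below; ProGamerGov's script may differ.
layer_tensor = model.layer_tensors[layernumber]         # shape (batch, y, x, channels)
channel_tensor = layer_tensor[:, :, :, channelnumber]   # activations of one channel only

img_result = recursive_optimize(layer_tensor=channel_tensor, image=img_result,
                 num_iterations=iterations, step_size=stepsize,
                 rescale_factor=rescalefactor, num_repeats=passes,
                 blend=blendamount)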

There are 7,548 channels in total. A huge number of patterns to explore and create movies from. If I followed the same principle as in Part 2 and created a movie changing the channel every 10 seconds, that would result in a movie almost 21 hours long. If each frame took around 25 seconds to render, it would take 1310 DAYS to render all the frames. Not even I am that patient.
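
To show where those numbers come from, here is the back-of-the-envelope arithmetic as a few lines of Python (a frame rate of 60 fps is my assumption; it is the rate that makes the 1310 day figure work out):

channels = 7548
movie_seconds = channels * 10      # 10 seconds per channel
print(movie_seconds / 3600.0)      # ~20.97, i.e. almost 21 hours of footage

frames = movie_seconds * 60        # assuming the movie runs at 60 fps
render_seconds = frames * 25       # ~25 seconds to render each frame
print(render_seconds / 86400.0)    # ~1310 days of rendering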

Channel Previews

The following links are previews of each layer and the channels available within them. The layer, channel and other settings are included so you can reproduce them in Visions of Chaos if required.

As the layers get deeper the images get more complex. If you notice a layer name shown twice, it is because that layer had too many channels to render into a single valid image file, so it had to be split into two separate images.
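
As an aside, the sketch below gives a rough idea of how channel previews could be tiled into sheets and split once a sheet gets too tall. This is purely illustrative; the tile size, column count and height limit are made-up values, not what Visions of Chaos actually does.

from PIL import Image

# Rough sketch only: tile channel preview images into contact sheets,
# starting a new sheet once the current one would exceed a maximum
# height. TILE, COLS and MAX_HEIGHT are illustrative assumptions.
TILE, COLS, MAX_HEIGHT = 256, 8, 16384

def build_sheets(tiles):
    rows_per_sheet = MAX_HEIGHT // TILE
    per_sheet = rows_per_sheet * COLS
    sheets = []
    for start in range(0, len(tiles), per_sheet):
        chunk = tiles[start:start + per_sheet]
        rows = (len(chunk) + COLS - 1) // COLS
        sheet = Image.new("RGB", (COLS * TILE, rows * TILE))
        for i, tile in enumerate(chunk):
            sheet.paste(tile.resize((TILE, TILE)),
                        ((i % COLS) * TILE, (i // COLS) * TILE))
        sheets.append(sheet)
    return sheets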

conv2d0_pre_relu
conv2d1_pre_relu
conv2d2_pre_relu

head0_bottleneck_pre_relu
head1_bottleneck_pre_relu

mixed3a_1x1_pre_relu
mixed3a_3x3_bottleneck_pre_relu
mixed3a_3x3_pre_relu
mixed3a_5x5_bottleneck_pre_relu
mixed3a_5x5_pre_relu
mixed3a_pool_reduce_pre_relu

mixed3b_1x1_pre_relu
mixed3b_3x3_bottleneck_pre_relu
mixed3b_3x3_pre_relu
mixed3b_5x5_bottleneck_pre_relu
mixed3b_5x5_pre_relu
mixed3b_pool_reduce_pre_relu

mixed4a_1x1_pre_relu
mixed4a_3x3_bottleneck_pre_relu
mixed4a_3x3_pre_relu
mixed4a_5x5_bottleneck_pre_relu
mixed4a_5x5_pre_relu
mixed4a_pool_reduce_pre_relu

mixed4b_1x1_pre_relu
mixed4b_3x3_bottleneck_pre_relu
mixed4b_3x3_pre_relu
mixed4b_5x5_bottleneck_pre_relu
mixed4b_5x5_pre_relu
mixed4b_pool_reduce_pre_relu

mixed4c_1x1_pre_relu
mixed4c_3x3_bottleneck_pre_relu
mixed4c_3x3_pre_relu
mixed4c_5x5_bottleneck_pre_relu
mixed4c_5x5_pre_relu
mixed4c_pool_reduce_pre_relu

mixed4d_1x1_pre_relu
mixed4d_3x3_bottleneck_pre_relu
mixed4d_3x3_pre_relu
mixed4d_3x3_pre_relu
mixed4d_5x5_bottleneck_pre_relu
mixed4d_5x5_pre_relu
mixed4d_pool_reduce_pre_relu

mixed4e_1x1_pre_relu
mixed4e_3x3_bottleneck_pre_relu
mixed4e_3x3_pre_relu
mixed4e_3x3_pre_relu
mixed4e_5x5_bottleneck_pre_relu
mixed4e_5x5_pre_relu
mixed4e_pool_reduce_pre_relu

mixed5a_1x1_pre_relu
mixed5a_3x3_bottleneck_pre_relu
mixed5a_3x3_pre_relu
mixed5a_3x3_pre_relu
mixed5a_5x5_bottleneck_pre_relu
mixed5a_5x5_pre_relu
mixed5a_pool_reduce_pre_relu

mixed5b_1x1_pre_relu
mixed5b_1x1_pre_relu
mixed5b_3x3_bottleneck_pre_relu
mixed5b_3x3_pre_relu
mixed5b_3x3_pre_relu
mixed5b_5x5_bottleneck_pre_relu
mixed5b_5x5_pre_relu
mixed5b_pool_reduce_pre_relu

Individual Sample Images

I was going to render each layer/channel combination as a single 4K image to really show the possible results, but after seeing it would take 15 minutes to generate each image I was looking at nearly 79 days to render all the example images. HD 1920×1080 resolution will have to do for now (at least until Nvidia releases the next generation of hopefully much faster consumer GPUs).

Even using two PCs (one with a 1080 GPU and one with a 2080 Super GPU), these images still took nearly 3 weeks to generate. Each image took 6 minutes on the 1080 and 5 minutes on the 2080 Super. Since I started working with neural networks and GPU computations (especially these week-long, all-day sessions), I can see they have a noticeable impact on my power bill. These GPUs are not electricity friendly.

See this gallery for all 7,548 individual channel images. The gallery link starts at page 4 to skip the plainer images from the initial layers.

Movie Samples

The following movies use a series of channels that follow a basic theme.

Eye imagery.

Architecture imagery.

Furry imagery.

Trypophobia imagery.

Availability

DeepDream Dialog

As long as you set up the TensorFlow prerequisites, you can run DeepDream processing from within Visions of Chaos.

Tutorial

The following tutorial goes into much more detail on using the DeepDream functionality within Visions of Chaos.

Jason.

DeepDream – Part 2

Part 2 of my DeepDream series of posts. See here for Part 1.

Darshan Bagul

This time the DeepDream code I experimented with is thanks to Darshan Bagul’s HalluciNetwork code here.

Darshan’s code allows all 59 of the DeepDream layers to be visualized, and from my tests the resulting images are much richer in color and more “trippy”. It also does not require the auto-brightness tweak of the previous code in Part 1.

Layers Visualized

Here are the results of processing a grayscale noise image with 10 of the neural network layers. The settings for rendering are 6 octaves, 200 iterations per octave, a step size of 1.5 and a rescale factor of 2.0.

See this gallery for all of the example layer images.

Image Processing

These examples use the following photo of Miss Marple.

A selection of 10 processed images.

See this gallery for all of the example image processed images.

Movie Results

The following movie cycles through the layers of the DeepDream network, moving to a deeper layer every 10 seconds. The frames for this movie took almost 2 full weeks to render using an Nvidia 1080 GPU.

Availability

DeepDream Dialog

As long as you set up the TensorFlow prerequisites, you can run DeepDream processing from within Visions of Chaos.

Tutorial

The following tutorial goes into much more detail on using the DeepDream functionality within Visions of Chaos.

Jason.

DeepDream – Part 1

Another venture into the world of neural networks. This time experimenting with DeepDream. The original blog post describing DeepDream by Alex Mordvintsev is here, and if you want to have a look at the original code, see this GitHub repository.

I have split these DeepDream posts into parts based on the source Python code I was experimenting with at the time.

My basic explanation of DeepDream is that a convolutional neural network enhances what it thinks it sees within an image. So if the network detects that a part of an image looks like a bear’s head then that area will be enhanced and tweaked towards looking more like a bear’s head. Different layers of the neural network detect different patterns and give you different visual results.
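
To make that concrete, the heart of DeepDream is gradient ascent on the image itself. Below is a minimal sketch of the idea written against modern TensorFlow (not the exact code used in these posts), where dream_model is assumed to map an input image to the activations of the layer you want to enhance.

import tensorflow as tf

# Minimal sketch of the DeepDream idea in modern TensorFlow, not the exact
# code used in these posts. dream_model is assumed to map an input image
# to the activations of the layer being visualized.
def dream_step(dream_model, image, step_size):
    with tf.GradientTape() as tape:
        tape.watch(image)
        activations = dream_model(image)      # how strongly the layer "sees" its patterns
        loss = tf.reduce_mean(activations)    # reward stronger activations
    gradient = tape.gradient(loss, image)
    gradient /= tf.math.reduce_std(gradient) + 1e-8   # normalize the step
    return image + gradient * step_size       # ascend: enhance what the layer detects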

Part 1

For this first part, the most awesome sentdex got me going with deep dreaming after I watched the following two YouTube videos of his:

Magnus Pedersen’s DeepDream Code

The sentdex videos use this code from Magnus Erik Hvass Pedersen.

Problem

One problem with the code is that the resulting images tend to be slightly darker than the source. This is not a big issue for single images, but when creating movies it results in the movie getting progressively darker over time.

A fix from the sentdex video is to add an amount to the RGB image values after DeepDream processing, i.e.


img_result[:, :, 0] += brightness  # R
img_result[:, :, 1] += brightness  # G
img_result[:, :, 2] += brightness  # B

The problem with adding a fixed value is that you need to manually adjust it to suit each movie or source image, and the frames still eventually tend towards black or white.

Fix

The first fix I tried was to scale the image array values to fit between 0 and 255 prior to saving the image.


#scale the passed array values to between x_min and x_max
def scale(X, x_min, x_max):
    value_range = x_max - x_min
    # X.ptp() is the peak-to-peak range of the array, i.e. X.max() - X.min()
    res = (X - X.min()) / X.ptp() * value_range + x_min
    return res

That works, but the overall brightness can change noticeably between frames, resulting in an annoying strobing effect as the brightness is auto-adjusted every frame.

A tweak to fix the strobing is the following scaling.


#scale the passed array values to between x_min and x_max
def scale(X, x_min, x_max):
    value_range = x_max - x_min
    res = (X - X.min()) / X.ptp() * value_range + x_min
    # go one fifth of the way towards the desired scaled result
    res = X + (res - X) / 5
    return res

Rather than jumping straight to the scaled brightness value, the above code nudges the brightness one fifth of the way towards the target brightness each frame, effectively an exponential moving average of the brightness across frames. This helps avoid the strobing brightness when creating DeepDream movie frames.

Final Code

Here is my final hacked-together front-end script that passes all the settings to Magnus’ Python script.


#front end to https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/14_DeepDream.ipynb
from deepdreamer import model, load_image, save_image, recursive_optimize
import sys
import numpy as np

#scale the passed array values to between x_min and x_max
def scale(X, x_min, x_max):
    value_range = x_max - x_min
    res = (X - X.min()) / X.ptp() * value_range + x_min
    # go one fifth of the way towards the desired scaled result to help decrease frame brightness "strobing"
    res = X + (res - X) / 5
    return res

#arguments passed in
sourceimage = str(sys.argv[1])
layernumber = int(sys.argv[2])
iterations = int(sys.argv[3])
stepsize = float(sys.argv[4])
rescalefactor = float(sys.argv[5])
passes = int(sys.argv[6])
blendamount = float(sys.argv[7])
autoscale = int(sys.argv[8])
outputimage = str(sys.argv[9])

layer_tensor = model.layer_tensors[layernumber]
file_name = sourceimage
img_result = load_image(filename='{}'.format(file_name))

img_result = recursive_optimize(layer_tensor=layer_tensor, image=img_result,
                 num_iterations=iterations, step_size=stepsize, rescale_factor=rescalefactor,
                 num_repeats=passes, blend=blendamount)

#auto adjust brightness
if autoscale:
    img_result = scale(img_result, 0, 255)

save_image(img_result, outputimage)

print("DeepDream processing complete")
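
The script expects nine positional arguments in the order they are parsed above. A hypothetical invocation (the script name, file names and setting values are only examples) looks like:

python deepdream_frontend.py input.jpg 7 10 3.0 0.7 4 0.2 1 output.jpg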

Layer Images

Here are the results of running each of the DeepDream layers on an image of random grayscale noise.

Image Processing Results

DeepDream can be used to process single images. The DeepDream result is not just a texture overlay across the whole image; the processing detects and follows contours and shapes within the image being processed.

These examples use the following photo of Miss Marple.

Here are the results of processing that photo using each of the available layers in the DeepDream model.

Movie Results

If you repeatedly use the output of DeepDream as the input of another DeepDream pass, you can make movies. Stretch each frame slightly and you get a nice zooming-while-morphing result. For this movie I changed the DeepDream layer every 300 frames (10 seconds at 30 fps), so the movie starts simple and gets more complex as deeper layers of the neural network are used for the visualizations. Rendering all the frames took around two and a half days using an Nvidia RTX 2080 Super.
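
For anyone wanting to reproduce that feedback loop, here is a hypothetical sketch built on the same deepdreamer helpers as the front-end script above. The zoom factor, layer index and DeepDream settings are illustrative only, not the ones used for the movie.

import numpy as np
from PIL import Image
from deepdreamer import model, load_image, save_image, recursive_optimize

def zoom(img, factor=1.01):
    # crop the centre of the frame slightly, then resize back up to the
    # original size so each successive frame appears to zoom in
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y, x = (h - ch) // 2, (w - cw) // 2
    crop = np.uint8(np.clip(img[y:y + ch, x:x + cw], 0, 255))
    return np.asarray(Image.fromarray(crop).resize((w, h)), dtype=np.float32)

img = load_image(filename='seed.jpg')  # hypothetical starting frame
for frame in range(300):               # 300 frames = 10 seconds at 30 fps
    img = recursive_optimize(layer_tensor=model.layer_tensors[3], image=img,
                     num_iterations=10, step_size=3.0, rescale_factor=0.7,
                     num_repeats=4, blend=0.2)
    save_image(img, 'frame_%05d.jpg' % frame)
    img = zoom(img)                    # feed the zoomed result into the next frame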

Availability

DeepDream Dialog

As long as you set up the TensorFlow prerequisites, you can run DeepDream processing from within Visions of Chaos.

Tutorial

The following tutorial goes into much more detail on using the DeepDream functionality within Visions of Chaos.

Jason.