# DeepDream – Part 1

Another venture into the world of neural networks, this time experimenting with DeepDream. The original blog post describing DeepDream by Alex Mordvintsev is here, and if you want to have a look at the original code, see this GitHub repository.

I have split these DeepDream posts into parts based on the source python code I was experimenting with at the time.

My basic explanation of DeepDream is that a convolutional neural network enhances what it thinks it sees within an image. So if the network detects that a part of an image looks like a bear’s head then that area will be enhanced and tweaked towards looking more like a bear’s head. Different layers of the neural network detect different patterns and give you different visual results.
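The "enhance what it thinks it sees" behaviour comes from gradient ascent: the image is repeatedly nudged in the direction that increases a chosen layer's activation. The following is a minimal toy sketch of that idea, not the real network — the "layer" here is a fake activation that simply responds to bright pixels, so its gradient can be written by hand.

```python
import numpy as np

# Toy illustration of the DeepDream idea: treat a differentiable "layer
# activation" of the image as a score, then nudge the image along the
# gradient of that score so whatever the layer responds to is amplified.
# This fake "layer" just responds to overall pixel intensity.
def activation(img):
    return np.sum(img ** 2)

def gradient(img):
    return 2 * img  # analytic gradient of the toy activation

def dream_step(img, step_size=0.01):
    g = gradient(img)
    g /= (np.abs(g).max() + 1e-8)  # normalise the gradient, as DeepDream does
    return img + step_size * g     # gradient *ascent*: enhance the pattern

img = np.random.rand(8, 8)
before = activation(img)
for _ in range(10):
    img = dream_step(img)
after = activation(img)
# The pattern the "layer" detects has been strengthened: after > before.
```

In the real thing, the activation is a convolutional layer of the Inception network and the gradient comes from backpropagation, but the enhancement loop is the same shape.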

## Part 1

For this first part, the most awesome sentdex got me going with deep dreaming via the following two of his YouTube videos:

## Magnus Pedersen's DeepDream Code

The sentdex videos use this code from Magnus Erik Hvass Pedersen.

## Problem

One problem with the code is that the resulting images tend to be slightly darker than the source. This is not a big issue for single images, but when creating movies it results in the movie getting progressively darker over time.

A fix from the sentdex video is to add a fixed amount to the RGB image values after the DeepDream processing, i.e.

```python
img_result[:, :, 0] += brightness  # R
img_result[:, :, 1] += brightness  # G
img_result[:, :, 2] += brightness  # B
```

The problem with adding a fixed value is that you need to manually adjust it to suit each movie or source image and the frames still eventually tend towards black or white.
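This drift can be simulated with a rough back-of-the-envelope model. The numbers below are illustrative assumptions, not measurements: suppose each pass darkens the frame by about 3%, and a constant brightness is added back afterwards. The mean settles at `brightness / loss_rate`, so unless the constant happens to match the source image exactly, the frames drift toward black or clip toward white.

```python
import numpy as np

# Rough simulation of the drift: each DeepDream pass darkens the frame
# slightly (modelled here as a 3% loss of mean brightness), then a fixed
# amount is added back. The mean converges to brightness / loss, clipped
# to the valid 0-255 pixel range.
def simulate(mean, brightness, frames=500, loss=0.03):
    for _ in range(frames):
        mean = mean * (1 - loss) + brightness
        mean = min(max(mean, 0.0), 255.0)  # clip to valid pixel values
    return mean

too_small = simulate(128.0, 1.0)   # settles near 33: drifts toward black
too_large = simulate(128.0, 10.0)  # settles at 333, clipped: saturates white
```

Either way the constant has to be hand-tuned per movie, which is exactly the problem described above.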

## Fix

The first fix I tried was to scale the image array values to fit between 0 and 255 prior to saving the image.

```python
# scale the passed array values to between x_min and x_max
def scale(X, x_min, x_max):
    range = x_max - x_min
    res = (X - X.min()) / X.ptp() * range + x_min
    return res
```

That works, but the overall brightness can change noticeably between frames resulting in an annoying strobing effect as it auto-adjusts the brightness each frame.

A tweak to fix the strobing is the following scaling.

```python
# scale the passed array values to between x_min and x_max
def scale(X, x_min, x_max):
    range = x_max - x_min
    #res = np.sqrt((X - X.min())) / np.sqrt(X.ptp()) * range + x_min
    res = (X - X.min()) / X.ptp() * range + x_min
    # go one fifth of the way towards the desired scaled result
    res = X + (res - X) / 5
    return res
```

Rather than jumping straight to the scaled brightness values, the above code nudges the brightness one fifth of the distance toward the target. This helps avoid the strobing brightness when creating DeepDream movie frames.
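The smoothing behaviour is easy to see on a synthetic frame. The demo below repeats the `scale()` definition so it runs on its own; the "frame" is just a low-contrast ramp of values standing in for a dull movie frame.

```python
import numpy as np

# Demonstration of the one-fifth nudge: a frame whose values occupy a
# narrow band is pulled toward the full 0-255 range a fifth at a time,
# so consecutive frames brighten gradually instead of snapping straight
# to the fully rescaled values.
def scale(X, x_min, x_max):
    res = (X - X.min()) / X.ptp() * (x_max - x_min) + x_min
    return X + (res - X) / 5  # move 1/5th of the way to the target

frame = np.linspace(100, 140, 64)  # dull, low-contrast "frame"
spans = []
for _ in range(10):
    spans.append(frame.ptp())  # record the value range each "frame"
    frame = scale(frame, 0, 255)
# spans grows smoothly toward 255 instead of jumping there in one step.
```

Each application widens the value range by `0.8 * previous + 0.2 * 255`, an exponential approach to full contrast — which is why the strobing disappears while the brightness still corrects itself over a handful of frames.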

## Final Code

Here is my final hacked-together front-end script that passes all the settings to Magnus' Python script.

```python
# front end to https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/14_DeepDream.ipynb
from deepdreamer import model, load_image, save_image, recursive_optimize
import sys
import numpy as np

# scale the passed array values to between x_min and x_max
def scale(X, x_min, x_max):
    range = x_max - x_min
    res = (X - X.min()) / X.ptp() * range + x_min
    # go part-way towards the desired scaled result to help decrease the
    # frames' brightness "strobing"
    # res = (X + res) / 2
    res = X + (res - X) / 5
    return res

# arguments passed in
sourceimage = str(sys.argv[1])
layernumber = int(sys.argv[2])
iterations = int(sys.argv[3])
stepsize = float(sys.argv[4])
rescalefactor = float(sys.argv[5])
passes = int(sys.argv[6])
blendamount = float(sys.argv[7])
autoscale = int(sys.argv[8])
outputimage = str(sys.argv[9])

layer_tensor = model.layer_tensors[layernumber]
file_name = sourceimage
img_result = load_image(filename=file_name)

img_result = recursive_optimize(layer_tensor=layer_tensor, image=img_result,
    num_iterations=iterations, step_size=stepsize, rescale_factor=rescalefactor,
    num_repeats=passes, blend=blendamount)

if autoscale:
    img_result = scale(img_result, 0, 255)

save_image(img_result, outputimage)

print("DeepDream processing complete")
```
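For reference, a hypothetical invocation of the script (the script name and image filenames here are made up for illustration); the positional arguments match the order the script reads them in:

```shell
# source image, layer number, iterations, step size, rescale factor,
# passes, blend amount, autoscale flag, output image
python dream_frontend.py input.jpg 7 10 3.0 0.7 4 0.2 1 output.jpg
```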

## Layer Images

Here are the results of running each of the DeepDream layers on an image of random grayscale noise.

## Image Processing Results

DeepDream can be used to process single images. The DeepDream result is not just a texture overlay for the whole image. The processing will detect and follow contours and shapes within the image being processed.

These examples use the following photo of Miss Marple. Here are the results of processing that photo using each of the available layers in the DeepDream model.

## Movie Results

If you repeatedly use the output of DeepDream as the input of another DeepDream pass you can make movies. Stretch each frame slightly and you get a nice zooming-while-morphing result. For this movie I changed the DeepDream layer every 300 frames (10 seconds), so the movie starts simple and gets more complex as deeper layers of the neural network are used for the visualizations. Rendering time was around two and a half days to generate all the frames using an NVIDIA RTX 2080 Super.
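The feedback loop above can be sketched in a few lines. This is an assumption-laden illustration, not the production code: `deep_dream()` is a placeholder for the real processing call, and the zoom is a crude nearest-neighbour crop-and-resample rather than proper resampling.

```python
import numpy as np

# Sketch of the movie loop: each frame is the DeepDream output of the
# previous frame, zoomed in slightly so the movie appears to fly forward
# while morphing.
def zoom(img, factor=1.02):
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)      # central crop size
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = (np.arange(h) * ch / h).astype(int)     # nearest-neighbour
    cols = (np.arange(w) * cw / w).astype(int)     # resample indices
    return crop[rows][:, cols]                     # back to original size

def deep_dream(img):
    return img  # stand-in: the real call would be recursive_optimize(...)

frame = np.random.rand(64, 64)  # placeholder starting frame
frames = []
for _ in range(5):
    frame = zoom(deep_dream(frame))
    frames.append(frame)
# Every frame keeps the original resolution while slowly zooming in.
```

The per-frame brightness nudge from the `scale()` fix would slot in right after the `deep_dream()` call, before the frame is saved and fed back in.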

## Availability

As long as you set up the TensorFlow prerequisites, you can run DeepDream processing from within Visions of Chaos.

## Tutorial

The following tutorial goes into much more detail on using the DeepDream functionality within Visions of Chaos.

Jason.