Part 2 of my DeepDream series of posts. See here for Part 1.
This time the DeepDream code I experimented with comes from Darshan Bagul's HalluciNetwork repository here.
Darshan's code allows all of the DeepDream layers to be visualized (59 in total), and from my tests the resulting images are much richer in color and more "trippy". It also does not require the auto-brightness tweak needed by the code in Part 1.
Layers Visualized
Here are the results of processing a grayscale noise image with 10 of the neural network layers. Settings for rendering are 6 octaves, 200 iterations per octave, a 1.5 step size and a 2.0 rescale factor.
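To make the octave settings above a bit more concrete: DeepDream processes the image at a series of scales, starting small and re-running gradient ascent at each larger size. Here is a minimal sketch of how the 6 octaves and 2.0 rescale factor translate into per-octave image sizes. The base resolution (1920×1080) and the function name are mine for illustration, not part of Darshan's actual code:

```python
# Sketch of how DeepDream octave sizes fall out of the render settings.
# Base resolution of 1920x1080 is an assumption for illustration only.
OCTAVES = 6    # number of scales processed
RESCALE = 2.0  # rescale factor between octaves

def octave_sizes(width, height, octaves, rescale):
    """Return (w, h) for each octave, smallest first.

    The smallest octave is rendered first; the result is then
    upscaled and refined again at each successively larger size.
    """
    sizes = []
    for i in reversed(range(octaves)):
        factor = rescale ** i
        sizes.append((int(width / factor), int(height / factor)))
    return sizes

for w, h in octave_sizes(1920, 1080, OCTAVES, RESCALE):
    print(f"{w} x {h}")
```

With these settings the smallest octave is only 60×33 pixels, which is why the large-scale "dream" structures emerge first and fine detail is added at the later, full-size octaves.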
See this gallery for all of the example layer images.
Image Processing
These examples use the following photo of Miss Marple.
A selection of 10 processed images.
See this gallery for all of the example processed images.
Movie Results
The following movie cycles through the layers of the DeepDream network, moving to a deeper layer every 10 seconds. The frames for this movie took almost 2 full weeks to render using an Nvidia 1080 GPU.
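To get a feel for why the render took so long, the numbers in the post can be plugged in directly. The 59 layers and 10 seconds per layer come from above; the 30 fps frame rate is my assumption:

```python
# Back-of-the-envelope render-time estimate for the layer-cycling movie.
# 59 layers and 10 s per layer are from the post; 30 fps is assumed.
LAYERS = 59
SECONDS_PER_LAYER = 10
FPS = 30  # assumed frame rate

movie_seconds = LAYERS * SECONDS_PER_LAYER  # ~590 s of footage
total_frames = movie_seconds * FPS
render_seconds = 14 * 24 * 3600             # "almost 2 full weeks"
per_frame = render_seconds / total_frames

print(f"{total_frames} frames, ~{per_frame:.0f} s per frame on the GPU")
```

Under those assumptions that works out to roughly 17,700 frames at around a minute of GPU time each, which is consistent with a multi-week render.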
Availability
As long as you set up the TensorFlow prerequisites, you can run DeepDream processing from within Visions of Chaos.
Tutorial
The following tutorial goes into much more detail on using the DeepDream functionality within Visions of Chaos.
Jason.