Part 2 of my DeepDream series of posts. See here for Part 1.
Darshan’s code allows all 59 of the DeepDream layers to be visualized, and in my tests the resulting images are much richer in color and more “trippy”. It also does not require the auto-brightness tweak needed by the code in Part 1.
Here are the results of processing a gray-scale noise image with each of the neural network layers. The render settings are 6 octaves, 200 iterations per octave, a step size of 1.5, and a rescale factor of 2.0.
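To show how those settings fit together, here is a minimal, dependency-free sketch of a multi-octave gradient-ascent loop. It is not Darshan’s actual code: `grad_fn` is a hypothetical stand-in for the gradient of a chosen layer’s activation with respect to the image, and the nearest-neighbour resizing is my simplification of the usual image rescaling.

```python
import numpy as np

def render_deepdream(grad_fn, img, octaves=6, iterations=200,
                     step=1.5, rescale=2.0):
    """Multi-octave gradient ascent, matching the settings in the post.

    grad_fn(img) -> gradient of the target layer's activation w.r.t. img
    (a hypothetical placeholder for the real network gradient).
    """
    # Build an octave pyramid by repeatedly downscaling by `rescale`.
    pyramid = [img.astype(np.float64)]
    for _ in range(octaves - 1):
        h, w = pyramid[-1].shape[:2]
        nh, nw = max(1, int(h / rescale)), max(1, int(w / rescale))
        ys = np.arange(nh) * h // nh      # nearest-neighbour row indices
        xs = np.arange(nw) * w // nw      # nearest-neighbour column indices
        pyramid.append(pyramid[-1][ys][:, xs])

    # Process octaves from smallest to largest, carrying the "dreamed"
    # detail up the pyramid.
    detail = np.zeros_like(pyramid[-1])
    for octave_img in reversed(pyramid):
        h, w = octave_img.shape[:2]
        ys = np.arange(h) * detail.shape[0] // h
        xs = np.arange(w) * detail.shape[1] // w
        detail = detail[ys][:, xs]        # upscale detail to this octave
        x = octave_img + detail
        for _ in range(iterations):
            g = grad_fn(x)
            # Normalized ascent step, as in typical DeepDream loops.
            x += step * g / (np.abs(g).mean() + 1e-8)
        detail = x - octave_img
    return x

# A gray-scale noise input like the one described above (values are an
# assumption; any mid-gray noise works).
noise = np.random.uniform(100, 150, size=(224, 224))
```

In the real code the gradient comes from backpropagating through the chosen GoogLeNet layer; everything else about the octave loop is the standard structure these settings plug into.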
Here are sample results for each of the 59 layers, using the following photo of Miss Marple. They run from the first layer, conv2d0_pre_relu, down to the last, mixed5b_pool_reduce_pre_relu.
The following movie cycles through the layers of the DeepDream network, moving to a deeper layer every 10 seconds. Rendering the frames for this movie took almost two full weeks on an Nvidia GTX 1080 GPU.
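For a sense of scale, the numbers above pin down the movie length, and with an assumed frame rate (the post does not state one) a rough per-frame render cost:

```python
# Back-of-the-envelope timing for the layer-cycling movie.
layers = 59               # one segment per DeepDream layer
seconds_per_layer = 10    # stated in the post
fps = 30                  # assumption -- the actual frame rate isn't given

movie_seconds = layers * seconds_per_layer   # 590 s, just under 10 minutes
total_frames = movie_seconds * fps           # 17,700 frames at the assumed fps
render_seconds = 14 * 24 * 3600              # "almost two full weeks"
per_frame = render_seconds / total_frames    # rough seconds of GPU time per frame
print(movie_seconds, total_frames, round(per_frame, 1))
```

Roughly a minute of GPU time per frame under these assumptions, which is plausible for 200 iterations across 6 octaves.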