I made a music video this week! It's been a while since my last one. I used Google's neural-network-driven image-classification software, DeepDream, to generate much of the artwork for the piece.
To talk tools for a moment: I recorded the music in Apple's GarageBand (playing bass and guitar, arranging keyboard parts, and singing) and filmed the raw footage using a Mobius ActionCam as well as an iPhone 6 (in particular for the slow-motion parts). I used Final Cut Pro to assemble the footage and export individual frames, which were then fed to the DeepDream software.
I wrote custom versions of the software so that I could finely control how the algorithms were applied. Rather than simply running the footage through the algorithms, in some cases I used shape masking to build up the final images from components, including some parts of the original footage, for example the sky.
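The masking step can be sketched roughly like this. This is an illustrative example, not the actual code from the project: the function name and the tiny synthetic arrays are mine, and a real pipeline would load exported frames and hand-drawn masks from disk instead.

```python
import numpy as np

def composite_with_mask(original, dreamed, mask):
    """Blend a deep-dreamed frame with the original footage.

    mask values lie in [0, 1]: 1.0 keeps the dreamed pixel,
    0.0 keeps the original (e.g. a sky region left untouched).
    """
    mask = mask[..., np.newaxis]  # broadcast the 2-D mask over RGB channels
    return (mask * dreamed + (1.0 - mask) * original).astype(np.uint8)

# Tiny synthetic example: a 4x4 RGB frame where the top half
# (standing in for the sky) keeps the original pixels and the
# bottom half takes the dreamed ones.
original = np.full((4, 4, 3), 50, dtype=np.uint8)
dreamed = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4))
mask[2:, :] = 1.0  # apply the dream only to the lower region

frame = composite_with_mask(original, dreamed, mask)
print(frame[0, 0, 0], frame[3, 0, 0])  # 50 200
```

In practice the mask for each shot would be a grayscale image traced around the shapes you want to protect, so the sky (or any other component) passes through from the original footage while the rest gets the dreamed texture.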