How To Create Mind-Melting Videos With Google’s DeepDream Neural Network

Want to make acid trips of your favourite memories? Now you can take your own videos, pop them through Google’s trippy dream machine and see what it comes up with.

Born of an attempt to teach artificial neural networks how to process and understand images, the DeepDream project uses the same technology to generate images and combine them with new images you feed it. It’s not simple superimposing at 50% opacity: the images are merged according to how the artificial brain understands them. By giving it some constraints, we alter what it “sees”, and as it tries to make sense of what you give it by applying its experience of imagery, we get some pretty far out pictures.

People have been putting clips and films like Fear and Loathing in Las Vegas through an artificial neural network created by Google to see what it makes of them. The result is something very worthy of Fear and Loathing indeed.

Want to make one yourself?

First, there’s a list of the software libraries you’ll need to have installed here. For anything more advanced than feeding the system a single image, you’re going to need a bit of programming and video editing experience.
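If you want a quick sanity check that your environment is ready before diving in, a short Python snippet along these lines will do. This assumes the usual Caffe-based DeepDream setup (NumPy, SciPy, PIL and Caffe itself); your exact dependency list may differ from what the linked page specifies.

# Minimal environment check for a Caffe-based DeepDream setup (assumed dependencies).
import importlib

for module in ("numpy", "scipy", "PIL", "caffe"):
    try:
        importlib.import_module(module)
        print("OK: {}".format(module))
    except ImportError:
        print("MISSING: {} -- install it before running the dream scripts".format(module))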

As the GitHub page for the Fear and Loathing video recommends, you can extract 25 frames per second from your video, feed them one by one into the dream tool, and then reconstruct the processed frames into a video afterwards.

Extract 25 frames a second from the source movie:

./1_movie2frames [ffmpeg|avconv] [movie.mp4] [directory]

Let a pretrained deep neural network dream on those frames, one by one, taking each new frame and adding 0-50% of the old frame into it for continuity of the hallucinated artifacts (there’s a small sketch of this blending step just after these commands), then go drink your caffe:

python 2_dreaming_time.py -i frames -o processed

Or, using ipython:

ipython 2_dreaming_time.py -i frames -o processed

Or, if all else fails (so many different environments you’re all using!):

./2_dreaming_time.py -i frames -o processed

Once enough frames are processed (the script will cut the audio to the needed length automatically), or once all frames are done, put the frames and audio back together:

./3_frames2movie.sh [frames_directory] [original_video_with_sound]
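If you’re curious what that “adding 0-50% of the old frame” step means in practice, here’s a minimal Python sketch of the idea using Pillow. It’s an illustration only, not the repo’s actual 2_dreaming_time.py code, and the deep_dream() function here is a placeholder for whatever dreaming routine you’re running; the directory names are assumptions too.

# Rough sketch of carrying the previous dreamed frame over into the next input,
# so the hallucinated artifacts persist smoothly from frame to frame.
import os
from PIL import Image

BLEND = 0.3            # 0.0-0.5: how much of the previous dreamed frame to mix in
FRAMES_DIR = "frames"      # raw frames extracted from the movie (assumed)
OUT_DIR = "processed"      # where dreamed frames end up (assumed)

def deep_dream(img):
    """Placeholder for the actual DeepDream call on a single PIL image."""
    return img  # swap in the real dreaming function here

previous = None
for name in sorted(os.listdir(FRAMES_DIR)):
    frame = Image.open(os.path.join(FRAMES_DIR, name)).convert("RGB")
    if previous is not None:
        # Mix a fraction of the last hallucinated frame into the new input.
        frame = Image.blend(frame, previous.resize(frame.size), BLEND)
    dreamed = deep_dream(frame)
    dreamed.save(os.path.join(OUT_DIR, name))
    previous = dreamed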

There’s also an alternative set of instructions for doing this in a Vagrant dev environment here.

People have even started creating their own image datasets, so the dream machine draws on those images instead of the dogs and chalices that crop up so often in the current version.

As the Reddit post points out, some of these tools are proving quite popular as this trend catches on, and the pages are experiencing heavy traffic, so there might be some delays. You can find out more about Google’s DeepDream neural network here.

