Tuesday, February 14, 2017

Deep Dreams



Neural networks have been black boxes for a long time, and this has limited their use in enterprise applications. Enterprises will not trust a neural network until an application can highlight which activation sequence within the network a prediction or output was derived from. Then, in 2015, the Google Brain team published an article visualizing what happens within a neural network, naming their findings "Inceptionism".




A quick glance at deep dream images may convince someone that they are psychedelic, trippy art. In fact, these images represent how a neural network perceives its inputs, particularly inputs in the form of images.



When a neural network is trained with back-propagation, it learns to detect different features at different layers. Layers at the beginning of the network pay attention to low-level features such as edges, whereas layers at the end identify larger features such as whole objects. The idea behind deep dreaming is to maximize the output of a layer near the end of the network: instead of adjusting the network's weights, we adjust the input image itself so that whatever that layer detects gets amplified. Let's say we want to train a network to detect cats. Instead of feeding it images with a single cat, we could train the model on images with multiple cats (like the image below). The network then starts to guess those features in whatever image we feed in, and there we go! A minimal code sketch of this idea follows the image.


An image with multiple target features can be used for training.
 
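To make that concrete, here is a minimal sketch of the gradient-ascent loop in TensorFlow/Keras. This is not the code from my GitHub repository; the pretrained InceptionV3 network, the "mixed5" layer, the step size, and the number of iterations are all illustrative assumptions.

import tensorflow as tf

# Load a pretrained classifier; InceptionV3 is an assumption here, not
# necessarily the network used for the images in this post.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")

# Pick a layer near the end of the network; deeper layers amplify larger,
# object-like patterns. "mixed5" is an illustrative choice.
model = tf.keras.Model(inputs=base.input, outputs=base.get_layer("mixed5").output)

def dream_step(image, step_size=0.01):
    """One gradient-ascent step on the image (the network weights stay fixed)."""
    with tf.GradientTape() as tape:
        tape.watch(image)
        activations = model(image)
        # Objective: make the chosen layer fire as strongly as possible.
        loss = tf.reduce_mean(activations)
    gradients = tape.gradient(loss, image)
    gradients /= tf.math.reduce_std(gradients) + 1e-8  # normalise the step
    return image + step_size * gradients

# Start from any image scaled to [-1, 1], the range InceptionV3 expects;
# random noise is used here just to keep the sketch self-contained.
image = tf.random.uniform((1, 299, 299, 3), minval=-1.0, maxval=1.0)
for _ in range(100):
    image = dream_step(image)

Running the loop longer, or choosing an earlier layer, changes how abstract the "dreamed" patterns look: early layers produce edge- and texture-like swirls, later layers produce object-like shapes.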


A simple explanation of deep dreaming can be found in the YouTube video below.



The code that I used for this is available on GitHub. I intend to take a deeper dive into convolutional neural networks in another post.
 
