Update - 13/07/2015
Images in this blog post are licensed by Google Inc. under a Creative Commons Attribution 4.0 International License. However, images based on the Places dataset by the MIT Computer Science and AI Laboratory require additional permission from MIT for use.
Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little about why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.
We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
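To make the layer-to-layer flow concrete, here is a minimal sketch of such a forward pass in plain NumPy. The two-layer depth, the layer sizes, and the random weights are all illustrative stand-ins; a trained network would stack 10-30 layers of learned parameters.

```python
import numpy as np

def relu(x):
    # A common artificial-neuron activation: pass positives, zero the rest.
    return np.maximum(x, 0)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 784)) * 0.01   # input layer -> hidden layer
W2 = rng.normal(size=(10, 64)) * 0.01    # hidden layer -> "output" layer

image = rng.random(784)          # a flattened 28x28 input image
hidden = relu(W1 @ image)        # each layer feeds the next
scores = W2 @ hidden             # the final output layer
answer = int(np.argmax(scores))  # the network's "answer": the top class
```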
One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher- and higher-level features of the image, until the final layer essentially makes a decision about what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret those basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations: these neurons activate in response to very complex things such as entire buildings or trees.
One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
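As a rough illustration, here is one way this "turn the network upside down" idea can be sketched in a modern framework: gradient ascent on the input pixels toward a target class, with a total-variation penalty as a crude stand-in for the natural-image prior. This assumes torchvision's pretrained GoogLeNet rather than the exact network discussed here; the class index (954, ImageNet's "banana"), step size, and penalty weight are illustrative, and input preprocessing is omitted.

```python
import torch
import torchvision.models as models

# Pretrained classifier (assumption: torchvision's GoogLeNet).
model = models.googlenet(weights="IMAGENET1K_V1").eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
target = 954  # ImageNet class index for "banana" (illustrative)

for step in range(200):
    score = model(img)[0, target]
    # Total-variation penalty: a crude stand-in for the "neighboring
    # pixels should be correlated" natural-image prior.
    tv = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean() \
       + (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    loss = -score + 10.0 * tv   # ascend the class score, stay smooth
    loss.backward()
    with torch.no_grad():
        img -= 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)        # keep pixels in a valid range
        img.grad.zero_()
```

Without the `tv` term, the same loop tends to produce high-frequency noise that the classifier loves but that looks nothing like a banana; the prior is what makes the result readable.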


Indeed, in some cases, this reveals that the neural net isn’t quite looking for the thing we thought it was. For example, here’s what one neural net we designed thought dumbbells looked like:

Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
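A minimal sketch of this layer-amplification step, again assuming torchvision's pretrained GoogLeNet: a forward hook captures the activations of a chosen layer (`inception4c` here, an arbitrary mid-level pick), and gradient ascent on the image maximizes their L2 norm, boosting whatever that layer detected.

```python
import torch
import torchvision.models as models

model = models.googlenet(weights="IMAGENET1K_V1").eval()

# Capture the activations of the layer we choose to enhance.
feats = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: feats.update(feat=out))

# Stand-in input; in practice this would be a preprocessed photo.
img = torch.rand(1, 3, 224, 224, requires_grad=True)

for step in range(20):
    model(img)
    # Amplify whatever the layer detected: gradient ascent on the
    # image to maximize the L2 norm of the captured activations.
    feats["feat"].norm().backward()
    with torch.no_grad():
        img += 0.02 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Hooking an earlier module (say, `conv2`) in the same loop would yield the stroke- and ornament-like patterns described above; deeper modules yield object-like structure.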
[Image] Left: Original photo by Zachi Evenor. Right: Processed by Günther Noack, Software Engineer.
[Image] Left: Original painting by Georges Seurat. Right: Processed images by Matthew McNaughton, Software Engineer.


[Image] The original image influences what kind of objects form in the processed image.
We must go deeper: Iterations
If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result is purely the product of the neural network, as seen in the following images:
[Image] Neural net “dreams”, generated purely from random noise, using a network trained on the Places dataset by MIT Computer Science and AI Laboratory. See our Inceptionism gallery for hi-res versions of the images above and more (images marked “Places205-GoogLeNet” were made using this network).
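The iterate-and-zoom loop itself is simple. In the sketch below, `dream` is a hypothetical stand-in for one round of the layer amplification sketched earlier (here an identity so the loop runs end to end), and the 5% zoom factor is illustrative.

```python
import torch
import torch.nn.functional as F

def dream(img):
    # Hypothetical stand-in for one round of layer amplification
    # (see the earlier sketch); identity here so the loop runs.
    return img

img = torch.rand(1, 3, 224, 224)   # start from pure random noise

for i in range(100):
    img = dream(img)
    # Zoom in ~5%: upscale, then crop back to the original size so the
    # network keeps inventing new detail as the scene "flows" inward.
    zoomed = F.interpolate(img, scale_factor=1.05,
                           mode="bilinear", align_corners=False)
    h, w = zoomed.shape[-2:]
    top, left = (h - 224) // 2, (w - 224) // 2
    img = zoomed[:, :, top:top + 224, left:left + 224]
```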