
How to derive a brain

Posted on: April 18th, 2014 by Peter Tingley

On April 14, Dr. Mark Albert from our own CS department gave a great talk in the undergraduate colloquium series. Here is one audience member’s view of what happened!

The goal of this talk was to understand why the brain works the way it does. Well, that is obviously too big a question, and Dr. Albert actually focused on some fairly low-level data processing, mainly in the primary visual cortex. He discussed two questions:

1) Experiments on animals reveal that there are “simple cells” in the brain which (in the primary visual cortex) fire in response to particular patterns in the visual field. They typically respond to light in a pattern that looks like a “Gabor wavelet”:

 

[Figure: a Gabor wavelet]

Here the peaks are areas where more light stimulates the neuron to fire, and the valleys are places where light stops it from firing. Why is this an efficient setup?
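For concreteness, here is a minimal sketch (Python with NumPy; not from the talk) of the standard formula for such a pattern: a cosine grating multiplied by a Gaussian window. The function name and parameter values below are my own illustrative choices.

```python
import numpy as np

def gabor(size=64, wavelength=10.0, theta=0.0, sigma=6.0, phase=0.0):
    """A 2D Gabor wavelet: a cosine grating windowed by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # Rotate the coordinates so the grating is oriented at angle theta.
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_theta / wavelength + phase)
    return envelope * carrier

filt = gabor()
# Positive entries mark regions where light excites the cell ("peaks"),
# negative entries mark regions where light suppresses it ("valleys").
print(filt.shape, float(filt.min()), float(filt.max()))
```

Varying the wavelength, orientation, and width of the window gives a whole family of localized, oriented filters.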

To answer this, Dr. Albert applied standard algorithms to “natural” inputs (i.e. pictures of trees, hills, rocks, flowers…) to figure out how to efficiently code (that is, store) them. Here “efficient” was taken to mean the code should try to reduce the number of bits used and, more importantly, minimize the number of 1s (as opposed to 0s); this condition makes sense biologically because a 1 corresponds to a firing neuron, and firing a neuron uses significant resources, so the brain tries not to fire too many. He also talked about an independence hypothesis: having a 1 in one location should be uncorrelated with having a 1 in another. Using standard techniques and these requirements, you end up storing the data using things very similar to Gabor wavelets. That is, you recover more or less what the brain actually does. Awesome!
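The talk referred to “standard algorithms” without showing code; one technique that fits this description is sparse dictionary learning, which penalizes the number and size of nonzero coefficients. Here is a hedged sketch using scikit-learn; the patch size, number of components, and the random array standing in for a natural image are all placeholder assumptions, not details from the talk.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Placeholder "natural image"; in practice you would load photos of
# trees, hills, rocks, flowers, etc.
rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))

# Cut the image into small patches and remove each patch's mean brightness.
patches = extract_patches_2d(image, (12, 12), max_patches=5000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)

# Learn a dictionary with a sparsity (L1) penalty on the coefficients:
# each patch should be rebuilt from as few active components as possible,
# mirroring the "fire as few neurons as possible" constraint.
learner = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
learner.fit(X)

# With real natural-image patches, the learned components typically come out
# as localized, oriented, Gabor-like filters.
filters = learner.components_.reshape(-1, 12, 12)
print(filters.shape)
```

The alpha parameter controls the sparsity penalty, i.e. how strongly the fit discourages “extra 1s.”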

2) Before a newborn animal’s eyes open, one can observe complex patterns of optic nerve firings called “retinal waves,” which seem to be important to the development of normal vision. How can such signals (which have nothing to do with the environment) be helpful, and how does the brain/eye figure out which signals to make?

There is evidence that simple cells “program” themselves in response to the input they get early in life. This needed “practice” would ideally come from observing the actual environment. But there is a problem: many animals need to be able to see from birth to survive, so they must program their simple cells before practice is possible. One theory is that retinal waves serve as simulated practice. But this is very strange: to be effective these waves would have to be like natural images, and the brain doesn’t know what natural images are yet!

Dr. Albert suggests a solution using a process called percolation: if a system of adjacent cells is set up so that, when one fires, nearby ones fire with some fixed probability, then a single random firing will (sometimes) cause a cascade. If things are set up carefully, one sees complex patterns form that look like fractals, which means they look a bit like “natural” objects, at least in the sense that they have similar statistical properties. The idea is that the brain programs itself using these random fractals, and ends up in pretty good shape for seeing the real world. Dr. Albert tested this by running compression algorithms on random patterns generated this way, and found “Gabor wavelets” similar to what he got using natural inputs. Furthermore, there is physical evidence for this type of behavior in the retina. So it seems to work. Also awesome!
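To make the percolation picture concrete, here is a small sketch (mine, not Dr. Albert’s code) of the cascade described above: one cell fires at random, and every firing cell recruits each of its four neighbours with some fixed probability p. The grid size and p = 0.5 are just illustrative choices; near the critical probability the cascades become large and irregular, which is roughly the regime where the fractal-like patterns appear.

```python
import numpy as np

def cascade(size=100, p=0.5, rng=None):
    """One percolation-style cascade: a random cell fires, and every firing
    cell makes each of its four neighbours fire with probability p
    (each cell fires at most once). Returns the grid of cells that fired."""
    rng = rng or np.random.default_rng()
    fired = np.zeros((size, size), dtype=bool)
    start = (int(rng.integers(size)), int(rng.integers(size)))
    fired[start] = True
    frontier = [start]
    while frontier:
        i, j = frontier.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size and not fired[ni, nj]:
                if rng.random() < p:
                    fired[ni, nj] = True
                    frontier.append((ni, nj))
    return fired

wave = cascade()
print(int(wave.sum()), "cells fired in this wave")
```

Running the coding experiment from part 1 on many patterns like this is what produced the Gabor-like filters described above.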
