The Courage of Our Conniptions

Musings on religion, politics and other unmentionables.

Neuroscientists Learn to Read Minds

Scientists recreate the movies playing in our heads.

Like many a child, I spent a lot of time dreaming of a world where imagined inner realities could somehow manifest in the outer realm. Once I wished so hard for the gift of flight that I began flapping my wings in public (long past the age when this would have been cute). I quickly decided to stop being so observably strange, but the dream endured. If the popularity of movies like Inception is any indication, I am not alone.

Dreamers and sci-fi fans alike can take heart—the future is now. Scientists at the Gallant Lab of UC Berkeley published a paper in Current Biology last month presenting the first successful approach for reconstructing natural movies from brain activity. Studies using fMRI technology have reproduced static images in the past, but fMRI measures changes in blood flow and oxygenation, not neural activity itself. When neurons are busy firing, they require oxygen-rich blood to fuel their activity. Luckily for scientists, this blood has slightly different magnetic properties, correlates with the activity of neuron populations and can be measured with fMRI. Unfortunately, changes in blood flow are very slow compared to the incredibly complex and rapid activity of firing neurons. (The smallest measurable unit, the voxel, is something like a pixel for brain imaging, and a single voxel includes about one million neurons!) While measuring blood flow gives us a wealth of information, it's just not fast enough to accurately reflect what is happening in the neuron populations of our visual system. Neuroscientists are always on the lookout for better ways to infer the underlying neural activity from these sluggish hemodynamic changes.
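
For the curious, that sluggishness is easy to see in simulation. Here is a minimal Python sketch (my own toy, using a textbook double-gamma hemodynamic response function rather than anything from the study) of how a split-second burst of neural activity gets smeared into a BOLD signal that peaks seconds later:

```python
import numpy as np
from scipy.stats import gamma

# Time axis: 30 seconds sampled at 10 Hz.
t = np.arange(0, 30, 0.1)

# Canonical double-gamma hemodynamic response function (HRF):
# a peak around 5 s followed by a shallow undershoot.
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.max()

# A brief "neural event": a single burst of activity at t = 1 s.
neural = np.zeros_like(t)
neural[10] = 1.0

# The BOLD signal is (roughly) the neural activity convolved with the HRF.
bold = np.convolve(neural, hrf)[: len(t)]

print("neural event at t = 1.0 s")
print(f"BOLD peak at t = {t[bold.argmax()]:.1f} s")  # ~5 s after the event
```

A one-millisecond spike produces a blood-flow response that doesn't peak for about five seconds, which is exactly the gap the Gallant Lab's model had to bridge.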

The brain is composed of approximately one hundred billion individual neurons firing electrical impulses that travel at anywhere from 0.5 to 100 meters per second and last about 1 millisecond each. That's roughly 2,000,000 times slower than a single cycle of a fast computer (which helps explain Watson's success on Jeopardy). But what we lack in speed we make up for in complexity. We're awash in interconnected neural relationships that are constantly refining themselves to produce more accurate and adaptive responses to external stimuli (i.e., learning).
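
The arithmetic behind that comparison is back-of-the-envelope stuff. Here it is as a sketch, assuming a roughly 2 GHz processor clock (my stand-in for "a fast computer," not a figure from the article):

```python
# A single neural impulse (action potential) lasts about 1 millisecond.
spike_duration_s = 1e-3

# One clock cycle on a fast (~2 GHz) processor takes half a nanosecond.
cycle_duration_s = 1 / 2e9

# How many times slower is the neuron's basic operation?
ratio = spike_duration_s / cycle_duration_s
print(f"a spike is about {ratio:,.0f}x slower than one CPU cycle")
# -> a spike is about 2,000,000x slower than one CPU cycle
```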

Our visual processing passes through a hierarchy of increasingly sophisticated stages, discarding irrelevant information along the way. The first stage registers luminance, which differentiates an object from its surroundings: a black circle on a white page, say. Subsequent stages take contours, textures and so on into account, with the most advanced levels (not dealt with in this study) handling object recognition. If contemplating this makes you feel dizzy, imagine trying to come up with a mathematical algorithm so complex that it could reasonably predict the activity of the neuron populations that create our realities!
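
To make that first stage concrete, here is a toy sketch (my own illustration, not the study's model) of a center-surround luminance filter, a difference of Gaussians, picking out that very black circle on a white page:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy image: a black circle (0.0) on a white page (1.0).
size = 64
y, x = np.mgrid[:size, :size]
image = np.where((x - 32) ** 2 + (y - 32) ** 2 < 10 ** 2, 0.0, 1.0)

# Center-surround (difference-of-Gaussians) filtering: a crude stand-in
# for the luminance-contrast computation at the earliest visual stages.
contrast = gaussian_filter(image, sigma=1) - gaussian_filter(image, sigma=4)

# The response is near zero over uniform regions and large only at the
# circle's edge, where luminance actually changes.
print(f"response at circle center: {abs(contrast[32, 32]):.3f}")  # ~0
print(f"response at circle edge  : {abs(contrast[32, 22]):.3f}")  # large
```

The filter throws away everything that doesn't change, which is the "discarding irrelevant information" at work.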

Past studies have used algorithms to successfully recreate the comparatively simple static, black-and-white images processed in the early part of the visual system, but recreating dynamic vision (natural movies) has thus far been the province of sci-fi and medical fantasy. The team of researchers at the Gallant Lab came up with two major innovations to overcome obstacles that many had previously believed insurmountable.

First, they came up with a motion-energy encoding model designed to work with the fMRI, refining the limited information in the sluggish blood-oxygen-level-dependent (BOLD) signals by modeling how the underlying neural populations contribute separately to the measured hemodynamic activity. Motion perception has proven very difficult to represent from a computational perspective. If you're versed in statistics and would like to read more about their algorithm, see the original paper.
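
Their actual model is far more elaborate, but the core motion-energy idea goes back to classic work by Adelson and Bergen, and a heavily simplified version fits in a few lines. In the sketch below (one spatial dimension plus time, invented parameters, emphatically not the Gallant Lab's filter bank), a quadrature pair of space-time filters tuned to one direction is applied to a movie, and their squared, summed outputs give a phase-invariant "energy" that is large only for motion in the preferred direction:

```python
import numpy as np
from scipy.signal import convolve2d

def motion_energy(movie, sf=0.1, tf=0.1, size=15):
    """Toy motion-energy detector (1D space x time), loosely after
    Adelson & Bergen. Positive tf prefers rightward motion."""
    offsets = np.arange(size) - size // 2
    T, S = np.meshgrid(offsets, offsets, indexing="ij")
    envelope = np.exp(-(S**2 + T**2) / (2 * (size / 4) ** 2))
    phase = 2 * np.pi * (sf * S - tf * T)  # drifting-grating carrier
    even = envelope * np.cos(phase)  # quadrature pair of
    odd = envelope * np.sin(phase)   # spatiotemporal filters

    r_even = convolve2d(movie, even, mode="valid")
    r_odd = convolve2d(movie, odd, mode="valid")
    # Squaring and summing the pair makes the response phase-invariant.
    return np.sum(r_even**2 + r_odd**2)

# A bright bar drifting rightward, one pixel per frame.
frames, width = 64, 64
movie = np.zeros((frames, width))
for f in range(frames):
    movie[f, f : f + 3] = 1.0

print("rightward-tuned energy:", motion_energy(movie, tf=0.1))
print("leftward-tuned energy :", motion_energy(movie, tf=-0.1))  # far smaller
```

A bank of thousands of such filters, covering different positions, speeds and directions, is roughly what gets fitted to each voxel's responses.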

But the motion-energy encoding model was only half the battle. The researchers also used a Bayesian decoding model (a standard probability framework used in computational neuroscience, among other fields) to fill in the blanks in the slow and relatively sparse information gathered from the BOLD signals. Next, it was YouTube to the rescue. To make the Bayesian model work, they built a database of no fewer than 18,000,000 seconds of YouTube video. These clips provided a baseline ‘prior’ for the computer. After the fMRI brain activity was collected from the three participants (the subjects were actually fellow researchers, as the process currently involves spending hours in an fMRI scanner), the computer program picked the one hundred clips that best matched the recorded activity and used those closest matches to regenerate what it predicted the subject had seen, with eerie accuracy. With more clips to draw from, such models could generate even more realistic mental scenes. More recently, scientists affiliated with the lab announced that they'd been able to decode auditory signals as well (i.e., eavesdrop on the brain).
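
Stripped of its statistical machinery, the decoding step can be caricatured in a few lines. In this sketch every number is invented and a simple Gaussian noise model stands in for the paper's full likelihood; the point is just the shape of the computation, namely score every clip in the library against the measurement and blend the best matches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend clip library: 10,000 clips, each summarized by the BOLD
# response an encoding model predicts across 50 voxels. (The real
# prior drew on some 18 million seconds of YouTube video.)
n_clips, n_voxels = 10_000, 50
predicted = rng.normal(size=(n_clips, n_voxels))

# "Measured" BOLD activity: clip #1234 plus noise.
measured = predicted[1234] + 0.3 * rng.normal(size=n_voxels)

# With a flat prior over clips and Gaussian noise, each clip's log
# posterior is (up to a constant) the negative squared error between
# its predicted response and the measurement.
log_posterior = -np.sum((predicted - measured) ** 2, axis=1)

# Keep the 100 most probable clips and average them into a reconstruction.
top100 = np.argsort(log_posterior)[-100:]
print("true clip among the top 100:", 1234 in top100)  # True
reconstruction = predicted[top100].mean(axis=0)
```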

The researchers were careful to note that we are still decades away (if not more) from using brain-decoding technology to reconstruct what a witness saw at a crime scene or to watch our dreams, but the breakthrough has significant ramifications for future work in multiple fields. It could potentially help everyone from coma patients to paralysis and stroke victims. For the everyday artist, dreamer and hobbyist, the possibilities are tantalizing: an iPhone app to generate little indie Inceptionesque films, a five-minute ‘creativity scan’ for MFA programs to assess candidate potential, Pepsi-sponsored ‘send us your dreams’ contests. I'll stop before I start flapping my wings (or get depressed at the thought of poets and writers being replaced by computer programs). But it's interesting to consider a world where the gap between impulse and execution is flattened into the space of an algorithm to be replicated at will, and to contemplate what that might mean for the future of fields as diverse as medicine and the arts.

Sarah Estes Graham is a freelance writer based in Los Angeles.
