I read a fascinating fMRI study published in the 20 March 2008 issue of Nature. The study, titled "Identifying natural images from human brain activity," was performed by a group at UC Berkeley. While in the scanner, subjects were shown a series of natural scenes, like the one shown here.
Seems fairly simple. Using fMRI, the investigators acquired a template response from each subject's visual cortex and then used this template to "decode" the brain's response to novel images.
Without going into the details of how they did this, in one subject the technique correctly identified 110 out of 120 images (92%) that the individual was viewing.
Think about this. Simply by scanning the visual cortex of an individual with fMRI and using a fairly straightforward computer algorithm, the investigators were able to determine with a high degree of accuracy what the person was looking at. This is about as close to mind-reading as it gets.
Of course, there are a few caveats. The way the experiment was set up meant that the algorithm simply had to take the brain activity and pick, from a known set of images, the one that matched best. This is not quite the same as taking brain activity and reconstructing, de novo, what the person was seeing. But it is a first step. Surely with a large enough library of images this could be done.