The recent proliferation of commercial online “brain-training” services that promise to enhance intelligence and other cognitive abilities is understandable: Who wouldn’t want to be smarter and have greater working memory and inhibitory control? Seeing the potential for low-cost and reliable measurement of performance, some corporations have begun using similar tools to assess potential hires and evaluate employees (“people analytics”). No doubt there is some amount of benefit to be gained on both fronts. After all, people have an amazing capacity to develop expertise with practice in a huge range of skills (think video games, driving, or crosswords), and it is an open secret that qualitative interviews, the dominant tool currently used for evaluating new hires, are subject to bias and don’t predict job performance in the first place.
Despite this potential, independent studies on brain-training services provide (at best) equivocal support for their effectiveness. This is true for a number of their claims, but particularly the implicit understanding that performance gains earned on the training tasks will generalize to untrained tasks (so-called “transfer effects”). It’s one thing to get better at a particular task, but a more rigorous standard is whether users improve on other ones. Does practicing Tetris make you better at Pac-Man? The best work debunking studies claiming to produce transfer effects has been done by Randall Engle, Zach Shipstead, and their colleagues at Georgia Tech, who find that practice indeed improves skills at the trained tasks, but doesn’t transfer to untrained tasks when adequate control groups are used. They also raise concerns about whether gains endure beyond the periods typically covered in the research (usually 3 or 6 months).
From a broad perspective, a major impediment to understanding what’s really going on here is the lack of a model of how training is supposed to work. This is a perfect example of a time when, as Kurt Lewin wrote, “there’s nothing so practical as a good theory”. We would have some idea of where to start looking for the problems with a given training program if we had an idea of the mechanisms targeted by that program. With an eye toward that gap, my students Lauren Kahn and Junaid Merchant and I recently published a study using neuroimaging to understand what happens in the brain during training. We were particularly interested in how training influenced one kind of cognitive ability related to self-control—inhibitory control—and why that training might not transfer to new contexts. Intriguingly, we found that training caused activity in parts of the brain system associated with inhibitory control to shift earlier in time, coming online before control was actually needed. This “proactive shift” improved performance on the training task itself because proactive control is more efficient than reactive control, but there was a catch: with training, the brain activity became linked to specific cues that predicted when inhibitory control might be needed. This result explains how brain training improves performance on a given task and also why the performance boost doesn’t generalize beyond that task. A compelling next step in this research is to develop an intervention that features cues from real environments where inhibitory control is desirable, with the goal of facilitating the transfer of improvements gained during training to those contexts.
Training causes a shift from reactive to proactive control in the rIFG
Participants in our study were randomly assigned to practice either an inhibitory control task (the “stop-signal task”) or a control task that didn’t involve inhibitory control, every other day for three weeks. Performance improved more in the training group than in the control group. We also measured participants’ neural activity during the stop-signal task before and after training, using functional magnetic resonance imaging (fMRI). In the training group, more than in the control group, activity in the right inferior frontal gyrus and the anterior cingulate cortex, regions that monitor for and trigger inhibitory control, decreased during inhibitory control itself but increased immediately beforehand.
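For readers curious about the mechanics of the stop-signal task, here is a minimal Python sketch of how such a task is typically structured: occasional “stop” trials, a stop-signal delay (SSD) that staircases to keep stopping difficult, and an estimate of stop-signal reaction time (SSRT). All parameters and the race-model simulation of a participant are illustrative assumptions, not our actual experimental code.

```python
import random

random.seed(1)

GO_RT_MEAN, GO_RT_SD = 450, 80   # simulated "go" reaction times (ms)
SSRT_TRUE = 220                  # simulated stop-signal reaction time (ms)

def run_stop_signal_block(n_trials=400, p_stop=0.25, ssd_start=200, step=50):
    """Simulate one block of a stop-signal task.

    On stop trials, the SSD increases after a successful stop (harder)
    and decreases after a failed stop (easier), targeting ~50% success.
    """
    ssd = ssd_start
    go_rts, stop_outcomes, ssds = [], [], []
    for _ in range(n_trials):
        go_rt = max(100, random.gauss(GO_RT_MEAN, GO_RT_SD))
        if random.random() < p_stop:            # stop trial
            # Race model: stopping succeeds if the stop process
            # finishes before the go response would be emitted.
            stopped = ssd + SSRT_TRUE < go_rt
            stop_outcomes.append(stopped)
            ssds.append(ssd)
            ssd = ssd + step if stopped else max(0, ssd - step)
        else:                                   # go trial
            go_rts.append(go_rt)
    return go_rts, stop_outcomes, ssds

go_rts, stop_outcomes, ssds = run_stop_signal_block()
mean_go_rt = sum(go_rts) / len(go_rts)
mean_ssd = sum(ssds) / len(ssds)
ssrt_est = mean_go_rt - mean_ssd   # the common "mean method" SSRT estimate
print(f"stop success rate: {sum(stop_outcomes)/len(stop_outcomes):.2f}")
print(f"estimated SSRT: {ssrt_est:.0f} ms")
```

The staircase is what makes the task a sensitive measure: because difficulty adapts until participants fail about half the time, improvements show up as longer tolerable delays (and shorter estimated SSRTs) rather than ceiling-level accuracy.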
Our data provide good initial evidence that inhibitory control training causes a proactive shift in those regions. However, our study focused exclusively on inhibitory control, and therefore does not necessarily extend to other kinds of executive function. This is a subtle but important point relevant to commercial brain-training services, which typically claim to improve several cognitive abilities at once. There is little empirical reason to believe that a training regimen that improves one skill (e.g., inhibitory control) would also improve others (e.g., working memory or intelligence), that the same brain regions are involved, or even that the same model of change applies. Without knowing a great deal more about them, we can’t know whether a given training program—delivered with the same timing, format, and duration—could ever produce general and lasting improvements in the range of cognitive abilities that it claims.
Follow me on Twitter @Psychologician
My Social & Affective Neuroscience Lab at the University of Oregon