By Bruce Grierson, published on March 9, 2015 - last reviewed on March 6, 2017
Simon Lovell was 31 and a professional con man who had spun the gambling tricks he’d learned from his grandfather into a lucrative if bloody-minded business fleecing strangers. Without hesitation or remorse, he left his marks broken in hotels all over the world.
Nothing suggested that this day in 1988 would be any different. Lovell, in Europe, had spotted his victim in a bar, plied him with drinks, and drawn him into a “cross”—a classic con game in which the victim is made to believe he’s part of a foolproof get-rich scheme. The con went perfectly. “I took him for an extremely large amount of money,” Lovell said later.
Lovell hustled the drunken man out of the hotel room and left him in the hallway for security to deal with. But then something unexpected happened. The mark went to pieces. “I’d never seen a man break down that badly, ever,” Lovell recalled. “He was just sliding down the wall, weeping and wailing.”
What followed was a moment Lovell would look back on as the hinge point of his life. “It was as if a light suddenly went on. I thought: This. Is. Really. Bad. For the first time, I actually felt sorry for someone.”
Lovell’s next move was hard even for him to believe. He gave the man back his money. Then he went back inside the hotel room, sat down, poured a drink, and declared himself done with this dodge. “There was an absolute epiphany that I just couldn’t do it anymore.” The next day he felt different. Lighter. “I had become,” he said, “a real human being again.” He never ran another con.
In the decades that followed, Lovell turned his gift for smooth patter and sleight-of-hand into a successful one-man show that ran off-Broadway for eight years. After he suffered a stroke, good wishes and cash donations for his care poured in from friends and fellow magicians. In his professional world and well beyond it, Lovell became respected, even beloved. His rehabilitation was complete.
But a central mystery remained. That moment in the hotel was Lovell’s wake-up call. But what is a wake-up call? What could possibly explain an event so unexpected, forceful, and transformative that it cleaves a life in two: before and after?
Most of the time, ideas develop from the steady percolation and evaluation of thoughts and feelings. But every so often, if you’re lucky, a blockbuster notion breaks through in a flash of insight that’s as unexpected as it is blazingly clear. So-called “aha moments” can be deeply personal and even existential, prompting the realization that you should quit your job, divorce your spouse, move to another city, mend a broken relationship, abandon an addictive behavior, or, like Lovell, redirect your moral compass. They can also be creative, generating the brilliant idea for a tech startup, the theme of a musical composition, the plot point of a novel, or the answer to an engineering quandary. In all cases, you apprehend something that you were blind to before.
The early-20th-century psychologist William James described such personal moments of clarity, in The Varieties of Religious Experience, as a snap-resolution of the “divided self.” It’s as if a whole lifetime’s worth of growth is compressed into a single instant as dense as a collapsed star.
That’s how it felt to Leroy Schulz. Driving home from a wedding in Canada late one night, Schulz glimpsed a ghostly form on the highway median surging toward his headlights. He didn’t have time to brake. He barely had time to turn his face away from the flying glass as the moose’s head hit the windshield. “Had I been a half-second slower, the whole mass of it would have come into the car,” Schulz said recently. “I have no doubt I’d have been decapitated.” Several motorists who witnessed the crash approached the wreck in shock. “I can’t believe you’re alive,” one gasped.
There was no life-changing epiphany at that precise moment, or even in the immediate aftermath; what gripped Schulz was more like numb shock. But his near-fatal experience seeded something, and what followed a few weeks later, as he was going about his daily routine, “was one of those panoramic moments when you get your bearings and decide whether you’re on the right path or not,” he says.
Schulz thought: What advice would the 90-year-old me give to the me of right now? He was a technology consultant who had long dabbled in photography. “I said to myself that if I don’t take the path of being a full-time photographer, I will regret it.”
So he went for it. His background interest elbowed its way to the front, and he became a successful portrait and commercial photographer. Although he can’t prove it, Schulz believes that hitting the moose actually changed his biochemistry, unlocking something in his brain that prompted his shift in perspective. “I’ve often wondered: If I hadn’t hit the moose, would I be a full-time photographer right now?” he reflects. “I don’t think so. I think I would have continued on the path I was on, and still there would be a part of me that would wonder, What if? It was that raw reality of facing what could have been imminent death that pushed me over the edge.”
For his co-authored book, Quantum Change, William Miller, an emeritus professor of psychology and psychiatry at the University of New Mexico, interviewed 55 people who had experienced sudden realizations and life transformations. He found that by no means were all of the triggers, or even most of them, as dramatic as Schulz’s encounter with the moose or Lovell’s confrontation with his emotionally shattered victim. Many were downright banal. Among the things people were doing during or immediately preceding their moments of quantum change were walking to a nightclub, cleaning a toilet, watching TV, lying in bed, and preparing to shower.
There was a striking similarity, however, in how the moments felt, with many subjects reporting that it seemed more like a message revealed to them from outside than something their own minds had ginned up. It felt foreign, mystical even. Which may explain why so many historical accounts of this nature have been interpreted as communications from the Divine.
These days, no scientist considers the supernatural a plausible explanation for aha moments. And in the past 10 years or so, studies of the cognitive neuroscience of insight have begun to give us clues as to what they really are.
In 2004, Mark Beeman, a cognitive neuroscientist and leading investigator of insight and creative cognition, first gave a group of experimental subjects the “remote associates test” in his lab at Northwestern University. A kind of brain-teaser designed to produce associative leaps of thought, the test asks subjects to provide the missing link among three seemingly unrelated words—say, pine, sauce, and tree. (People sometimes literally exclaim “aha!” when the word apple, which yields pineapple, applesauce, and apple tree, pops to mind.)
The subjects were wired to electroencephalograms, and the electrical activity revealed a little of the brain’s inner workings. “A second and a half or two seconds before the conscious insight, we see this burst of activity over the back of the brain,” Beeman says. The brain, he thinks, “is blocking visual input, which helps allow weaker information to compete for attention.” When the solution arrives in the conscious brain—aha!—the subjects’ neocortex lights up like a Christmas tree. The conscious brain takes credit, one could say, for the heavy lifting done behind the scenes.
The results seemed to confirm what Colin Martindale, a psychologist at the University of Maine, found two decades earlier when he studied the neural correlates of creativity more generally. When he asked subjects to cook up a story, he found the same pattern: low cortical arousal in the front brain as it powered down to let the creative work happen, and then a burst of activity in the front again as the neocortex got down to the business of editing.
The brain in “idle,” it turns out, is actually far more active than the brain in conscious engagement. This was the 2001 discovery of Washington University neuroscientist Marcus Raichle, who, in observing the resting brain, saw that there was essentially a party going on in the dark. The default mode network, as Raichle came to call it, is exploding with neurogenesis, crackling with interconnectivity, and burning perhaps 20 times the metabolic resources of the “conscious” brain. The brain’s resting-state circuitry (which is turned on, paradoxically, when you stop thinking and just veg out) is thus very likely the best place to park a problem, for it employs the best, wisest, and most creative (though not necessarily fastest-working) mechanics.
As an experimental investigation of insight, Beeman’s setup had the advantage of control (the exact aha moment could be snared) but the disadvantage of contrivance. Solving a puzzle in a lab is not quite like solving an existential dilemma. Chief among the differences, perhaps, is the depth of feeling that so often accompanies real-life epiphanies.
William Miller likes to recount psychologist David Premack’s case study of a fiercely addicted smoker who pulled to the curb in front of a public library one day to pick up his kids. He rummaged in the glove compartment for his cigarettes without success. He looked under the seats, but could not find the damn smokes. It was starting to rain. The kids would be out in a second. But wait—there was a store not far away. He could zip over there and be back in just a few minutes. It wasn’t raining hard. The kids wouldn’t get too wet.
Then something shifted in this man. “He thought, Dear Heaven, I am the kind of father who would let his kids stand in the rain while he chased a drug.” The insight was powerful enough to break through years of denial. “And that was it,” Miller says. “He never smoked again.”
Miller found that there was often a moral dimension to stories of quantum change—just like the moral dimension to Simon Lovell’s U-turn from the ugliness of his life of crime. The same pro-social shift seemed to be happening, from selfishness to compassion, from an ethic of power to an ethic of care. “It’s as though they got a fast-forward in self-actualization,” Miller says, “and their values changed.”
In almost half of his subjects, the big epiphany was preceded by intense psychological pressure. “They were at the end of their rope, and the rope broke,” he says. Things simply could not continue as before; they couldn’t not change. That would seem to make such epiphanies a different animal from simply cracking a word puzzle.
Yet researchers disagree about whether they really are. Miller believes they are different partly because of the force with which his subjects—almost universally—reported how different their aha moments felt from merely “coming to a conclusion or reasoning something through,” he says. “The moment it happened, they knew they had gone through a one-way door—there was no going back.” (Indeed, when Miller’s co-author, Janet C’de Baca, followed up with them a decade later, not a single one had returned to the pre-epiphany life. Their aha moments really had changed them irrevocably.)
Subjective experience, however, is still not proof that what goes on during a big aha moment is radically different from what happens during a little one. “I’ve seen nothing to convince me that [it] actually requires a different kind of thinking,” says Beeman of personal epiphany. The brain is widely considered to be a prediction machine, and it handles all ideas and feelings in much the same way: It constructs little models of everything it expects to think, do, and feel, then rapidly recalculates as it’s hit with novelty.
To the brain, then, an epiphany about existence may not be categorically different from the sudden insight that you can tell the freshness of a loaf of supermarket bread by the color of the bag-tag. But no one knows for sure.
There’s little doubt that the unfocused brain is a great tool for the job of problem solving. But what happens in there has the disadvantage of being frustratingly beyond our control. Is it possible, some wonder, to jump cognitive tracks to a place where genuinely novel solutions lurk, without putting the executive brain into neutral? In other words, if you’re struggling with a thorny problem, could you skip the time on a mountaintop incubating a solution and just keep doggedly trying things?
One group of Chinese researchers believes it’s possible.
To Ailing Chen and her colleagues, the hunt for creative solutions to the world’s ever-more complex problems is too important to be left to genius or chance. It needs a blueprint that anybody can follow. Last year, Chen, a computer scientist at Hebei Chemical & Pharmaceutical College in Shijiazhuang, presented a conference paper titled, “On the Systematic Method to Enhance the Epiphany Ability of Individuals.”
Chen isn’t convinced that an idle incubation stage is always necessary. “The purpose of ‘roaming’ is to think out of the box and create new connections,” she says. “If some method can replace roaming and speed up the process, why not try it?”
Chen’s vocation as a computer scientist makes her stand out amid the psychologists and neuroscientists who dominate the study of insight. But she believes her field is actually well suited for the job. Thinking is algorithmic, after all. To “think better,” as Steve Jobs put it, is a skill limited in part by the constraints of working memory. There are only so many pieces of furniture we can move around in our minds at one time. Boost the computational power and we can vastly increase our chances of a leap—at least in theory.
The strategy leans heavily on a fairly obscure problem-solving theory called “extenics,” which means “the rules and methods for opening up things.” (Extenics itself owes much to a forecasting tool called “TRIZ,” which was developed in the 1940s by Russian scientist Genrich Altshuller, who reviewed tens of thousands of patent abstracts to decipher patterns in the inventors’ creative leaps. In principle, there are many small “contradictions” hidden in any big problem. The goal is to identify them and then follow a set of rules to resolve them as a computer program might: If A dead-ends, then go to B.)
This deliberate mode of attacking a problem is familiar to all of us: It’s the one we typically try first. And if solutions were unfailingly found this way, we would never need the spontaneous mode. The problem is, truly novel solutions are hardly ever discovered purposefully. If a searched-for solution is outside our familiar experience—which is shaped by beliefs, culture, and biases—the conscious mind will likely never find it. A deliberate approach can search the whole box, but not outside of it.
Indeed, research suggests that thinking about a problem too methodically is often an impediment to solving it because we actually block potential solutions from floating into consciousness, a phenomenon known as “cognitive inhibition.” As University of California, Santa Barbara neuroscientist Jonathan Schooler discovered, if you ask people to articulate an idea they’re just hatching, the idea—zoop!—vanishes.
“It’s a bit like trying to look at a dim star,” Beeman says. “You have to turn your head and spy it out of the corner of your eye; if you look at it directly, it disappears.” In lab experiments, subjects who are given a brain-teaser and sleep on the problem or otherwise back away from it are usually more likely to solve it than if they just keep pounding away.
But here’s the other side: Incubating a conundrum isn’t enough on its own. A puzzle will never be solvable if you don’t have all the pieces. The moment when the ancient Greek scholar Archimedes is said to have stepped into a bath, had an insight, and uttered the original “Eureka!” came only after many long years of cogitating.
“You accumulate all this experience and background,” Raichle says, “and then all of a sudden there’s an association that your brain has rather cleverly pulled off.” He isn’t speaking just theoretically; it happened to him. In 2001, Raichle was walking from his office to a nearby conference room to meet with colleagues after their paper had been rejected for publication. All of a sudden, he cracked the nut. He knew how to explain how the resting brain could be active without having been deliberately activated. He had, you might say, an aha about ahas.
“Ten years’ worth of work on activation was suddenly relevant to solving the default-mode problem,” Raichle says. The leap would amount to the biggest breakthrough of his career—his paper on the default mode has been cited more than 4,000 times. It’s an affirmation of Louis Pasteur’s famous line: “Chance favors the prepared mind.”
Rather than betting on Chen’s hope of bypassing the incubation stage, Beeman imagines a different way to approach the task of cultivating an aha. Timing is critical. If we stay in the deliberate mode too long, we can drive the solution away. But if we back off a problem too soon, before we have all the puzzle pieces, we prevent the solution from coalescing. The key may be knowing when to zoom in tight on a problem and when to pull back, so as not to crush the tender shoot of an insight just as it’s emerging.
“I think that part of the formula is the tension between the two modes, this back-and-forth between being very focused and not,” Beeman says. Drawing back from the problem puts us in a position to “boost the underlying signal” of the hunch that’s quietly developing, so that it penetrates the conscious mind. You might call this “training our intuition.”
Known as a somatic marker, a hunch is “a physiological clue to what to do next,” as University of Southern California neurobiologist Antonio Damasio has put it. We ignore gut instinct at our peril, for it’s the product of evolutionary hardwiring. Like budding thoughts, budding feelings are evaluated based on their biological significance. Only the fittest are selected to reach consciousness. Strong emotions create loud signals. They tell the brain: There’s something important here—you’d better put some horses on this.
A hunch, then, is a kind of pre-aha. If intuition is indeed a trainable faculty, then it would seem to involve sharpening our emotional sensitivity. Get good at the care and feeding of hunches, and we might prime ourselves for insight.
This may be what prompted Maya Wang’s* epiphany when she stumbled upon a Facebook photo of a couple she barely knew. Something about the way the happy duo looked, the way they just fit together, hit her like a gut punch. “I caught my breath,” she says, “and then I freaked out.” She called a friend and blurted: “I think I married the wrong person.”
At the time, Wang was taking classes that taught a particularly intense form of emotion-based Method acting—and as a result she had cracked open a lot of bottled-up feelings. Before the classes, she had prided herself on her hyper-rationality; indeed, she had functioned “almost like the producer in my own marriage,” pencil poised to tick off everything that needed to be done: Get settled, get pregnant, build a life. “But something about the photo triggered what I think of as the right brain,” Wang says. “It was like, Oh. My. God.”
The acting lessons had tapped a deep reservoir of emotions. From the moment she started applying them on the stage, she says, “I felt a door just open wide.” It was the door—there’s no other way to put it—to truth. Now all the things that were wrong about her own pairing suddenly sprang into relief, but at a level other than conscious evaluation. “It was simply, ‘This person isn’t right for me. Even if he changes all the things that are issues for us, he is not right for me.’”
Over the following months, her rational mind accepted the insight that had hit her in a flash. She got divorced and committed herself to living authentically. Like Simon Lovell and countless others who have experienced aha moments, Wang turned her life around.
*Name has been changed
This piece originally appeared in Psychology Today.