Does Science Need Snake Dream Breakthroughs?

Do new articles on theory development in psychology limit creativity?

Posted Apr 21, 2020

Kekule was not a cat, he was a chemist. But he did have a snake dream breakthrough.
Source: Photo from Pixabay on Pexels.

The most memorable achievement of 19th-century chemist August Kekule was discovering the shape of the benzene molecule, a six-membered ring of carbon atoms. What’s memorable about the discovery to non-chemists is how he claimed to have made it: it came to him in a dream about the mythic ouroboros, a snake eating its own tail. Kekule’s dream is often cited as an example of how we shouldn’t care where a scientific hypothesis comes from. Any new idea is worth testing.

This view fits with Karl Popper’s vision of the scientific process. Popper is famous for saying that the core of the scientific process is trying to falsify scientific theories. It doesn’t matter if your scientific idea came from carefully thinking through chemical processes or a snakey dream. If you can test it, it can be part of science. The problem with this view is that it doesn’t work.

In a new manuscript, Denny Borsboom and colleagues explain that the Popper-fueled, snake-enabling vision of science has some key flaws. First, the scientific method is meant to be a set of principles that can reliably lead to a better picture of reality. By saying “anything goes” when it comes to thinking up new ideas, this view of science effectively puts theory development outside of the realm of science. If snakey dreams are as good a grounds for a new prediction as careful analysis of a problem, then you’re giving up on there being a set of guiding principles for theory development in science.

Second, this Popper-inspired vision of science only really has one way of testing whether a theory is right: does it make predictions that come out true? This is another way of saying, “have people tried to falsify it, but found that they couldn’t?” Making accurate predictions is certainly an important job of a theory, but Borsboom and colleagues point out that falsification alone can’t evaluate whether a theory is a good explanation. Don’t we want to be able to take into account whether one theory is more plausible than another? Whether one has some logical inconsistency? Whether one explanation is simpler than another? If your only tool is setting up experiments to try to falsify something, you can’t get at these ideas directly.
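To make the “simpler explanation” point concrete, here’s a toy sketch (my illustration, not anything from Borsboom and colleagues’ paper) of how modelers often formalize the fit-versus-simplicity tradeoff with a score like the Akaike Information Criterion (AIC), which rewards fit but penalizes extra parameters. Falsification alone has no way to register this tradeoff:

```python
# Toy sketch: scoring two candidate "theories" of the same data by balancing
# fit against simplicity with AIC = 2k + n*log(RSS/n), where k is the number
# of fitted parameters. Lower scores are better. All names and numbers here
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # the truth is a line

def aic(y, y_hat, k):
    n = y.size
    rss = np.sum((y - y_hat) ** 2)  # residual sum of squares (misfit)
    return 2 * k + n * np.log(rss / n)

# "Theory A" is a straight line (2 parameters); "Theory B" is a wiggly
# degree-5 polynomial (6 parameters) that can chase noise in the sample.
for name, degree in [("simple line", 1), ("degree-5 curve", 5)]:
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(f"{name}: AIC = {aic(y, y_hat, degree + 1):.1f}")
```

The flexible curve usually fits the noisy sample a little better, but the simple line typically gets the better (lower) AIC score, which makes precise the intuition that a simpler explanation of the same facts is preferable.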

New research suggests guidelines for developing scientific theory in psychology.
Source: Photo from Startup Stock Photos on Pexels

In contrast, Borsboom and colleagues argue that psychology needs to have more training and guiding principles on developing theory. They lay out a set of five steps needed for developing theory, which are distinct from the processes needed to design and conduct experiments. Recent research by Olivia Guest and Andrea Martin lays out a different, but related set of six steps that need to take place (and includes conducting experiments). (More on their work here.) Both sets of researchers provide key examples of scientific success using these guiding principles for theory development. Rather than just saying “anything goes” when it comes to predictions, they suggest that we need guiding principles.

These principles are not in line with the classic approach you’re taught in PhD school. Borsboom and colleagues challenge the idea that science progresses by constructing hypotheses and then trying to falsify them, while Guest and Martin upend traditional thinking on another basic tenet: the idea that predictions always need to come before the data.

One of the big no-no’s that psychologists are taught, and a major criticism leveled at much psychology research, is that unscrupulous researchers will “hypothesize after the results are known,” or HARK. In plain English, this means that some researchers see an unusual quirk or trend in the data, make up a story to explain it, and then try to present the study as if it were carefully designed to test whether that story is true. This is indeed a dishonest practice, as the general rule is that the same data can’t be used to both come up with a new idea and test whether that idea is true. But the urge to explain surprising regularities in data is not itself wrong or anti-scientific.
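One honest way to respect that rule is worth sketching here (my illustration, not a prescription from either paper): split the data, so that the half that suggests the new idea is not the half that tests it.

```python
# Toy sketch of an exploratory/confirmatory split. The scores and the
# hypothesis are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = rng.normal(loc=0.3, scale=1.0, size=200)  # pretend effect scores

explore, confirm = scores[:100], scores[100:]

# Exploratory half: we "notice" the mean looks above zero and form a hypothesis.
print("exploratory mean:", round(explore.mean(), 2))

# Confirmatory half: test that hypothesis on data that played no part in
# suggesting it.
t, p = stats.ttest_1samp(confirm, popmean=0.0)
print(f"confirmatory t-test: t = {t:.2f}, p = {p:.4f}")
```

The exploratory half can inspire whatever story it likes; the confirmatory test only counts as a test because it runs on fresh data that had no role in generating the hypothesis.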

In Guest and Martin’s model, the various stages of the scientific process have a back-and-forth character. If observed data surprise us, or show something unexpected, we do need to go back to the drawing board and modify our theory to make sense of it. (Of course, before we do that we need to make sure that observed results are reliable, not just flukes; that’s why replications are so important. But surprising and reliable results do need an explanation.) Good scientific theories should be able to account for many different patterns of results, so we should try to adjust them if the data show they’re off. (This view appears to be popular among mathematical modelers: two excellent blog posts by Danielle Navarro support these general points.)
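As a toy illustration of that back-and-forth (my own sketch, not code from Guest and Martin), imagine a formal model of forgetting that predicts recall at several delays. Reliable observations that deviate sharply from its predictions are the signal to revise the model:

```python
# Toy sketch of the predict-compare-revise loop. The exponential forgetting
# model, the observations, and the surprise threshold are all assumptions
# made up for illustration.
import math

def predicted_recall(delay_hours, tau=24.0):
    """Exponential forgetting model: predicted proportion recalled after a delay."""
    return math.exp(-delay_hours / tau)

# Pretend these are replicated (reliable) observations at three delays.
observed = {1: 0.95, 12: 0.62, 48: 0.45}

for delay, obs in observed.items():
    pred = predicted_recall(delay)
    flag = "  <- surprising: revisit the model" if abs(pred - obs) > 0.15 else ""
    print(f"{delay:>3}h  predicted {pred:.2f}  observed {obs:.2f}{flag}")
```

Here the long-delay observation would be flagged as surprising; a pattern like this is one reason some memory modelers have argued for power-law rather than exponential forgetting curves.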

Surprising results need to be accounted for by theories, and are reasons to revise those theories.
Source: Photo by Polina Tankilevitch from Pexels

In the classic Karl Popper approach, you just throw out the theory when it makes a wrong prediction—I imagine stamping it with a big “falsified” label. Certainly there are times when a theory is just so far off that you do need to throw it out. It’s totally wrong. But among researchers who try to develop guidelines and methods for building theory, that can feel like throwing the baby out with the bathwater. You might want to see if, instead of throwing out that water, you can add a nice drain to the bathtub so that your baby stays safe and healthy while you get rid of your gross bathwater. The way to do that is to use specific guiding principles and formal models to generate predictions.

But what about snake dream breakthroughs? Didn’t chemistry get a great insight by allowing Kekule to test a far-out idea that didn’t come from any formal model? Well, not quite. Kekule’s dream wasn’t a prediction about what would happen in a new study, or a stroke of insight about a hypothesis to falsify. It was an explanation that helped connect a set of known but confusing facts about the benzene molecule (such as that the ratio of hydrogen to carbon atoms in it is 1:1). The snake was just a metaphor for a ring structure, in which a chain of carbon atoms connects back on itself.

Kekule was on Borsboom and colleagues’ theory development track, not the experimental track. His work was an attempt to change theory to make sense of several reliable facts; in other words, to add a drain to the bathtub so we didn’t have to throw out the precious theoretical baby. The moral isn’t that predictions can come from anywhere, but that solving puzzles (fitting a set of facts together into a simple explanation) requires creative insight. There is room for creativity, but it’s creativity in finding consistent, coherent explanations, not in making wild predictions unconnected to theory.

References

Borsboom, D., van der Maas, H., Dalege, J., Kievit, R., & Haig, B. (2020). Theory Construction Methodology: A practical framework for theory formation in psychology. Preprint on PsyArXiv: https://psyarxiv.com/w5tp8/

Guest, O., & Martin, A. E. (2020). How computational modeling can force theory building in psychological science. Preprint on PsyArXiv: https://psyarxiv.com/rybh9/