Getting Off Your Phone: Benefits?
Why do we have better memories for things we didn't photograph?
Posted Jun 11, 2018
If you’ve been out to any sort of live event lately – a concert or some other similar gathering – you’ll often find yourself looking out over a sea of camera phones (perhaps through a camera yourself) in the audience. This has given me a sense of general unease, for two reasons: first, I’ve taken such pictures before and, generally speaking, they come out like garbage. Turns out it’s not the easiest thing in the world to get clear audio in a video at a loud concert, or even a good picture if you’re not right next to the stage. But, more importantly, I’ve found such activities detract from the experience, either because you’re spending time on your phone instead of just watching what you’re there to see, or because it signals an interest in showing other people what you’re doing rather than just doing it and enjoying yourself. Some might say all those people taking pictures aren’t quite living in the moment, so to speak.
In fact, it has been suggested (Soares & Storm, 2018) that the act of taking a picture can actually make your memory for the event worse at times. Why might this be? There are two candidate explanations that come to mind: first, and perhaps most intuitively, screwing around on your phone is a distraction. When you’re busy trying to work the camera and get the right shot, you’re just not paying attention to what you’re photographing as much. It’s a boring explanation, but perfectly plausible, just like how texting makes people worse drivers; their attention is simply elsewhere.
The other explanation is a bit more involved, but also plausible. The basics go like this: memory is a biologically-costly thing. You need to devote resources to attending to information, creating memories, maintaining them, and calling them to mind when appropriate. If we remembered everything we ever saw, for instance, we would likely be devoting lots of resources to ultimately irrelevant information (no one really cares how many windows each building you pass on your way home from work has, so why remember it?), and finding the relevant memory amidst a sea of irrelevant ones would take more time. Those who store memories efficiently might thus be favored by selection pressures as they can more quickly recall important information with less investment. What does that have to do with taking pictures? If you happen to snap a picture, you now have a resource you could later consult for details. Rather than store this information in your head, you can just store it in the picture and consult the picture when needed. In this sense, the act of taking a picture may serve as a proximate cue to the brain that information needs to be attended to less deeply and committed less firmly to memory.
Worth noting is that these explanations aren’t mutually exclusive: it could both be true that taking a picture is a cue that you don’t need to remember information as well and that taking pictures is distracting. Since both could explain the same phenomenon, if you want to test whether they’re true, you need a way of differentiating them: a context in which the two make opposing predictions about what would happen. As a spoiler warning, the research I wanted to cover today tries to do that, but ultimately fails at the task. Nevertheless, the information is still interesting, and appreciating why the research failed at its goal is useful for future designs, some of which I will list at the end.
Let’s begin with what the researchers did: following a classic research paradigm in this realm, they had participants take part in a memory task. Participants were shown a series of images and then given a test to see how much they remembered. The key variable was the condition under which each image was studied: participants would either study a target without taking a picture, take a picture of the target before studying it, or take a picture and delete it before studying the target. The thinking here was that – if the efficiency explanation was true – participants who took pictures they knew they wouldn’t be able to consult later – such as those that are Snapchatted or deleted – would instead commit more of the information to memory. If you can’t rely on the camera to have the pictures, it’s an unreliable source of memory offloading (the official term), and so we shouldn’t offload. By contrast, if the mere act of taking the picture was distracting and interfered with memory because of that, then whether the picture was deleted or not shouldn’t matter. The simple act of taking the picture should be what causes the memory deficits, and similar deficits should be observed regardless of whether the picture was saved or deleted.
Without going too deeply into the specifics, this is basically what the researchers found: when participants had taken a picture – regardless of whether it was deleted or stored – the memory deficits were similar. People remembered the images better when they weren’t taking pictures. Does this suggest that taking pictures impairs memory formation simply by disrupting attention, rather than through offloading?
Not quite, and here’s why: imagine an experiment where you were measuring how much participants salivated. You think that the mere act of cooking will get people to salivate, and so construct two conditions: one in which hungry people cook and then get to eat the food after, and another in which hungry people cook the food and then throw it away before they get to eat (and they know in advance they will be throwing it away). What you’ll find in both cases is that people will salivate when cooking, because the sights and smells of the food are proximate cues of getting to eat. Some part of their brain is responding to those cues that signal food availability, even if those cues do not ultimately correspond to their ability to eat it in the future. The part of the brain that consciously knows it won’t be getting food isn’t the same part responding to those proximate cues. While one part of you understands you’ll be throwing the food away, another part disagrees and thinks, “these cues mean food is coming,” and you start salivating anyway because of it.
This is basically the same problem the present research ran into. Taking a picture may be a proximate cue that information is stored somewhere else and so you don’t need to remember it as well, even if the part of the brain that was instructed to delete the picture believes otherwise. We don’t have one mind, but rather a series of smaller minds that may all be working with different assumptions and sets of information. Like a lot of research, then, the design here focuses too heavily on what people are supposed to consciously understand, rather than on what cues the non-conscious parts of the brain are using to generate behavior.
Indeed, the authors seem to acknowledge as much in their discussion, writing the following:
“Although the present results are inconsistent with an “explicit” form of offloading, they cannot rule out the possibility that through learned experience, people develop a sort of implicit transactive memory system with cameras such that they automatically process information in a way that assumes photographed information is going to be offloaded and available later (even if they consciously know this to be untrue). Indeed, if this sort of automatic offloading does occur then it could be a mechanism by which photo-taking causes attentional disengagement.”
All things considered, that’s a good passage, but one might wonder why that passage was saved for the end of their paper, in the discussion section. Imagine instead that this passage appeared in the introduction:
“While it is possible that operating a camera to take a picture disrupts participants’ attention and results in a momentary encoding deficit, it is also entirely possible that the mere act of taking a picture is a proximate cue used by the brain to determine how thoroughly (largely irrelevant) information needs to be encoded. Thus, our experiment doesn’t actually differentiate between these alternative hypotheses, but here’s what we’re doing anyway…”
Does your interest in the results of the paper go up or down at that point? Because that would effectively be the same thing the discussion section said. As such, it seems probable that the discussion passage may well represent an addition made to the paper after the fact, per a reviewer request. In other words, the researchers probably didn’t think the idea through as fully as they might like. With that in mind, here are a few other experimental conditions they could have run which would have been better at the task of separating the hypotheses:
- Have participants do something distracting with a phone that isn’t taking a picture (like typing out a word before viewing the target). If this effect *isn’t* picture-specific, but people simply remember less when they’ve been messing around on a phone, then the attention hypothesis would look better, especially if the impairments to memory are effectively identical.
- Have an experimenter take the pictures instead of the participant. That way participants would not be distracted by using a phone at all, but still have a cue that the information might be retrievable elsewhere. However, the experimenter could also be viewed as a source of information themselves, so there could be another condition where an experimenter is simply present doing something that isn’t taking a picture. If an experimenter taking a picture results in worse memory as well, then it might be something about the knowledge of a picture in general causing the effect.
- Better yet, if messing around with the phone only temporarily disrupts encoding, then having participants take a picture of the target briefly and then wait a period (say, a minute) before viewing the target for the full 15-second study period should help differentiate the two hypotheses. If the mere act of having taken a picture (whether deleted or not) causes participants to encode information less thoroughly because of proximate cues for efficient offloading, then this minor time delay shouldn’t alleviate those memory deficits. By contrast, if messing with the phone is just distracting people momentarily, the time delay should counteract the effect.
These are all productive avenues that could be explored in the future for creating conditions where these hypotheses make different predictions, especially the first and third ones. Again, both could be true, and that could show up in the data, but these designs give the opportunity for that to be observed.
And, until that research is conducted, do yourself a favor and enjoy your concerts instead of viewing them through a small phone screen. (The caveat here is that it’s unclear whether such results would generalize, as in real life people decide what to take pictures of, rather than taking pictures of things they probably don’t really care about.)
References: Soares, J., & Storm, B. (2018). Forgot in a flash: A further investigation of the photo-taking-impairment effect. Journal of Applied Research in Memory & Cognition, 7, 154–160.