Dogs and Humans Interpret What They See Differently

Brain scans show differences in the visual cognition of dogs and humans.

Key points

  • Humans are much more reliant upon vision than dogs.
  • Naturalistic video clips from a dog's point of view were shown to dogs undergoing fMRI brain scans.
  • Artificial neural network programming was used to analyze how the brains of humans and dogs process visual inputs.
  • Humans process objects and actions equally well, while dogs tend to focus primarily on actions.
[Image credit: Alvin Trusty / Flickr]

For human beings, vision is the most important sensory system. Therefore, our brains do more elaborate processing of visual information than of any other sensory modality. This is illustrated by the fact that when we understand something we typically say "I see what you mean," even if the information came from something that was said and was thus received through our ears. Dogs rely more on their sense of smell than on their vision, and their visual system is more limited than ours in terms of color processing and the ability to see details. Since vision is less important to canines, it makes sense to hypothesize that the brains of dogs might process aspects of their visual environment in a way that is fundamentally different from the way human brains do.

A new set of data allows us to actually compare how humans and dogs perceive their world. This new information comes from a research project led by Erin Phillips of Emory University in Atlanta, Georgia. The project was inspired by recent work looking at how the human brain analyzes visual information. It depends upon recent advances in the use of fMRI brain scans and also the development of new computer analytic systems.

Studying the Dog's Visual Brain

The first problem the investigators faced was to come up with visual content that a dog might find interesting enough to watch for an extended period. To do this, they created a series of 256 short video clips filmed from a "dog's eye view" (about knee height for a human). They wanted these to be naturalistic, so they included scenes of walking, playing, feeding, humans interacting with each other or with dogs, dogs interacting with each other, vehicles in motion, and also images of non-dog animals. These were then edited into three different half-hour video segments.

The second problem was finding dogs whose brains could be scanned and recorded while watching these videos. Gregory Berns, the senior author of this study, developed the training procedures which resulted in dogs that can lie still in an MRI scanner and observe ongoing events while their brain activity is being recorded. This is not easy, since such brain scans are accompanied by noisy and unpredictable clanks and whirring from the MRI machine. For this particular study, only two dogs that had been trained for experiments in an fMRI had the focus and temperament to lie perfectly still and watch a 30-minute video without a break, and to do this for three separate sessions in order to collect the 90 minutes worth of data that was needed.

For comparison, two human subjects (who obviously did not need extensive training to be in the MRI scanner) also observed these same videos and had their brain activity recorded.

Learning to Interpret the Data

Having the fMRI data is not enough. Until recently, the analytic tools that allow us to understand how the brain is actually processing incoming images were not available. This form of analysis requires the use of artificial neural networks, which are computer programs that operate in a manner inspired by physiological neural networks in the brain. Although the theoretical basis for such neural network processing was developed in 1943 by the neurophysiologist Warren McCulloch at the University of Illinois and the mathematician Walter Pitts at the University of Chicago, there simply wasn't enough computational power available at that time to make practical use of these ideas.

The real significance of artificial neural networks is that they can learn. Rather than having its behavior explicitly written into the program, the neural network's ability to process information comes about through its interactions with various situations. The program is asked to analyze the information it receives. It is then told when it is correct and when it is in error, and it uses this feedback to adjust itself to be more accurate. In this way the neural net learns, effectively reprogramming itself.

Some 50 years after the original theoretical work was done, assisted by the availability of high-powered computers, a resurgence of interest in such neural nets led to breakthroughs in many areas, such as facial recognition and speech recognition. Such programs simply cannot be written in a step-by-step process by a human programmer; rather, they rely on the "deep learning" of the artificial neural network. It is the network itself that isolates the important variables and figures out how to process them optimally, often using algorithms that human programmers might never have thought of.
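The error-driven learning loop described above can be sketched in a few lines of code. This is a deliberately minimal illustration (a single artificial neuron, not the deep network used in the study): the program predicts, is told whether it was right or wrong, and nudges its internal weights accordingly until its answers are reliable.

```python
# Minimal sketch of error-driven learning: a single artificial neuron
# adjusts its weights whenever its prediction is wrong, effectively
# "reprogramming itself" through feedback rather than explicit rules.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs, with targets 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Predict, then compare against the known correct answer.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction  # the "told when it is in error" step
            # Adjust each weight in proportion to the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Learn the logical AND function purely from labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the neuron classifies all four examples correctly, even though no rule for "AND" was ever written into the program; the rule emerged from the feedback loop.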

Since the Emory University study was a first look into decoding the visual cognition of dogs, the researchers kept things relatively simple. They marked the fMRI data with time stamps indicating what the video was presenting at each moment. The labels covered both individual objects (such as a dog, car, human, or cat) and specific actions (such as sniffing, running, playing, or eating). The task of the neural net was to see if it could learn to accurately classify what was being seen based on the fMRI data alone.
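The decoding task just described can be illustrated with a toy classifier. The sketch below is not the study's actual model, and the three-number "brain responses" and labels are invented for illustration: the idea is simply that time-stamped scans are paired with labels of what the video showed, a model learns the typical response for each label, and new scans are classified by which learned pattern they most resemble.

```python
# Toy sketch of fMRI decoding (invented data; the study used a
# neural-network decoder): learn the average "brain response" per
# label, then classify new responses by nearest average.

from collections import defaultdict

def fit_centroids(labeled_scans):
    """labeled_scans: list of (feature_vector, label) pairs."""
    sums = {}
    counts = defaultdict(int)
    for vec, label in labeled_scans:
        if label not in sums:
            sums[label] = list(vec)
        else:
            sums[label] = [s + x for s, x in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, vec):
    # Pick the label whose average response is closest (squared distance).
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], vec))
    return min(centroids, key=dist)

# Invented example: 3-voxel "responses" labeled by the action on screen.
train = [([1.0, 0.1, 0.0], "running"), ([0.9, 0.2, 0.1], "running"),
         ([0.1, 1.0, 0.2], "sniffing"), ([0.0, 0.9, 0.1], "sniffing")]
centroids = fit_centroids(train)
```

A new scan resembling the "running" pattern (e.g., `[0.95, 0.15, 0.05]`) would be classified as "running." The study's finding can be restated in these terms: for dogs, a decoder of this general kind succeeded when the labels were actions but failed when the labels were objects.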

Ultimately, the program was able to map the brain scans from the humans with 99 percent accuracy for both actions and objects. For the dogs, the results were more complex. The neural net had no success at all with object recognition. However, it did much better with actions, mapping visual inputs to brain activity with an accuracy between 75 and 88 percent. The inability of the neural network to identify objects from the dogs' fMRI data suggests that this aspect of visual perception is less systematic, and a lower processing priority, in the canine brain.

Objects Versus Actions

These results suggest major differences in how the brains of humans and dogs work when analyzing the visual world.

“We humans are very object oriented,” Berns says. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”

We already knew, from a variety of studies, that dogs are more sensitive to movement than human beings, and that dogs even have a slightly higher density of visual receptors in the eye designed to detect motion.

Thus Berns goes on to say, “It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost. Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”

The important take-home message is that dogs and humans do not analyze their visual environment in the same way. Human visual perception is concerned with "who" and "what" as well as with which activities are unfolding, while dogs are much more focused on ongoing actions. This may help to explain why your well-loved pet dog, who has never been struck or abused, may sometimes seem to freak out if you rush in his direction. Who you are is not being processed in his visual brain as strongly as the fact that something is quickly surging toward him.

Copyright SC Psychological Enterprises Ltd. May not be reprinted or reposted without permission.

References

Phillips, E.M., Gillette, K.D., Dilks, D.D., & Berns, G.S. (2022). Through a dog's eyes: fMRI decoding of naturalistic videos from the dog cortex. Journal of Visualized Experiments, (187), e64442. doi:10.3791/64442
