AI Interprets What Rodents Are Saying
“DeepSqueak” enables researchers to understand rodent vocalizations.
Posted Feb 23, 2019
Artificial intelligence (AI) has improved greatly in recent years, largely due to advances in deep learning, a method of machine learning. Deep learning’s superior pattern recognition has spawned advances in computer vision, translation, speech recognition, and other domains, and deep learning algorithms are now being applied across many industries. Last month, researchers in the Department of Psychiatry and Behavioral Sciences at the University of Washington School of Medicine announced the creation of “DeepSqueak,” a deep learning system that can detect and analyze the vocalizations of rodents.
Why rodent chit-chat?
Modern science depends on laboratory rodents to serve as mammalian proxies for human test subjects. Research studies conducted in vitro with cultured cells tend to lack the breadth and depth of information that an in vivo study in a living organism can provide.
This is particularly relevant for neuroscience, as finding human volunteers for brain research is a bit of a non-starter. When neuroscience studies are conducted in vivo on humans, it is typically with the consent of patients already undergoing brain surgery for reasons unrelated to the study. For example, research has been done on consenting epileptic patients undergoing surgery to remove the brain areas responsible for their seizures. Such opportunities are sporadic and in short supply compared to the vast demands of research scientists worldwide. As a result, rodents are frequently used in research instead.
However, unlike human test subjects, rodents are not able to communicate with researchers. The ability to study the vocalizations of laboratory rodents provides an additional point of reference to combine with behavioral observations. According to the paper, this is particularly beneficial for behavioral neuroscience studies of addiction, depression, anxiety, fear, reward systems, drug abuse, aging, and neurodegenerative diseases.
What’s the science behind DeepSqueak?
The software for DeepSqueak was designed and coded by Kevin Coffey and Russell Marx, two scientists at the lab of John Neumaier, professor of psychiatry and behavioral sciences at the University of Washington School of Medicine. Neumaier, who also contributed to the research study, is the associate director of the Alcohol and Drug Abuse Institute, and head of the Division of Psychiatric Neurosciences.
The researchers used deep learning, specifically region-based convolutional neural networks (Faster R-CNN), to detect rodent vocalizations, and published their research in the January 2019 issue of Neuropsychopharmacology.
According to the research paper, rats and mice vocalize across a wide range of ultrasonic frequencies (20–115 kHz). When rats are engaged in positive, happy experiences such as playing, being tickled, and enjoying treats, they tend to make higher-frequency sounds around 50 kHz. When rats are fearful or stressed, they make lower-frequency sounds around 22 kHz.
When audio recordings of rodent vocalizations are fed into DeepSqueak, either individually or in large batches, the system converts the sound files into images (sonograms). These images are then processed by Faster R-CNN, a state-of-the-art deep learning vision algorithm also used in self-driving cars. The team initially trained DeepSqueak on manually labeled calls, teaching the neural network to distinguish and isolate rodent vocalizations from ambient noise.
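The audio-to-sonogram step described above can be sketched in a few lines of Python. This is a minimal illustration of the general idea, not the authors' own code (DeepSqueak itself is a separate software package), and the parameter values here are arbitrary assumptions for the demo:

```python
# Sketch of the audio-to-sonogram conversion: a recording becomes a
# time-frequency image that a visual detector (such as Faster R-CNN)
# can scan for call shapes. Parameters are illustrative, not DeepSqueak's.
import numpy as np
from scipy import signal

def to_sonogram(samples, sample_rate, nperseg=512):
    """Convert raw audio samples into a spectrogram (sonogram) image.

    Returns frequency bins (Hz), time bins (s), and a 2-D intensity
    array with one row per frequency and one column per time step.
    """
    freqs, times, power = signal.spectrogram(samples, fs=sample_rate,
                                             nperseg=nperseg)
    # Log-scale the power so quieter ultrasonic calls remain visible.
    image = 10 * np.log10(power + 1e-12)
    return freqs, times, image

# Demo with a synthetic 50 kHz tone (the "happy" range for rats),
# sampled at 250 kHz so the ultrasonic band is representable.
rate = 250_000
t = np.arange(0, 0.1, 1 / rate)
tone = np.sin(2 * np.pi * 50_000 * t)
freqs, times, image = to_sonogram(tone, rate)

# The brightest frequency row should sit near 50 kHz.
peak_hz = freqs[np.argmax(image.mean(axis=1))]
```

In the resulting image, a rodent call appears as a bright curve against a dark background, which is what lets an object detector trained on labeled examples pick calls out of noise.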
The researchers discovered that rodents have an estimated twenty types of vocalizations. The rodents vocalized in the happy range when they were at play with other rodents or expecting a treat such as sugar. The team also found that male mice produced more complex vocalizations when a female mouse was nearby; when two male mice were together, they repeated the same, less complex calls.
The research team has designed DeepSqueak to be flexible and easy to use for all researchers, not just the tech-savvy. They have made DeepSqueak available in an open repository in hopes of helping other scientists worldwide improve their research.
Copyright © 2019 Cami Rosso All rights reserved.
Coffey, Kevin R., Russell G. Marx, and John F. Neumaier. “DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations.” Neuropsychopharmacology, 4 January 2019.