Friday, April 3, 2026

Seals and sea lions provide clues to evolution of vocalization

A harbor seal relaxing on shore. (Wikimedia Commons, Charles J. Sharp)

Neuroscientists have uncovered new insights into a key evolutionary question: Why can humans talk when most animals can’t?

The journal Science published the research, led by Emory University and the New College of Florida. The findings suggest that seals and sea lions may have gained vocal flexibility as a side effect of evolving a brain “bypass” for voluntary breath control, the same adaptation that allowed them to take up aquatic life.

The comparative study examined the brains of coyotes along with those of sea lions, elephant seals and harbor seals — marine carnivores with varying degrees of vocal control that are evolutionary cousins to canines. 

Seals are among the few animal species known to have the exceptional vocal flexibility that allows them to mimic human voices. Sea lions have also demonstrated vocal plasticity, though on a more limited scale. The neurobiology underlying these capabilities, however, was previously unknown.

Senior author Gregory Berns, Emory professor of psychology, and first author Peter Cook, a former Emory postdoctoral fellow, used the technique of diffusion magnetic resonance imaging (MRI) on post-mortem animal brains, giving them a view of connective neural pathways across species. 

All the brains used in the study came from wild animals that died naturally in rehabilitation facilities or had to be euthanized due to injuries.


Related:

Accuracy test for protein language models shines light into AI 'black box'

Yana Bromberg, right, professor of biology and computer science, and R. Prabakaran, a postdoctoral fellow in the Bromberg lab, are developing computational techniques to study biological complexity. (Photo by Carol Clark)

AI language models, used to generate human-like text to power chatbots and create content, are also revolutionizing biology by treating complex biological data like a language. Language models are increasingly used, for example, to find patterns in DNA and proteins to make predictions and speed research into biological complexity. 

A critical gap, however, is the lack of a method to estimate the reliability of these predictions. 

Computational biologists at Emory University have bridged this gap, developing a simple way to test the accuracy of a language model’s understanding of proteins. Nature Methods published their system, which scores the reliability of a model’s predictions by comparing how it “embeds,” or numerically codifies, synthetic random proteins versus proteins found in nature. 
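The paper’s code is not reproduced here, but the core idea, checking whether a model embeds real proteins differently from random ones, can be sketched in a few lines of Python. Everything below is illustrative: the composition-based embed function is a toy stand-in for a real protein language model’s embedding, and reliability_score is a hypothetical helper, not the published framework.

```python
import random
from statistics import mean

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def embed(seq):
    # Placeholder embedding: amino-acid composition vector.
    # A real test would plug in a protein language model here.
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

def distance(u, v):
    # Euclidean distance between two embedding vectors.
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def random_protein(length, rng):
    # Synthetic sequence drawn uniformly from the 20 amino acids.
    return "".join(rng.choice(AMINO_ACIDS) for _ in range(length))

def reliability_score(query_seq, rng, n_random=200):
    # Embed a cloud of random sequences of the same length, then
    # measure how far the query's embedding sits from that cloud's
    # centroid. A larger distance means the embedding carries more
    # non-random signal for this sequence.
    randoms = [embed(random_protein(len(query_seq), rng))
               for _ in range(n_random)]
    centroid = [mean(col) for col in zip(*randoms)]
    return distance(embed(query_seq), centroid)
```

In this toy version, a highly non-random sequence (say, a poly-alanine repeat) lands far from the random-sequence centroid, while a sequence that looks random lands close to it, which is the qualitative contrast the scoring idea relies on.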

“To the best of our knowledge, our framework is the first generalized method to quantify protein sequence embedding reliability,” says Yana Bromberg, senior author of the paper and Emory professor of biology and computer science. 

“Our method is a simple, elegant solution to a complex problem,” adds R. Prabakaran, first author of the study and a postdoctoral fellow in the Bromberg lab. “It’s a foundational method with a lot of scope for a range of language models in science.” 


Related:


How the brain charts emotion in a map-like way

Co-authors Philip Kragel, assistant professor of psychology, and Yumeng Ma, a PhD student in Kragel's Emotion Cognition and Computation Lab. (Photos by Carol Clark)

It is well established in psychology that humans conceptualize emotions by features known as valence (the degree of pleasantness or unpleasantness) and arousal (the intensity of bodily reactions, such as rapid breathing or a racing heart). 

If you think of “pleasantness” as longitude and “bodily reaction” as latitude, you can imagine a “mental map,” with nodes that “chart” knowledge of emotion. 
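To make the longitude/latitude analogy concrete, here is a minimal Python sketch of such a map. The emotion labels and coordinates are invented for illustration, not values from the study.

```python
import math

# Each emotion is a node at (valence, arousal): valence is
# pleasantness, arousal is intensity of bodily reaction.
# The coordinates below are made up for this sketch.
EMOTIONS = {
    "calm":    (0.7, 0.2),
    "excited": (0.8, 0.9),
    "afraid":  (0.1, 0.9),
    "bored":   (0.3, 0.1),
}

def map_distance(a, b):
    # Straight-line distance between two nodes on the mental map.
    (v1, r1), (v2, r2) = EMOTIONS[a], EMOTIONS[b]
    return math.hypot(v1 - v2, r1 - r2)
```

On such a map, “excited” and “afraid” sit close together (both high arousal) despite differing in pleasantness, while “excited” and “bored” sit far apart on both axes.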

The neural mechanisms giving rise to this configuration, however, have remained unclear. 

Now, a new study reveals that hippocampal-prefrontal circuits — neural structures implicated in forming other types of cognitive maps — could support the mental mapping of emotion. 

Nature Communications published the research by neuroscientists at Emory University. The results showed how the hippocampus represents emotion concepts in a structured hierarchy of “nodes” of pleasantness and bodily reaction, while the ventromedial prefrontal cortex more accurately tracks relationships between these different nodes, or how they are distributed on the mental map.


Related: