Friday, April 3, 2026

Seals and sea lions provide clues to evolution of vocalization

A harbor seal relaxing on shore. (Wikimedia Commons, Charles J. Sharp)

Neuroscientists uncovered new insights into a key evolutionary question: Why can humans talk when most animals can’t? 

The journal Science published the research, led by Emory University and New College of Florida. The findings suggest that seals and sea lions may have gained vocal flexibility as a side effect of evolving a brain “bypass” for voluntary breath control, the same adaptation that allowed them to take to aquatic life.

The comparative study examined the brains of coyotes along with those of sea lions, elephant seals and harbor seals — marine carnivores with varying degrees of vocal control that are evolutionary cousins to canines. 

Seals are among the few animal species known to possess vocal flexibility advanced enough to mimic human speech. Sea lions have also demonstrated vocal plasticity, though on a more limited scale. The neurobiology underlying these capabilities, however, was unknown.

Senior author Gregory Berns, Emory professor of psychology, and first author Peter Cook, a former Emory postdoctoral fellow, used the technique of diffusion magnetic resonance imaging (MRI) on post-mortem animal brains, giving them a view of connective neural pathways across species. 

All the brains used in the study came from wild animals that died naturally in rehabilitation facilities or had to be euthanized due to injuries.



Accuracy test for protein language models shines light into AI 'black box'

Yana Bromberg, right, professor of biology and computer science, and R. Prabakaran, a postdoctoral fellow in the Bromberg lab, are developing computational techniques to study biological complexity. (Photo by Carol Clark)

AI language models, used to generate human-like text to power chatbots and create content, are also revolutionizing biology by treating complex biological data like a language. Language models are increasingly used, for example, to find patterns in DNA and proteins to make predictions and speed research into biological complexity. 

A critical gap, however, is the lack of a method to estimate the reliability of these predictions. 

Computational biologists at Emory University have bridged this gap, developing a simple way to test the accuracy of a language model’s understanding of proteins. Nature Methods published their system, which scores the reliability of a model’s predictions by comparing how it “embeds,” or numerically codifies, synthetic random proteins versus proteins found in nature. 
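The Emory system itself is not detailed in this summary, but its core idea, checking whether a model embeds natural sequences differently from random ones, can be sketched in miniature. In this toy version the “embedding” is just a normalized k-mer count vector, and the reliability score is the average distance between each natural sequence and shuffled versions of itself; the real framework uses learned language-model embeddings, and all function names below are hypothetical illustrations.

```python
import random
from collections import Counter

def kmer_embedding(seq, k=2):
    """Toy 'embedding': normalized k-mer counts, a stand-in for a
    learned protein language-model embedding."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def embedding_distance(e1, e2):
    """L1 distance between two sparse embedding vectors."""
    keys = set(e1) | set(e2)
    return sum(abs(e1.get(key, 0.0) - e2.get(key, 0.0)) for key in keys)

def reliability_score(natural_seqs, k=2, n_shuffles=20, seed=0):
    """Average embedding distance between each natural sequence and
    random shuffles of itself (same composition, scrambled order)."""
    rng = random.Random(seed)
    distances = []
    for seq in natural_seqs:
        nat = kmer_embedding(seq, k)
        for _ in range(n_shuffles):
            letters = list(seq)
            rng.shuffle(letters)
            shuffled = kmer_embedding("".join(letters), k)
            distances.append(embedding_distance(nat, shuffled))
    return sum(distances) / len(distances)
```

A larger separation between natural and shuffled sequences suggests the embedding reflects genuine sequence structure rather than amino-acid composition alone, which is the intuition behind scoring embedding reliability.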

“To the best of our knowledge, our framework is the first generalized method to quantify protein sequence embedding reliability,” says Yana Bromberg, senior author of the paper and Emory professor of biology and computer science. 

“Our method is a simple, elegant solution to a complex problem,” adds R. Prabakaran, first author of the study and a postdoctoral fellow in the Bromberg lab. “It’s a foundational method with a lot of scope for a range of language models in science.” 




How the brain charts emotion in a map-like way

Co-authors Philip Kragel, assistant professor of psychology, and Yumeng Ma, a PhD student in Kragel's Emotion Cognition and Computation Lab. (Photos by Carol Clark)

It is well established in psychology that humans conceptualize emotions by features known as valence (the degree of pleasantness or unpleasantness) and arousal (the intensity of bodily reactions, such as rapid breathing or a racing heart). 

If you think of “pleasantness” as longitude and “bodily reaction” as latitude, you can imagine a “mental map,” with nodes that “chart” knowledge of emotion. 

The neural mechanisms giving rise to this configuration, however, have remained unclear. 

Now, a new study reveals that hippocampal-prefrontal circuits — neural structures implicated in forming other types of cognitive maps — could support the mental mapping of emotion. 

Nature Communications published the research by neuroscientists at Emory University. The results showed how the hippocampus represents emotion concepts in a structured hierarchy of “nodes” of pleasantness and bodily reaction, while the ventromedial prefrontal cortex more accurately tracks relationships between these different nodes, or how they are distributed on the mental map.




Monday, March 16, 2026

Turning over a new leaf in analyses of natural products

Emory graduate student William Crandall loves working at the nexus of nature and cutting-edge technology. (Photo by Tharanga Samarakoon)

Scientists have developed a new way to understand what happens in the body when people consume a plant product and the many chemicals it contains. The American Chemical Society’s Journal of Natural Products published the method, developed at Emory University, for quickly analyzing the effects of a natural product.

As a test case, the paper focused on biotransformation of chemicals from the kratom plant by human liver cells in a laboratory dish. The researchers developed an automated method — based on high-resolution mass spectrometry and molecular network mapping — to gain a detailed, big-picture view of the resulting metabolites, or chemicals produced. 
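The paper’s pipeline runs on high-resolution mass spectrometry data, but the molecular network mapping step can be illustrated in miniature: each spectrum becomes a node, and edges connect spectra whose peak patterns are similar, so structurally related metabolites cluster together. This toy sketch uses plain cosine similarity on exact m/z matches; real molecular networking tools (such as GNPS) additionally allow mass-shifted peak matching to catch chemical modifications. The function names and threshold here are illustrative assumptions, not the authors’ implementation.

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Cosine similarity between two mass spectra, each given as a
    dict mapping m/z value -> peak intensity."""
    shared = set(spec_a) & set(spec_b)
    dot = sum(spec_a[mz] * spec_b[mz] for mz in shared)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def molecular_network(spectra, threshold=0.7):
    """Build a toy molecular network: nodes are spectrum IDs, and an
    edge links any pair whose spectral cosine similarity meets the
    threshold, grouping likely related metabolites."""
    ids = list(spectra)
    edges = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            sim = cosine_similarity(spectra[a], spectra[b])
            if sim >= threshold:
                edges.append((a, b, round(sim, 3)))
    return edges
```

In a network like this, a parent compound and its metabolites tend to share fragment peaks and therefore end up linked, which is what lets the method follow dozens of compounds through metabolism at once.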

The new, streamlined methodology can be broadly applied to nutrition and dietary supplement research, filling a critical gap in the field. 

“Plants evolved extraordinarily complex chemical defenses and signaling systems,” says Cassandra Quave, co-senior author of the study and professor of dermatology at Emory School of Medicine and the Center for the Study of Human Health. “Our new approach in molecular mapping gives us a way to follow how that chemical complexity is reshaped by human metabolism.” 

“Our technique does not just look at how one compound in this plant is metabolized,” adds William Crandall, first author of the study and a PhD student in molecular and systems pharmacology in Emory’s Laney Graduate School. “It shows how dozens of compounds are metabolized at one time.”

“This method marks a major, transformative step in natural products research,” says Dean Jones, co-senior author of the paper and professor in Emory School of Medicine. “A process that used to require years of work now takes just days.”



Wednesday, January 14, 2026

'Periodic table' for AI methods aims to drive innovation

Eslam Abdelaleem led the work as an Emory graduate student. The day of the final breakthrough, the AI health tracker on his watch recorded his racing heart as three hours of cycling. "That's how it interpreted the level of excitement I was feeling," Abdelaleem says. (Photo by Barbara Conner)

Artificial intelligence is increasingly used to integrate and analyze multiple types of data, such as text, images, audio and video. One challenge slowing advances in multimodal AI, however, is choosing the algorithmic method best aligned with the specific task an AI system needs to perform.

Scientists have developed a unified view of AI methods aimed at systematizing this process. The Journal of Machine Learning Research published the new framework, developed by physicists at Emory University, for deriving such algorithms.

“We found that many of today’s most successful AI methods boil down to a single, simple idea — compress multiple kinds of data just enough to keep the pieces that truly predict what you need,” says Ilya Nemenman, Emory professor of physics and senior author of the paper. “This gives us a kind of ‘periodic table’ of AI methods. Different methods fall into different cells, based on which information a method’s loss function retains or discards.”
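The “compress just enough” idea Nemenman describes is, in spirit, the information bottleneck principle from information theory. As a general illustration (the paper’s framework may be broader than this classic form), the bottleneck objective compresses an input X into a representation Z while preserving information about a target Y:

```latex
\mathcal{L}_{\mathrm{IB}} \;=\; I(X;Z) \;-\; \beta\, I(Z;Y)
```

Here I(·;·) denotes mutual information and β sets how much predictive information is retained relative to how aggressively the data are compressed; which information terms a method’s loss keeps or discards is what places it in a particular “cell” of the table.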