Wednesday, April 14, 2021

Physicists develop theoretical model for neural activity of mouse brain

"One of the wonderful things about our model is that it's simple," says Mia Morrell, who did the research as an Emory senior majoring in physics. Morrell graduated last year and is now in New Mexico, above, where she is completing a post-baccalaureate physics program at Los Alamos National Laboratory.

By Carol Clark

The dynamics of the neural activity of a mouse brain behave in a peculiar, unexpected way that can be theoretically modeled without any fine tuning, suggests a new paper by physicists at Emory University. Physical Review Letters published the research, which adds to the evidence that theoretical physics frameworks may aid in the understanding of large-scale brain activity. 

“Our theoretical model agrees with previous experimental work on the brains of mice to a few percent accuracy — a degree which is highly unusual for living systems,” says Ilya Nemenman, Emory professor of physics and biology and senior author of the paper. 

The first author is Mia Morrell, who did the research for her honors thesis as an Emory senior majoring in physics. She graduated from Emory last year and is now in a post-baccalaureate physics program at Los Alamos National Laboratory in New Mexico. 

“One of the wonderful things about our model is that it’s simple,” says Morrell, who will start a Ph.D. program in physics at New York University in the fall. “A brain is really complex. So to distill neural activity to a simple model and find that the model can make predictions that so closely match experimental data is exciting.” 

The new model may have applications for studying and predicting a range of dynamical systems that have many components and have varying inputs over time, from the neural activity of a brain to the trading activity of a stock market. 

Co-author of the paper is Audrey Sederberg, a former post-doctoral fellow in Nemenman’s group, who is now on the faculty at the University of Minnesota. 

The work is based on a physics concept known as critical phenomena, used to explain phase transitions in physical systems, such as water changing from liquid to a gas. 

In liquid form, water molecules are strongly correlated with one another. In a solid, they are locked into the predictable, repeating pattern of a crystal lattice. In a gas phase, however, every molecule is moving about on its own. 

“At what is known as a critical point for a liquid, you cannot distinguish whether the material is liquid or vapor,” Nemenman explains. “The material is neither perfectly ordered nor disordered. It’s neither totally predictable nor totally unpredictable. A system at this ‘just right’ Goldilocks spot is said to be ‘critical.’” 

Very high temperature and pressure generate this critical point for water. And the structure of critical points is the same in many seemingly unrelated systems. For example, water transitioning into a gas and a magnet losing its magnetism as it is heated up are described by the same critical point, so the properties of these two transitions are similar. 

In order to actually observe a material at a critical point to study its structure, physicists must tightly control experiments, adjusting the parameters to within an extraordinarily precise range, a process known as fine-tuning. 

In recent decades, some scientists began thinking about the human brain as a critical system. Experiments suggest that brain activity lies in a Goldilocks spot — right at a critical transition point between perfect order and disorder. 

“The neurons of the brain don’t function just as one big unit, like an army marching together, but they are also not behaving like a crowd of people running in all different directions,” Nemenman says. “The hypothesis is that, as you increase the effective distance between neurons, the correlations between their activity are going to fall, but they will not fall to zero. The entire brain is coupled, acting like a big, interdependent machine, even while individual neurons vary in their activity.” 
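
The distinction the hypothesis draws can be illustrated numerically: at a critical point, correlations fall off as a power law, slowly and with no characteristic length scale, whereas away from criticality they decay exponentially and vanish beyond a correlation length. A minimal sketch, with an illustrative exponent and length scale not fitted to any data:

```python
import numpy as np

r = np.linspace(1, 100, 100)           # effective distance between neurons
critical = r ** -0.5                   # power-law decay: slow, scale-free
off_critical = np.exp(-r / 5.0)        # exponential decay: negligible beyond ~5 units

# At large distances the power law retains sizable correlations,
# while the exponential curve has fallen essentially to zero.
```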

Researchers began searching for actual signals of critical phenomena within brains. They explored a key question: What fine-tunes the brain to reach criticality? 

In 2019, a team at Princeton University recorded neurons in the brain of a mouse as it was running in a virtual maze. They applied theoretical physics tools developed for non-living systems to the neural activity data from the mouse brain. Their results suggested that the neural activity exhibits critical correlations, allowing predictions about how different parts of the brain will correlate with one another over time and over effective distances within the brain. 

For the current paper, the Emory researchers wanted to test whether fine-tuning of particular parameters was necessary for the observation of criticality in the mouse brain experiments, or whether the critical correlations could arise simply from the brain receiving stimuli from the external world. The idea came from previous work that Nemenman’s group collaborated on, explaining how biological systems can exhibit Zipf’s law — a unique pattern of activity found in disparate systems. 

“We previously created a model that showed Zipf’s law in a biological system, and that model did not require fine tuning,” Nemenman says. “Zipf’s law is a particular form of criticality. For this paper, we wanted to make that model a bit more complicated, to see if it could predict the specific critical correlations observed in the mouse experiments.” 
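
Zipf's law says that the probability of a system's states falls off inversely with their rank, so on a log-log plot of probability versus rank the data trace a straight line of slope -1. A minimal numerical check of that signature (illustrative only, not the group's model):

```python
import numpy as np

ranks = np.arange(1, 1001)
p = 1.0 / ranks            # Zipf's law: probability inversely proportional to rank
p /= p.sum()               # normalize into a probability distribution

# Fit a line to log(probability) vs. log(rank); Zipf's law predicts slope -1
slope = np.polyfit(np.log(ranks), np.log(p), 1)[0]
```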

The model’s key ingredient is a set of a few hidden variables that modulate how likely individual neurons are to be active. 
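
That ingredient can be sketched in a few lines, with made-up sizes, timescales and couplings rather than the paper's fitted values: a few slowly varying hidden fields feed into every neuron, and the shared input alone, with no direct neuron-to-neuron coupling, induces broad correlations across the population.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_latent, n_steps = 200, 5, 2000
dt, tau = 0.1, 10.0                     # time step and hidden-field timescale

# Illustrative random couplings from each hidden field to each neuron
J = rng.normal(0.0, 1.0, size=(n_neurons, n_latent))

h = np.zeros(n_latent)                  # hidden variables: slow Ornstein-Uhlenbeck noise
spikes = np.zeros((n_steps, n_neurons))
for t in range(n_steps):
    h += (-h / tau) * dt + rng.normal(0.0, np.sqrt(dt), n_latent)
    p = 1.0 / (1.0 + np.exp(-J @ h))    # each neuron's probability of being active
    spikes[t] = rng.random(n_neurons) < p

# Neurons are never directly coupled, yet the shared hidden input
# produces widespread pairwise correlations in their activity.
corr = np.corrcoef(spikes.T)
```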

Morrell wrote the computer code to run simulations and test the model on her home desktop computer. “The biggest challenge was to write the code in a way that would allow it to run fast when simulating a large system with limited computer memory, without a huge server,” she says. 

The model was able to closely reproduce the experimental results in the simulations. The model does not require the careful tuning of parameters, generating activity that is apparently critical by any measure over a wide range of parameter choices. 

“Our findings suggest that, if you do not view a brain as existing on its own, but you view it as a system receiving stimuli from the external world, then you can have critical behavior with no need for fine tuning,” Nemenman says. “It raises the question of whether something similar could apply to non-living physical systems. It makes us re-think the very notion of criticality, which is a fundamental concept in physics.” 

The computer code for the model is now available online, so that anyone with a laptop computer can download and run it to simulate a dynamic system with varying inputs over time. 

“The model we developed may apply beyond neuroscience, to any system in which widespread coupling to hidden variables is extant,” Nemenman says. “Data from many biological or social systems are likely to appear critical via the same mechanism, without fine-tuning.” 

The current paper was partially supported by grants from the National Institutes of Health and the National Science Foundation.

Related:

Physicists eye neural fly data, find formula for Zipf's law

Biophysicists take small step in quest for 'robot scientist'

Tuesday, April 6, 2021

Chemists develop tools that may help improve cancer diagnostics, therapeutics

A process known as methylation helps regulate on-and-off switches to keep a host of systems in the body functioning normally. "But the process can get hijacked, creating modifications that may lead to diseases," explains Ogonna Nwajiobi (above), an Emory Ph.D. student in chemistry and first author of the paper.

By Carol Clark

Chemists have developed a method to detect changes in proteins that may signal the early stages of cancer, Alzheimer’s, diabetes and other major diseases. Angewandte Chemie published the work, led by chemists at Emory University and Auburn University. The results offer a novel strategy for studying links between unique protein modifications and various pathologies. 

“The knowledge we gain using our new, chemical method holds the potential to improve the ability to detect diseases such as lung cancer earlier, when treatment may be more effective,” says Monika Raj, senior author of the paper and Emory associate professor of chemistry. “A detailed understanding of protein modifications may also help guide personalized, targeted treatment for patients to improve a drug’s efficacy against cancer.” 

The researchers provided a proof of concept for using their method to detect single protein modifications, or monomethylation. Their lab experiments were conducted on lysine in proteins expressed from E. coli and other non-human organisms. 

Lysine is one of the nine essential amino acids that are critical to life. After lysine is incorporated into proteins in the human body, chemical changes to those proteins, known as methylation, can occur. Methylation is a biochemical process that transfers a methyl group (one carbon atom and three hydrogen atoms) from one molecule to another. Such modifications can occur in single (monomethylation), double (dimethylation) or triple (trimethylation) forms. Demethylation reverses these modifications. 

The small tweaks of methylation and demethylation regulate biological on-off switches for a host of systems in the body, such as metabolism and DNA production. 

“In a normal state, the methylation process creates modifications that are needed to keep your body functioning and healthy,” says Ogonna Nwajiobi, an Emory Ph.D. student in chemistry and first author of the paper. “But the process can get hijacked, creating modifications that may lead to diseases.”

Modifications to lysine, in particular, he adds, have been linked to the development of many cancers and other diseases in humans. 

Sriram Mahesh, from Auburn University, is co-first author of the paper. Xavier Streety, also from Auburn, is a co-author. 

The Raj lab, which specializes in developing organic chemistry tools to understand and solve problems in biology, wanted to devise a method to detect monomethylation marks to lysine that have been expressed by an organism. Monomethylation is especially challenging to detect since it leaves negligible changes in the bulk, charge or other characteristics of the modified lysine residue.

The researchers devised chemical probes, electron-rich diazonium ions, that couple only with monomethylation sites under certain biocompatible conditions that they can control, including a particular pH level and electron density. They used mass spectrometry and nuclear magnetic resonance techniques to show that they had selectively hit the correct targets, and to confirm the coupling of atoms at the sites. 

The method is unique because it directly targets the monomethylation sites. Another unique feature of the method is that it is reversible under acidic conditions, allowing the researchers to uncouple the atoms and regenerate the original state of a monomethylation site. 

The Raj lab now plans to collaborate with researchers at Emory’s Winship Cancer Institute to test the new method on tissue samples taken from lung cancer patients. The goal is to home in on differences in lysine monomethylation sites of people with and without lung cancer. 

“It’s like a fishing expedition,” Nwajiobi explains. “The first step is to use our method to find the lysine monomethylation sites in tissue samples, which is difficult to do because of their low abundance. Once we’ve found the sites, our method then allows us to reverse the coupling with our chemical probe, so the functions of the sites can be studied in their intact, original forms.” 

Practical methods for early detection of many diseases, like lung cancer, are needed to help improve patient outcomes. “If we can develop more ways to identify lung cancer earlier, that may open the door for treatments that greatly improve the survival rate,” Raj says. 

The researchers hope to study lysine monomethylation differences between samples taken from patients at different stages of lung cancer, between patients with or without a family history of the disease, and between those who have smoked and those who have not. Knowledge gained from such analyses could set the stage for more personalized, targeted treatments, Raj says. 

Her lab is also developing chemical tools to selectively detect lysine dimethylation and trimethylation sites, in order to help more fully characterize the role of lysine methylation in disease. 

“We hope that other researchers will also apply our methods, and the chemical tools we are developing, to better understand a range of cancers and many other diseases associated with lysine methylation,” Raj says. 

The work was funded by the National Science Foundation.

Related:

Biologists unravel another mystery of what makes DNA go 'loopy'

Tuesday, March 30, 2021

Screams of 'joy' sound like 'fear' when heard out of context

"Our work intertwines language and non-verbal communication in ways that haven't been done in the past," says Emory psychologist Harold Gouzoules, senior author of the study.

By Carol Clark

People are adept at discerning most of the different emotions that underlie screams, such as anger, frustration, pain, surprise or fear, finds a new study by psychologists at Emory University. Screams of happiness, however, are more often interpreted as fear when heard without any additional context, the results show. 

PeerJ published the research, the first in-depth look at the human ability to decode the range of emotions tied to the acoustic cues of screams. 

“To a large extent, the study participants were quite good at judging the original context of a scream, simply by listening to it through headphones without any visual cues,” says Harold Gouzoules, Emory professor of psychology and senior author of the study. “But when participants listened to screams of excited happiness they tended to judge the emotion as fear. That’s an interesting, surprising finding.” 

First author of the study is Jonathan Engelberg, an Emory Ph.D. student in psychology. Emory alum Jay Schwartz, who is now on the faculty of Western Oregon University, is co-author. 

The acoustic features that seem to communicate fear are also present in excited, happy screams, the researchers note. “In fact, people pay good money to ride roller coasters, where their screams no doubt reflect a blend of those two emotions,” Gouzoules says. 

He adds that the bias towards interpreting both of these categories as fear likely has deep, evolutionary roots. 

“The first animal screams were probably in response to an attack by a predator,” he says. “In some cases, a sudden, loud high-pitched sound might startle a predator and allow the prey to escape. It’s an essential, core response. So mistaking a happy scream for a fearful one could be an ancestral carryover bias. If it’s a close call, you’re going to err on the side of fear.” 

The findings may even provide a clue to the age-old question of why young children often scream while playing. 

“Nobody has really studied why young children tend to scream frequently, even when they are happily playing, but every parent knows that they do,” Gouzoules says. “It’s a fascinating phenomenon.” 

While screams can convey strong emotions, they are not ideal as individual identifiers, since they lack the more distinctive and consistent acoustic parameters of an individual’s speaking voice. 

“It’s just speculative, but it may be that when children scream with excitement as they play, it serves the evolutionary role of familiarizing a parent to the unique sound of their screams,” Gouzoules says. “The more you hear your child scream in a safe, happy context, the better able you are to identify a scream as belonging to your child, so you will know to respond when you hear it.” 

Gouzoules first began researching the screams of non-human primates decades ago. Most animals scream only in response to a predator, although some monkeys and apes also use screams to recruit support when they are in a fight with other group members. “Their kin and friends will come to help, even if some distance away, when they can recognize the vocalizer,” he says. 

In more recent years, Gouzoules has turned to researching human screams, which occur in a much broader context than those of animals. His lab has collected screams from Hollywood movies, TV shows and YouTube videos. They include classic performances by “scream queens” like Jamie Lee Curtis, along with the screams of non-actors reacting to actual events, such as a woman shrieking in fear as aftershocks from a meteor that exploded over Russia shake a building, or a little girl’s squeal of delight as she opens a Christmas present. 

In previous work, the lab has quantified tone, pitch and frequency for screams from a range of emotions: anger, frustration, pain, surprise, fear and happiness. 

For the current paper, the researchers wanted to test the ability of listeners to decode the emotion underlying a scream, based solely on its sound. A total of 182 participants listened through headphones to 30 screams from movies that were associated with one of the six emotions. All of the screams were presented six times, although never in sequence. After hearing a scream, the listeners rated how likely it was to be associated with each of the six emotions, on a scale of one to five. 
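
The core analysis in a design like this can be sketched with hypothetical data: average each scream's ratings across listeners, then take the top-rated emotion as the "decoded" label and compare it to the scream's original context. All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
emotions = ["anger", "frustration", "pain", "surprise", "fear", "happiness"]
n_listeners, n_screams = 182, 30
true_emotion = rng.integers(0, 6, n_screams)       # hypothetical context of each scream

# Hypothetical 1-to-5 ratings: every listener rates each scream on all six
# emotions, with the true emotion tending to draw higher ratings
ratings = rng.integers(1, 4, size=(n_listeners, n_screams, 6)).astype(float)
ratings[:, np.arange(n_screams), true_emotion] += 2.0

mean_ratings = ratings.mean(axis=0)                 # (screams, emotions)
decoded = mean_ratings.argmax(axis=1)               # top-rated emotion per scream
accuracy = (decoded == true_emotion).mean()
```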

The results showed that the participants most often matched a scream to its correct emotional context, except in the case of screams of happiness, which participants more often rated highly for fear. 

“Our work intertwines language and non-verbal communication in a way that hasn’t been done in the past,” Gouzoules says. 

Some aspects of non-verbal vocal communication are thought to be precursors for language. The researchers hypothesize that it may be that the cognitive underpinnings for language also built human capacity in the non-verbal domain. “It’s probably language that gives us this ability to take a non-verbal vocalization and discern a wide range of meanings, depending on the acoustic cues,” Gouzoules says.

Related:

Screams contain a 'calling card' for the vocalizer's identity

What is a scream? The acoustics of a primal human call

Sunday, March 21, 2021

Heritable traits that appear in teen years raise risk for adult cannabis use

Some of the risk for repeated cannabis use into adulthood can be attributed to the genetic effects of neuroticism, risk tolerance and depression, the study found. "While this work marks an important step in identifying genetic factors that can increase the risk for cannabis use, a substantial portion of the factors that raise the risk remain unexplained," says Emory psychologist Rohan Palmer.

By Carol Clark

While some youth experiment with marijuana but don’t go on to long-term use, others develop a problematic pot habit that continues into adulthood. A major new analysis shows that at least a small portion of the risk for developing into an adult marijuana user may be related to inherited behaviors and traits that appear during adolescence. 

The journal Addiction published the findings by researchers at Emory and Brown University. 

“Our analysis suggests that some early adolescent behaviors and traits — like depression, neuroticism and acting out — can be indicative of cannabis use later in life,” says Rohan Palmer, senior author of the paper and assistant professor in Emory’s Department of Psychology, where he heads the Behavioral Genetics of Addiction Laboratory. 

“Decades of research has shown that behaviors can have a genetic component,” adds Leslie Brick, lead author and assistant professor in the Department of Psychiatry and Human Behavior in Brown’s Alpert Medical School. “And while there is not one genetically-influenced trait that determines whether you’re going to be a long-term cannabis user, our paper indicates that there are polygenic effects across multiple inherited behaviors and traits that show a propensity for increased risk.” 

Brick, a long-time collaborator with Palmer, also holds an adjunct faculty appointment in Emory’s Department of Psychology. 

The Transmissible Liability Index is a well-known measure of a constellation of heritable traits, emerging during the developmental years, that are associated with the risk of a substance use disorder. For the current paper, the researchers wanted to tease out which of these heritable characteristics might be associated with repeated marijuana use later in life. 

“Cannabis use has been less studied than tobacco and alcohol,” Palmer says. “For one thing, it’s harder to get people to answer detailed questionnaires honestly about cannabis, since it’s an illegal substance. And it’s also much more difficult to standardize the amount of cannabis consumed, as compared to cigarettes and liquor.” 

Cannabis use, however, is widespread among adolescents and young adults. In 2018, more than 35 percent of high school seniors surveyed reported having used marijuana during the past year and more than 20 percent reported doing so during the past month, according to the National Institute on Drug Abuse (NIDA). 

As cultural norms have shifted, including the legalization of marijuana for adult recreational use in many states, teens’ perceptions of the risks of marijuana use have declined. 

Those risks, however, are real. 

“Adolescence is a major period of brain development,” Brick says. “In fact, our brains don’t stop developing until we are around 25 years old. Research indicates that cannabis has some major impacts on our biology, although its full effects are still not well understood.” 

The researchers drew data from the National Longitudinal Study of Adolescent Health, or Add Health, which includes a nationally representative sample of 20,000 adolescents in grades 7 to 12 in the United States who have been followed into adulthood. Comprehensive data from early adolescence to adulthood was collected on health and health-related behavior, including substance use, personality and genetics. 

For the current paper, the researchers identified a large homogenous subgroup of individuals from the Add Health study, about 5,000 individuals of European ancestry, for their final analytic sample. They then leveraged existing genome-wide association studies to examine whether certain heritable behavioral traits noted during adolescence were associated with the Transmissible Liability Index, and whether any of these traits were also associated with risk for later cannabis use. 

The results showed that a small portion of the risk for repeated cannabis use into adulthood can be attributed to the genetic effects of neuroticism, risk tolerance and depression that can appear during adolescence. 

“While this work marks an important step in identifying genetic factors that can increase the risk for cannabis use, a substantial portion of factors that raise the risk remain unexplained,” Palmer says. “We’ve shown how you can use existing data to assess the utility of a polygenic risk score. More studies are needed to continue to identify unique genetic and other environmental sources for the risk of long-term, problematic use of cannabis.” 
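
A polygenic risk score itself is simple arithmetic: for each person, sum their allele counts across many genetic variants, each weighted by an effect size estimated in a genome-wide association study. A toy sketch with simulated genotypes and made-up effect sizes, not the study's actual variants or weights:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_snps = 1000, 500

# Simulated genotypes: 0, 1 or 2 copies of the risk allele at each variant
genotypes = rng.integers(0, 3, size=(n_people, n_snps))

# Made-up per-variant effect sizes, standing in for GWAS estimates
effects = rng.normal(0.0, 0.01, n_snps)

# Each person's polygenic risk score: effect-weighted sum of allele counts
prs = genotypes @ effects
```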

“Better understanding of what behaviors and traits may give someone a pre-disposition for long-term cannabis use gives us a better shot at identifying those most at risk so we can home in on effective interventions,” Brick says. 

A major limitation of the current study, the researchers add, is that it focused on individuals of European ancestry, because no sample size large enough for the genome-wide analysis was available for other ancestral groups. 

Co-authors of the study include the following members of Emory’s Behavioral Genetics of Addiction Laboratory: Graduate students Lauren Bertin, Kathleen Martin and former undergraduate Victoria Risner (now an Emory alum); and Chelsie Benca-Bachman, associate director of research projects in the lab. 

The work was supported by an Avenir grant from the National Institute on Drug Abuse.

Tuesday, March 9, 2021

Water temperature key to schistosomiasis risk and prevention strategies

Karena Nguyen, a post-doctoral fellow in Emory's Department of Biology, shown with two of the freshwater snails that serve as intermediate hosts for the parasites that cause schistosomiasis. (Photo by Rachel Hartman)

By Carol Clark

About one billion people worldwide are at risk for schistosomiasis — a debilitating disease caused by parasitic worms that live in fresh water and in intermediate snail hosts. A new study finds that the transmission risk for schistosomiasis peaks when water warms to 21.7 degrees centigrade, and that the most effective interventions should include snail removal measures implemented when the temperature is below that risk threshold. 

The Proceedings of the National Academy of Sciences published the results, led by Emory University, the University of South Florida and the University of Florida. 

“We’ve shown how and why temperature matters when it comes to schistosomiasis transmission risk,” says Karena Nguyen, a post-doctoral fellow in Emory University’s Department of Biology and a first author of the study. “If we really want to maximize human health outcomes, we need to consider disease transmission in the context of regional temperatures and other environmental factors when developing intervention strategies.” 

The findings indicate that climate change will increase schistosomiasis risk in regions where surface water moves closer to 21.7 degrees centigrade, or 71 degrees Fahrenheit. The researchers also found, however, that implementing snail control measures decreases transmission but raises the temperature for peak transmission risk to 23 degrees centigrade, or 73 degrees Fahrenheit. 

Co-first author of the paper is Philipp Boersch-Supan, an expert in ecological systems at the University of Florida and the British Trust for Ornithology. 

Nguyen is a member of the lab of David Civitello, Emory assistant professor of biology and a co-author of the PNAS paper. The Civitello lab studies the ecological dynamics of disease, aquatics and agricultural ecology through a combination of experiments, field surveys and models. 

“The control of schistosomiasis currently relies on treating infected people,” Civitello says. “However, there is renewed awareness that the ecological factors surrounding the disease also need to be considered. Our paper is a beautiful example of the potential power of uniting ecology with human disease interventions and control measures.” 


Graphic, above: The life cycle of the schistosomiasis parasite.

Schistosomiasis is one of the most devastating water-based diseases in developing countries, with more than 200 million people infected worldwide, leading to around 200,000 deaths annually. It is caused by Schistosoma parasites that have a complex life cycle. Freshwater becomes contaminated by the parasite’s eggs when infected people urinate or defecate in the water. After the eggs hatch, the parasites enter freshwater snails, where they develop and multiply. The mature parasites then leave the snails and re-enter the water. These free-swimming parasites can burrow into the skin of people who are wading, swimming, bathing, washing or doing agricultural work in contaminated water.

Children who are repeatedly infected can develop anemia, malnutrition and learning difficulties. Over the long term, the parasites can also damage the liver, intestine, lungs and bladder. 

“Schistosomiasis is treatable — people can take a drug to get rid of the adult parasites in their bodies,” Nguyen says. “But in areas where schistosomiasis is prevalent, people can easily get reinfected by coming in contact with contaminated water. And children, who like to play in water, tend to have the highest burden of the disease.” 

For the current paper, Nguyen focused on how global climate change and rising water temperatures might affect each stage of the schistosomiasis transmission cycle. It was already established that both the parasites and the snails are sensitive to water temperature, with each stage having an optimum temperature. 

“I wanted to build on previous work to see if we could use it to find better predictors for human risk and more effective interventions,” Nguyen says. 

The researchers integrated an epidemiological model of schistosomiasis and temperature-dependent traits of the parasites and their snail hosts to run different computer-simulated interventions. The results showed that interventions targeting snails were most effective at reducing transmission, and pinpointed the water temperature for when the risk of transmission peaks. 

Unexpectedly, the simulations also showed that interventions targeting snail removal actually raised the peak transmission temperature by 1.3 degrees centigrade, while reducing transmission risk. 

“That may not sound like a lot,” Nguyen says, “but we’re talking about water temperature, which takes a lot of energy to warm, so 1.3 degrees is actually a big shift.” 

Snails naturally start to die off at higher water temperatures. The data in the new paper shows how implementing snail control measures, such as through chemical treatment of the water, amplifies snail mortality at all temperatures. This lowers transmission risk overall, but allows peak transmission risk to occur at higher temperatures. 
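
The mechanism can be sketched with a toy transmission index: a unimodal parasite-performance curve divided by a snail death rate that rises with temperature. Adding extra mortality at all temperatures (standing in for snail control) lowers the index everywhere but moves its peak to a warmer temperature. The curves below are illustrative stand-ins, not the paper's fitted trait data:

```python
import numpy as np

T = np.linspace(10.0, 35.0, 2501)                          # water temperature, degrees C

# Illustrative stand-ins for temperature-dependent traits:
performance = np.exp(-((T - 20.0) ** 2) / (2 * 4.0 ** 2))  # parasite performance curve
mortality = 0.02 * np.exp(0.15 * (T - 10.0))               # snail death rate rises with T

def peak_temperature(extra_mortality):
    """Temperature at which a simple transmission index peaks."""
    index = performance / (mortality + extra_mortality)
    return T[np.argmax(index)]

baseline = peak_temperature(0.0)     # no snail control
controlled = peak_temperature(0.2)   # snail control adds mortality at all temperatures
# controlled > baseline: control shifts the peak-risk temperature upward
```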

These insights can guide public health workers in timing their interventions, by factoring in regional water temperatures and how they fluctuate during different seasons of the year. 

“Our findings don’t mean that we should stop human treatment for schistosomiasis,” Nguyen says. “Instead, it will likely be beneficial to include both the human and ecological components. By combining human drug treatment with snail removal measures, during times when water is below the peak transmission temperature, we may be able to maximize the efficacy of an intervention.” 

Additional authors of the PNAS paper include Jason Rohr (University of Notre Dame), Valerie Harwood (University of South Florida), Rachel Hartman (Emory staff) and Emory graduate student Sandra Mendiola. 

The work was funded by the National Institutes of Health, the National Science Foundation, the Porter Foundation and the U.S. Department of Agriculture.
