Showing posts with label Physics. Show all posts

Tuesday, August 25, 2015

Biophysicists take small step in quest for 'robot scientist'

The researchers dubbed their algorithm "Sir Isaac," in a nod to one of the greatest scientists of all time, Sir Isaac Newton. 

By Carol Clark

Biophysicists have taken another small step forward in the quest for an automated method to infer models describing a system’s dynamics – a so-called robot scientist. Nature Communications published the finding – a practical algorithm for inferring laws of nature from time-series data of dynamical systems.

“Our algorithm is a small step,” says Ilya Nemenman, lead author of the study and a professor of physics and biology at Emory University. “It could be described as a toy version of a robot scientist, but even so it may have practical applications. For the first time, we’ve taught a computer how to efficiently search for the laws that underlie arbitrary, natural dynamical systems, including complex, non-linear biological systems.”

Nemenman’s co-author on the paper is Bryan Daniels, a biophysicist at the University of Wisconsin.

Everything that is changing around us and within us – from the relatively simple motion of celestial bodies, to weather and complex biological processes – is a dynamical system. A large part of science is guessing the laws of nature that underlie such systems, summarizing them in mathematical equations that can be used to make predictions, and then testing those equations and predictions through experiments.

“The long-term dream is to harness large-scale computing to make the guesses for us and speed up the process of discovery,” Nemenman says.

Isaac Newton contemplates gravity beneath an apple tree. The intuition of a genius like Newton is one quality that distinguishes human intelligence from even the highest-powered computer and algorithmic program.

While the quest for a true robot scientist, or computerized general intelligence, remains elusive, this latest algorithm represents a new approach to the problem. “We think we have beaten any automated-inference algorithm that currently exists because we focus on getting an approximate solution to a problem, which we can get with much less data,” Nemenman says.

In previous research, John Wikswo, a biophysicist at Vanderbilt University, along with colleagues at Cornell University, applied a software system to automate the scientific process for biological systems.

“We came up with a way to derive a model of cell behavior, but the approach is complicated and slow, and it is limited in the number of variables that it can track – it can’t be scaled to more complicated systems,” Wikswo says. “This new algorithm increases the speed of the necessary calculation by a factor of 100 or more. It provides an elegant method to generate compact and effective models that should allow prediction and control of complex systems.”

Nemenman and Daniels dubbed their new algorithm “Sir Isaac.”

The real Sir Isaac Newton serves as a classic example of how the scientific method involves forming hypotheses, then testing them by looking at data and experiments. Newton guessed that the same rules of gravity applied to a falling apple and to the moon in orbit. He used data to test and refine his guess and generated the law of universal gravitation.

To test their algorithm, Nemenman and Daniels created an artificial, model solar system by generating numerical trajectories of planets and comets that move around a sun. In this simplified solar system, only the sun attracted the planets and comets.

Images of the moon by NASA's Galileo spacecraft. Everything that is changing around us and within us – from the relatively simple motion of celestial bodies, to weather and complex biological processes – is a dynamical system.

“We trained our algorithm how to search through a group of laws which were limited enough to be practical, but also flexible enough to explain many different dynamics,” Nemenman explains. “We then gave the algorithm some simulated planetary trajectories, and asked it what makes these planets move. It gave us the universal gravitational force. Not perfectly, but with very good accuracy. The error was just a few percent.”
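In the spirit of that demonstration, here is a minimal, purely illustrative sketch; it assumes nothing about the actual Sir Isaac implementation. It simulates one eccentric orbit around a toy sun, then infers the force-law exponent directly from the trajectory, using finite-difference accelerations and a least-squares fit of log|a| against log r. An inverse-square law should come back as a slope near −2.

```python
import math

GM = 1.0  # gravitational parameter of the toy "sun" (arbitrary units)

def accel(x, y):
    """Inverse-square acceleration toward the origin."""
    r = math.hypot(x, y)
    return -GM * x / r**3, -GM * y / r**3

def simulate(x, y, vx, vy, dt=1e-3, steps=20000):
    """Integrate one planet with the leapfrog (velocity Verlet) scheme."""
    ax, ay = accel(x, y)
    traj = [(x, y)]
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        traj.append((x, y))
    return traj

def fit_force_exponent(traj, dt=1e-3, stride=50):
    """Infer p in |a| ~ r**p from the trajectory alone, via a
    least-squares slope of log|a| against log r."""
    lr, la = [], []
    for i in range(1, len(traj) - 1, stride):
        (x0, y0), (x1, y1), (x2, y2) = traj[i - 1], traj[i], traj[i + 1]
        ax = (x2 - 2 * x1 + x0) / dt**2  # finite-difference acceleration
        ay = (y2 - 2 * y1 + y0) / dt**2
        lr.append(math.log(math.hypot(x1, y1)))
        la.append(math.log(math.hypot(ax, ay)))
    mr, ma = sum(lr) / len(lr), sum(la) / len(la)
    num = sum((r - mr) * (a - ma) for r, a in zip(lr, la))
    den = sum((r - mr) ** 2 for r in lr)
    return num / den

# Start below circular speed so the orbit is eccentric and r varies.
traj = simulate(1.0, 0.0, 0.0, 0.8)
p = fit_force_exponent(traj)  # comes out very close to -2
```

Feeding the same fit trajectories generated by a different force law would return a different exponent, which is the sense in which the exponent is "discovered" from the data rather than assumed.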

The algorithm also figured out that force changes velocity, not the position directly. “It gets Newton’s First Law,” Nemenman says, “the fact that in order to predict the possible trajectory of a planet, whether it stays near the sun or flies off into infinity, just knowing its initial position is not enough. The algorithm understands that you also need to know the velocity.”
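The velocity point can be illustrated with the standard two-body energy criterion, a textbook calculation rather than anything from the study itself: two bodies at exactly the same position but with different speeds have different specific orbital energies, and the sign of E = v²/2 − GM/r alone decides whether the body stays bound or flies off to infinity.

```python
import math

GM = 1.0  # toy sun's gravitational parameter (arbitrary units)

def orbit_fate(r, speed):
    """Specific orbital energy E = v^2/2 - GM/r decides the outcome:
    negative E means a bound orbit, non-negative E means escape."""
    E = 0.5 * speed**2 - GM / r
    return "bound" if E < 0 else "escapes"

# Two bodies at the identical position r = 1; position alone cannot
# tell them apart, but their velocities seal different fates.
escape_speed = math.sqrt(2 * GM / 1.0)  # about 1.41 in these units
print(orbit_fate(1.0, 1.0))  # below escape speed -> "bound"
print(orbit_fate(1.0, 1.5))  # above escape speed -> "escapes"
```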

While most modern-day high school students know Newton’s First Law, it took humanity 2,000 years beyond the time of Aristotle to discover it.

One limitation of the algorithm is inexactness. Getting an approximate model, however, is beneficial as long as the approximation is close enough to make good predictions, Nemenman says.

“Newton’s laws are also approximate, but they have been remarkably beneficial for 350 years,” he says. “We’re still using them to control everything from electron microscopes to rockets.”

Getting an exact description of any complex dynamical system requires large amounts of data, he adds. “In contrast, with our algorithm, we can get an approximate description by using just a few measurements of a system. That makes our method practical.”

The researchers demonstrated, for example, that the algorithm can infer the dynamics of a caricature of an immune receptor in a leukocyte. This type of model could lead to a better understanding of the time-course for the response to an infection or a drug.

In another experiment, the researchers fed the algorithm data on concentrations of just three different species of chemicals involved in glycolysis in yeast. The algorithm generated a model that makes accurate predictions for the full seven-species system of this basic glucose-consuming metabolic process.

“If you applied other methods of automatic inference to this system it would typically take tens of thousands of examples to reliably generate the laws that drive these chemical transformations,” Nemenman says. “With our algorithm, we were able to do it with fewer than 100 examples.”
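As a toy analogue of that few-sample inference (illustrative only; the real glycolysis model is far richer), the sketch below recovers the rate constant of the simplest possible dynamical law, dx/dt = −k·x, from 30 noiseless measurements using a log-linear least-squares fit.

```python
import math

# "True" law, used only to generate the data: x(t) = x0 * exp(-k * t)
k_true, x0 = 0.7, 5.0
ts = [0.1 * i for i in range(30)]  # 30 measurements -- far fewer than 100
xs = [x0 * math.exp(-k_true * t) for t in ts]

# Inference step: taking logs turns the decay into a straight line,
# so an ordinary least-squares slope recovers the rate constant k.
ys = [math.log(x) for x in xs]
mt, my = sum(ts) / len(ts), sum(ys) / len(ys)
slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mt) ** 2 for t in ts))
k_est = -slope  # matches k_true almost exactly on noiseless data
```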

With their experimental collaborators, the researchers are now exploring whether the algorithm can model more complex biological processes, such as the dynamics of insulin secretion in the pancreas and its relationship to the onset of a disease like diabetes. “The biology of insulin secreting cells is extremely complex. Understanding their dynamics on multiple scales is going to be difficult, and may not be possible for years with traditional methods,” Nemenman says. “But we want to see if we can get a good enough approximation with our method to deliver a practical result.”

The intuition of a genius mind like that of Isaac Newton is one quality that distinguishes human intelligence from even the highest-powered computer and algorithmic program.

“You can’t give a machine intuition – at least for now,” Nemenman says. “What we’re hoping we can do is get our computer algorithm to spit out models of phenomena so that we, as scientists, can use them and our intuition to make useful generalizations. It’s easier to generalize from models of specific systems than it is to generalize from various data sets directly.”

Related:
Physicists eye neural fly data, find formula for Zipf's law
Biology may not be so complex after all

Thursday, August 13, 2015

Marks on 3.4-million-year-old bones not due to trampling, analysis confirms

Detail of the marks on a fossilized rib bone, one of the two controversial bones. “The best match we have for the marks, using currently available data, would still be butchery with stone tools," says anthropologist Jessica Thompson. Photo by Zeresenay Alemseged.

By Carol Clark

Marks on two 3.4-million-year-old animal bones found at the site of Dikika, Ethiopia, were not caused by trampling, an extensive statistical analysis confirms. The Journal of Human Evolution published the results of the study, which developed new methods of fieldwork and analysis for researchers exploring the origins of tool making and meat eating in our ancestors.

“Our analysis clearly shows that the marks on these bones are not characteristic of trampling,” says Jessica Thompson, an assistant professor of anthropology at Emory University and lead author of the study. “The best match we have for the marks, using currently available data, would still be butchery with stone tools.”

The 12 marks on the two specimens – a long bone from a creature the size of a medium antelope and a rib bone from an animal closer in size to a buffalo – most closely resemble a combination of purposeful cutting and percussion marks, Thompson says. “When these bones were hit, they were hit with enormous force and multiple times.”

The paper supports the original interpretation that the damage to the two bones is characteristic of stone tool butchery, published in Nature in 2010. That finding was sensational, since it potentially pushed back evidence for the use of stone tools, as well as the butchering of large animals, by about 800,000 years.

The Nature paper was followed in 2011 by a rebuttal in the Proceedings of the National Academy of Sciences (PNAS), suggesting that the bones were marked by incidental trampling in abrasive sediments. That sparked a series of debates about the significance of the discovery and whether the bones had been trampled.

Anthropologist Jessica Thompson at work in the field in Africa. She specializes in the study of what happens to bones after an animal dies.

For the current paper, Thompson and her co-authors examined the surfaces of a sample of more than 4,000 other bones from the same deposits. They then used statistical methods to compare more than 450 marks found on those bones to experimental trampling marks and to the marks on the two controversial specimens.

“We would really like to understand what caused these marks,” Thompson says. “One of the most important questions in human evolution is when did we start eating meat, since meat is considered a likely explanation for how we fed the evolution of our big brains.”

Evidence shows that our genus, Homo, emerged around 2.8 million years ago. Until recently, the earliest known stone tools were 2.6 million years old. Changes had already been occurring in the organization of the brains of the human lineage, but after this time there was also an increase in overall brain size. This increased size has largely been attributed to a higher quality diet.

While some other apes are known to occasionally hunt and eat animals smaller than themselves, they do not hunt or eat larger animals that store abundant deposits of fat in the marrow of their long bones. A leading hypothesis in paleo-anthropology is that a diet rich in animal protein combined with marrow fat provided the energy needed to fuel the larger human brain.

The animal bones in the Dikika site, however, have been reliably dated to long before Homo emerged. They are from the same sediments and only slightly older than the 3.3-million-year-old fossils unearthed from Dikika belonging to the hominid species Australopithecus afarensis.

Thompson specializes in the study of what happens to bones after an animal dies. “Fossil bones can tell you stories, if you know how to interpret them,” she says.

A whole ecosystem of animals, insects, fungus and tree roots modify bones. Did they get buried quickly? Or were they exposed to the sun for a while? Were they gnawed by a rodent or chomped by a crocodile? Were they trampled on sandy soil or rocky ground? Or were they purposely cut, pounded or scraped with a tool of some kind?

"Fossil bones can tell you stories, if you know how to interpret them," Jessica Thompson says. For instance, the marks on this fossilized bone from the Dikika site are diagnostic of punctures made by crocodile teeth. Photo by Jessica Thompson.

One way that experimental archeologists learn to interpret marks on fossil bones is by modifying modern-day bones. They hit bones with hammer stones, feed them to carnivores and trample them on various substrates, then study the results.

Based on knowledge from such experiments, Thompson was one of three specialists who diagnosed the marks on the two bones from Dikika as butchery in a blind test, before being told the age of the fossils or their origin.

The PNAS rebuttal paper, however, also used experimental methods and came to the conclusion that the marks were characteristic of trampling.

Thompson realized that data from a larger sample of fossils were needed to chip away at the mystery.

The current paper investigated with microscopic scrutiny all non-hominin fossils collected from the Hadar Formation at Dikika. The researchers collected a random sample of fossils from the same deposits as the controversial specimens, as well as nearby deposits. They measured shapes and sizes of marks on the fossil bones. Then they compared the characteristics of the fossil marks statistically to the experimental marks reported in the PNAS rebuttal paper as being typical of trampling damage. They also investigated the angularity of sand grains at the site and found that they were rounded – not the angular type that might produce striations on a trampled bone.

“The random population sample of the fossils provides context,” Thompson says. “The marks on the two bones in question don’t look like other marks common on the landscape. The marks are bigger, and they have different characteristics.”

Trample marks tend to be shallow, sinuous or curvy. Purposeful cuts from a tool tend to be straight and create a narrow V-shaped groove, while a tooth tends to make a U-shaped groove. The study measured and quantified such damage to modern-day bones for comparison to the fossilized ones.

“Our analysis shows with statistical certainty that the marks on the two bones in question were not caused by trampling,” Thompson says. “While there is abundant evidence that other bones at the site were damaged by trampling, these two bones are outliers. The marks on them still more closely resemble marks made by butchering.”

One hypothesis is that butchering large animals with tools occurred during that time period, but that it was an exceedingly rare behavior. Another possibility is that more evidence is out there, but no one has been looking for it because they have not expected to find it at a time period this early.

The Dikika specimens represent a turning point in paleoanthropology, Thompson says. “If we want to understand when and how our ancestors started eating meat and moving into that ecological niche, we need to refine our search images for the field and apply these new recovery and analytical methods. We hope other researchers will use our work as a recipe to go out and systematically collect samples from other sites for comparison.”

In addition to Dikika, other recent finds are shaking up long-held views of hominin evolution and when typical human behaviors emerged. This year, a team led by archeologist Sonia Harmand in Kenya reported unearthing stone tools that have been reliably dated to 3.3 million years ago, or 700,000 years older than the previous record.

“We know that simple stone tools are not unique to humans,” Thompson says. “The making of more complex tools, designed for more complex uses, may be uniquely human.”

Related:
Complex cognition shaped the Stone Age hand axe, study shows
Brain trumps hand in Stone Age tool study

Thursday, August 6, 2015

Chemists find new way to do light-driven reactions in solar energy quest

Emory physical chemist Tim Lian researches light-driven charge transfer for solar energy conversion. Photo by Bryan Meltz, Emory Photo/Video.

By Carol Clark

Chemists have found a new, more efficient method to perform light-driven reactions, opening up another possible pathway to harness sunlight for energy. The journal Science is publishing the new method, which is based on plasmon – a special motion of electrons involved in the optical properties of metals.

“We’ve discovered a new and unexpected way to use plasmonic metal that holds potential for use in solar energy conversion,” says Tim Lian, professor of physical chemistry at Emory University and the lead author of the research. “We’ve shown that we can harvest the high energy electrons excited by light in plasmon and then use this energy to do chemistry.”

Plasmon is a collective motion of free electrons in a metal that strongly absorbs and scatters light.

One of the most vivid examples of surface plasmon can be seen in the intricate stained glass windows of some medieval cathedrals, an effect achieved through gold nanoparticles that absorb and scatter visible light. Plasmon is highly tunable: Varying the size and shape of the gold nanoparticles in the glass controls the color of the light emitted.

Modern-day science is exploring and refining the use of these plasmonic effects for a range of potential applications, from electronics to medicine and renewable energy.

Surface plasmon effects can be seen in the stained glass of some medieval cathedrals. Photo of rose window in Notre Dame Cathedral by Krzysztof Mizera.

Lian’s lab, which specializes in exploring light-driven charge transfer for solar energy conversion, experimented with ways to use plasmon to make that process more efficient and sustainable.

Gold is often used as a catalyst, a substance that drives chemical reactions, but not as a photocatalyst: a material that absorbs light and then does chemistry with the energy the light provides.

During photocatalysis, a metal absorbs light strongly, rapidly exciting a lot of electrons. “Imagine electrons sloshing up and down in the metal,” Lian says. “Once you excite them at this level, they crash right down. All the energy is released as heat really fast – in picoseconds.”

The researchers wanted to find a way to capture the energy in the excited electrons before it was released as heat and then use hot electrons to fuel reactions.

Through experimentation, they found that coupling nanorods of cadmium selenide, a semiconductor, to a plasmonic gold nanoparticle tip allowed the excited electrons in the gold to escape into the semiconductor material.

“If you use a material with a certain energy level that can strongly bond to plasmon, then the excited electrons can escape into the material and stay at the high energy level,” Lian says. “We showed that you can harvest electrons before they crash down and relax, and combine the catalytic property of plasmon with its light absorbing ability.”

Transmission electron micrograph (TEM) of cadmium selenide nanorods with gold tips. Inset shows a high-res TEM of two nanorods. Micrographs courtesy Kaifeng Wu and Tianquan Lian (Emory) and James McBride (Vanderbilt).

Instead of using heat to do chemistry, this new process uses metals and light to do photochemistry, opening a new, potentially more efficient, method for exploration.

“We are now looking at whether we can find other electron acceptors that would work in this same process, such as a molecule or molecular catalyst instead of cadmium selenide,” Lian says. “That would make this process a general scheme with many different potential applications.”

The researchers also want to explore whether the method can make light-driven water oxidation more efficient. Using sunlight to split water to generate hydrogen is a major goal in the quest for affordable and sustainable solar energy.

“Using unlimited sunlight to move electrons around and tap catalytic power is a difficult challenge, but we have to find ways to do this,” Lian says. “We have no choice. Solar power is the only energy source that can sustain the growing human population without catastrophic environmental impact.”

The current study was funded by the U.S. Department of Energy. The study co-authors include: Emory graduate student Kaifeng Wu; Emory post-doctoral fellow Jinquan Chen; and chemist James McBride from Vanderbilt University.

Related:
Shining a light on green energy

Thursday, June 25, 2015

Calving icebergs fall back, spring forward, causing glacial earthquakes

"We've provided an unprecedented understanding of how a glacial earthquake evolves," says Emory physicist Justin Burton. The research focused on Helheim Glacier in Greenland, above. Photo by NASA/Jim Yungel.

By Carol Clark

When a massive iceberg breaks off from the front of a glacier, it can fall backwards, slamming into the glacier with such force that it reverses the ice flow for several minutes and causes it to drop, producing an earthquake that can be measured across the globe.

The journal Science is publishing the discovery, including detailed documentation of the forces involved in these iceberg calving events and an explanation for the causes of glacial earthquakes. The research marks a major step toward the ability to measure the size of iceberg calving events in near real-time and from anywhere in the world.

“Glaciers are extremely sensitive indicators of climate change,” says co-author Justin Burton, a physicist at Emory University who specializes in laboratory modeling of glacial forces. “Having a quantitative understanding of how our polar regions are losing ice is crucial to any forecasting related to climate change, in particular sea-level rise and its environmental and economic impacts.”

Placing a GPS sensor. (Swansea)
The study, which focused on Helheim Glacier in the Greenland Ice Sheet, also included scientists from Swansea, Newcastle and Sheffield universities in the UK, and from Columbia University and the University of Michigan in the U.S.

The Greenland Ice Sheet is disappearing at a faster rate than Antarctica, and shows no sign of slowing down. As much as half of that loss is due not to melting, but to icebergs breaking off and discharging into the sea, a process known as calving. As sheets of ice taller than a New York skyscraper fall over and collapse into the water they release energy equivalent to several nuclear bombs.

In 2003, scientists discovered the existence of glacial earthquakes. They knew that iceberg calving caused these quakes, but it was unclear why. A regular earthquake originates from stress building up from deep within the Earth, which then gets released suddenly. A glacial earthquake, however, originates on the surface and happens in relative slow motion, during the 10 to 15 minutes it takes an iceberg to flip 90 degrees, collapse into the sea and generate waves of energy.

The study authors wanted to gain a better understanding of the processes involved in collapsing icebergs and how they cause glacial earthquakes.

Tavi Murray, a glaciologist from Swansea University, led the field portion of the study. During the summer of 2013, researchers from Swansea, Newcastle and Sheffield universities flew over Helheim Glacier in helicopters. They installed a sophisticated network of Global Positioning System (GPS) devices on the glacier’s surface to record movements of the glacier in the minutes surrounding calving events.

The Greenland Ice Sheet is getting smaller. If it melts entirely, scientists estimate that sea level will rise about 6 meters (20 feet). Photo of Helheim Glacier by Nick Selmes, Swansea University.

One of the surprises revealed by the resulting data was that some of the calving events actually reversed the flow of the glacier during a glacial earthquake.

“That’s really strange,” Burton says, “because a glacier is an enormous mass that is always moving towards the sea. What could possibly reverse that?”

Burton led a laboratory modeling portion of the study, along with Mac Cathles, who is now at the University of Michigan. They built a rectangular, Plexiglas water tank as a scaled-down version of a fjord. Rectangular plastic blocks that have the same density as icebergs are tipped in the water tank and the resulting hydrodynamics are recorded.

The analysis phase also drew from the expertise of co-author Meredith Nettles, a seismologist at Columbia’s Lamont-Doherty Earth Observatory, and data in the Global Seismographic Network. The collaborative analyses and experimental modeling allowed the researchers to tease apart all the forces responsible for the motion of the glacier, recreate them in the lab, and solve the mystery of how glacial earthquakes work.

Watch a video of the Burton lab's model of a backwards falling iceberg, based on the data from Helheim Glacier:


“We were able to explain the motion of the GPS sensors by tracking all the forces that affect the glacier during iceberg calving, providing an unprecedented understanding of how a glacial earthquake evolves,” Burton says.

They found that many of the calving icebergs are falling backwards, slamming into the face of the glacier before they collapse into the sea. The front of the glacier gets compressed like a spring, temporarily reversing the motion of the glacier and generating the horizontal force of a glacial earthquake.

As the iceberg hits the water, it rapidly reduces pressure behind the rotating iceberg. This dramatic drop in water pressure draws the glacier down about 10 centimeters, while pulling the Earth upwards, creating the vertical force seen in the seismic signature of a glacier earthquake.

“This research required the combined efforts of glaciology, seismology and physics,” Burton says. “It was great to work hand-in-hand with field researchers, while also showing that lab research is crucial to understanding what’s happening on the surface of the Earth.”

Glacial earthquakes are globally detectable seismic events. The researchers hope their detailed documentation of the forces at play will help interpret the remote sensing of calving events, which are increasingly occurring at tidewater-terminating glaciers in Greenland and Antarctica.

Related:
The physics of falling icebergs

Saturday, May 16, 2015

A physicist's guide to foam and fortune

From foam to Frankenstein: Sidney Perkowitz enjoys a cappuccino (extra foam) at the Ink and Elm in Emory Village. So far this year he has published his first e-book, Universal Foam 2.0, and started work on a new book project, "Frankenstein 2018." (Photo by Carol Clark)

By Carol Clark

You never know what’s going to bubble up on the agenda of physicist Sidney Perkowitz, Emory Candler Professor of Physics Emeritus. Since the 76-year-old Perkowitz retired in 2011, he seems to pop up everywhere, from the Atlanta Science Festival to South Korean national television to a high-level policy meeting in Washington DC.

After 42 years of research and teaching at Emory, he has shifted his focus from the lab and classroom to the wider world. His mission: Communicating science in ways that get people interested and better informed.

“You’re doing something good for society if you can convey science well to a lay person,” Perkowitz says. “You can have an influence over everyone from a child to a congressman.”

Perkowitz began writing about physics for a general audience when he was about 50. “It forced me to be humble because I had a lot to learn,” he says. “Several editors really helped improve my writing. One gave me this great tip: ‘Remember, you don’t want to simplify the science. You want to simplify the writing.’”

Perkowitz has written six books about physics geared for a lay audience. His most successful, “Universal Foam,” was published in 2001 and remained in print through 2008, including five foreign editions. The book describes the myriad incarnations and inherent mysteries of foam, from densely packed bubbles floating atop a cappuccino to ocean white caps, soap bubbles, and exotic foamy materials used in aerospace and medicine.

Watch a clip from an English-language version of a South Korean documentary inspired by Perkowitz’s book on foam, including interviews with Perkowitz:


Last fall, the book brought a Korean television film crew to Perkowitz’s door. “The filmmakers had contacted me out of the blue and said they wanted to make a documentary for children based on the book,” he says. “They sent over a cameraman, a sound guy, a director and a translator.”

So that’s how Perkowitz found himself in his kitchen, brewing a cappuccino as he was being interviewed about the wonders of foam. “We had a wonderful time,” he says of the experience. “The most amazing part was they paid me! It wasn’t a lot, but I was just doing it for fun. So that was a pretty great deal.”

The documentary, “Bubbles That Can Change the World,” was funded by the South Korean government and shown throughout the country as a way to inspire children’s interest in science.

After the publisher stopped putting out new editions of “Universal Foam,” Perkowitz obtained the rights so that he could update it himself as an e-book in January. He titled it “Universal Foam 2.0.” “It’s amazingly easy,” Perkowitz says of the process of producing an e-book. He adds that he primarily did it to gain experience with e-books, and doesn’t expect it to sell many copies at this stage. “I just love learning something new and being engaged,” he explains. “And I want to feel that I’m doing something useful for science.”

During the past four years, Perkowitz has also written 20 magazine articles, given public talks and served on the science outreach committee of the American Association for the Advancement of Science, which takes him to Washington DC occasionally.

A selection of some of the many editions of Mary Shelley's classic "Frankenstein." (Andy Marbett)

Perkowitz is now at work on his seventh book, which has the working title "Frankenstein 2018." He is both contributing a chapter and co-editing the book, an anthology due out March 11, 2018, the bicentennial of the publication of Mary Shelley’s novel.

“There is something in humanity that wants to find a way to create life and to live forever. But that same desire is also full of fear,” Perkowitz says of the enduring appeal of Frankenstein.

The subject is more relevant than ever. Emory’s Center for Ethics is hosting a major international gathering in Atlanta May 17 to 19, to discuss both aspirations and guidelines for the era of synthetic biology. Biotechnology and the Ethical Imagination: A Global Summit (BEINGS) will bring together delegates from the top 30 biotechnology producing countries of the world.

“The idea of genetic engineering and creating an entirely new being is the 21st-century version of Frankenstein,” Perkowitz says. “Earlier, creating life was envisioned as stitching together dead body parts and zapping them with electricity. Now it’s about getting a micro-scalpel and moving around genes. Some people are afraid of genetically modified food. Imagine how they’ll feel about genetically modified animals and people.”

Perkowitz’s co-editor for the book project is Eddy Von Mueller, an Emory lecturer in film and media studies. The two have already rounded up a dozen contributors for the project, from religion, the arts and sciences, and secured a contract from Pegasus Books.

“Frankenstein is taught often in college classrooms, so we think this anthology might be a good seller as a textbook,” Perkowitz says. “The publisher agreed.”

Friday, May 8, 2015

Umbral Moonshine glimmers on 'The Big Bang Theory'

The precise statement of the Umbral Moonshine Conjecture can be seen on Sheldon's white board.

The proof of the Umbral Moonshine Conjecture has been making news in math and science circles in recent months, including stories in Quanta Magazine and Scientific American. The conjecture was proved by Emory mathematicians Ken Ono and Michael Griffin, and Case Western's John Duncan. The conjecture draws on everything from mock modular forms to string theory and quantum gravity, making it difficult to state. But it has still managed to find its way into pop culture.

A recent episode of “The Big Bang Theory,” titled “The Hofstadter Insufficiency,” gave the conjecture a cameo of sorts. During one scene, the white board in apartment 4A, where Sheldon and Leonard live, was covered in the mathematical formulas of the Dynkin diagrams and the McKay-Thompson series of the Umbral Moonshine Conjecture. And the screenshot, above, shows the precise statement of the Umbral Moonshine Conjecture by Ono and his collaborators.

So remember to watch the white board in future “Big Bang” episodes. It may have news of some pretty cool discoveries.

Related:
Mathematicians prove the Umbral Moonshine Conjecture

Thursday, April 23, 2015

Her father’s trip to the moon showed her the power of evidence

Commander David Scott emerges from a hatch during the Apollo 9 mission. The 10-day flight in 1969 provided vital information on the operational performance, stability and reliability of lunar module propulsion and life support systems. NASA photo.

“When I was a kid, I didn’t think flying into space was a big deal. All my friends’ dads went into space,” says Tracy Scott, senior lecturer in sociology and director of Emory’s Quality Enhancement Plan (QEP).

The goal of Emory’s QEP topic, “The Nature of Evidence,” is to empower students as independent scholars capable of supporting arguments with different types of evidence. Scott’s interest in the topic was formed while growing up immersed in the culture of NASA.

Her father, Commander David Scott, was an astronaut who flew on Gemini 8, Apollo 9 and Apollo 15. He’s one of only 12 people who’ve ever set foot on the moon.



“The thing that was exciting for me was the chance to discover new evidence on the moon,” Scott recalls of her father’s lunar trip. “Here was a new environment that humans had never experienced before and there was a huge amount of knowledge to be gained. My dad took a lot of time before the Apollo 15 mission to explain the scientific goals to me. And particularly the experiment he was going to do on the moon. He helped to find new evidence to confirm a very old theory.”

Watch the video, above, to see Commander Scott conduct his famous hammer and feather experiment while standing on the moon. In the vacuum of the lunar surface, with no air resistance, the hammer and the feather fell at the same rate, just as Galileo predicted in 1589. It was a striking visual demonstration of what we now know as the equivalence principle: The influence of gravity and the influence of inertia are exactly the same.
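The demonstration works because the mass of a dropped object never enters the equation for its fall time, t = √(2h/g). A minimal sketch of that arithmetic (the lunar gravity figure is a standard approximation, and the drop height is an illustrative value, neither taken from the article):

```python
import math

def fall_time(height_m: float, g: float) -> float:
    """Time for an object dropped from rest to fall height_m under gravity g.

    Note that the object's mass appears nowhere in the formula.
    """
    return math.sqrt(2.0 * height_m / g)

G_MOON = 1.62       # m/s^2, approximate lunar surface gravity
drop_height = 1.2   # meters, an illustrative arm-height drop

# Hammer and feather: with no air resistance, both take exactly as long.
t_hammer = fall_time(drop_height, G_MOON)
t_feather = fall_time(drop_height, G_MOON)

print(f"hammer:  {t_hammer:.2f} s")
print(f"feather: {t_feather:.2f} s")
```

The same function with Earth's g of 9.81 m/s² gives a much shorter fall, but the hammer and feather still tie; only air resistance breaks the tie in everyday experience.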

“I thought it was really cool that my dad was able to do an experiment that linked all the way back to Galileo,” Scott says. “Learning about the power of evidence when I was a child inspired me. I developed a keen sense of seeking out evidence to deepen what I was being taught, and to support my own arguments and to create new knowledge.”

Wednesday, April 15, 2015

Complex cognition shaped the Stone Age hand axe, study shows

Even with extensive training, the modern mind finds it challenging to make an Acheulean hand axe. "We should have respect for Stone Age tool makers," says experimental archeologist Dietrich Stout. Photo by Carol Clark.

By Carol Clark

The ability to make a Lower Paleolithic hand axe depends on complex cognitive control by the prefrontal cortex, including the “central executive” function of working memory, a new study finds. 

PLOS ONE published the results, which knock another chip off theories that Stone Age hand axes are simple tools that don’t involve higher-order executive function of the brain.

“For the first time, we’ve shown a relationship between the degree of prefrontal brain activity, the ability to make technological judgments, and success in actually making stone tools,” says Dietrich Stout, an experimental archeologist at Emory University and the leader of the study. “The findings are relevant to ongoing debates about the origins of modern human cognition, and the role of technological and social complexity in brain evolution across species.”

The skill of making a prehistoric hand axe is “more complicated and nuanced than many people realize,” Stout says. “It’s not just a bunch of ape-men banging rocks together. We should have respect for Stone Age tool makers.”

The study’s co-authors include Bruce Bradley of the University of Exeter in England, Thierry Chaminade of Aix-Marseille University in France; and Erin Hecht and Nada Khreisheh of Emory University.

Stone tools – shaped by striking a stone “core” with a piece of bone, antler, or another stone – provide some of the most abundant evidence of human behavioral change over time. Simple Oldowan stone flakes are the earliest known tools, dating back 2.6 million years. The Late Acheulean hand axe goes back 500,000 years. While it’s relatively easy to learn to make an Oldowan flake, the Acheulean hand axe is harder to master, due to its lens-shaped core tapering down to symmetrical edges.



“We wanted to tease apart and compare what parts of the brain were most actively involved in these stone tool technologies, particularly the role of motor control versus strategic thinking,” Stout says.

The researchers recruited six subjects, all archeology students at the University of Exeter, to train in making stone tools, a skill known as “knapping.” The subjects’ skills were evaluated before and after they trained and practiced. For Oldowan evaluations, subjects detached five flakes from a flint core. For Acheulean evaluations, they produced a tool from a standardized porcelain core.

At the beginning, middle and end of the 18-month experiment, subjects underwent functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) scans of their brains while they watched videos. The videos showed rotating stone cores marked with colored cues: A red dot indicated an intended point of impact, and a white area showed the flake predicted to result from the impact.

The subjects were asked the following questions: “If the core were struck in the place indicated, is what you see a correct prediction of the flake that would result?” “Is the indicated place to hit the core a correct one given the objective of the technology?”

The subjects responded by pushing a “yes” or “no” button.

Answering the first question, how a rock will break if you hit it in a certain place, relies more on reflexive, perceptual and motor-control processes, associated with posterior portions of the brain. Stout compares it to the modern-day rote reflex of a practiced golf swing or driving a car.

The second question – is it a good idea to hit the core in a certain spot if you want to make a hand axe – involves strategic thinking, such as planning the route for a road trip. “You have to think about information that you have stored in your brain, bring it online, and then make a decision about each step of the trip,” Stout says.

This so-called executive control function of the brain, associated with activity in the prefrontal cortex, allows you to project what’s going to happen in the future and use that projection to guide your action. “It’s kind of like mental time travel, or using a computer simulation,” Stout explains. “It’s considered a high level, human cognitive capacity.”

The researchers mapped the skill level of the subjects onto the data from their brain scans and their responses to the questions. Greater skill at making tools correlated with greater accuracy on the video quiz for predicting the correct strategy for making a hand axe, which was itself correlated with greater activity in the prefrontal cortex.

“These data suggest that making an Acheulean hand axe is not simply a rote, autopilot activity of the brain,” Stout says. “It requires you to engage in some complicated thinking.”

Most of the hand axes produced by the modern hands and minds of the study subjects would not have cut it in the Stone Age. “They weren’t up to the high standards of 500,000 years ago,” Stout says.

A previous study by the researchers showed that learning to make stone tools creates structural changes in fiber tracts of the brain connecting the parietal and frontal lobes, and that these brain changes correlated with increases in performance. “Something is happening to strengthen this connection,” Stout says. “This adds to evidence of the importance of these brain systems for stone tool making, and also shows how tool making may have shaped the brain evolutionarily.”

Stout recently launched a major, three-year archeology experiment that will build on these studies and others. Known as the Language of Technology project, the experiment involves 20 subjects who will each devote 100 hours to learning the art of making a Stone Age hand axe, and also undergo a series of MRI scans. The project aims to home in on whether the brain systems involved in putting together a sequence of words to make a meaningful sentence in spoken language overlap with systems involved in putting together a series of physical actions to reach a meaningful goal.

Related:
Top 10 reasons to learn to make Stone Age tools
Brain trumps hand in Stone Age tool study

Wednesday, March 25, 2015

Physicist's research of glassy materials nets NSF CAREER award

Physicist Justin Burton at work in his lab, where he studies amorphous matter. (Emory Photo/Video)

By Carol Clark

Emory physics professor Justin Burton received a $625,000 award from the National Science Foundation’s Faculty Early Career Development (CAREER) Program. The five-year CAREER grants, among the NSF’s most prestigious awards, support scientists who exemplify the role of teacher-scholars through outstanding research integrated with excellence in education.

Burton will apply the award to his research into amorphous matter, or substances made up of granules in jumbled, irregular states. These substances include everything from the foam on your cup of cappuccino to the vast, slushy mélange of a glacier as it breaks down and flows into the sea. Amorphous matter also encompasses soft condensed matter such as toothpaste, shaving cream, plastic and glass, which are collectively known as “glassy” materials.

“Amorphous material is everywhere, it’s among the most common states of solid matter,” Burton says, “and yet, there’s a lot that we don’t understand about it.”

Crystalline material, by contrast, is relatively rare but well understood by physicists. Crystals have a structural order that makes them easier to conceptualize and define mathematically. “Research into the thermodynamic behavior of crystals at ultra-low temperatures led to our understanding of how they conduct heat,” Burton says. “That’s one of the fundamental triumphs of quantum mechanics. It helped lay the foundation for a lot of important tools of the modern world, from computers to cell phones.”

Lacking the well-defined order of crystals, amorphous materials often behave in peculiar, unpredictable ways. Burton uses the example of a pile of sand at the bottom of an hourglass. “What seems stable enough can suddenly avalanche upon the addition of a few extra grains,” he says. “Or even a traffic jam: What determines the boundary between a flowing state and a rigid one? Our world is full of similar examples where systems exist in a region near marginal stability.”
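The sandpile that Burton describes has a classic toy counterpart in physics: the Bak-Tang-Wiesenfeld sandpile, in which any site on a grid holding four or more grains topples and sheds one grain to each neighbor, sometimes triggering chain reactions. This sketch is an illustration of marginal stability in general, not a model of Burton's experiments:

```python
import random

SIZE = 11  # grid is SIZE x SIZE; grains falling off the edge are lost

def topple(grid):
    """Relax the grid until every site holds < 4 grains; return topple count."""
    avalanche = 0
    unstable = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue  # already relaxed by an earlier topple
        grid[r][c] -= 4
        avalanche += 1
        if grid[r][c] >= 4:
            unstable.append((r, c))  # still over threshold, topple again later
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return avalanche

random.seed(0)
grid = [[0] * SIZE for _ in range(SIZE)]
sizes = []
for _ in range(5000):  # drop grains one at a time at random sites
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    sizes.append(topple(grid))

print("largest avalanche:", max(sizes))
print("drops that caused no avalanche:", sizes.count(0))
```

Most added grains do nothing, while an occasional single grain sets off a large avalanche, which is exactly the "marginal stability" behavior the pile self-organizes into.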

A view inside the vacuum chamber, where colloidal particles are suspended in a flat disc, lit by the green light of a laser. Photo by Justin Burton.

Burton’s lab is creating model systems to simulate the dynamics of the microscopic granules of amorphous, glassy matter at ultra-low temperatures below 1 kelvin. That’s colder than the deepest reaches of space.

The lab conducts its experiments in a vacuum chamber filled with ionized argon gas. “It’s a plasma, or a gas that has had its electrons ripped away from its atoms,” Burton explains. “The electrons are constantly being ripped away and recombining.”

Colloidal particles, tiny as dust specks, are suspended in the plasma of the vacuum chamber to stand in for the molecules of an amorphous material. By altering the gas pressure inside the chamber, and varying the size of the particles, the lab members can study how the particles behave as they move between an excited, free-flowing state and a jammed, stable one.

They can also simulate how molecules in a stable position react to a disturbance. “We want to create a wave, like dropping a pebble into a still pond to make ripples, and study that dynamic,” Burton says. “That could help us understand, for instance, how sound moves through a glassy material.”

Burton’s lab will use another model, involving polymer hydrogel particles that expand or shrink in response to salt concentrations, to study Casimir forces, a special type of long-range force that can arise between objects in a highly fluctuating medium.

In addition to opening a window into the molecular motions common in glasses, the research could shed light on the connection between the dynamics and disorder in a broad range of physical systems, Burton says.

In parallel to his research effort, the CAREER award will also fund the creation of an after-school science club at an elementary school in DeKalb County. Burton and his graduate students will lead children in hands-on activities and experiments that give insights into basic principles of physics.

Related:
The physics of falling icebergs
Physicists crack another piece of the glass puzzle

Thursday, December 18, 2014

A clear, molecular view of the evolution of human color vision

By around 30 million years ago, our ancestors had evolved the ability to see the full spectrum of visible light, though not UV light.

By Carol Clark

Many genetic mutations in visual pigments, spread over millions of years, were required for humans to evolve from a primitive mammal with a dim, shadowy view of the world into a great ape able to see all the colors in a rainbow.

Now, after more than two decades of painstaking research, scientists have finished a detailed and complete picture of the evolution of human color vision. PLOS Genetics published the final pieces of this picture: The process for how humans switched from ultraviolet (UV) vision to violet vision, or the ability to see blue light.

“We have now traced all of the evolutionary pathways, going back 90 million years, that led to human color vision,” says lead author Shozo Yokoyama, a biologist at Emory University. “We’ve clarified these molecular pathways at the chemical level, the genetic level and the functional level.”

Co-authors of the PLOS Genetics paper include Emory biologists Jinyi Xing, Yang Liu and Davide Faggionato; Syracuse University biologist William Starmer; and Ahmet Altun, a chemist and former post-doc at Emory who is now at Fatih University in Istanbul, Turkey.

Yokoyama and various collaborators over the years have teased out secrets of the adaptive evolution of vision in humans and other vertebrates by studying ancestral molecules. The lengthy process involves first estimating and synthesizing ancestral proteins and pigments of a species, then conducting experiments on them. The technique combines microbiology with theoretical computation, biophysics, quantum chemistry and genetic engineering.

Five classes of opsin genes encode visual pigments for dim-light and color vision. Bits and pieces of the opsin genes change and vision adapts as the environment of a species changes.

“Gorillas and chimpanzees have human color vision,” Yokoyama says. “Or perhaps we should say that humans have gorilla and chimpanzee vision.”

Around 90 million years ago, our primitive mammalian ancestors were nocturnal and had UV-sensitive and red-sensitive pigments, giving them a bi-chromatic view of the world. By around 30 million years ago, our ancestors had evolved four classes of opsin genes, giving them the ability to see the full spectrum of visible light, except for UV.

For the PLOS Genetics paper, the researchers focused on the seven genetic mutations involved in losing UV vision and achieving the current function of a blue-sensitive pigment. They traced this progression from 90 million to 30 million years ago.

The researchers identified 5,040 possible pathways for the amino acid changes required to bring about the genetic changes. “We did experiments for every one of these 5,040 possibilities,” Yokoyama says. “We found that of the seven genetic changes required, each of them individually has no effect. It is only when several of the changes combine in a particular order that the evolutionary pathway can be completed.”
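The 5,040 figure is the number of distinct orders in which seven mutations can accumulate, 7 factorial. A quick check (the mutation labels here are placeholders, not the actual amino acid changes):

```python
from itertools import permutations

# Seven required mutations; each pathway is one ordering of all seven.
mutations = ["m1", "m2", "m3", "m4", "m5", "m6", "m7"]  # placeholder labels

pathways = list(permutations(mutations))
print(len(pathways))  # 7! = 5040 possible pathways

# Roughly 80 percent of pathways stalled on a non-functional protein,
# leaving about 20 percent viable (percentages from the article):
viable = int(len(pathways) * 0.20)
print(viable)
```

The combinatorics explains why the experiments were so laborious: every one of the 5,040 orderings had to be tested to find which ones keep the pigment functional at each step.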

In other words, just as an animal’s external environment drives natural selection, so do changes in the animal’s molecular environment.

Mice are nocturnal and, like the primitive human ancestor of 90 million years ago, have UV vision and limited ability to see colors.

In previous research, Yokoyama showed how the scabbardfish, which today spends much of its life at depths of 25 to 100 meters, needed just one genetic mutation to switch from UV to blue-light vision. Human ancestors, however, needed seven changes and these changes were spread over millions of years. “The evolution for our ancestors’ vision was very slow, compared to this fish, probably because their environment changed much more slowly,” Yokoyama says.

About 80 percent of the 5,040 pathways the researchers traced stopped in the middle, because a protein became non-functional. Chemist Ahmet Altun solved the mystery of why the protein got knocked out. It needs water to function, and if one mutation occurs before the other, it blocks the two water channels extending through the vision pigment’s membrane.

“The remaining 20 percent of the pathways remained possible pathways, but our ancestors used only one,” Yokoyama says. “We identified that path.”

In 1990, Yokoyama identified the three specific amino acid changes that led to human ancestors developing a green-sensitive pigment. In 2008, he led an effort to construct the most extensive evolutionary tree for dim-light vision, including animals from eels to humans. At key branches of the tree, Yokoyama’s lab engineered ancestral gene functions, in order to connect changes in the living environment to the molecular changes.

The PLOS Genetics paper completes the project for the evolution of human color vision. “We have no more ambiguities, down to the level of the expression of amino acids, for the mechanisms involved in this evolutionary pathway,” Yokoyama says.

Images: Thinkstock

Related:
Evolutionary biologists urged to adapt their research methods
Fish vision makes waves in natural selection

Monday, December 15, 2014

Mathematicians prove the Umbral Moonshine Conjecture

In theoretical math, the term "moonshine" refers to an idea so seemingly impossible that it seems like lunacy.

By Carol Clark

Monstrous moonshine, a quirky pattern of the monster group in theoretical math, has a shadow – umbral moonshine. Mathematicians have now proved this insight, known as the Umbral Moonshine Conjecture, offering a formula with potential applications for everything from number theory to geometry to quantum physics.

“We’ve transformed the statement of the conjecture into something you could test, a finite calculation, and the conjecture proved to be true,” says Ken Ono, a mathematician at Emory University. “Umbral moonshine has created a lot of excitement in the world of math and physics.”

Co-authors of the proof include mathematicians John Duncan from Case Western Reserve University and Michael Griffin, an Emory graduate student.

“Sometimes a result is so stunningly beautiful that your mind does get blown a little,” Duncan says.

Duncan co-wrote the statement for the Umbral Moonshine Conjecture with Miranda Cheng, a mathematician and physicist at the University of Amsterdam, and Jeff Harvey, a physicist at the University of Chicago.

Ono will present their work on January 11, 2015 at the Joint Mathematics Meetings in San Antonio, the largest mathematics meeting in the world. Ono is delivering one of the highlighted invited addresses.

Ono gave a colloquium on the topic at the University of Michigan, Ann Arbor, in November, and has also been invited to speak on the umbral moonshine proof at upcoming conferences around the world, in countries including Brazil, Canada, England, Germany and India.

The number of elements in the monster group is larger than the number of atoms in 1,000 Earths.

It sounds like science fiction, but the monster group (also known as the friendly giant) is a real and influential concept in theoretical math.

Modern algebra is built out of groups, or sets of objects required to satisfy certain relationships. One of the biggest achievements in math during the 20th century was classifying all of the finite simple groups. They are now collected in the ATLAS of Finite Groups, published in 1985.

“This ATLAS is to mathematicians what the periodic table is to chemists,” Ono says. “It’s our fundamental guide.”

And yet, the last and largest sporadic finite simple group, the monster group, was not constructed until 1980. “It is absolutely huge, so classifying it was a heroic effort for mathematicians,” Ono says.

In fact, the number of elements in the monster group is larger than the number of atoms in 1,000 Earths. Something that massive defies description.

“Think of a 24-dimensional doughnut,” Duncan says. “And then imagine physical particles zooming through this space, and one particle sometimes hitting another. What happens when they collide depends on a lot of different factors, like the angles at which they meet. There is a particular way of making this 24-dimensional system precise such that the monster is its symmetry. The monster is incredibly symmetric.”

“The monster group is not just a freak,” Ono adds. “It’s actually important to many areas of math.”

It’s too immense, however, to use directly as a tool for calculations. That’s where representation theory comes in.

The shadow technique is a valuable tool in theoretical math.

Shortly after evidence for the monster was discovered, mathematicians John McKay and John Thompson noticed some odd numerical accidents. They found that a series of numbers that can be extracted from a modular function and a series extracted from the monster group seemed to be related. (One example is the strange and simple arithmetic equation 196884 = 196883 + 1.)
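Those "numerical accidents" are easy to verify directly. The numbers below are the standard published values of the first j-function coefficients and the dimensions of the monster group's smallest irreducible representations; the code just checks the arithmetic:

```python
# First coefficients of the modular j-function (beyond its constant term)
# and dimensions of the monster group's smallest irreducible representations.
j_coeffs = [196884, 21493760, 864299970]
monster_dims = [1, 196883, 21296876, 842609326]

d1, d2, d3, d4 = monster_dims

# McKay's observation and its successors: each j-coefficient is a small
# non-negative integer combination of monster representation dimensions.
assert j_coeffs[0] == d2 + d1                    # 196884 = 196883 + 1
assert j_coeffs[1] == d3 + d2 + d1
assert j_coeffs[2] == d4 + d3 + 2 * d2 + 2 * d1

print("all three moonshine identities check out")
```

A single coincidence could be dismissed; the fact that every coefficient decomposes this way is what convinced mathematicians something deeper was going on.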

John Conway and Simon Norton continued to investigate and found that this peculiar pattern was not just a coincidence. “Evidence kept accumulating that there was a special modular function for every element in the monster group,” Ono says. “In other words, the main characteristics of the monster group could be read off from modular functions. That opened the door to representation theory to capture and manipulate the monster.”

The idea that modular functions could tame something as unruly as the monster sounded impossible – like lunacy. It was soon dubbed the Monstrous Moonshine Conjecture.

(The moonshine reference has the same meaning famously used by Ernest Rutherford, known as the father of nuclear physics. In a 1933 speech, Rutherford said that anyone who considered deriving energy from splitting atoms was "talking moonshine.”)

In 1998, Richard Borcherds won math’s highest honor, the Fields Medal, for proving the Monstrous Moonshine Conjecture. His proof turned this representation theory for the monster group into something computable.

Fast-forward 16 years. Three Japanese physicists (Tohru Eguchi, Hirosi Ooguri and Yuji Tachikawa) were investigating a particular kind of string theory involving four-dimensional spaces when numbers from the Mathieu group M24, another important finite simple group, made an unexpected appearance.

“They conjectured a new way to extract numbers from the Mathieu Group,” Duncan says, “and they noticed that the numbers they extracted were similar to those of the monster group, just not as large.” Mathematician Terry Gannon proved that their observations were true.

It was a new, unexpected analogue that hinted at a pattern similar to monstrous moonshine.

“What I hope is that we will eventually see that everything is unified, that monstrous moonshine and umbral moonshine have a common origin,” Duncan says.

Duncan started investigating this idea with physicists Cheng and Harvey. “We realized that the Mathieu group pattern was part of a much bigger picture involving mock modular forms and more moonshine,” Duncan says. “A beautiful mathematical structure was controlling it.”

They dubbed this insight the Umbral Moonshine Conjecture. Since the final version of the more than 100-page conjecture was published online last June, it has been downloaded more than 2,500 times.

The conjecture caught the eye of Ono, an expert in mock modular forms, and he began pondering the problem along with Griffin and Duncan.

“Things came together quickly after the statement of the Umbral Moonshine Conjecture was published,” Ono says. “We have been able to prove it and it is no longer a guess. We can now use the proof as a completely new and different tool to do calculations.”

Just as modular forms are “shadowed” by mock modular forms, monstrous moonshine is shadowed by umbral moonshine. (Umbra is Latin for the innermost and darkest part of a shadow.)

“The job of a theoretical mathematician is to take impossible problems and make them tractable,” Duncan says. “The shadow device is one valuable tool that lets us do that. It allows you to throw away information while still keeping enough to make some valuable observations.”

He compares it to a paleontologist using fossilized bones to piece together a dinosaur.

The jury is out on what role, if any, umbral moonshine could play in helping to unravel mysteries of the universe. Aspects of it, however, hint that it could be related to problems ranging from geometry to black holes and quantum gravity theory.

“What I hope is that we will eventually see that everything is unified, that monstrous moonshine and umbral moonshine have a common origin,” Duncan says. “And part of my optimistic vision is that umbral moonshine may be a piece in one of the most important puzzles of modern physics: The problem of unifying quantum mechanics with Einstein’s general relativity.”

Images: NASA and Thinkstock.

Related:
Mathematicians trace source of Rogers-Ramanujan identities
New theories reveal the nature of numbers

Tuesday, December 9, 2014

Birdsong study reveals how brain uses timing during motor activity

Songbirds are one of the best systems for understanding how the brain controls complex behavior.  Image credit: Sam Sober.

By Carol Clark

Timing is key for brain cells controlling a complex motor activity like the singing of a bird, finds a new study published by PLOS Biology.

“You can learn much more about what a bird is singing by looking at the timing of neurons firing in its brain than by looking at the rate that they fire,” says Sam Sober, a biologist at Emory University whose lab led the study. “Just a millisecond difference in the timing of a neuron’s activity makes a difference in the sound that comes out of the bird’s beak.”

The findings are the first to suggest that fine-scale timing of neurons is at least as important in motor systems as in sensory systems, and perhaps more critical.

“The brain takes in information and figures out how to interact with the world through electrical events called action potentials, or spikes in the activity of neurons,” Sober says. “A big goal in neuroscience is to decode the brain by better understanding this process. We’ve taken another step towards that goal.”

Sober’s lab uses Bengalese finches, also known as society finches, as a model system. The way birds control their song has a lot in common with human speech, both in how it’s learned early in life and how it’s vocalized in adults. The neural pathways for birdsong are also well known, and restricted to that one activity.

“Songbirds are the best system for understanding how the brain controls complex vocal behavior, and one of the best systems for understanding control of motor behavior in general,” Sober says.



Researchers have long known that for an organism to interpret sensory information – such as sight, sound and taste – the timing of spikes in brain cells can matter more than the rate, or the total number of times they fire. Studies on flies, for instance, have shown that their visual systems are highly sensitive to the movement of shadows. By looking at the timing of spikes in the fly’s neurons you can tell the velocity of a shadow that the fly is seeing.

An animal’s physical response to a stimulus, however, is much slower than the millisecond timescale on which spikes are produced. “There was an assumption that because muscles have a relatively slow response time, a timing code in neurons could not make a difference in controlling movement of the body,” Sober says.

An Emory undergraduate in the Sober lab, Claire Tang, got the idea of testing that assumption. She proposed an experiment involving mathematical methods that she was learning in a Physical Biology class. The class was taught by Emory biophysicist Ilya Nemenman, an expert in the use of computational techniques to study biological systems.

“Claire is a gifted mathematician and programmer and biologist,” Sober says of Tang, now a graduate student at the University of California, San Francisco. “She made a major contribution to the design of the study and in the analysis of the results.”

Co-authors also include Nemenman; laboratory technician Diala Chehayeb; and Kyle Srivastava, a graduate student in the Emory/Georgia Tech graduate program in biomedical engineering.

The researchers used an array of electrodes, each thinner than a human hair, to record the activity of single neurons of adult finches as they were singing.

“The birds repeat themselves, singing the same sequence of ‘syllables’ multiple times,” Sober says. “A particular sequence of syllables matches a particular firing of neurons. And each time a bird sings a sequence, it sings it a little bit differently, with a slightly higher or lower pitch. The firing of the neurons is also slightly different.”

The acoustic signals of the birdsong were recorded alongside the timing and the rate at which single neurons fired. The researchers applied information theory, a discipline originally developed to analyze communication systems and now used for everything from the Internet to cellular phones, to measure how much could be learned about the bird's singing behavior from the precise timing of the spikes versus their number.

The result showed that for the duration of one song signal, or 40 milliseconds, the timing of the spikes contained 10 times more information than the rate of the spikes.
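The flavor of that comparison can be shown with a toy entropy calculation. This is an illustration of the timing-versus-rate idea, not the study's actual analysis, and the spike times below are made up:

```python
import math
from collections import Counter

def entropy_bits(counts: Counter) -> float:
    """Shannon entropy, in bits, of a distribution given as outcome counts."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical recorded responses: each trial is a tuple of spike times (ms)
# within a short window. Half the trials have two spikes, half have one.
trials = [(3, 17), (3, 21), (9, 17), (9, 21), (3,), (9,), (17,), (21,)]

rate_code = Counter(len(t) for t in trials)  # codeword = spike count only
timing_code = Counter(trials)                # codeword = full spike-time pattern

print(f"rate code:   {entropy_bits(rate_code):.2f} bits")
print(f"timing code: {entropy_bits(timing_code):.2f} bits")
```

In this toy example, the rate code can carry at most 1 bit (one spike or two), while the timing code distinguishes all eight patterns and carries 3 bits; the study found an analogous factor-of-ten advantage for timing in real neural recordings.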

“Our findings make it pretty clear that you may be missing a lot of the information in the neural code unless you consider the timing,” Sober says.

Such improvements in our understanding of how the brain controls physical movement hold many potential health applications, he adds.

“For example,” he says, “one area of research is focused on how to record neural signals from the brains of paralyzed people and then using the signals to control prosthetic limbs. Currently, this area of research tends to focus on the firing rate of the neurons rather than taking the precise timing of the spikes into account. Our work shows that, in songbirds at least, you can learn much more about behavior by looking at spike timing than spike rate. If this turns out to be true in humans as well, timing information could be analyzed to improve a patient’s ability to control a prosthesis.”

The research was supported by grants from the National Institutes of Health, the National Science Foundation, the James S. McDonnell Foundation and Emory’s Computational Neuroscience Training Program.

Bird graphic courtesy of Sam Sober.

Related:
Doing the math for how songbirds learn to sing
Birdsong study pecks theory that music is uniquely human

Friday, November 7, 2014

Interstellar: Starting over on a new 'Earth'



The movie Interstellar opens in theaters at a time when Earth is facing major losses of biodiversity and ecosystems, says David Lynn, an Emory professor of biomolecular chemistry.

While humanity is challenged to find out what’s happening to Earth and how to make adjustments, we have also begun to realize that billions of Earth-like planets likely exist in habitable zones around the stars of our galaxy.

“In as little as 10 years, we could know whether we’re alone in the universe, whether there are other living systems,” Lynn says. “That’s an exciting prospect. It’s not clear necessarily that we’ll find out that there is intelligent life or not. That may be a lower probability, but that’s also possible.”

Much of the science in Interstellar is not accurate, and its vision of the future may not come true. And yet, it is still an important film, Lynn says, since its themes resonate today, during a critical time in our history.

Related:
Chemists boldly go in search of 'little green molecules'
Prometheus: Seeding wonder and science

Monday, September 8, 2014

Patterns etched in sound



“I’m into beautiful melodies and catchy harmonies,” says Robert Schneider, the co-founder of The Elephant 6 Recording Company and lead singer and songwriter in the band The Apples in Stereo. “As a producer, I’m also interested in surrounding my pop songs with experimental sounds. These sorts of things are very appealing to me.”

In a recent TEDxEmory talk, Schneider explains how music led him to become a Woodruff Graduate Fellow in Emory’s Department of Mathematics and Computer Science. His research focuses mainly on analytic number theory, but he has also created musical compositions based on mathematics.

“I found as I started to study mathematics that there were all these beautiful patterns that were lying there,” he says. “It was like music that was silent, just waiting to be written out and used for compositions.”

Watch the video to learn more, and listen to some of Schneider’s mathematical compositions.

Related:
He took the psychedelic pop path to math

Tuesday, August 19, 2014

The physics of falling icebergs



By Carol Clark

For thousands of years, the massive glaciers of Earth’s polar regions have remained relatively stable, the ice locked into mountainous shapes that ebbed in warmer months but gained back their bulk in winter. In recent decades, however, warmer temperatures have started rapidly thawing these frozen giants. It’s becoming more common for sheets of ice, one kilometer tall, to shift, crack and tumble into the sea, splitting from their mother glaciers in an explosive process known as calving.

“Imagine a sheer, vertical ice face three times as tall as the tallest building in Atlanta breaking off from a glacier and flipping 90 degrees,” says Emory physicist Justin Burton. “In my lab, we can calculate how much energy is released during one of these events, which can be equivalent to several nuclear bombs.”

Burton studies the geophysics of calving icebergs in order to better understand and predict effects of climate change, such as sea-level rise.

“Ice coverage is one of the most sensitive indicators of climate change,” he says. About half of the loss of ice from the polar ice sheets is occurring due to melting and half due to iceberg calving. While it’s more straightforward to estimate iceberg melt rates, their calving rates are much harder to pin down.

Greenland's Ilulissat fjord is believed to have spawned the iceberg that brought down the Titanic.

For the 2012 film “Chasing Ice,” videographers endured subzero temperatures and years of waiting to record stunning time-lapse footage of ancient glaciers receding. Their efforts also yielded the largest calving event ever captured on film, involving an area about the size of Manhattan. The filmmakers described it as watching skyscrapers rolling around in an earthquake and an entire city breaking apart before their eyes.

Direct field observations of calving icebergs are as dangerous as they are rare. So Burton and his colleagues developed ways to model these events in a controlled, laboratory setting. “We can measure things that can’t be measured in the field,” he explains, “and it’s also way cheaper and safer.”

He and his colleagues built a cylindrical, Plexiglas water tank as a scaled-down version of a fjord, similar to the ice-walled channel at the end of the Ilulissat glacier, which drains the Greenland ice sheet into the ocean. This well-studied glacier, also known as Jakobshavn, is considered an important bellwether for climate change.

While it is normal for glaciers to both accumulate and shed ice, Jakobshavn provides a vivid snapshot of how the shedding process has sped up. The glacier retreated 8 miles during the 100-year period between 1902 and 2001, but has retreated more than 10 miles during the past decade alone. Greenland’s ice sheet appears to be out of balance, losing more ice than it gains.



Burton’s lab creates experimental models to gain a more precise understanding of these glacial processes. Rectangular plastic blocks that have the same density as icebergs are tipped in the water tank and the resulting hydrodynamics are recorded.

One hypothesis the lab is investigating is that the waves unleashed by capsizing icebergs cause earthquakes that can be detected thousands of miles away. “It’s counterintuitive,” Burton says, “because usually you think of earthquakes as causing large waves and not the other way around.”

The lab models, however, suggest that the violent rotation of massive icebergs generates waves that release the brunt of their energy onto the sheer vertical face of the glacier, instead of dispersing most of it into the ocean.

“If we can correlate the frequencies of earthquake signals with the frequencies of icebergs rocking back and forth in the water, then that could be a direct measurement of the size of the icebergs that have broken off,” Burton explains. “Large iceberg calving events could then be detected and measured using remote seismic monitoring.”

Climate change and its impacts are among the top problems in science, Burton says. “We’re seeing huge changes occurring within a few years and we’ve got to get on it. I’d like to think that, a few decades from now, we were able to do something.”

Photo of Ilulissat by iStockphoto.com

Related:
Creating an atmosphere for change
Crystal-liquid interface visible for first time

Tuesday, August 5, 2014

Physicists eye neural fly data, find formula for Zipf's law

The Zipf's law mechanism was verified with neural data of blowflies reacting to changes in visual signals.

By Carol Clark

Physicists have identified a mechanism that may help explain Zipf’s law – a unique pattern of behavior found in disparate systems, including complex biological ones. The journal Physical Review Letters is publishing their mathematical models, which demonstrate how Zipf’s law naturally arises when a sufficient number of units react to a hidden variable in a system.

“We’ve discovered a method that produces Zipf’s law without fine-tuning and with very few assumptions,” says Ilya Nemenman, a biophysicist at Emory University and one of the authors of the research.

The paper’s co-authors include biophysicists David Schwab of Princeton and Pankaj Mehta of Boston University. “I don’t think any one of us would have made this insight alone,” Nemenman says. “We were trying to solve an unrelated problem when we hit upon it. It was serendipity and the combination of all our varied experience and knowledge.”

Their findings, verified with neural data of blowflies reacting to changes in visual signals, may have universal applications. “It’s a simple mechanism,” Nemenman says. “If a system has some hidden variable, and many units, such as 40 or 50 neurons, are adapted and responding to the variable, then Zipf’s law will kick in.”

That insight could aid in the understanding of how biological systems process stimuli. For instance, in order to pinpoint a malfunction in neural activity, it would be useful to know what data recorded from a normally functioning brain would be expected to look like. “If you observed a deviation from the Zipf’s law mechanism that we’ve identified, that would likely be a good place to investigate,” Nemenman says.

Zipf’s law is a mysterious mathematical principle that was noticed as far back as the 19th century, but was named for 20th-century linguist George Zipf. He found that if you rank words in a language in order of their popularity, a strange pattern emerges: The most popular word is used twice as often as the second most popular, and three times as often as the third-ranked word, and so on. This same rank-versus-frequency rule was also found to apply to many other social systems, including income distribution among individuals and the size of cities, with a few exceptions.
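The rank-versus-frequency rule amounts to saying that frequency is proportional to 1/rank. A minimal sketch:

```python
def zipf_probs(n):
    # Zipf's law: the frequency of the rank-r item is proportional to 1/r.
    norm = sum(1.0 / r for r in range(1, n + 1))  # harmonic normalization
    return [1.0 / (r * norm) for r in range(1, n + 1)]

p = zipf_probs(1000)
# The top-ranked item occurs twice as often as the second-ranked item
# and three times as often as the third-ranked item:
print(f"{p[0] / p[1]:.3f}  {p[0] / p[2]:.3f}")
```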

More recently, laboratory experiments suggest that Zipf’s power-law structure also applies to a range of natural systems, from the protein sequences of immune receptors in cells to the intensity of solar flares from the sun.

“It’s interesting when you see the same phenomenon in systems that are so diverse. It makes you wonder,” Nemenman says.

Scientists have pondered the mystery of Zipf’s law for decades. Some studies have managed to reveal how a feature of a particular system makes it Zipfian, while others have come up with broad mechanisms that generate similar power laws but need some fine-tuning to generate the exact Zipf’s law.

“Our method is the only one that I know of that covers both of these areas,” Nemenman says. “It’s broad enough to cover many different systems and you don’t have to fine tune it: It doesn’t require you to set some parameters at exactly the right value.”

Neurons turn visual stimuli into units of information.

The blowfly data came from experiments led by biophysicist Rob de Ruyter that Nemenman worked on as a graduate student. Flies were rotated on a rotor, hundreds of times, as they watched the world go by; the moving scenes they repeatedly experienced simulated their natural flight patterns. The researchers recorded when neurons associated with vision spiked, or fired. Across repetitions, the spike times matched to within a few hundred microseconds, showing that the flies’ neurons were not spiking randomly, but operating like precise coding machines.

If you think of a neuron firing as a “1” and a neuron not firing as a “0,” then the neural activity can be thought of as words, made up of 1s and 0s. When these “words,” or units, are strung together over time, they become “sentences.”
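As a toy sketch of this encoding (the spike times below are made up, not the blowfly data), binning a spike train at fixed resolution yields exactly such binary words:

```python
def spikes_to_word(spike_times_ms, t_start, n_bins, bin_ms=1.0):
    # Write '1' for each time bin containing a spike, '0' otherwise.
    word = ["0"] * n_bins
    for t in spike_times_ms:
        b = int((t - t_start) / bin_ms)
        if 0 <= b < n_bins:
            word[b] = "1"
    return "".join(word)

# Hypothetical spike times (in ms) within one 8 ms window:
print(spikes_to_word([1.2, 3.7, 6.5], t_start=0.0, n_bins=8))  # 01010010
```

Stringing successive windows together then produces the “sentences” the neurons convey to the rest of the brain.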

The neurons are turning visual stimuli into units of information, Nemenman explains. “The data is a way for us to read the sentences the fly’s vision neurons are conveying to the rest of the brain.”

Nemenman and his co-authors took a fresh look at this fly data for the new paper in Physical Review Letters. “We were trying to understand if there is a relationship between ideas of universality, or criticality, in physical systems and neural examples of how animals learn,” he says.

In order to navigate in flight, the flies’ visual neurons adapt to changes in the visual signal, such as velocity. When the world moves faster in front of a fly, these sensitive neurons adapt and rescale. These adaptations enable the flies to adjust to new environments, just as our own eyes adapt and rescale when we move from a darkened theater to a brightly lit room.

“We showed mathematically that the system becomes Zipfian when you’re recording the activity of many units, such as neurons, and all of the units are responding to the same variable,” Nemenman says. “The fact that Zipf’s law will occur in a system with just 40 or 50 such units shows that biological units are in some sense special – they must be adapted to the outside world.”

The researchers provide mathematical simulations to back up their theory. “Not only can we predict that Zipf’s law is going to emerge in any system which consists of many units responding to variable outside signals,” Nemenman says, “we can also tell you how many units you need to develop Zipf’s law, given how variable the response is of a single unit.”
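A minimal sketch of this kind of mechanism (a simplified reading of the setup, with assumed numbers): let a hidden variable h, uniform on (0, 1), set the firing probability of n = 40 conditionally independent units. A binary word with k ones then has probability 1/((n+1)·C(n,k)), and ranking all 2^40 words by probability gives a log-log rank-frequency slope close to the Zipfian value of -1.

```python
from math import comb, log

n = 40  # number of units (e.g., neurons) responding to one hidden variable

# With h ~ Uniform(0,1) and each unit firing independently with probability h,
# every word containing k ones has probability 1 / ((n + 1) * C(n, k)).
# Rank groups of equally probable words from most to least probable
# (smallest C(n, k) first) and record cumulative rank vs. probability.
rank, xs, ys = 0, [], []
for k in sorted(range(n + 1), key=lambda k: comb(n, k)):
    rank += comb(n, k)  # cumulative rank after this group of words
    xs.append(log(rank))
    ys.append(log(1.0 / ((n + 1) * comb(n, k))))

# Least-squares slope of log(probability) vs. log(rank); Zipf's law means -1.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"rank-frequency slope: {slope:.2f}")
```

No parameter here is tuned: the near-Zipfian slope emerges simply because many units respond to the same hidden variable.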

They are now researching whether they can bring their work full circle, by showing that the mechanism they identified applies to Zipf’s law in language.

“Letters and words in language are sequences that encode a description of something that is changing over time, like the plot line in a story,” Nemenman says. “I expect to find a pattern similar to how vision neurons fire as a fly moves through the world and the scenery changes.”

Related:
Biology may not be so complex after all 

Photos: iStockphoto.com