
Thursday, August 6, 2015

Chemists find new way to do light-driven reactions in solar energy quest

Emory physical chemist Tim Lian researches light-driven charge transfer for solar energy conversion. Photo by Bryan Meltz, Emory Photo/Video.

By Carol Clark

Chemists have found a new, more efficient method to perform light-driven reactions, opening up another possible pathway to harness sunlight for energy. The journal Science is publishing the new method, which is based on plasmon – a special motion of electrons involved in the optical properties of metals.

“We’ve discovered a new and unexpected way to use plasmonic metal that holds potential for use in solar energy conversion,” says Tim Lian, professor of physical chemistry at Emory University and the lead author of the research. “We’ve shown that we can harvest the high energy electrons excited by light in plasmon and then use this energy to do chemistry.”

Plasmon is a collective motion of free electrons in a metal that strongly absorbs and scatters light.

One of the most vivid examples of surface plasmon can be seen in the intricate stained glass windows of some medieval cathedrals, an effect achieved through gold nanoparticles that absorb and scatter visible light. Plasmon is highly tunable: Varying the size and shape of the gold nanoparticles in the glass controls the color of the light emitted.

Modern-day science is exploring and refining the use of these plasmonic effects for a range of potential applications, from electronics to medicine and renewable energy.

Surface plasmon effects can be seen in the stained glass of some medieval cathedrals. Photo of rose window in Notre Dame Cathedral by Krzysztof Mizera.

Lian’s lab, which specializes in exploring light-driven charge transfer for solar energy conversion, experimented with ways to use plasmon to make that process more efficient and sustainable.

Gold is often used as a catalyst, a substance that drives chemical reactions, but not as a photocatalyst: a material that absorbs light and then does chemistry with the energy the light provides.

During photocatalysis, a metal absorbs light strongly, rapidly exciting a lot of electrons. “Imagine electrons sloshing up and down in the metal,” Lian says. “Once you excite them at this level, they crash right down. All the energy is released as heat really fast – in picoseconds.”

The researchers wanted to find a way to capture the energy in the excited electrons before it was released as heat, and then use those hot electrons to fuel reactions.

Through experimentation, they found that coupling nanorods of cadmium selenide, a semiconductor, to a plasmonic gold nanoparticle tip allowed the excited electrons in the gold to escape into the semiconductor material.

“If you use a material with a certain energy level that can strongly bond to plasmon, then the excited electrons can escape into the material and stay at the high energy level,” Lian says. “We showed that you can harvest electrons before they crash down and relax, and combine the catalytic property of plasmon with its light absorbing ability.”

Transmission electron micrograph (TEM) of cadmium selenide nanorods with gold tips. Inset shows a high-res TEM of two nanorods. Micrographs courtesy Kaifeng Wu and Tianquan Lian (Emory) and James McBride (Vanderbilt).

Instead of using heat to do chemistry, this new process uses metals and light to do photochemistry, opening a new, potentially more efficient, method for exploration.

“We are now looking at whether we can find other electron acceptors that would work in this same process, such as a molecule or molecular catalyst instead of cadmium selenide,” Lian says. “That would make this process a general scheme with many different potential applications.”

The researchers also want to explore whether the method can make light-driven water oxidation more efficient. Using sunlight to split water to generate hydrogen is a major goal in the quest for affordable and sustainable solar energy.

“Using unlimited sunlight to move electrons around and tap catalytic power is a difficult challenge, but we have to find ways to do this,” Lian says. “We have no choice. Solar power is the only energy source that can sustain the growing human population without catastrophic environmental impact.”

The current study was funded by the U.S. Department of Energy. The study co-authors include: Emory graduate student Kaifeng Wu; Emory post-doctoral fellow Jinquan Chen; and chemist James McBride from Vanderbilt University.

Related:
Shining a light on green energy

Thursday, June 25, 2015

Calving icebergs fall back, spring forward, causing glacial earthquakes

"We've provided an unprecedented understanding of how a glacial earthquake evolves," says Emory physicist Justin Burton. The research focused on Helheim Glacier in Greenland, above. Photo by NASA/Jim Yungel.

By Carol Clark

When a massive iceberg breaks off from the front of a glacier it can fall backwards, slamming into the glacier with such force that it reverses the ice flow for several minutes and causes it to drop, producing an earthquake that can be measured across the globe.

The journal Science is publishing the discovery, including detailed documentation of the forces involved in these iceberg calving events and an explanation for the causes of glacial earthquakes. The research marks a major step toward the ability to measure the size of iceberg calving events in near real-time and from anywhere in the world.

“Glaciers are extremely sensitive indicators of climate change,” says co-author Justin Burton, a physicist at Emory University who specializes in laboratory modeling of glacial forces. “Having a quantitative understanding of how our polar regions are losing ice is crucial to any forecasting related to climate change, in particular sea-level rise and its environmental and economic impacts.”

Placing a GPS sensor. (Swansea)
The study, which focused on Helheim Glacier in the Greenland Ice Sheet, also included scientists from the universities of Swansea, Newcastle and Sheffield in the UK, and from Columbia University and the University of Michigan in the U.S.

The Greenland Ice Sheet is disappearing at a faster rate than Antarctica, and shows no sign of slowing down. As much as half of that loss is due not to melting, but to icebergs breaking off and discharging into the sea, a process known as calving. As sheets of ice taller than a New York skyscraper fall over and collapse into the water, they release energy equivalent to several nuclear bombs.

In 2003, scientists discovered the existence of glacial earthquakes. They knew that iceberg calving caused these quakes, but it was unclear why. A regular earthquake originates from stress building up from deep within the Earth, which then gets released suddenly. A glacial earthquake, however, originates on the surface and happens in relative slow motion, during the 10 to 15 minutes it takes an iceberg to flip 90 degrees, collapse into the sea and generate waves of energy.

The study authors wanted to gain a better understanding of the processes involved in collapsing icebergs and how they cause glacial earthquakes.

Tavi Murray, a glaciologist from Swansea University, led the field portion of the study. During the summer of 2013, researchers from Swansea, Newcastle and Sheffield universities flew over Helheim Glacier in helicopters. They installed a sophisticated network of Global Positioning System (GPS) devices on the glacier’s surface to record movements of the glacier in the minutes surrounding calving events.

The Greenland Ice Sheet is getting smaller. If it melts entirely, scientists estimate that sea level will rise about 6 meters (20 feet). Photo of Helheim Glacier by Nick Selmes, Swansea University.

One of the surprises revealed by the resulting data was that some of the calving events actually reversed the flow of the glacier during a glacial earthquake.

“That’s really strange,” Burton says, “because a glacier is an enormous mass that is always moving towards the sea. What could possibly reverse that?”

Burton led a laboratory modeling portion of the study, along with Mac Cathles, who is now at the University of Michigan. They built a rectangular Plexiglas water tank as a scaled-down version of a fjord. Rectangular plastic blocks with the same density as icebergs were tipped into the water tank and the resulting hydrodynamics recorded.

The analysis phase also drew from the expertise of co-author Meredith Nettles, a seismologist at Columbia’s Lamont-Doherty Earth Observatory, and data in the Global Seismographic Network. The collaborative analyses and experimental modeling allowed the researchers to tease apart all the forces responsible for the motion of the glacier, recreate them in the lab, and solve the mystery of how glacial earthquakes work.

Watch a video of the Burton lab's model of a backwards falling iceberg, based on the data from Helheim Glacier:


“We were able to explain the motion of the GPS sensors by tracking all the forces that affect the glacier during iceberg calving, providing an unprecedented understanding of how a glacial earthquake evolves,” Burton says.

They found that many of the calving icebergs are falling backwards, slamming into the face of the glacier before they collapse into the sea. The front of the glacier gets compressed like a spring, temporarily reversing the motion of the glacier and generating the horizontal force of a glacial earthquake.

As the iceberg hits the water, the water pressure behind the rotating berg drops rapidly. This dramatic drop in pressure draws the glacier down about 10 centimeters, while pulling the Earth upwards, creating the vertical force seen in the seismic signature of a glacial earthquake.
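The spring picture lends itself to a toy one-dimensional model: a glacier front flowing seaward at a constant background speed, briefly pushed backward by the impact of the falling berg, then relaxing. This is a minimal illustrative sketch; the mass, stiffness, damping and force values are arbitrary placeholders, not parameters fitted to the Helheim data.

```python
# Toy damped-spring model of a glacial earthquake: the glacier front
# flows seaward at speed v0 until a calving iceberg presses it backward,
# compressing the front like a spring. All numbers are illustrative.

def simulate(v0=0.1, m=1.0, k=1.0, c=1.0, f_impact=-1.0,
             t_pulse=1.0, t_end=10.0, dt=0.001):
    """Return (lowest, final) total front velocity, where total = v0 + v.

    u -- elastic displacement of the front (spring compression)
    v -- elastic velocity on top of the background flow v0
    """
    u = v = 0.0
    t = 0.0
    lowest = v0
    while t < t_end:
        force = f_impact if t < t_pulse else 0.0  # impact pushes the front back
        a = (-k * u - c * v + force) / m          # spring + damping + impact
        v += a * dt                               # semi-implicit Euler step
        u += v * dt
        t += dt
        lowest = min(lowest, v0 + v)
    return lowest, v0 + v

lowest, final = simulate()
print(f"lowest front velocity: {lowest:.3f} (negative means reversed flow)")
print(f"velocity after quake:  {final:.3f} (back near background flow)")
```

With these placeholder numbers the total front velocity goes negative during the impact, the temporary flow reversal seen in the GPS data, and then decays back to the background flow once the compressed front relaxes.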

“This research required the combined efforts of glaciology, seismology and physics,” Burton says. “It was great to work hand-in-hand with field researchers, while also showing that lab research is crucial to understanding what’s happening on the surface of the Earth.”

Glacial earthquakes are globally detectable seismic events. The researchers hope their detailed documentation of the forces at play will help interpret the remote sensing of calving events, which are increasingly occurring at tidewater-terminating glaciers in Greenland and Antarctica.

Related:
The physics of falling icebergs

Saturday, May 16, 2015

A physicist's guide to foam and fortune

From foam to Frankenstein: Sidney Perkowitz enjoys a cappuccino (extra foam) at the Ink and Elm in Emory Village. So far this year he has published his first e-book, Universal Foam 2.0, and started work on a new book project, "Frankenstein 2018." (Photo by Carol Clark)

By Carol Clark

You never know what’s going to bubble up on the agenda of physicist Sidney Perkowitz, Emory Candler Professor of Physics Emeritus. Since the 76-year-old Perkowitz retired in 2011, he seems to pop up everywhere, from the Atlanta Science Festival to South Korean national television to a high-level policy meeting in Washington DC.

After 42 years of research and teaching at Emory, he has shifted his focus from the lab and classroom to the wider world. His mission: Communicating science in ways that get people interested and better informed.

“You’re doing something good for society if you can convey science well to a lay person,” Perkowitz says. “You can have an influence over everyone from a child to a congressman.”

Perkowitz began writing about physics for a general audience when he was about 50. “It forced me to be humble because I had a lot to learn,” he says. “Several editors really helped improve my writing. One gave me this great tip: ‘Remember, you don’t want to simplify the science. You want to simplify the writing.’”

Perkowitz has written six books about physics geared for a lay audience. His most successful, “Universal Foam,” was published in 2001 and remained in print through 2008, including five foreign editions. The book describes the myriad incarnations and inherent mysteries of foam, from densely packed bubbles floating atop a cappuccino to ocean white caps, soap bubbles, and exotic foamy materials used in aerospace and medicine.

Watch a clip from an English-language version of a South Korean documentary inspired by Perkowitz’s book on foam, including interviews with Perkowitz:


Last fall, the book brought a Korean television film crew to Perkowitz’s door. “The filmmakers had contacted me out of the blue and said they wanted to make a documentary for children based on the book,” he says. “They sent over a cameraman, a sound guy, a director and a translator.”

So that’s how Perkowitz found himself in his kitchen, brewing a cappuccino as he was being interviewed about the wonders of foam. “We had a wonderful time,” he says of the experience. “The most amazing part was they paid me! It wasn’t a lot, but I was just doing it for fun. So that was a pretty great deal.”

The documentary, “Bubbles That Can Change the World,” was funded by the South Korean government and shown throughout the country as a way to inspire children’s interest in science.

After the publisher stopped putting out new editions of “Universal Foam,” Perkowitz obtained the rights so that he could update it himself as an e-book in January. He titled it “Universal Foam 2.0.” “It’s amazingly easy,” Perkowitz says of the process of producing an e-book. He adds that he primarily did it to gain experience with e-books, and doesn’t expect it to sell many copies at this stage. “I just love learning something new and being engaged,” he explains. “And I want to feel that I’m doing something useful for science.”

During the past four years, Perkowitz has also written 20 magazine articles, given public talks and served on the science outreach committee of the American Association for the Advancement of Science, which takes him to Washington DC occasionally.

A selection of some of the many editions of Mary Shelley's classic "Frankenstein." (Andy Marbett)

Perkowitz is now at work on his seventh book, which has the working title "Frankenstein 2018." He is both contributing a chapter and co-editing the book, an anthology due out March 11, 2018, the bicentennial of the publication of Mary Shelley’s novel.

“There is something in humanity that wants to find a way to create life and to live forever. But that same desire is also full of fear,” Perkowitz says of the enduring appeal of Frankenstein.

The subject is more relevant than ever. Emory’s Center for Ethics is hosting a major international gathering in Atlanta May 17 to 19, to discuss both aspirations and guidelines for the era of synthetic biology. Biotechnology and the Ethical Imagination: A Global Summit (BEINGS) will bring together delegates from the top 30 biotechnology producing countries of the world.

“The idea of genetic engineering and creating an entirely new being is the 21st-century version of Frankenstein,” Perkowitz says. “Earlier, creating life was envisioned as stitching together dead body parts and zapping them with electricity. Now it’s about getting a micro-scalpel and moving around genes. Some people are afraid of genetically modified food. Imagine how they’ll feel about genetically modified animals and people.”

Perkowitz’s co-editor for the book project is Eddy Von Mueller, an Emory lecturer in film and media studies. The two have already rounded up a dozen contributors for the project, from religion, the arts and sciences, and secured a contract from Pegasus Books.

“Frankenstein is taught often in college classrooms, so we think this anthology might be a good seller as a textbook,” Perkowitz says. “The publisher agreed.”

Friday, May 8, 2015

Umbral Moonshine glimmers on 'The Big Bang Theory'

The precise statement of the Umbral Moonshine Conjecture can be seen on Sheldon's white board.

The proof of the Umbral Moonshine Conjecture has been making news in math and science circles in recent months, including stories in Quanta Magazine and Scientific American. The conjecture was proved by Emory mathematicians Ken Ono and Michael Griffin, and Case Western's John Duncan. The conjecture draws on everything from mock modular forms to string theory and quantum gravity, making it difficult to state. But it has still managed to find its way into pop culture.

A recent episode of “The Big Bang Theory,” titled “The Hofstadter Insufficiency,” gave the conjecture a cameo of sorts. During one scene, the white board in apartment 4A, where Sheldon and Leonard live, was covered in the mathematical formulas of the Dynkin diagrams and the McKay-Thompson series of the Umbral Moonshine Conjecture. And the screenshot, above, shows the precise statement of the Umbral Moonshine Conjecture by Ono and his collaborators.

So remember to watch the white board in future “Big Bang” episodes. It may have news of some pretty cool discoveries.

Related:
Mathematicians prove the Umbral Moonshine Conjecture

Thursday, April 23, 2015

Her father’s trip to the moon showed her the power of evidence

Commander David Scott emerges from a hatch during the Apollo 9 mission. The 10-day flight in 1969 provided vital information on the operational performance, stability and reliability of lunar module propulsion and life support systems. NASA photo.

“When I was a kid, I didn’t think flying into space was a big deal. All my friends’ dads went into space,” says Tracy Scott, senior lecturer in sociology and director of Emory’s Quality Enhancement Plan (QEP).

The goal of Emory’s QEP topic, “The Nature of Evidence,” is to empower students as independent scholars capable of supporting arguments with different types of evidence. Scott’s interest in the topic was formed while growing up immersed in the culture of NASA.

Her father, Commander David Scott, was an astronaut who flew on Gemini 8, Apollo 9 and Apollo 15. He’s one of only 12 people who’ve ever set foot on the moon.



“The thing that was exciting for me was the chance to discover new evidence on the moon,” Scott recalls of her father’s lunar trip. “Here was a new environment that humans had never experienced before and there was a huge amount of knowledge to be gained. My dad took a lot of time before the Apollo 15 mission to explain the scientific goals to me. And particularly the experiment he was going to do on the moon. He helped to find new evidence to confirm a very old theory.”

Watch the video, above, to see Commander Scott conduct his famous hammer and feather experiment while standing on the moon. In a vacuum, with no air resistance, the two objects fell at the same rate, just as Galileo predicted in 1589. It was a striking visual demonstration of what we now know as the equivalence principle: Gravitational mass and inertial mass are exactly the same.
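The outcome follows directly from the kinematics: in a vacuum the fall time is t = √(2h/g), and mass never appears in the formula. A quick sketch, assuming a roughly 1.2-meter drop and lunar gravity of about 1.62 m/s²; the object masses are rough figures for illustration, not official mission data:

```python
import math

# In vacuum the fall time t = sqrt(2h / g) contains no mass term,
# so a hammer and a feather released together land together.

def fall_time(height_m, g=1.62, mass_kg=None):
    """Fall time in vacuum; `mass_kg` is accepted but deliberately unused."""
    return math.sqrt(2.0 * height_m / g)

hammer = fall_time(1.2, mass_kg=1.3)    # geology hammer, roughly 1.3 kg
feather = fall_time(1.2, mass_kg=0.03)  # falcon feather, a few tens of grams
print(f"hammer:  {hammer:.2f} s")       # about 1.22 s
print(f"feather: {feather:.2f} s")      # identical
assert hammer == feather                # mass drops out entirely
```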

“I thought it was really cool that my dad was able to do an experiment that linked all the way back to Galileo,” Scott says. “Learning about the power of evidence when I was a child inspired me. I developed a keen sense of seeking out evidence to deepen what I was being taught, and to support my own arguments and to create new knowledge.”

Wednesday, April 15, 2015

Complex cognition shaped the Stone Age hand axe, study shows

Even with extensive training, the modern mind finds it challenging to make an Acheulean hand axe. "We should have respect for Stone Age tool makers," says experimental archeologist Dietrich Stout. Photo by Carol Clark.

By Carol Clark

The ability to make a Lower Paleolithic hand axe depends on complex cognitive control by the prefrontal cortex, including the “central executive” function of working memory, a new study finds. 

PLOS ONE published the results, which knock another chip off theories that Stone Age hand axes are simple tools that don’t involve higher-order executive function of the brain.

“For the first time, we’ve shown a relationship between the degree of prefrontal brain activity, the ability to make technological judgments, and success in actually making stone tools,” says Dietrich Stout, an experimental archeologist at Emory University and the leader of the study. “The findings are relevant to ongoing debates about the origins of modern human cognition, and the role of technological and social complexity in brain evolution across species.”

The skill of making a prehistoric hand axe is “more complicated and nuanced than many people realize,” Stout says. “It’s not just a bunch of ape-men banging rocks together. We should have respect for Stone Age tool makers.”

The study’s co-authors include Bruce Bradley of the University of Exeter in England; Thierry Chaminade of Aix-Marseille University in France; and Erin Hecht and Nada Khreisheh of Emory University.

Stone tools – shaped by striking a stone “core” with a piece of bone, antler, or another stone – provide some of the most abundant evidence of human behavioral change over time. Simple Oldowan stone flakes are the earliest known tools, dating back 2.6 million years. The Late Acheulean hand axe goes back 500,000 years. While it’s relatively easy to learn to make an Oldowan flake, the Acheulean hand axe is harder to master, due to its lens-shaped core tapering down to symmetrical edges.



“We wanted to tease apart and compare what parts of the brain were most actively involved in these stone tool technologies, particularly the role of motor control versus strategic thinking,” Stout says.

The researchers recruited six subjects, all archeology students at the University of Exeter, to train in making stone tools, a skill known as “knapping.” The subjects’ skills were evaluated before and after they trained and practiced. For Oldowan evaluations, subjects detached five flakes from a flint core. For Acheulean evaluations, they produced a tool from a standardized porcelain core.

At the beginning, middle and end of the 18-month experiment, subjects underwent functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) scans of their brains while they watched videos. The videos showed rotating stone cores marked with colored cues: A red dot indicated an intended point of impact, and a white area showed the flake predicted to result from the impact.

The subjects were asked the following questions: “If the core were struck in the place indicated, is what you see a correct prediction of the flake that would result?” “Is the indicated place to hit the core a correct one given the objective of the technology?”

The subjects responded by pushing a “yes” or “no” button.

Answering the first question, how a rock will break if you hit it in a certain place, relies more on reflexive, perceptual and motor-control processes, associated with posterior portions of the brain. Stout compares it to the modern-day rote reflex of a practiced golf swing or driving a car.

The second question – is it a good idea to hit the core in a certain spot if you want to make a hand axe – involves strategic thinking, such as planning the route for a road trip. “You have to think about information that you have stored in your brain, bring it online, and then make a decision about each step of the trip,” Stout says.

This so-called executive control function of the brain, associated with activity in the prefrontal cortex, allows you to project what’s going to happen in the future and use that projection to guide your action. “It’s kind of like mental time travel, or using a computer simulation,” Stout explains. “It’s considered a high level, human cognitive capacity.”

The researchers mapped the skill level of the subjects onto the data from their brain scans and their responses to the questions. Greater skill at making tools correlated with greater accuracy on the video quiz for predicting the correct strategy for making a hand axe, which was itself correlated with greater activity in the prefrontal cortex.

“These data suggest that making an Acheulean hand axe is not simply a rote, autopilot activity of the brain,” Stout says. “It requires you to engage in some complicated thinking.”

Most of the hand axes produced by the modern hands and minds of the study subjects would not have cut it in the Stone Age. “They weren’t up to the high standards of 500,000 years ago,” Stout says.

A previous study by the researchers showed that learning to make stone tools creates structural changes in fiber tracts of the brain connecting the parietal and frontal lobes, and that these brain changes correlated with increases in performance. “Something is happening to strengthen this connection,” Stout says. “This adds to evidence of the importance of these brain systems for stone tool making, and also shows how tool making may have shaped the brain evolutionarily.”

Stout recently launched a major, three-year archeology experiment that will build on these studies and others. Known as the Language of Technology project, the experiment involves 20 subjects who will each devote 100 hours to learning the art of making a Stone Age hand axe, and also undergo a series of MRI scans. The project aims to home in on whether the brain systems involved in putting together a sequence of words to make a meaningful sentence in spoken language overlap with systems involved in putting together a series of physical actions to reach a meaningful goal.

Related:
Top 10 reasons to learn to make Stone Age tools
Brain trumps hand in Stone Age tool study

Wednesday, March 25, 2015

Physicist's research of glassy materials nets NSF CAREER award

Physicist Justin Burton at work in his lab, where he studies amorphous matter. (Emory Photo/Video)

By Carol Clark

Emory physics professor Justin Burton received a $625,000 award from the National Science Foundation’s Faculty Early Career Development (CAREER) Program. The five-year CAREER grants, among the NSF’s most prestigious awards, support scientists who exemplify the role of teacher-scholars through outstanding research integrated with excellence in education.

Burton will apply the award to his research into amorphous matter, or substances made up of granules in jumbled, irregular states. These substances include everything from the foam on your cup of cappuccino to the vast, slushy mélange of a glacier as it breaks down and flows into the sea. Amorphous matter also encompasses soft condensed matter such as toothpaste, shaving cream, plastic and glass, which are collectively known as “glassy” materials.

“Amorphous material is everywhere; it’s among the most common states of solid matter,” Burton says, “and yet, there’s a lot that we don’t understand about it.”

Crystalline material, by contrast, is relatively rare but well understood by physicists. Crystals have a structural order that makes them easier to conceptualize and define mathematically. “Research into the thermodynamic behavior of crystals at ultra-low temperatures led to our understanding of how they conduct heat,” Burton says. “That’s one of the fundamental triumphs of quantum mechanics. It helped lay the foundation for a lot of important tools of the modern world, from computers to cell phones.”

Lacking the well-defined order of crystals, amorphous materials often behave in peculiar, unpredictable ways. Burton uses the example of a pile of sand at the bottom of an hourglass. “What seems stable enough can suddenly avalanche upon the addition of a few extra grains,” he says. “Or even a traffic jam: What determines the boundary between a flowing state and a rigid one? Our world is full of similar examples where systems exist in a region near marginal stability.”
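The sandpile picture of marginal stability is often illustrated with the classic Bak-Tang-Wiesenfeld sandpile model: drop grains one at a time, and any cell holding four or more grains topples, passing one grain to each neighbor and sometimes triggering avalanches of wildly different sizes. This is a generic textbook sketch, not code from Burton's lab:

```python
import random

def add_grain(grid, n):
    """Drop one grain at a random site, then relax; return avalanche size."""
    r, c = random.randrange(n), random.randrange(n)
    grid[r][c] += 1
    topples = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= 4:
            grid[i][j] -= 4                        # the cell topples
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:    # edge grains fall off the pile
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
    return topples

random.seed(0)
n = 20
grid = [[0] * n for _ in range(n)]
sizes = [add_grain(grid, n) for _ in range(5000)]
print("largest avalanche:", max(sizes), "topplings")
```

After each grain the pile relaxes to a state where every cell is just below the toppling threshold, so the next single grain can do nothing at all or set off a system-spanning avalanche, the hallmark of a system poised at marginal stability.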

A view inside the vacuum chamber, where colloidal particles are suspended in a flat disc, lit by the green light of a laser. Photo by Justin Burton.

Burton’s lab is creating model systems to simulate the dynamics of the microscopic granules of amorphous, glassy matter at ultra-low temperatures below 1 kelvin. That’s colder than the deepest reaches of space.

The lab conducts its experiments in a vacuum chamber filled with ionized argon gas. “It’s a plasma, or a gas that has had its electrons ripped away from its atoms,” Burton explains. “The electrons are constantly being ripped away and recombining.”

Colloidal particles, tiny as dust specks, are suspended in the plasma of the vacuum chamber to stand in for the molecules of an amorphous material. By altering the gas pressure inside the chamber, and varying the size of the particles, the lab members can study how the particles behave as they move from an excited, free-flowing state into a jammed, stable one.

They can also simulate how molecules in a stable position react to a disturbance. “We want to create a wave, like dropping a pebble into a still pond to make ripples, and study that dynamic,” Burton says. “That could help us understand, for instance, how sound moves through a glassy material.”

Burton’s lab will use another model, involving polymer hydrogel particles that expand or shrink in response to salt concentrations, to study Casimir forces, a special type of long-ranged force that can arise between objects in a highly fluctuating medium.

In addition to opening a window into the molecular motions common in glasses, the research could shed light on the connection between the dynamics and disorder in a broad range of physical systems, Burton says.

In parallel to his research effort, the CAREER award will also fund the creation of an after-school science club at an elementary school in Dekalb County. Burton and his graduate students will lead children in hands-on activities and experiments that give insights into basic principles of physics.

Related:
The physics of falling icebergs
Physicists crack another piece of the glass puzzle

Thursday, December 18, 2014

A clear, molecular view of the evolution of human color vision

By around 30 million years ago, our ancestors had evolved the ability to see the full spectrum of visible light, though not UV light.

By Carol Clark

Many genetic mutations in visual pigments, spread over millions of years, were required for humans to evolve from a primitive mammal with a dim, shadowy view of the world into a great ape able to see all the colors in a rainbow.

Now, after more than two decades of painstaking research, scientists have finished a detailed and complete picture of the evolution of human color vision. PLOS Genetics published the final pieces of this picture: The process for how humans switched from ultraviolet (UV) vision to violet vision, or the ability to see blue light.

“We have now traced all of the evolutionary pathways, going back 90 million years, that led to human color vision,” says lead author Shozo Yokoyama, a biologist at Emory University. “We’ve clarified these molecular pathways at the chemical level, the genetic level and the functional level.”

Co-authors of the PLOS Genetics paper include Emory biologists Jinyi Xing, Yang Liu and Davide Faggionato; Syracuse University biologist William Starmer; and Ahmet Altun, a chemist and former post-doc at Emory who is now at Fatih University in Istanbul, Turkey.

Yokoyama and various collaborators over the years have teased out secrets of the adaptive evolution of vision in humans and other vertebrates by studying ancestral molecules. The lengthy process involves first estimating and synthesizing ancestral proteins and pigments of a species, then conducting experiments on them. The technique combines microbiology with theoretical computation, biophysics, quantum chemistry and genetic engineering.

Five classes of opsin genes encode visual pigments for dim-light and color vision. Bits and pieces of the opsin genes change and vision adapts as the environment of a species changes.

Around 90 million years ago, our primitive mammalian ancestors were nocturnal and had UV-sensitive and red-sensitive pigments, giving them a bi-chromatic view of the world. By around 30 million years ago, our ancestors had evolved four classes of opsin genes, giving them the ability to see the full-color spectrum of visible light, except for UV.

“Gorillas and chimpanzees have human color vision,” Yokoyama says. “Or perhaps we should say that humans have gorilla and chimpanzee vision.”

For the PLOS Genetics paper, the researchers focused on the seven genetic mutations involved in losing UV vision and achieving the current function of a blue-sensitive pigment. They traced this progression from 90 to 30 million years ago.

The researchers identified 5,040 possible pathways for the amino acid changes required to bring about the genetic changes. “We did experiments for every one of these 5,040 possibilities,” Yokoyama says. “We found that of the seven genetic changes required, each of them individually has no effect. It is only when several of the changes combine in a particular order that the evolutionary pathway can be completed.”
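
That figure of 5,040 is simply the number of possible orderings of seven changes, 7! = 5,040, as a quick check confirms (the mutation labels below are placeholders for illustration, not the study's):

```python
from itertools import permutations
from math import factorial

# Seven amino acid changes can, in principle, occur in any order;
# each ordering is one candidate evolutionary pathway.
changes = ["m1", "m2", "m3", "m4", "m5", "m6", "m7"]  # placeholder labels
n_pathways = factorial(len(changes))

# Enumerating the orderings directly gives the same count.
assert n_pathways == len(list(permutations(changes)))
print(n_pathways)  # 5040
```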

In other words, just as an animal’s external environment drives natural selection, so do changes in the animal’s molecular environment.

Mice are nocturnal and, like the primitive human ancestor of 90 million years ago, have UV vision and limited ability to see colors.

In previous research, Yokoyama showed how the scabbardfish, which today spends much of its life at depths of 25 to 100 meters, needed just one genetic mutation to switch from UV to blue-light vision. Human ancestors, however, needed seven changes and these changes were spread over millions of years. “The evolution for our ancestors’ vision was very slow, compared to this fish, probably because their environment changed much more slowly,” Yokoyama says.

About 80 percent of the 5,040 pathways the researchers traced stopped in the middle, because a protein became non-functional. Chemist Ahmet Altun solved the mystery of why the protein got knocked out. It needs water to function, and if one mutation occurs before the other, it blocks the two water channels extending through the vision pigment’s membrane.

“The remaining 20 percent of the pathways remained possible pathways, but our ancestors used only one,” Yokoyama says. “We identified that path.”

In 1990, Yokoyama identified the three specific amino acid changes that led to human ancestors developing a green-sensitive pigment. In 2008, he led an effort to construct the most extensive evolutionary tree for dim-light vision, including animals from eels to humans. At key branches of the tree, Yokoyama’s lab engineered ancestral gene functions, in order to connect changes in the living environment to the molecular changes.

The PLOS Genetics paper completes the project for the evolution of human color vision. “We have no more ambiguities, down to the level of the expression of amino acids, for the mechanisms involved in this evolutionary pathway,” Yokoyama says.

Images: Thinkstock

Related:
Evolutionary biologists urged to adapt their research methods
Fish vision makes waves in natural selection

Monday, December 15, 2014

Mathematicians prove the Umbral Moonshine Conjecture

In theoretical math, the term "moonshine" refers to an idea so seemingly impossible that it sounds like lunacy.

By Carol Clark

Monstrous moonshine, a quirky pattern of the monster group in theoretical math, has a shadow – umbral moonshine. Mathematicians have now proved this insight, known as the Umbral Moonshine Conjecture, offering a formula with potential applications for everything from number theory to geometry to quantum physics.

“We’ve transformed the statement of the conjecture into something you could test, a finite calculation, and the conjecture proved to be true,” says Ken Ono, a mathematician at Emory University. “Umbral moonshine has created a lot of excitement in the world of math and physics.”

Co-authors of the proof include mathematicians John Duncan from Case Western Reserve University and Michael Griffin, an Emory graduate student.

“Sometimes a result is so stunningly beautiful that your mind does get blown a little,” Duncan says.

Duncan co-wrote the statement for the Umbral Moonshine Conjecture with Miranda Cheng, a mathematician and physicist at the University of Amsterdam, and Jeff Harvey, a physicist at the University of Chicago.

Ono will present their work on January 11, 2015 at the Joint Mathematics Meetings in San Antonio, the largest mathematics meeting in the world. Ono is delivering one of the highlighted invited addresses.

Ono gave a colloquium on the topic at the University of Michigan, Ann Arbor, in November, and has also been invited to speak on the umbral moonshine proof at upcoming conferences around the world, including Brazil, Canada, England, India, and Germany.

The number of elements in the monster group is larger than the number of atoms in 1,000 Earths.

It sounds like science fiction, but the monster group (also known as the friendly giant) is a real and influential concept in theoretical math.

Modern algebra is built out of groups, or sets of objects required to satisfy certain relationships. One of the biggest achievements in math during the 20th century was classifying all of the finite simple groups. They are now collected in the ATLAS of Finite Groups, published in 1985.

“This ATLAS is to mathematicians what the periodic table is to chemists,” Ono says. “It’s our fundamental guide.”

And yet, the last and largest sporadic finite simple group, the monster group, was not constructed until the late 1970s. “It is absolutely huge, so classifying it was a heroic effort for mathematicians,” Ono says.

In fact, the number of elements in the monster group is larger than the number of atoms in 1,000 Earths. Something that massive defies description.

“Think of a 24-dimensional doughnut,” Duncan says. “And then imagine physical particles zooming through this space, and one particle sometimes hitting another. What happens when they collide depends on a lot of different factors, like the angles at which they meet. There is a particular way of making this 24-dimensional system precise such that the monster is its symmetry. The monster is incredibly symmetric.”

“The monster group is not just a freak,” Ono adds. “It’s actually important to many areas of math.”

It’s too immense, however, to use directly as a tool for calculations. That’s where representation theory comes in.

The shadow technique is a valuable tool in theoretical math.

Shortly after evidence for the monster was discovered, mathematicians John McKay and John Thompson noticed some odd numerical accidents. They found that a series of numbers that can be extracted from a modular function and a series extracted from the monster group seemed to be related. (One example is the strange and simple arithmetic equation 196884 = 196883 + 1.)
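
The numbers in question are the Fourier coefficients of Klein's j-function; McKay's observation is that the first nontrivial coefficients decompose into dimensions of the monster's smallest irreducible representations (a standard statement of the coincidence, added here for context):

```latex
j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^{2} + \cdots, \qquad q = e^{2\pi i \tau}
```

Here 1, 196883 and 21296876 are the dimensions of the three smallest irreducible representations of the monster, so that 196884 = 196883 + 1 and 21493760 = 21296876 + 196883 + 1.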

John Conway and Simon Norton continued to investigate and found that this peculiar pattern was not just a coincidence. “Evidence kept accumulating that there was a special modular function for every element in the monster group,” Ono says. “In other words, the main characteristics of the monster group could be read off from modular functions. That opened the door to representation theory to capture and manipulate the monster.”

The idea that modular functions could tame something as unruly as the monster sounded impossible – like lunacy. It was soon dubbed the Monstrous Moonshine Conjecture.

(The moonshine reference has the same meaning famously used by Ernest Rutherford, known as the father of nuclear physics. In a 1933 speech, Rutherford said that anyone who considered deriving energy from splitting atoms was "talking moonshine.”)

In 1998, Richard Borcherds won math’s highest honor, the Fields Medal, for proving the Monstrous Moonshine Conjecture. His proof turned this representation theory for the monster group into something computable.

Fast-forward 16 years. Three Japanese physicists (Tohru Eguchi, Hirosi Ooguri and Yuji Tachikawa) were investigating a particular kind of string theory involving four-dimensional spaces when they unexpectedly encountered numbers from the Mathieu group M24, another important finite simple group.

“They conjectured a new way to extract numbers from the Mathieu Group,” Duncan says, “and they noticed that the numbers they extracted were similar to those of the monster group, just not as large.” Mathematician Terry Gannon proved that their observations were true.

It was a new, unexpected analogue that hinted at a pattern similar to monstrous moonshine.

Duncan started investigating this idea with physicists Cheng and Harvey. “We realized that the Mathieu group pattern was part of a much bigger picture involving mock modular forms and more moonshine,” Duncan says. “A beautiful mathematical structure was controlling it.”

They dubbed this insight the Umbral Moonshine Conjecture. Since the final version of the more than 100-page conjecture was published online last June, it has been downloaded more than 2,500 times.

The conjecture caught the eye of Ono, an expert in mock modular forms, and he began pondering the problem along with Griffin and Duncan.

“Things came together quickly after the statement of the Umbral Moonshine Conjecture was published,” Ono says. “We have been able to prove it and it is no longer a guess. We can now use the proof as a completely new and different tool to do calculations.”

Just as modular forms are “shadowed” by mock modular forms, monstrous moonshine is shadowed by umbral moonshine. (Umbra is Latin for the innermost and darkest part of a shadow.)

“The job of a theoretical mathematician is to take impossible problems and make them tractable,” Duncan says. “The shadow device is one valuable tool that lets us do that. It allows you to throw away information while still keeping enough to make some valuable observations.”

He compares it to a paleontologist using fossilized bones to piece together a dinosaur.

The jury is out on what role, if any, umbral moonshine could play in helping to unravel mysteries of the universe. Aspects of it, however, hint that it could be related to problems ranging from geometry to black holes and quantum gravity theory.

“What I hope is that we will eventually see that everything is unified, that monstrous moonshine and umbral moonshine have a common origin,” Duncan says. “And part of my optimistic vision is that umbral moonshine may be a piece in one of the most important puzzles of modern physics: The problem of unifying quantum mechanics with Einstein’s general relativity.”

Images: NASA and Thinkstock.

Related:
Mathematicians trace source of Rogers-Ramanujan identities
New theories reveal the nature of numbers

Tuesday, December 9, 2014

Birdsong study reveals how brain uses timing during motor activity

Songbirds are one of the best systems for understanding how the brain controls complex behavior. Image credit: Sam Sober.

By Carol Clark

Timing is key for brain cells controlling a complex motor activity like the singing of a bird, finds a new study published by PLOS Biology.

“You can learn much more about what a bird is singing by looking at the timing of neurons firing in its brain than by looking at the rate that they fire,” says Sam Sober, a biologist at Emory University whose lab led the study. “Just a millisecond difference in the timing of a neuron’s activity makes a difference in the sound that comes out of the bird’s beak.”

The findings are the first to suggest that fine-scale timing of neurons is at least as important in motor systems as in sensory systems, and perhaps more critical.

“The brain takes in information and figures out how to interact with the world through electrical events called action potentials, or spikes in the activity of neurons,” Sober says. “A big goal in neuroscience is to decode the brain by better understanding this process. We’ve taken another step towards that goal.”

Sober’s lab uses Bengalese finches, also known as society finches, as a model system. The way birds control their song has a lot in common with human speech, both in how it’s learned early in life and how it’s vocalized in adults. The neural pathways for birdsong are also well known, and restricted to that one activity.

“Songbirds are the best system for understanding how the brain controls complex vocal behavior, and one of the best systems for understanding control of motor behavior in general,” Sober says.



Researchers have long known that for an organism to interpret sensory information – such as sight, sound and taste – the timing of spikes in brain cells can matter more than the rate, or the total number of times they fire. Studies on flies, for instance, have shown that their visual systems are highly sensitive to the movement of shadows. By looking at the timing of spikes in the fly’s neurons you can tell the velocity of a shadow that the fly is seeing.

An animal’s physical response to a stimulus, however, is much slower than the millisecond timescale on which spikes are produced. “There was an assumption that because muscles have a relatively slow response time, a timing code in neurons could not make a difference in controlling movement of the body,” Sober says.

An Emory undergraduate in the Sober lab, Claire Tang, got the idea of testing that assumption. She proposed an experiment involving mathematical methods that she was learning in a Physical Biology class. The class was taught by Emory biophysicist Ilya Nemenman, an expert in the use of computational techniques to study biological systems.

“Claire is a gifted mathematician and programmer and biologist,” Sober says of Tang, now a graduate student at the University of California, San Francisco. “She made a major contribution to the design of the study and in the analysis of the results.”

Co-authors also include Nemenman; laboratory technician Diala Chehayeb; and Kyle Srivastava, a graduate student in the Emory/Georgia Tech graduate program in biomedical engineering.

The researchers used an array of electrodes, each thinner than a human hair, to record the activity of single neurons of adult finches as they were singing.

“The birds repeat themselves, singing the same sequence of ‘syllables’ multiple times,” Sober says. “A particular sequence of syllables matches a particular firing of neurons. And each time a bird sings a sequence, it sings it a little bit differently, with a slightly higher or lower pitch. The firing of the neurons is also slightly different.”

The acoustic signals of the birdsong were recorded alongside the timing and the rate that single neurons fired. The researchers applied information theory, a discipline originally designed to analyze communications systems such as the Internet or cellular phones, to analyze how much one could learn about the behavior of the bird singing by looking at the precise timing of the spikes versus their number.

The result showed that for the duration of one song signal, or 40 milliseconds, the timing of the spikes contained 10 times more information than the rate of the spikes.
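
The gap between a timing code and a rate code can be illustrated with a toy calculation (random placeholder spike trains, not the study's data or its information-theoretic analysis): discretize a 40-millisecond window into 1-millisecond bins, then compare the entropy of the full binary word with the entropy of the bare spike count.

```python
import random
from collections import Counter
from math import log2

random.seed(0)

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Toy spike trains: a 40 ms window in 1 ms bins, roughly 3 spikes per trial.
# A "timing code" keeps the full binary word; a "rate code" keeps only the
# spike count. The trains are random placeholders, not birdsong recordings.
trials = []
for _ in range(5000):
    word = tuple(1 if random.random() < 0.08 else 0 for _ in range(40))
    trials.append(word)

timing_entropy = entropy(trials)                  # distinct 40-bit words
rate_entropy = entropy([sum(w) for w in trials])  # spike counts only
print(timing_entropy > rate_entropy)  # True
```

Because the spike count is a function of the full word, the rate code can never carry more entropy than the timing code; the toy data just make the size of the gap concrete.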

“Our findings make it pretty clear that you may be missing a lot of the information in the neural code unless you consider the timing,” Sober says.

Such improvements in our understanding of how the brain controls physical movement hold many potential health applications, he adds.

“For example,” he says, “one area of research is focused on how to record neural signals from the brains of paralyzed people and then using the signals to control prosthetic limbs. Currently, this area of research tends to focus on the firing rate of the neurons rather than taking the precise timing of the spikes into account. Our work shows that, in songbirds at least, you can learn much more about behavior by looking at spike timing than spike rate. If this turns out to be true in humans as well, timing information could be analyzed to improve a patient’s ability to control a prosthesis.”

The research was supported by grants from the National Institutes of Health, the National Science Foundation, the James S. McDonnell Foundation and Emory’s Computational Neuroscience Training Program.

Bird graphic courtesy of Sam Sober.

Related:
Doing the math for how songbirds learn to sing
Birdsong study pecks theory that music is uniquely human

Friday, November 7, 2014

Interstellar: Starting over on a new 'Earth'



The movie Interstellar opens in theaters at a time when Earth is facing major losses of biodiversity and ecosystems, says David Lynn, an Emory professor of biomolecular chemistry.

While humanity is challenged to find out what’s happening to Earth and how to make adjustments, we have also begun to realize that billions of Earth-like planets likely exist in habitable zones around the stars of our galaxy.

“In as little as 10 years, we could know whether we’re alone in the universe, whether there are other living systems,” Lynn says. “That’s an exciting prospect. It’s not clear necessarily that we’ll find out that there is intelligent life or not. That may be a lower probability, but that’s also possible.”

Much of the science in Interstellar is not accurate, and its vision of the future may not come true. And yet, it is still an important film, Lynn says, since its themes resonate today, during a critical time in our history.

Related:
Chemists boldly go in search of 'little green molecules'
Prometheus: Seeding wonder and science

Monday, September 8, 2014

Patterns etched in sound



“I’m into beautiful melodies and catchy harmonies,” says Robert Schneider, the co-founder of The Elephant 6 Recording Company and lead singer and songwriter in the band The Apples in Stereo. “As a producer, I’m also interested in surrounding my pop songs with experimental sounds. These sorts of things are very appealing to me.”

In a recent TEDxEmory talk, Schneider explains how music led him to become a Woodruff Graduate Fellow in Emory’s Department of Mathematics and Computer Science. His research focuses mainly on analytic number theory, but he also has created music compositions based on mathematics.

“I found as I started to study mathematics that there were all these beautiful patterns that were lying there,” he says. “It was like music that was silent, just waiting to be written out and used for compositions.”

Watch the video to learn more, and listen to some of Schneider’s mathematical compositions.

Related:
He took the psychedelic pop path to math

Tuesday, August 19, 2014

The physics of falling icebergs



By Carol Clark

For thousands of years, the massive glaciers of Earth’s polar regions have remained relatively stable, the ice locked into mountainous shapes that ebbed in warmer months but gained back their bulk in winter. In recent decades, however, warmer temperatures have started rapidly thawing these frozen giants. It’s becoming more common for sheets of ice, one kilometer tall, to shift, crack and tumble into the sea, splitting from their mother glaciers in an explosive process known as calving.

“Imagine a sheer, vertical ice face three times as tall as the tallest building in Atlanta breaking off from a glacier and flipping 90 degrees,” says Emory physicist Justin Burton. “In my lab, we can calculate how much energy is released during one of these events, which can be equivalent to several nuclear bombs.”
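
A rough hydrostatic estimate shows how the numbers get that large. For a rectangular iceberg of height H, width W and length L capsizing in water, the potential energy released is about (1/2) * rho_ice * g * (1 - rho_ice/rho_water) * L * W * H * (H - W). The dimensions below are assumed for illustration; this is not Burton's actual calculation.

```python
# Back-of-envelope energy release for a capsizing rectangular iceberg.
RHO_ICE, RHO_WATER, G = 917.0, 1025.0, 9.8  # kg/m^3, kg/m^3, m/s^2
H, W, L = 1000.0, 200.0, 1000.0             # assumed height, width, length (m)

delta_E = 0.5 * RHO_ICE * G * (1 - RHO_ICE / RHO_WATER) * L * W * H * (H - W)

hiroshima = 6.3e13  # roughly 15 kilotons of TNT, in joules
print(f"{delta_E:.2e} J  (~{delta_E / hiroshima:.1f} Hiroshima-scale bombs)")
```

With these assumed dimensions the estimate lands near 10^14 joules, the same order of magnitude as a nuclear bomb, consistent with the comparison above.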

Burton studies the geophysics of calving icebergs in order to better understand and predict effects of climate change, such as sea-level rise.

“Ice coverage is one of the most sensitive indicators of climate change,” he says. About half of the loss of ice from the polar ice sheets is occurring due to melting and half due to iceberg calving. While it’s more straightforward to estimate iceberg melt rates, their calving rates are much harder to pin down.

Greenland's Ilulissat fjord is believed to have spawned the iceberg that brought down the Titanic.

For the 2012 film “Chasing Ice,” videographers spent years enduring subzero temperatures to record stunning time-lapse footage of ancient glaciers receding. Their efforts also yielded the largest calving event ever captured on film. The area involved was about the size of Manhattan. The filmmakers described it as like watching skyscrapers rolling around in an earthquake and an entire city breaking apart before their eyes.

Direct field observations of calving icebergs are as dangerous as they are rare. So Burton and his colleagues developed ways to model these events in a controlled, laboratory setting. “We can measure things that can’t be measured in the field,” he explains, “and it’s also way cheaper and safer.”

He and his colleagues built a cylindrical, Plexiglas water tank as a scaled-down version of a fjord, similar to the ice-walled channel at the end of the Ilulissat glacier, which drains the Greenland ice sheet into the ocean. This well-studied glacier, also known as Jakobshavn, is considered an important bellwether for climate change.

While it is normal for glaciers to both accumulate and shed ice, Jakobshavn provides a vivid snapshot of how the shedding process has sped up. The glacier retreated 8 miles during the 100-year period between 1902 and 2001, but has retreated more than 10 miles during the past decade. Greenland’s ice sheet appears to be out of balance, losing more ice than it gains.



Burton’s lab creates experimental models to gain a more precise understanding of these glacial processes. Rectangular plastic blocks that have the same density as icebergs are tipped in the water tank and the resulting hydrodynamics are recorded.

One hypothesis that the lab is investigating is how the waves unleashed by capsizing icebergs may be causing earthquakes that can be detected thousands of miles away. “It’s counterintuitive,” Burton says, “because usually you think of earthquakes as causing large waves and not the other way around.”

The lab models, however, suggest that the violent rotation of massive icebergs generates waves that release the brunt of their energy onto the sheer vertical face of the glacier, instead of dispersing most of it into the ocean.

“If we can correlate the frequencies of earthquake signals with the frequencies of icebergs rocking back and forth in the water, then that could be a direct measurement of the size of the icebergs that have broken off,” Burton explains. “Large iceberg calving events could then be detected and measured using remote seismic monitoring.”

Climate change and its impacts are among the top problems in science, Burton says. “We’re seeing huge changes occurring within a few years and we’ve got to get on it. I’d like to think that, a few decades from now, we were able to do something.”

Photo of Ilulissat by iStockphoto.com

Related:
Creating an atmosphere for change
Crystal-liquid interface visible for first time

Tuesday, August 5, 2014

Physicists eye neural fly data, find formula for Zipf's law

The Zipf's law mechanism was verified with neural data of blowflies reacting to changes in visual signals.

By Carol Clark

Physicists have identified a mechanism that may help explain Zipf’s law – a unique pattern of behavior found in disparate systems, including complex biological ones. The journal Physical Review Letters is publishing their mathematical models, which demonstrate how Zipf’s law naturally arises when a sufficient number of units react to a hidden variable in a system.

“We’ve discovered a method that produces Zipf’s law without fine-tuning and with very few assumptions,” says Ilya Nemenman, a biophysicist at Emory University and one of the authors of the research.

The paper’s co-authors include biophysicists David Schwab of Princeton and Pankaj Mehta of Boston University. “I don’t think any one of us would have made this insight alone,” Nemenman says. “We were trying to solve an unrelated problem when we hit upon it. It was serendipity and the combination of all our varied experience and knowledge.”

Their findings, verified with neural data of blowflies reacting to changes in visual signals, may have universal applications. “It’s a simple mechanism,” Nemenman says. “If a system has some hidden variable, and many units, such as 40 or 50 neurons, are adapted and responding to the variable, then Zipf’s law will kick in.”

That insight could aid in the understanding of how biological systems process stimuli. For instance, in order to pinpoint a malfunction in neural activity, it would be useful to know what data recorded from a normally functioning brain would be expected to look like. “If you observed a deviation from the Zipf’s law mechanism that we’ve identified, that would likely be a good place to investigate,” Nemenman says.

Zipf’s law is a mysterious mathematical principle that was noticed as far back as the 19th century, but was named for 20th-century linguist George Zipf. He found that if you rank words in a language in order of their popularity, a strange pattern emerges: The most popular word is used twice as often as the second most popular, and three times as much as the third-ranked word, and so on. This same rank vs. frequency rule was also found to apply to many other social systems, including income distribution among individuals and the size of cities, with a few exceptions.
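
Stated as a formula, Zipf's rule says the r-th most popular word appears about 1/r times as often as the most popular one, f(r) = f(1)/r, which traces a straight line of slope -1 on log-log axes. A minimal check of that property, with a made-up count for the top-ranked word:

```python
from math import log

f1 = 60000.0  # hypothetical count of the most popular word
ranks = range(1, 101)
freqs = [f1 / r for r in ranks]  # Zipf: rank 2 gets half, rank 3 a third, ...

# On log-log axes a pure Zipf curve has slope -1 between every pair
# of consecutive ranks.
slopes = [(log(freqs[i + 1]) - log(freqs[i])) / (log(i + 2) - log(i + 1))
          for i in range(99)]
print(all(abs(s + 1) < 1e-9 for s in slopes))  # True
```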

More recently, laboratory experiments suggest that Zipf’s power-law structure also applies to a range of natural systems, from the protein sequences of immune receptors in cells to the intensity of solar flares from the sun.

“It’s interesting when you see the same phenomenon in systems that are so diverse. It makes you wonder,” Nemenman says.

Scientists have pondered the mystery of Zipf’s law for decades. Some studies have managed to reveal how a feature of a particular system makes it Zipfian, while others have come up with broad mechanisms that generate similar power laws but need some fine-tuning to generate the exact Zipf’s law.

“Our method is the only one that I know of that covers both of these areas,” Nemenman says. “It’s broad enough to cover many different systems and you don’t have to fine tune it: It doesn’t require you to set some parameters at exactly the right value.”

Neurons turn visual stimuli into units of information.

The blowfly data came from experiments led by biophysicist Rob de Ruyter that Nemenman worked on as a graduate student. Flies were rotated on a rotor, watching the world go by, hundreds of times. The moving scenes that the flies repeatedly experienced simulated their natural flight patterns. The researchers recorded when neurons associated with vision spiked, or fired. All sets of the data largely matched within a few hundred microseconds, showing that the flies’ neurons were not randomly spiking, but instead operating like precise coding machines.

If you think of a neuron firing as a “1” and a neuron not firing as a “0,” then the neural activity can be thought of as words, made up of 1s and 0s. When these “words,” or units, are strung together over time, they become “sentences.”

The neurons are turning visual stimuli into units of information, Nemenman explains. “The data is a way for us to read the sentences the fly’s vision neurons are conveying to the rest of the brain.”

Nemenman and his co-authors took a fresh look at this fly data for the new paper in Physical Review Letters. “We were trying to understand if there is a relationship between ideas of universality, or criticality, in physical systems and neural examples of how animals learn,” he says.

In order to navigate in flight, the flies’ visual neurons adapt to changes in the visual signal, such as velocity. When the world moves faster in front of a fly, these sensitive neurons adapt and rescale. These adaptions enable the flies to adjust to new environments, just as our own eyes adapt and rescale when we move from a darkened theater to a brightly lit room.

“We showed mathematically that the system becomes Zipfian when you’re recording the activity of many units, such as neurons, and all of the units are responding to the same variable,” Nemenman says. “The fact that Zipf’s law will occur in a system with just 40 or 50 such units shows that biological units are in some sense special – they must be adapted to the outside world.”

The researchers provide mathematical simulations to back up their theory. “Not only can we predict that Zipf’s law is going to emerge in any system which consists of many units responding to variable outside signals,” Nemenman says, “we can also tell you how many units you need to develop Zipf’s law, given how variable the response is of a single unit.”
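
The flavor of those simulations can be sketched in a few lines (an illustration of the mechanism with a uniform prior on the hidden variable; this is not the paper's derivation). If N binary units each fire with the same unknown probability h, then marginalizing over h gives every word with k ones probability 1/((N+1) * C(N,k)), and probability times rank stays nearly constant across all 2^N words, which is Zipf's law:

```python
from math import comb

N = 50  # number of units, in the range the article mentions

# Probability of one specific word containing k ones, after averaging
# over a hidden firing probability h drawn uniformly from (0, 1).
p = {k: 1.0 / ((N + 1) * comb(N, k)) for k in range(N + 1)}

# Enumerate word probabilities in decreasing order, tracking the rank.
items = sorted(((p[k], comb(N, k)) for k in range(N + 1)), reverse=True)
rank, checks = 0, []
for prob, count in items:
    rank += count               # 'count' words share this probability
    checks.append(prob * rank)  # Zipf predicts this product is ~constant
print(min(checks), max(checks))
```

Even though the ranks span fifteen orders of magnitude (2^50 words in all), the product of probability and rank varies by less than a factor of ten, so the rank-frequency curve is approximately Zipfian without any fine-tuning.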

They are now researching whether they can bring their work full circle, by showing that the mechanism they identified applies to Zipf’s law in language.

“Letters and words in language are sequences that encode a description of something that is changing over time, like the plot line in a story,” Nemenman says. “I expect to find a pattern similar to how vision neurons fire as a fly moves through the world and the scenery changes.”

Related:
Biology may not be so complex after all 

Photos: iStockphoto.com

Wednesday, June 25, 2014

In Emory's Math Circle, bubbles are square and equations are cool

This summer, math graduate student Sarah Trebat-Leder is working with elementary-age children at the Children’s Museum of Atlanta (above) and with advanced college undergraduates on the Emory campus. And during the school year, she organizes the Emory Math Circle for middle school and high school students. (Photos by Tony Benner, Emory Photo/Video.)

By Carol Clark

Each June and July, the Emory math department gathers a hive of brilliant minds from around the country for Research Experiences for Undergraduates (REU), a National Science Foundation initiative. The 13 participants at Emory this summer have come from Brown, Harvard, Indiana University, Princeton, Stanford, the University of Georgia and Yale. Number theorist Ken Ono heads up the Emory REU. He and the other instructors charge the group with problems relating to elliptic curves and Galois representations, mock modular and quantum modular forms, additive number theory and distribution of primes.

“This is one of the top REUs in the country, because of the research you get to do here,” says Sarah Trebat-Leder, an Emory NSF Graduate Fellow, who is an instructor for the group this summer.

Trebat-Leder, who graduated from Princeton in 2013, came to two of the Emory REU summer programs herself as an undergraduate. “I learned how to be a mathematician,” she says of the experience. “How to read technical math papers, how to give talks, how to write math and how to go about doing research.”

Ono put her to work on extending the findings of a major discovery in the area of partitions that he had just published with colleagues. “My first REU project was generalizing this major paper that a lot of people in the math world cared about,” Trebat-Leder says. “I had taken a lot of classes, but I had never worked on a problem that no one had solved. Ken is a great mentor because he knows how to develop projects that are accessible to students and yet important to math.”

Ever seen a square bubble? Emory graduate students are giving kids a new view of math, aiming to spark wonder and a desire to learn more.

Trebat-Leder is also devoted to making math accessible and inspiring, for everyone from young kids to adults. Her career goal is to become a college professor focused on teaching and community outreach. 

In January, Trebat-Leder launched the Emory Math Circle. The free program draws students from Atlanta middle schools and high schools to campus on Saturdays for challenging and fun math enrichment sessions led by Emory graduate students. This summer, in addition to teaching for the REU, she is spending several Saturday afternoons at the Children’s Museum of Atlanta alongside other Emory graduate students, including Amanda Clemm, a co-organizer of the Math Circle. They are immersing young children in math and physics through a hands-on activity they call “3D Boxes and Bubbles.”

Trebat-Leder reshapes math education.
“Who doesn’t like bubbles?” says Trebat-Leder, explaining the activity’s appeal.

First the kids build a variety of geometric structures out of ZomeTools, interlocking plastic balls and tubes. Then they use the structures to create soap bubbles in crazy shapes: squares, cubes, spirals, wormholes and parabolas.

While the kids are busy making bubbles, the graduate students ask them questions about what they think is happening. Why does an odd-shaped bubble form in the middle of a 3D geometric shape? “The bubble mix is kind of lazy,” Trebat-Leder explains. “It wants to connect up without having to stretch a lot, and it takes less stretch for it to connect in the middle than to stretch to the outside.”

The idea is to strip out complex jargon and give kids glimpses into math and physics that help them to think both logically and creatively.

It’s a far different approach than multiplication drills.

Amanda Clemm is among the Emory graduate students who are volunteering their time to give kids positive early experiences with math.

“I was getting my hair cut the other day, and the hair dresser asked what I do. I told her and she said, ‘I hated math!’” Trebat-Leder says. “I get that reaction everywhere. Everyone is always telling me about their bad experiences with math. I’d like to change that, but it takes time.”

Trebat-Leder, who grew up in Pennsylvania’s Lehigh Valley, loved teaching even as a child. “It’s in my blood,” she says. By the time she was 11, she had earned her black belt in karate and was leading a karate class herself.

She also had an affinity for math. After her sophomore year in high school, she went to a summer math camp offered by Hampshire College in Massachusetts. “I spent six weeks doing nothing but math all day, and I got a strong sense of what it was all about,” Trebat-Leder says. “I love math because it’s both logical and creative. In science, you have a hypothesis and conduct an experiment that can strongly support your hypothesis. But math is more precise. You can actually prove something and be sure that it is true.”

During her Princeton undergraduate years, Trebat-Leder participated in a Boston University summer program called PROMYS, or Program in Mathematics for Young Scientists. PROMYS immerses both high school students and teachers in the creative aspects of math and original research.


Trebat-Leder drew on all her varied experiences to launch the Emory Math Circle. More than a dozen Emory graduate students responded to her call to lead the free math enrichment sessions on Saturday afternoons, and about 30 middle school and high school students attended throughout the spring semester.

Math Circle is not free tutoring for students who are struggling in their classes, Trebat-Leder stresses. “We’re looking for kids who really want to be here and who enjoy our sessions,” she says. “Our aim is to get the students excited about math and let them see how interesting it can be by exposing them to things they don’t learn in school.”

The middle-school level sessions might introduce the students to graph theory by showing how it can be used to model Facebook networks or to play “Cops and Robbers,” a game that explores how many policemen you need to catch a criminal in different scenarios. Another popular game in the Math Circle requires students to keep four colors from touching one another. “The four-color theorem was one of the really deep problems in combinatorics,” Trebat-Leder says. “It took a lot of computers and people to prove it. But it’s also super visual and it doesn’t require a lot of technical language and symbols to convey.”
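The four-color theorem says that any planar map can be colored with just four colors so that no two adjacent regions share a color. The flavor of the Math Circle game can be sketched in a few lines of code; the six-region map below and its adjacency lists are invented for illustration, not taken from the Math Circle materials.

```python
# Backtracking search for a proper 4-coloring of a small planar graph.
# Regions A-F and their borders are a hypothetical example map.
adjacency = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "E"],
    "C": ["A", "B", "D", "E", "F"],
    "D": ["A", "C", "F"],
    "E": ["B", "C", "F"],
    "F": ["C", "D", "E"],
}

def four_color(graph, colors=("red", "green", "blue", "yellow")):
    """Return a node -> color dict with no two neighbors alike, or None."""
    nodes = list(graph)
    assignment = {}

    def backtrack(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for color in colors:
            # Only try colors that no already-colored neighbor uses.
            if all(assignment.get(nb) != color for nb in graph[node]):
                assignment[node] = color
                if backtrack(i + 1):
                    return True
                del assignment[node]  # undo and try the next color
        return False

    return assignment if backtrack(0) else None

coloring = four_color(adjacency)
# Every pair of bordering regions ends up with different colors.
```

The theorem guarantees that for a planar map this search always succeeds with four colors; proving that guarantee, as the article notes, took computers and decades of effort.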

Kids grasp the idea of math hidden in shapes.
The students who attended the Math Circle sessions last spring came from a range of races, and about half were girls, Trebat-Leder says. She notes that girls were the first- and second-place winners of a problem-solving contest organized for the Math Circle middle school students.

“The kids get to learn some really cool math and see what it’s like to actually discuss it themselves and not have it lectured to them,” Trebat-Leder says. “It’s really beneficial to have graduate students, who have studied a lot of math and understand it deeply, interact with kids.”

She cites an article she read recently comparing math to art. “If art classes consisted of just reproducing other people’s paintings, then the experience wouldn’t be nearly as fun or creative,” Trebat-Leder says. “And yet, that’s the way most schools teach math.”

She hopes to keep expanding her influence as an educator, and come up with more ways to improve the math experience of kids. “I think schools are emphasizing the wrong things in an era when computers drive a lot of the work,” she says. “We’re still having kids spend a lot of time practicing long division when we should be focusing more on concepts. Technology has changed so much, and I think that what we’re teaching should be adapting to that.”

Related:
The math of card tricks, games and gambling
How culture shaped a mathematician

Thursday, May 1, 2014

The art and physics of falling fluid

Pouring layers of paint, of different colors, produces “the most magical fantasies and forms that the human mind can imagine,” wrote Mexican painter David Alfaro Siqueiros.

It turns out that Siqueiros and Jackson Pollock, two iconic artists of Abstract Expressionism, were also experimentalists of fluid mechanics.

“Physical analysis illuminates the ways that both artists used a natural effect – fluids falling under gravity – to produce their works,” writes Emory physicist Sidney Perkowitz, in a recent issue of Physics World.

The video above demonstrates that the patterns produced by Siqueiros, who described his technique as “accidental painting,” result from a Rayleigh-Taylor instability of a viscous gravity current.

You can read Perkowitz’s article in the April issue of Physics World.

Monday, April 28, 2014

Mathematicians trace source of Rogers-Ramanujan identities, find algebraic gold

Nautilus shell spirals are among the many forms in nature that can be related to the golden ratio, the most famous algebraic number of them all. Now mathematicians have discovered a new treasure trove of algebraic numbers and formulas to access them.

By Carol Clark

Mathematicians have found a framework for the celebrated Rogers-Ramanujan identities and their arithmetic properties, solving another long-standing mystery stemming from the work of Indian math genius Srinivasa Ramanujan.

The findings, by mathematicians at Emory University and the University of Queensland, yield a treasure trove of algebraic numbers and formulas to access them.

“Algebraic numbers are among the first numbers you encounter in mathematics,” says Ken Ono, a number theorist at Emory. “And yet, it’s surprisingly difficult to find functions that return them as values in a uniform and systematic way.”

Ono is the co-author of the new findings, along with S. Ole Warnaar of the University of Queensland and Michael Griffin, an Emory graduate student.

Ono announced the findings in April as a plenary speaker at the Applications of Automorphic Forms in Number Theory and Combinatorics conference at Louisiana State University. He will also present them as a plenary speaker at the 2015 Joint Mathematics Meetings, the largest mathematics meeting in the world, set for January in San Antonio. Warnaar, Griffin and others will give additional talks on the findings during an invited special session to accompany Ono’s plenary address.
Ramanujan had "a Midas touch."

The most famous algebraic number of all is the golden ratio, also known by the Greek letter phi. Many great works of architecture and art, such as the Parthenon, are said to embody the pleasing proportions of the golden ratio, which also appears in beautiful forms in nature. Mathematicians, artists and scientists from ancient times to today have pondered the qualities of phi, which is approximately equal to 1.618, although its digits just keep on going, with no apparent pattern.

“People studied the golden ratio before there was a real theory of algebra,” Ono says. “It was a kind of prototype for algebraic numbers.”

Although no other algebraic units are as famous as the golden ratio, they are of central importance to algebra. “A fundamental problem in mathematics is to find functions whose values are always algebraic numbers,” Ono says. “The famous Swiss mathematician Leonhard Euler made some progress on this problem in the 18th century. His theory of continued fractions, where one successively divides numbers in a systematic way, produces some very special algebraic numbers like the golden ratio. But his theory cannot produce algebraic numbers which go beyond the stuff of the quadratic formula that one encounters in high school algebra.”
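The continued-fraction route to the golden ratio that Ono describes is easy to see in code: phi is the value of the infinite continued fraction 1 + 1/(1 + 1/(1 + ...)), so repeatedly applying x → 1 + 1/x converges to it.

```python
# The golden ratio is the simple continued fraction [1; 1, 1, 1, ...],
# i.e. the positive fixed point of x -> 1 + 1/x.
x = 1.0
for _ in range(40):
    x = 1.0 + 1.0 / x

phi = (1 + 5 ** 0.5) / 2  # closed form: (1 + sqrt(5)) / 2
# After 40 iterations x agrees with phi to double precision, ~1.6180339887...
```

Each iteration shrinks the error by roughly a factor of phi squared, which is why so few steps suffice; quadratic irrationals like phi are exactly the numbers that Euler’s continued fractions can reach.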

Ramanujan, however, could produce such numbers, and he made it look easy.

“Ramanujan has a very special, almost mythic, status in mathematics,” says Edward Frenkel, a mathematician at the University of California, Berkeley. “He had a sort of Midas touch that seemed to magically turn everything into gold.”

And the Rogers-Ramanujan identities are considered among Ramanujan’s greatest legacies, adds Frenkel, a leading expert on the identities.

“They are two of the most remarkable and important results in the theory of q-series, or special functions,” says Warnaar, who began studying the Rogers-Ramanujan identities shortly after he encountered them while working on his PhD in statistical mechanics about 20 years ago.

Although originally discovered by L. J. Rogers in 1894, the identities became famous through the work of Ramanujan, who was largely self-taught and worked instinctively.
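For reference, the two identities equate infinite sums with infinite products of a strikingly different shape:

```latex
\sum_{n=0}^{\infty} \frac{q^{n^2}}{(1-q)(1-q^2)\cdots(1-q^n)}
  = \prod_{n=0}^{\infty} \frac{1}{(1-q^{5n+1})(1-q^{5n+4})},
\qquad
\sum_{n=0}^{\infty} \frac{q^{n^2+n}}{(1-q)(1-q^2)\cdots(1-q^n)}
  = \prod_{n=0}^{\infty} \frac{1}{(1-q^{5n+2})(1-q^{5n+3})}.
```

Interpreted combinatorially, the first says that partitions into parts differing by at least 2 are exactly as numerous as partitions into parts congruent to 1 or 4 modulo 5.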

The Rogers-Ramanujan identities are among Ramanujan's greatest legacies.

In 1913, Ramanujan sent a letter from his native India to the British mathematician G. H. Hardy that included the two identities Rogers had discovered, along with a third formula showing that the identities are essentially modular functions whose quotient has a special property: its singular values are algebraic integral units. That result came to be known as the Rogers-Ramanujan continued fraction.
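Though proving the identities took serious machinery, anyone can check them term by term by expanding both sides as power series in q. A short sketch verifying the first identity through q^29 (the truncation order 30 is an arbitrary choice):

```python
N = 30  # work with power series truncated before q^N

def mul(a, b):
    """Multiply two truncated power series (coefficient lists)."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv_one_minus_qk(k):
    """Series for 1/(1 - q^k) = 1 + q^k + q^{2k} + ..., truncated."""
    c = [0] * N
    for i in range(0, N, k):
        c[i] = 1
    return c

# Sum side: sum over n of q^{n^2} / ((1-q)(1-q^2)...(1-q^n)).
lhs = [0] * N
for n in range(N):
    if n * n >= N:
        break  # later terms contribute nothing below q^N
    term = [0] * N
    term[n * n] = 1
    for k in range(1, n + 1):
        term = mul(term, inv_one_minus_qk(k))
    lhs = [x + y for x, y in zip(lhs, term)]

# Product side: product over n of 1/((1-q^{5n+1})(1-q^{5n+4})).
rhs = [0] * N
rhs[0] = 1
for k in range(1, N):
    if k % 5 in (1, 4):
        rhs = mul(rhs, inv_one_minus_qk(k))

print(lhs == rhs)  # True: all 30 coefficients agree
```

A finite check like this is evidence, not proof; the mystery Hardy faced was why the agreement continues forever.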

Hardy was astonished when he saw the formulas. “I had never seen anything in the least like this before,” Hardy wrote. “A single look at them is enough to show they could only be written down by a mathematician of the highest class. They must be true because no one would have the imagination to invent them.”

“Ramanujan seemed to produce this result out of thin air,” Ono says.

Ramanujan died in 1920 before he could explain how he conjured up the formulas. “They have been cited hundreds of times by mathematicians,” Ono says. “They are used in statistical mathematics, conformal field theory and number theory. And yet no one knew whether Ramanujan just stumbled onto the power of these two identities or whether they were fragments of a larger theory.”

For nearly a century, many great mathematicians have worked on solving the mystery of where Ramanujan’s formulas came from and why they should be true.

"Ramanujan has a very special, almost mythic, status among mathematicians," says Frenkel. Above is a still photo from an upcoming film, "Ramanujan," a biography of the math genius by Camphor Cinema.

Ono uses the analogy of going for a walk in a creek bed and discovering a piece of gold. Had Ramanujan accidentally found a random nugget? Or was he drawn to that area because he knew of a rich seam of gold nearby?

Warnaar was among those who pondered these questions. “Just like digging for gold, in mathematics it’s not always obvious where to look for a solution,” he says. “It takes time and effort, with no guarantee of success, but it helps if you develop a lot of intuition about where to look.”

Finally, after 15 years of focusing almost entirely on the Rogers-Ramanujan identities, Warnaar found a way to embed them into a much larger class of similar identities using something known as representation theory.

“Ole found the mother lode of identities,” Ono says.

When Ono saw Warnaar’s work posted last November on arXiv.org, a mathematics-physics archive, his eyes lit up.

“It just clicked,” Ono recalls. “Ole found this huge vein of gold, and we then figured out a way to mine the gold. We went to work and showed how to come full circle and make use of the formulas. Now we can extract infinitely many functions whose values are these beautiful algebraic numbers.” 

“Historically, the Rogers-Ramanujan identities have tantalized mathematicians,” says George Andrews, a mathematician at Penn State and another top authority on the identities. “They have played an almost magical role in many areas of math, statistical mechanics and physics.”

The collaboration of Warnaar, Ono and Griffin “has given us a big picture of the general setting for these identities, and deepened our theoretical understanding for many of the breakthroughs in this area of mathematics during the past 100 years,” Andrews says. “They’ve given us a whole new set of tools to be able to attack new problems.”

“It’s incredibly exciting to solve any problem related to Ramanujan, he’s such an important figure in mathematics,” Warnaar says. “Now we can move on to more questions that we don’t understand. Math is limitless, and that’s fantastic.”

Related:
Math formula gives new glimpse into the magical mind of Ramanujan
New theories reveal the nature of numbers
How a hike in the woods led to a math 'Eureka!'

Image credits: Top, iStockphoto.com; center, Wikipedia Commons; bottom, Camphor Cinema