Tuesday, October 23, 2018

Schadenfreude sheds light on the darker side of humanity


“We all experience schadenfreude but we don’t like to think about it too much because it shows how ambivalent we can be to our fellow humans,” says Emory psychologist Philippe Rochat.

By Carol Clark

Schadenfreude, the sense of pleasure people derive from the misfortune of others, is a familiar feeling to many — perhaps especially during these times of pervasive social media.

This common, yet poorly understood, emotion may provide a valuable window into the darker side of humanity, finds a review article by psychologists at Emory University. New Ideas in Psychology published the review, which drew upon evidence from three decades of social, developmental, personality and clinical research to devise a novel framework to systematically explain schadenfreude.

The authors propose that schadenfreude comprises three separable but interrelated subforms — aggression, rivalry and justice — which have distinct developmental origins and personality correlates.

They also singled out a commonality underlying these subforms.

“Dehumanization appears to be at the core of schadenfreude,” says Shensheng Wang, a PhD candidate in psychology at Emory and first author of the paper. “The scenarios that elicit schadenfreude, such as intergroup conflicts, tend to also promote dehumanization.”

Co-authors of the study are Emory psychology professors Philippe Rochat, who studies infant and child development, and Scott Lilienfeld, whose research focuses on personality and personality disorders.

Dehumanization is the process of perceiving a person or social group as lacking the attributes that define what it means to be human. It can range from subtle forms, such as assuming that someone from another ethnic group does not feel the full range of emotions that one’s in-group members do, to blatant forms, such as equating sex offenders with animals. Dehumanization may be dispositional, in individuals who regularly dehumanize others, or situational, as when soldiers dehumanize the enemy during battle.

“Our literature review strongly suggests that the propensity to experience schadenfreude isn’t entirely unique, but that it overlaps substantially with several other ‘dark’ personality traits, such as sadism, narcissism and psychopathy,” Lilienfeld says. “Moreover, different subforms of schadenfreude may relate somewhat differently to these often malevolent traits.”

One problem with studying the phenomenon is the lack of an agreed definition of schadenfreude, which literally means “harm joy” in German. Since ancient times, some scholars have condemned schadenfreude as malicious, while others have perceived it as morally neutral or even virtuous.

“Schadenfreude is an uncanny emotion that is difficult to assimilate,” Rochat says. “It’s kind of a warm-cold experience that is associated with a sense of guilt. It can make you feel odd to experience pleasure when hearing about bad things happening to someone else.”

Psychologists view schadenfreude through the lens of three theories. Envy theory focuses on a concern for self-evaluation, and a lessening of painful feelings when someone perceived as enviable gets knocked down a peg. Deservingness theory links schadenfreude to a concern for social justice and the feeling that someone dealt a misfortune received what was coming to them. Intergroup-conflict theory concerns social identity and the schadenfreude experienced after the defeat of members of a rival group, such as during sporting or political competitions.

The authors of the current article wanted to explore how all these different facets of schadenfreude are interrelated, how they differ, and how they can arise in response to these concerns.

Their review delved into the early developmental roots of these concerns, as demonstrated in developmental studies. Research suggests that infants as young as eight months demonstrate a sophisticated sense of social justice: in experiments, they showed a preference for puppets that assisted a helpful puppet, and for puppets that punished others that had behaved antisocially. Research on infants also points to the early roots of intergroup aggression, showing that, by nine months, infants preferred puppets that punished others unlike themselves.

“When you think of normal child development, you think of children becoming good natured and sociable,” Rochat says. “But there’s a dark side to becoming socialized. You create friends and other in-groups to the exclusion of others.”

Spiteful rivalry appears by at least age five or six, when research has shown that children will sometimes opt to maximize their gain over another child, even if they have to sacrifice a resource to do so.

By the time they reach adulthood, many people have learned to hide any tendency to make a sacrifice purely out of spite, but they may be more open about making sacrifices that are considered pro-social.

The review article posits a unifying, motivational theory: concerns of self-evaluation, social identity and justice are the three motivators that drive people toward schadenfreude. What pulls people away from it is the ability to perceive others as fully human and to feel empathy for them.

Ordinary people may temporarily lose empathy for others. But those with certain personality disorders and associated traits — such as psychopathy, narcissism or sadism — are either less able or less motivated to put themselves in the shoes of others.

“By broadening the perspective of schadenfreude, and connecting all of the related phenomena underlying it, we hope we’ve provided a framework to gain deeper insights into this complex, multi-faceted emotion,” Wang says.

“We all experience schadenfreude but we don’t like to think about it too much because it shows how ambivalent we can be to our fellow humans,” Rochat says. “But schadenfreude points to our ingrained concerns and it’s important to study it in a systematic way if we want to understand human nature.”

Related:
What is a psychopath?
Sharing ideas about the concept of fairness

Monday, October 22, 2018

Study gives new insight into how the brain perceives places

Example of an image from the fMRI study. Participants were asked to imagine they were standing in the room and indicate through a button press whether it was a bedroom, a kitchen or a living room. On separate trials, they were asked to imagine that they were walking on the continuous path through the room and indicate which door they could leave through. (Image by Andrew Persichetti)

By Carol Clark

Nearly 30 years ago, scientists demonstrated that visually recognizing an object, such as a cup, and performing a visually guided action, such as picking the cup up, involved distinct neural processes, located in different areas of the brain. A new study shows that the same is true for how the brain perceives our environment — it has two distinct systems, one for recognizing a place and another for navigating through it.

The Journal of Neuroscience published the finding by researchers at Emory University, based on experiments using functional magnetic resonance imaging (fMRI). The results showed that the brain’s parahippocampal place area responded more strongly to a scene recognition task while the occipital place area responded more to a navigation task.

The work could have important implications for helping people to recover from brain injuries and for the design of computer vision systems, such as self-driving cars.

“It’s thrilling to learn what different regions of the brain are doing,” says Daniel Dilks, senior author of the study and an assistant professor of psychology at Emory. “Learning how the mind makes sense of all the information that we’re bombarded with every day is one of the greatest of intellectual quests. It’s about understanding what makes us human.”

Entering a place and recognizing where you are — whether it’s a kitchen, a bedroom or a garden — happens almost instantaneously, and you can begin making your way around the space nearly as quickly.

“People assumed that these two brain functions were jumbled up together — that recognizing a place was always navigationally relevant,” says first author Andrew Persichetti, who worked on the study as an Emory graduate student. “We showed that’s not true, that our brain has dedicated and dissociable systems for each of these tasks. It’s remarkable that the closer we look at the brain the more specialized systems we find — our brains have evolved to be super efficient.”

Persichetti, who has since received his PhD from Emory and now works at the National Institute of Mental Health, explains that an interest in philosophy led him to neuroscience. “Immanuel Kant made it clear that if we can’t understand the structure of our mind, the structure of knowledge, we’re not going to fully understand ourselves, or even a lot about the outside world, because that gets filtered through our perceptual and cognitive processes,” he says.

The Dilks lab focuses on mapping how the visual cortex is functionally organized. “We are visual creatures and the majority of the brain is related to processing visual information, one way or another,” Dilks says.

Researchers have wondered since the late 1800s why people suffering from brain damage sometimes experience strange visual consequences. For example, someone might have normal visual function in all ways except for the ability to recognize faces.

It was not until 1992, however, that David Milner and Melvyn Goodale came out with an influential paper delineating two distinct visual systems in the brain: the ventral stream, running into the temporal lobe, is involved in recognizing objects, while the dorsal stream, running into the parietal lobe, guides actions related to those objects.

In 1997, MIT’s Nancy Kanwisher and colleagues demonstrated that a region of the brain is specialized in face perception — the fusiform face area, or FFA. Just a year later, Kanwisher’s lab delineated a neural region specialized in processing places, the parahippocampal place area (PPA), located in the ventral stream.

While working as a post-doctoral fellow in the Kanwisher lab, Dilks led the discovery of a second region of the brain specialized in processing places, the occipital place area, or OPA, located in the dorsal stream.

Dilks set up his own lab at Emory the same year that discovery was published, in 2013. Among the first questions he wanted to tackle was why the brain had two regions dedicated to processing places.

Persichetti designed an experiment to test the hypothesis that place processing is divided in the brain in a manner similar to object processing. Using software from The Sims life-simulation game, he created digital images of three places: a bedroom, a kitchen and a living room. Each room had a path leading through it and out one of three doors.

Study participants in the fMRI scanner were asked to fixate their gaze on a tiny white cross. On each trial, an image of one of the rooms appeared, centered behind the cross. Participants were asked to imagine they were standing in the room and to indicate, through a button press, whether it was a bedroom, a kitchen or a living room. On separate trials, the same participants were asked to imagine walking along the continuous path through the same room and to indicate whether they could leave through the door on the left, in the center or on the right.
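The logic of the design can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors’ stimulus-presentation code: the number of repetitions, the helper name build_trials and the interleaving of the two tasks are all assumptions.

```python
# Illustrative sketch of the two-task design described above; not the
# authors' actual stimulus code. Trial counts and interleaving are assumed.
import random
from itertools import product

ROOMS = ["bedroom", "kitchen", "living_room"]
TASKS = ["recognize", "navigate"]  # name the room vs. pick the exit door

def build_trials(n_repeats=8, seed=42):
    """Cross every room with every task, repeat, and shuffle."""
    trials = [{"room": r, "task": t} for r, t in product(ROOMS, TASKS)] * n_repeats
    random.Random(seed).shuffle(trials)
    return trials

for trial in build_trials()[:4]:
    if trial["task"] == "recognize":
        prompt = "Imagine standing in the room: bedroom, kitchen or living room?"
    else:
        prompt = "Imagine walking the path: exit left, center or right?"
    print(f'{trial["room"]:12s} | {trial["task"]:9s} | {prompt}')
```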

The resulting data showed that the two brain regions were selectively activated depending on the task: The PPA responded more strongly to the recognition task while the OPA responded more strongly to the navigation task.
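In region-of-interest terms, that dissociation is a region-by-task interaction: the task preference flips between the PPA and the OPA. Here is a minimal sketch of such a test, with simulated response values standing in for the study’s actual fMRI data; it is not the authors’ analysis pipeline.

```python
# Minimal region-by-task interaction test; the numbers are simulated,
# not the study's data, and this is not the authors' analysis pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 15  # hypothetical number of participants

# Simulated mean response per participant, per region and task.
ppa_recog = rng.normal(1.0, 0.2, n)
ppa_nav   = rng.normal(0.7, 0.2, n)
opa_recog = rng.normal(0.6, 0.2, n)
opa_nav   = rng.normal(0.9, 0.2, n)

# Within-region paired tests: does the task matter in each region?
t1, p1 = stats.ttest_rel(ppa_recog, ppa_nav)
t2, p2 = stats.ttest_rel(opa_nav, opa_recog)
print(f"PPA recognition > navigation: t={t1:.2f}, p={p1:.4f}")
print(f"OPA navigation > recognition: t={t2:.2f}, p={p2:.4f}")

# The key evidence for dissociable systems: the preference reverses.
interaction = (ppa_recog - ppa_nav) - (opa_recog - opa_nav)
t3, p3 = stats.ttest_1samp(interaction, 0.0)
print(f"Region x task interaction: t={t3:.2f}, p={p3:.4f}")
```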

“While it’s incredible that we can show that different parts of the cortex are responsible for different functions, it’s only the tip of the iceberg,” Dilks says. “Now that we understand what these areas of the brain are doing we want to know precisely how they’re doing it and why they’re organized this way.”

Dilks plans to run causal tests on the two scene-processing areas. Repetitive transcranial magnetic stimulation, or rTMS, is a non-invasive technique in which a magnetic coil placed against the scalp temporarily deactivates a targeted brain region; it will allow the researchers to switch off the OPA in healthy participants and test whether they can navigate without it.

The same technique cannot be used to deactivate the PPA, which lies too deep within the temporal lobe. The Dilks lab instead plans to recruit participants with brain injuries affecting the PPA region to test for any effects on their ability to recognize scenes.

Clinical applications for the research include more precise guidance for surgeons who operate on the brain and better brain rehabilitation methods.

“My ultimate goal is to reverse-engineer the human brain’s visual processes and replicate them in a computer vision system,” Dilks says. “In addition to improving robotic systems, a computer model could help us to more fully understand the human mind and brain.”

Related:
How babies see faces: New fMRI technology opens window onto infants' minds

Monday, October 15, 2018

Scientists chase mystery of how dogs process words

Eddie, one of the dogs that participated in the study, poses in the fMRI scanner with two of the toys used in the experiments, "Monkey" and "Piggy." (Photo courtesy Gregory Berns)

By Carol Clark

When some dogs hear their owners say “squirrel,” they perk up and become agitated. They may even run to a window and look out of it. But what does the word mean to the dog? Does it mean “pay attention, something is happening”? Or does the dog actually picture a small, bushy-tailed rodent in its mind?

Frontiers in Neuroscience published the study, conducted by scientists at Emory University, one of the first to use brain imaging to probe how our canine companions process words they have been taught to associate with objects. The results suggest that dogs have at least a rudimentary neural representation of meaning for the words they have been taught, differentiating words they have heard before from those they have not.

“Many dog owners think that their dogs know what some words mean, but there really isn’t much scientific evidence to support that,” says Ashley Prichard, a PhD candidate in Emory’s Department of Psychology and first author of the study. “We wanted to get data from the dogs themselves — not just owner reports.”

Study participant Stella and her toys.
“We know that dogs have the capacity to process at least some aspects of human language since they can learn to follow verbal commands,” adds Emory neuroscientist Gregory Berns, senior author of the study. “Previous research, however, suggests dogs may rely on many other cues to follow a verbal command, such as gaze, gestures and even emotional expressions from their owners.”

The Emory researchers focused on questions surrounding the brain mechanisms dogs use to differentiate between words, or even what constitutes a word to a dog.

Berns is founder of the Dog Project, which is researching evolutionary questions surrounding man’s best, and oldest, friend. The project was the first to train dogs to voluntarily enter a functional magnetic resonance imaging (fMRI) scanner and remain motionless during scanning, without restraint or sedation. Studies by the Dog Project have furthered understanding of dogs’ neural response to expected reward, identified specialized areas in the dog brain for processing faces, demonstrated olfactory responses to human and dog odors, and linked prefrontal function to inhibitory control.

For the current study, 12 dogs of varying breeds were trained for months by their owners to retrieve two different objects, based on the objects’ names. Each dog’s pair of objects consisted of one with a soft texture, such as a stuffed animal, and another of a different texture, such as rubber, to facilitate discrimination. Training consisted of instructing the dogs to fetch one of the objects and then rewarding them with food or praise. Training was considered complete when a dog showed that it could discriminate between the two objects by consistently fetching the one requested by the owner when presented with both of the objects.
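One simple way to formalize that training criterion is a test of above-chance fetching. The sketch below is illustrative only; the trial counts and significance threshold are assumptions, not details from the study.

```python
# Sketch of an above-chance discrimination check; the trial numbers and
# threshold are illustrative assumptions, not taken from the study.
from scipy.stats import binomtest

def discrimination_complete(n_correct, n_trials, alpha=0.05):
    """With both toys present, chance fetching is 50%. Call training
    complete if correct fetches are significantly above chance."""
    return binomtest(n_correct, n_trials, p=0.5, alternative="greater").pvalue < alpha

# Example: a dog fetches the requested toy on 14 of 16 two-toy trials.
print(discrimination_complete(14, 16))  # True: well above chance
```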

During one experiment, each trained dog lay in the fMRI scanner while its owner stood directly in front of it at the opening of the machine, said the names of the dog’s toys at set intervals and then showed it the corresponding toys.

Eddie, a golden retriever-Labrador mix, for instance, heard his owner say the words “Piggy” or “Monkey,” then his owner held up the matching toy. As a control, the owner then spoke gibberish words, such as “bobbu” and “bodmick,” then held up novel objects like a hat or a doll.

The results showed greater activation in auditory regions of the brain to the novel pseudowords relative to the trained words.

“We expected to see that dogs neurally discriminate between words that they know and words that they don’t,” Prichard says. “What’s surprising is that the result is opposite to that of research on humans — people typically show greater neural activation for known words than novel words.”

The researchers hypothesize that the dogs may show greater neural activation to a novel word because they sense their owners want them to understand what they are saying, and they are trying to do so. “Dogs ultimately want to please their owners, and perhaps also receive praise or food,” Berns says.

Half of the dogs in the experiment showed the increased activation for the novel words in their parietotemporal cortex, an area of the brain that the researchers believe may be analogous to the angular gyrus in humans, where lexical differences are processed.

The other half of the dogs, however, showed heightened activity to the novel words in other brain regions, including other parts of the left temporal cortex, the amygdala, the caudate nucleus and the thalamus.

These differences may be related to a limitation of the study — the varying range in breeds and sizes of the dogs, as well as possible variations in their cognitive abilities. A major challenge in mapping the cognitive processes of the canine brain, the researchers acknowledge, is the variety of shapes and sizes of dogs’ brains across breeds.

“Dogs may have varying capacity and motivation for learning and understanding human words,” Berns says, “but they appear to have a neural representation for the meaning of words they have been taught, beyond just a low-level Pavlovian response.”

This conclusion does not mean that spoken words are the most effective way for an owner to communicate with a dog. In fact, other research, also led by Prichard and Berns and recently published in Scientific Reports, showed that the neural reward system of dogs is more attuned to visual and scent cues than to verbal ones.

“When people want to teach their dog a trick, they often use a verbal command because that’s what we humans prefer,” Prichard says. “From the dog’s perspective, however, a visual command might be more effective, helping the dog learn the trick faster.”

Co-authors of the Frontiers in Neuroscience study include Peter Cook (a neuroscientist at the New College of Florida), Mark Spivak (owner of Comprehensive Pet Therapy) and Raveena Chhibber (an information specialist in Emory’s Department of Psychology).

Co-authors of the Scientific Reports paper also include Spivak and Chhibber, along with Kate Athanassiades (from Emory’s School of Nursing).

Related:
Do dogs prefer praise or food?
Scent of the familiar: You may linger like perfume in your dog's brain
Multi-dog experiment points to canine brain's reward center

Monday, October 1, 2018

Songbird data yields new theory for learning sensorimotor skills

"Our findings suggest that an animal knows that even the perfect neural command is not going to result in the right outcome every time," says Emory biophysicist Ilya Nemenman. (Image courtesy Samuel Sober.)

By Carol Clark

Songbirds learn to sing in a way similar to how humans learn to speak — by listening to their fathers and trying to duplicate the sounds. The bird’s brain sends commands to the vocal muscles to sing what it hears, and then the brain keeps trying to adjust the command until the sound echoes the one made by the parent.

During such trial-and-error processes of sensorimotor learning, a bird remembers not just the best possible command, but a whole suite of possibilities, suggests a study by scientists at Emory University.

The Proceedings of the National Academy of Sciences (PNAS) published the study results, which include a new mathematical model for the distribution of sensory errors in learning.

“Our findings suggest that an animal knows that even the perfect neural command is not going to result in the right outcome every time,” says Ilya Nemenman, an Emory professor of biophysics and senior author of the paper. “Animals, including humans, want to explore and keep track of a range of possibilities when learning something in order to compensate for variabilities.”

Nemenman uses the example of learning to swing a tennis racket. “You’re only rarely going to hit the ball in the racket’s exact sweet spot,” he says. “And every day when you pick up the racket to play, your swing is going to be a little bit different, because your body is different, the racket and the ball are different, and the environmental conditions are different. So your body needs to remember a whole range of commands, in order to adapt to these different situations and get the ball to go where you want.”

First author of the study is Baohua Zhou, a graduate student in physics. Co-authors include David Hofmann and Itai Pinkoviezky (post-doctoral fellows in physics) and Samuel Sober, an associate professor of biology.

Traditional theories of learning propose that animals use sensory error signals to zero in on the optimal motor command, based on a normal distribution of possible errors around it — what is known as a bell curve. Those theories, however, cannot explain the behavioral observation that small sensory errors are readily corrected while larger ones may be ignored by the animal altogether.

For the PNAS paper, the researchers analyzed experimental data on Bengalese finches collected in previous work with the Sober lab. The lab uses finches as a model system for understanding how the brain controls complex vocal behavior and motor behavior in general.

Miniature headphones, custom-fitted to adult birds, replaced the birds’ natural auditory feedback with a version in which the pitch that each bird perceived itself singing could be manipulated. The birds would try to correct the pitch they were hearing to match the sound they were trying to make. The experiments allowed the researchers to record and measure the relationship between the size of the vocal error a bird perceives and the probability of the brain making a correction of a specific size.

The researchers analyzed the data and found that the variability of errors in correction did not have the normal distribution of a bell curve, as previously proposed. Instead, the distribution had long tails of variability, indicating that the animal believed that even large fluctuations in the motor commands could sometimes produce a correct pitch. The researchers also found that the birds combined their hypotheses about the relationship between the motor command and the pitch with the new information that their brains received from their ears while singing. In fact, they did this surprisingly accurately.

“The birds are not just trying to sing in the best possible way, but appear to be exploring and trying wide variations,” Nemenman says. “In this way, they learn to correct small errors, but they don’t even try to correct large errors, unless the large error is broken down and built up gradually.”

The researchers created a mathematical model of this process, capturing how small errors are corrected quickly while large errors take much longer to correct, and may be neglected altogether when they contradict the animal’s “beliefs” about the errors its sensorimotor system can produce.
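The qualitative behavior of such a model can be reproduced with a toy Bayesian observer whose model of sensory errors has long tails. The sketch below is not the authors’ actual model; the Cauchy and Gaussian distributions and all parameter values are illustrative assumptions. With long tails, small perceived errors are corrected nearly in full while large ones are mostly ignored; with a Gaussian instead, the correction grows linearly with the error, as traditional theories predict.

```python
# Toy Bayesian observer illustrating the long-tails idea; not the
# authors' model. Distributions and parameters are assumptions.
import numpy as np

def correction(e, prior_sd=1.0, noise_scale=0.3, heavy_tails=True):
    """Posterior-mean correction given a perceived pitch error e.

    The prior says the motor command is probably nearly right; the
    likelihood models how a perceived error arises from a true one.
    """
    x = np.linspace(-20, 20, 4001)          # candidate true errors
    prior = np.exp(-0.5 * (x / prior_sd) ** 2)
    if heavy_tails:
        # Cauchy: large discrepancies are unsurprising, so a big
        # perceived error carries little evidence about the command.
        like = 1.0 / (1.0 + ((e - x) / noise_scale) ** 2)
    else:
        like = np.exp(-0.5 * ((e - x) / noise_scale) ** 2)
    post = prior * like
    return np.sum(x * post) / np.sum(post)  # posterior mean on the grid

for e in [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"error {e:4.2f}: long-tailed correction {correction(e):5.2f}, "
          f"Gaussian correction {correction(e, heavy_tails=False):5.2f}")
```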

“Our model provides a new theory for how an animal learns, one that allows us to make predictions for learning that we have tested experimentally,” Nemenman says.

The researchers are now exploring if this model can be used to predict learning in other animals, as well as predicting better rehabilitative protocols for people dealing with major disruptions to their learned behaviors, such as when recovering from a stroke.

The work was funded by the National Institutes of Health BRAIN Initiative, the James S. McDonnell Foundation, and the National Science Foundation. The NVIDIA corporation donated high-performance computing hardware that supported the work.

Related:
BRAIN grant to fund study of how the mind learns
How songbirds learn to sing