Published June 18, 2015
Sit down with a friend in a quiet restaurant just before the dinner crowd arrives and begin talking. Business is slow at first but picks up quickly, and so does the sound level. Music plays, glasses clink, servers discuss specials. Conversations are everywhere, colliding and competing with the other noises.
All of these sounds hit the eardrum at the same time, yet the conversation that began amid near-silence continues easily, thanks to a process that allows humans to isolate, identify and prioritize overlapping sounds.
The ability to tune out a noisy room and focus on one conversation is sometimes called the cocktail party effect. Researchers know it as auditory stream segregation, part of the larger field of auditory scene analysis. The ability appears to be universal among animals and serves as a critical survival mechanism.
Although it’s unclear how this largely automatic process is accomplished, two UB researchers have added important pieces, concerning the timing and complexity of sounds, to the unfinished puzzle of how humans and other animals perceive the auditory world.
“It’s a difficult problem,” says Micheal Dent, associate professor of psychology, whose two studies with Erikson Neilans were published in successive issues of the Journal of Comparative Psychology. “We don’t know how it works in humans or if it works the same way in animals.”
The studies tested both humans and budgerigars (common parakeets). Previous research shows remarkable similarities between birds and humans in how they perceive auditory objects, according to Dent.
“Birds are vocal learners like us,” she says. “This makes them a good model for helping us understand if the way animals perceive sound is the same as how humans perceive sound.”
It turns out that birds can pick out separate sound sources faster than humans can when the sounds partially overlap, and for both species segregation becomes easier the further apart in time the sounds begin, highlighting the importance of timing in sound segregation.
“The sound’s frequency, or pitch, didn’t matter in the first experiment, which used pure tones,” says Dent. “But adding more frequencies helped both the birds and the humans in the second experiment.”
Dent compares adding frequencies to asking orchestra members to play more complicated passages, trills for example, rather than a sustained note, which resembles a pure tone. Counterintuitively, adding complexity makes it easier both to recognize that two sounds are present and to identify what they are.
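To make the two manipulations concrete, here is a minimal sketch in Python with NumPy of the kinds of signals described above. The frequencies, durations and offsets are illustrative choices, not the actual stimuli from the studies: it simply builds a pure tone, a harmonically richer tone, and an overlapping pair whose onset offset can be varied.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def pure_tone(freq_hz, dur_s):
    """A single-frequency sinusoid: the 'sustained note' of the analogy."""
    t = np.arange(int(dur_s * SAMPLE_RATE)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def complex_tone(freq_hz, dur_s, n_harmonics=5):
    """A tone with added harmonics: the 'more complicated piece.'
    Each harmonic is attenuated so the fundamental dominates."""
    t = np.arange(int(dur_s * SAMPLE_RATE)) / SAMPLE_RATE
    return sum(np.sin(2 * np.pi * k * freq_hz * t) / k
               for k in range(1, n_harmonics + 1))

def overlapping_pair(sound_a, sound_b, offset_s):
    """Mix two sounds so sound_b starts offset_s seconds after sound_a,
    producing the kind of partial overlap the studies manipulated."""
    pad = np.zeros(int(offset_s * SAMPLE_RATE))
    b_delayed = np.concatenate([pad, sound_b])
    mix = np.zeros(max(len(sound_a), len(b_delayed)))
    mix[:len(sound_a)] += sound_a
    mix[:len(b_delayed)] += b_delayed
    return mix

# Two pure tones vs. two complex tones, the second starting 100 ms late.
simple_mix = overlapping_pair(pure_tone(440, 0.5), pure_tone(550, 0.5), 0.1)
rich_mix = overlapping_pair(complex_tone(440, 0.5), complex_tone(550, 0.5), 0.1)
```

In the studies’ terms, increasing the onset offset or moving from the pure-tone mix to the harmonically richer mix both made the two sources easier to pull apart.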
“We start most of our experiments using pure tones because the results are easier to analyze, but these findings suggest those simple tones might not be telling us the whole story,” she says.
Even the biological relevance of the sounds didn’t seem to play a role.
“There are lots of studies showing detection of sound in noise is easier if it’s ‘your’ sound; in the budgerigar’s case, that would be a contact call,” says Dent. “We thought birds would be good at bird calls and humans would be good at speech. But we didn’t find that. Signal complexity was all that seemed to matter when sounds overlapped. When we gave the birds and humans more realistic sounds to isolate, they did better than they did with the pure tones, no matter what. They did not have to be sounds that were important to the subjects.
“These studies, combined with others on auditory scene analysis, help us to understand more about how we are able to make sense of the noisy world by picking out what is important and ignoring the rest.”