When I first read the term “The Cocktail Party Effect” I thought, brilliant, I know all about this, I am a pro at cocktail parties! I even know how to make some of these fabulous drinks. Who would have dreamt that scientists would work out a way to put science into the delicate art of giggling with your friends over a cosmopolitan at the end of a hard week at the office/Uni. Oh how wrong I was.

When we’re at a party (drinking cosmos), we’re mostly catching up on the latest dirty gossip about an old school friend, learning in detail what happened to our friend’s brother’s second cousin’s girlfriend, or eavesdropping on a particularly juicy story being told at the table behind us. We are also magically able to hear our own names when someone is talking about us! Yet with so much going on around us, how is it possible to focus on that one voice in particular, the one bringing all those fantastic stories to your ears, while still blocking out all the rest of the background noise? Somehow our clever brains manage to filter out a specific voice from a whole symphony of sounds.

This ability to select what we listen to has had scientists doing experiments for years. What they are looking for has been colloquially defined as “The Cocktail Party Effect”. From the listener’s point of view, it is second nature: it’s only too easy to zone in on our best girlfriend’s latest news whilst ignoring the ongoing drone of the lecturer. Yet from a psychological and physiological perspective, it is much more difficult. The delicate way in which the sensory, auditory and nervous systems collaborate to single out one particular voice is more complicated than anyone initially expected.

Two scientists subjected three volunteers with epilepsy to a series of experiments requiring them to listen to two different speech samples simultaneously, and then repeat each one back. This is not a new experiment. In 1953, Colin Cherry from Imperial College in London was the first to define the “Cocktail Party Effect”. He subjected his listeners to many different experiments. Initially, his listeners tried to identify two different messages, one played into each ear, spoken by the same voice at the same time. This caused a lot of eye closing and brow furrowing with concentration, but after several replays of the soundtracks they managed to reconstruct the phrases. When Cherry repeated the experiment using two different voices, his listeners had no trouble focussing on either voice, or switching between the two!

Up until recently, researchers hadn’t been able to explain how the brain manages this amazing trick. The difference now is that there is technology available for us to read what the brain is listening to! The researchers carefully placed electrodes in the auditory cortex of the volunteers’ brains to pick up any activity. Using some fancy-pants computer algorithms, they managed to reconstruct the pathways used in the brain during these tasks. What their decoding algorithms found was that they could predict which speaker the listener was attending to, and what words they were focussing on, based on the neural patterns alone!
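If you fancy a peek at the principle behind that decoding, here is a toy sketch (emphatically not the researchers’ actual algorithm, and all the signals are made up): we simulate a neural signal that tracks the attended voice more strongly than the ignored one, then guess which speaker was attended by correlating the signal with each candidate voice envelope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two competing "speech envelopes" -- toy stand-ins for the two voices
t = np.linspace(0, 10, 1000)
speaker_a = np.abs(np.sin(2.3 * t) + 0.5 * np.sin(7.1 * t))
speaker_b = np.abs(np.sin(3.7 * t) + 0.5 * np.sin(5.2 * t))

# Simulated auditory-cortex recording: mostly the attended speaker (A),
# a little leakage from the ignored one, plus measurement noise
neural = 0.8 * speaker_a + 0.2 * speaker_b + 0.3 * rng.standard_normal(t.size)

# Decode attention: correlate the neural signal with each candidate envelope
# and pick whichever voice it resembles most
corr_a = np.corrcoef(neural, speaker_a)[0, 1]
corr_b = np.corrcoef(neural, speaker_b)[0, 1]
attended = "A" if corr_a > corr_b else "B"
print(f"decoded attended speaker: {attended}")
```

The real studies use far richer models fitted to actual cortical recordings, but the core trick is the same: the brain’s response looks much more like the voice you’re attending to than the ones you’re tuning out.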

The potential impact of this piece of research is astounding. Speech recognition technologies could really benefit from this type of work. We are all familiar with “Siri”, the voice behind the iPhone that answers our silly questions. This sort of thing could get better. What if two people could talk to Siri at the same time, and have a genuine conversation with him/her? With Siri’s connection to the internet, and always having an answer to absolutely everything, would we even bother talking to anyone else anymore?

So, boys and girls, be warned: next time you’re having a drink and a good old chin wag at your local bar, be aware that anyone could be listening!

Right, I’m off for a cocktail.

Go to the original article here or listen below