First of all, remember that post I wrote on the serotonin theory of depression, and how it was probably wrong? I was right: it is, at the very least, incomplete. Another one bites the dust. It’s sad, as we are so desperate to find SOME theory on which those of us who like to study depression can hang our hats. But the serotonin one was not to be. Check out the blog coverage. It is incisive. I don’t know that we should be THAT hard on the researchers who came up with the idea. After all, it was a good idea at the time, and the good news is that everyone is willing to accept better evidence and move on. The scientific method at work.
Ok, I’ll admit, when Sci first saw this publication, she went “LOL wut?!” Why would anyone DO this? I mean, cool, but WHY? Kind of like putting a really sensitive measurement apparatus for brain wave activity in a freely-flying bat. Cool? Yes. Useful? Well…it’s COOL!
But this paper IS cool, and the more I think about it, the more I think there might be something to this, following some more refinement and development down the line.
Wu, et al. “Scale-free music of the brain”, PLoS ONE, 2009.
The important part of this paper isn’t the figures. It’s the audio files. I’ll be including them in the links, and I definitely recommend a listen. And the sounds are what I’m going to focus on, because this paper is REALLY math-heavy, and Sci can’t do this kind of math justice.
Basically, there have been experiments trying to convert brain waves (from EEGs) into sound since about 1934. Electroencephalograms, or EEGs, are still the only way we really have to watch the brain in real time, as fMRI and PET work on too slow a timescale to give good temporal resolution.
But the question is: why convert brain waves into sound? Well…because it’s cool. No really, there’s another reason. Humans hear well across a wide frequency range. More importantly, we can pick up very small changes in pitch and rhythm. And sound patterns (because of our extensive use of language) may be easier for us to distinguish than really complicated visual patterns. So the idea is to turn brain activity into sound and see whether listeners can pick anything out. Perhaps, for example, people could compare a normal brain with an epileptic one and hear the differences. Of course, differences during a seizure would be pretty obvious, but it’s possible, if the technique got refined enough, that people could be trained to “hear” differences resulting from things like schizophrenia or Alzheimer’s, which could aid in diagnosis, and thus in treatment.
Suffice it to say that the methods contain a lot of equations. I could go into what each of them means, but Sci is tired and in the lab late. Rather, she will show you what it ended up looking like:
Pretty cool, huh? You can see they took the amplitude from each wave (top panel) and translated it to a pitch (middle panel), which they then assigned to a note. They even took the duration of the waves and translated it into rhythm. And they got something rather…abstract. One might wonder why they put it only in bass clef, but I’m not going to be picky (c’mon, be scientists! Use tenor clef!).
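If you want a rough feel for what “amplitude becomes pitch, duration becomes rhythm” means in practice, here is a minimal Python sketch. To be clear, this is NOT the authors’ actual method (their mapping is where all the scale-free math lives); the amplitude range, pitch range, and example waves below are all made up for illustration.

```python
# Minimal sketch of an EEG-to-music mapping (NOT the paper's exact method).
# Assumes we already have a list of detected EEG "waves", each reduced to a
# peak amplitude (microvolts) and a duration (seconds).

def amplitude_to_midi_pitch(amplitude_uv, amp_min=5.0, amp_max=100.0,
                            pitch_lo=36, pitch_hi=60):
    """Linearly map a wave's amplitude onto a MIDI pitch number.

    pitch_lo=36 (C2) to pitch_hi=60 (middle C) keeps everything roughly in
    the bass-clef range the paper's score uses; the bounds are illustrative.
    """
    a = min(max(amplitude_uv, amp_min), amp_max)   # clamp to the assumed range
    frac = (a - amp_min) / (amp_max - amp_min)     # scale to 0..1
    return int(round(pitch_lo + frac * (pitch_hi - pitch_lo)))


def duration_to_beats(duration_s, beat_s=0.25):
    """Quantize a wave's duration to the nearest multiple of a quarter beat."""
    return max(1, round(duration_s / beat_s)) * beat_s


# Hypothetical extracted waves: (amplitude in microvolts, duration in seconds).
waves = [(20.0, 0.18), (55.0, 0.30), (80.0, 0.12), (35.0, 0.45)]

melody = [(amplitude_to_midi_pitch(a), duration_to_beats(d)) for a, d in waves]
print(melody)  # [(40, 0.25), (49, 0.25), (55, 0.25), (44, 0.5)]
```

Feed the resulting (pitch, duration) pairs into any MIDI library and you would get something very much like what the paper’s audio files sound like.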
And how did it end up sounding? Well, go here and check out the supporting information. And it turns out that your brain sounds, not like a Mozart symphony, but rather like a cat on a keyboard.
Ok, maybe not even that organized.
That’s more like it.
Now, this doesn’t really give you a picture of the thousands of neuron firings that are taking place per second; rather, it shows you the overall activity of the brain over time.
Of course, the scientists performed several comparisons with this, including recordings with eyes closed, with eyes open, and during REM and slow-wave sleep. They found that REM (rapid eye movement) sleep sounded very active (described as “a lively melody”), almost like an awake brain:
Slow-wave sleep, meanwhile, was not only slower but also lower in amplitude, resulting in a lower-pitched tune.
But the real test is this: can ordinary people, hearing brain waves turned into music, tell different brain states apart? It turns out that they can, and quite reliably. Now granted, the researchers only used a few sets of clips, but it’s conceivable that people could be trained to distinguish particular types of brain activity by ear, regardless of whether they had heard or identified a given clip before.
There is one thing, though, that I wish they had done with this paper. Basically, they matched amplitude to pitch, put the whole thing on a scale, and made it play on a piano. That’s all well and good, but I don’t know that the exact pitches come across realistically. Instead of a piano, I think they should have used an instrument that can play intervals finer than half and whole tones. For example, a great deal of Middle Eastern music uses quarter tones as well as half and whole tones, which humans are perfectly capable of distinguishing (though quarter tones are REALLY hard to sing if you’re not used to them), and which might give more options for how the “music of the brain” might really sound. A little sketch of what I mean is below.
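To make the quarter-tone point concrete, here is another tiny Python sketch (again, not anything from the paper) comparing a 12-steps-per-octave grid, which is what a piano gives you, to a 24-steps-per-octave grid. The example frequency is made up; the point is just that the finer grid distorts the original pitch less.

```python
import math

A4_HZ = 440.0  # reference pitch (the A above middle C)

def quantize_frequency(freq_hz, steps_per_octave=12):
    """Snap a raw frequency to the nearest step of an equal-tempered grid.

    steps_per_octave=12 gives ordinary semitones (what a piano can play);
    steps_per_octave=24 adds quarter tones in between.
    """
    steps = round(steps_per_octave * math.log2(freq_hz / A4_HZ))
    return A4_HZ * 2 ** (steps / steps_per_octave)


raw = 449.0  # hypothetical "brain pitch", about a third of a semitone above A4
print(round(quantize_frequency(raw, 12), 1))   # 440.0 -> snapped all the way back to A
print(round(quantize_frequency(raw, 24), 1))   # 452.9 -> lands within a quarter tone
```

On the finer grid the note stays within a quarter tone of the raw pitch, instead of being rounded a full third of a semitone back down to A.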
This paper, if the technique is refined and studied more, could provide a new way for people to “look” at brain activity patterns by “listening” for them. It would be pretty easy to train humans to professionally distinguish between different types of brain activity patterns to help diagnose disease. And it’d be something that someone trained in music might be able to do really well. For example, I am classically trained in music, and I ALWAYS know Bach when I hear it. It would be a good job for an out-of-work classical musician. At least one who studied a lot of Schoenberg. 🙂
Wu, D., Li, C., & Yao, D. (2009). Scale-Free Music of the Brain. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0005915