Since I have finally completed my PhD, I thought it was high time I actually wrote a blog post about cognitive neuroscience, my field of study. So here goes. Neuroscience is currently as sexy as a science is ever likely to get. When we see a study about some psychological construct and it mentions a specific brain region responsible for the behaviour, or better yet includes a picture of a brain, the whole study seems more believable. Legitimate. Seductive. Although cognitive neuroscience has only been my field of study for about four and a half years, I’ve seen enough to know that all that glitters isn’t gold, and that seductive science isn’t always the best science. So for today’s blog post, I will follow in the footsteps of the Neuroskeptic and similar bloggers, and try to explain why I get so frustrated reading news articles that casually mention which brain region is responsible for a certain behaviour.
Human brains consist of many billions of neurons. And despite what the film Lucy purports, we use pretty much all of them. As with any other part of the body, it’s use it or lose it. These neurons are all highly interconnected, ‘talking’ to each other constantly. The only thing more complex than the neurons themselves (not including astrophysics or calculating a tip) is the human behaviour that they enable. There are certain areas, such as the visual cortex or the auditory processing areas, which, as their names give away, have well-defined functions. That’s because they handle basic, vital processes. Any higher-order process, though, requires a lot of different brain areas working in sync. Even when we are resting, there is a default network at work that encompasses much of the brain. In short, there can never be such a thing as the ‘making up new words’ or the ‘childhood memories’ or even the ‘depression’ area, because these things are not easy to compute: like a big jigsaw puzzle, they rely on lots of underlying processes. This is also why things like depression or autism are not easy to treat.
Even when one is trying to determine which networks are responsible for certain cognitions, technology is not yet advanced enough to give us exact answers. Electroencephalography (EEG), which records the brain’s electrical activity through the scalp, provides an affordable means of looking into the living human brain, but researchers still can’t deduce exactly where in the brain a signal is coming from, and aren’t even altogether sure how the underlying neural activity produces the signals we record. The other most commonly used technique, functional Magnetic Resonance Imaging (fMRI), exploits the different magnetic properties of oxygenated and deoxygenated blood, together with the fact that active neurons use up oxygen, to see where in the brain activation is occurring (as in the picture above). But while we know what fMRI measures, its resolution isn’t good enough yet to look at anything more detailed than clusters of neurons. This means we can’t tell what individual neurons are doing unless we stick electrodes directly into them.
These technologies, while fallible, have given us a lot of data, and will continue to help us illuminate how the brain and our cognition work for many years to come. This post is not meant to dismiss the amazing progress that science has already made, but simply to provide a note of caution: neither the brain nor current technology works as smoothly and simply as popular science stories sometimes lead us to believe. If only they did, then I’d just be able to hook my brain up to a computer instead of trying (and most of the time failing) to find the words to write down stories as I imagine them in my head.
Of course, others have expressed what I am trying to say far more eloquently, such as @PsychWriter in his last post for the Wired Brain Watch Blog, which is definitely worth reading.