Exploring the problem of consciousness from 19 different perspectives
What do we really know about why we do the things we do? A neuroscientist tackles that question in new book
One of the great unsolved problems in neuroscience and philosophy is the problem of consciousness. In fact, the concept of consciousness is so difficult that no one definition really captures it.
In a new book, writer and neuroscientist Patrick House explores the subject of consciousness through a variety of lenses: biology, evolution, neuroscience and philosophy. Bob McDonald spoke with House about his new book, 19 Ways of Looking at Consciousness, in an interview for Quirks & Quarks.
Here is part of their conversation.
Throughout your book, there's one story you keep revisiting and we'll keep revisiting it during this interview. It's about a 16-year-old girl with epilepsy you call Anna, and back in 1998 she was the subject of a case study in the journal Nature. Tell me about her story.
Yeah, of course. So there's a thing when someone has epilepsy: the surgeons want to figure out where the seizures start in the brain. They want to find the location, possibly to remove it, possibly just so they know where it starts. What happens is sometimes it can be a bit of a mystery. It's hidden.
The way that I like to think about it is like there's an earthquake somewhere and you don't know where it is. One thing you might want to do is you might want to place a bunch of seismic sensors all around the globe, right? And I think the best way to think about what they did with this girl to try to find the source of her seizure is they effectively implanted electrical seismic monitoring stations throughout her brain.
And they sat and they waited and they waited until she had a seizure, and she was in the hospital the whole time. While they were waiting, they just asked, "Could we ask you a few questions? Could we do a few things?" And the results of this Nature paper came out of asking some really profound questions.
What they did is they effectively stimulated a part of her brain that caused her to laugh. And when she laughed, they then asked, "Why did you laugh?" And what was so fascinating was when she did not know the surgeon was kind of turning up the dial and making her laugh, she made up reasons why. So she said things like, "The picture of the horse is funny. The doctor just told a joke. You guys are just so funny standing around like that."
And what's so fascinating is that at no point did she say, "Well, you have a stimulating electrode implanted in my supplementary motor area, which is causing me to laugh, right?" That's the reason. But the brain is so good at storytelling, the brain is so good at figuring out plausible stories, that it made some up.
Why do you feel that her story is particularly relevant when it comes to describing consciousness?
To me, it gets to the question of how much access we ever have to why we do the things we do. You laughed as I was telling the anecdote. Is the reason we laugh the reason we think it is? Are we ever sure? So this has always been a kind of compelling study on my radar, and I realized its profound relevance to consciousness because of a small detail in this study: not only did she laugh, she also experienced the feeling of joy and mirth that you would otherwise feel when you might be induced to laugh. She felt the subjective sensation that comes along with laughter.
It wasn't just the motor program. It wasn't just like a doctor kind of tapping your knee and kicking your leg out, right? It was something more profound. There were emotions that came with it. There was a suite of subjective feelings that came with it. And this really hints at this question: how is it possible that a little blip of electricity, a tiny little seismic quake in one region of the brain, can not only cause someone to behave in a way that is unexpected and kind of unfathomable to them, but also bring with it all of the kind of icky, subjective stuff that science usually doesn't try to tackle because we have no idea where to even start? So I realized it was a kind of case study for everything.
You pointed out in your book that when Anna's brain was stimulated and she laughed, it's not like they hit the laugh centre. It was a trigger that activated her entire brain and that included her personal history. Tell me about that.
What's actually very compelling about the reasons she gave, and it's a fascinating kind of thing, is that you can dismiss them because they seem to be confabulated. But another way to think about it is that they weren't implausible answers. They involved things that she was seeing. They involved things that she was perceiving. They involved things in the room. They weren't random.
We walk around as adults and consciousness does not present a problem to us. You know, you're spending gobs of glucose and ATP and every sugar you've ever eaten just so that your brain does its best to make a stable world in front of you. And part of that is taking into account everything you've ever been exposed to, and everything you've ever learned.
We are using 100 per cent of our brain, always, like every single cell is on and active at all times. And it has one goal, which is to make sure that the next time you walk outside, the next time you open your eyes, that you know a little bit more about the world. You take all of your prior experience and you store it away and you do better. Your predictions should become better over time at what you see in the world.
And so even when Anna is able to respond, the very act of her being able to give a plausible answer for why she laughed, that the doctor just told a joke, is coming from her personal lifetime of experience of why she has found things funny. And that's why I find it such a terrifying study.
So given all these mysteries and how much we don't know about consciousness and how the brain generates that sense of identity and sadness and whatever, do you think we'll ever be able to replicate consciousness and give it to a machine with artificial intelligence?
One of my favourite sayings about this is from a colleague of mine, who said he believes that machines will never be conscious, in the same way that weather simulators will never be wet. Which is to say, you think you can replicate it almost perfectly, but there's a fundamental difference: a weather simulator can simulate all the patterns of storms, it can simulate clouds, it can tell you where the rain is going to be, but it does not itself have the property of being wet. I think that whole "Well, the weather simulator will never be wet" is very similar to saying, well, the laughing robot will never have joy.
This interview has been edited for length and clarity. Written and produced by Sonya Buyting.