The idea that brains produce conscious experience runs up against an unsolvable logical problem: how can the same brain that is generating the experience also experience the experience?
Cognitive psychology ran into this early on in the form of the homunculus infinite regress problem. If the idea of mental representation—a foundational principle in cognitive psychology—is taken seriously, then you need both a process that creates the representation and a “something else” to which the representation is presented. But in order for this something else to know what it is observing when it interprets the representation, it needs a way of representing the representation, and so on in an infinite regress.
Stated in neurological terms: in order to be conscious, you need something to be conscious of (the output of brain activity) and something else (other brain activity) to register awareness of it. But this registration of awareness is just more brain activity, not the awareness itself. For that you need something else again, which can only be additional brain activity requiring yet another something else, and so on.
It’s turtles all the way down.
The only real solution to the infinite regress problem is to dislodge consciousness from the organism. Material engagement theory represents one interesting attempt to articulate this. In (grossly) simplified terms, material engagement theory (MET) treats conscious experience as a phenomenological property of the three-way interaction (interpenetration?) among culture, the person, and the material world. [In his book How Things Shape the Mind, Lambros Malafouris provides an in-depth look at material engagement theory and, in particular, how it relates to human tool use.]
But the notion that consciousness is a brain product is a hard one for most folks to shake. Part of the reason may have to do with the way civilized life is structured, and with the fact (a point MET drives home) that the way we think about the world is not independent of what we are actually doing and how we are doing it.
Consider how a logical error called the mereological fallacy plays into this. The mereological fallacy, as it applies to living organisms, is the tendency to ascribe characteristics of the whole creature to a single component part: to say that stomachs digest, that legs run, or that brains remember. The world of cognitive neuroscience is replete with examples of this error, the conscious brain being just one.
The mereological fallacy reflects a thoughtform endemic to the civilized mind, and its parts-standing-in-for-the-whole character mirrors the constituent partitioning of civilized activity. Civilization is a mechanistically structured, systematized way of living in which the various parts of the system are designed to play circumscribed roles or carry out specific, isolated functions. In addition, the civilized perspective grants ontological priority to abstract categories rather than to concrete material events. In reality, of course, consciousness is just an idea, a concept, an abstraction. Awareness is never without content, and it is not to be found outside of that content. But consciousness is nonetheless mistaken for a thing in itself, is reified, is granted “thingness,” and its existence then becomes a problem to be solved.
So, what difference does it make how we think about consciousness? Why should anyone care?
Seeing consciousness as something produced by brains, something happening inside us, something separate from and independent of the world itself, supports the irrational notion that our experience exists somehow independently of the various environments we inhabit, that we could be who we are independent of the larger contexts in which who we are expresses itself, that we could somehow exist outside of any context at all, or only within an artificial context of our own choosing. Not only does this predispose us to undervalue the rest of living nature, but it can lead to absurd and potentially dangerous notions, such as the idea that physical reality doesn’t matter at all and that we could, for example, upload our consciousness into an artificial medium and become inhabitants of an electronic “metaverse.”
Consciousness doesn’t happen in brains. First-person experience is not actually first-person, but something that emerges from our shared participation in a world inhabited by numerous others. And, perhaps most importantly, who we are at any point in time depends critically on those numerous others. Those numerous others are, in the most intimate and inseparable sense, us.