I got some replies to my previous post about consciousness. Here I reply to them and add a few more thoughts. Comments from David Pearce are deliberately set aside, as I will devote a future post to his proposed solution to the problem of consciousness.

Identity theory and functionalism

Commenter Simon disagrees with my characterisation of identity theory (under which I group functionalism). Note that I didn't discuss functionalism in my previous post. Functionalism can be understood, or is used, in two ways:

Functionalism-A: Systems that perform the same functions as a conscious system are themselves conscious. On this view, I would be conscious, and so would the China brain or the Chinese room. Here, consciousness is really real, and so the view would have to be a subclass of epiphenomenalism or of one of the other non-materialist theories.

Functionalism-B: The function of a system is all there is. We have a brain, and we call certain processes the brain carries out "consciousness". Here consciousness is merely, but not really, real. Once the easy problem of consciousness is solved, there would be nothing else to explain. If this is so, the view is a form of eliminativism.

Brian Tomasik agrees with this analysis. I would add that the identity theorist wants to have the materialist cake and eat the dualism too, but that cannot be. Claiming that physical property X is identical to phenomenal property Y is like claiming that apples are identical to the Catbus. The way to escape this is to clarify what one means: functionalism of type A or type B. Carroll's is a type-B functionalism. I would guess that type-A functionalism is widespread among scientists.

a) The only one of your five theses that I deny is (5). I think that with sufficient understanding, consciousness will be described using algorithmic language that will describe the behavior of neurons firing (in a human brain), in much the same way that today we can describe a character interacting with a 3D world in a video game using algorithmic language that describes the behavior of electrons moving in transistors. The only reason that this seems impossible to us is that we are one or many major insights away from the required understanding.

Thesis 5 is the idea that one cannot logically derive a conclusion from a set of premises that have nothing to do with it (the is-ought gap is an example of this). In the videogame character example, one can, given enough time and grad students, explain why an Archon moved on the screen in that particular way while playing StarCraft 2, down to the transistor level. Our explanation could be given in a low-level language, talking purely in terms of flows of electrons, and then of photons when the pixels on the screen light up. To get to a high-level explanation, like "The Archon moved down to get close to the enemy's base", we would need a chain of premises that introduce, bit by bit, those high-level concepts. We could define 'down' in terms of the Archon's current position, then define position in terms of location on the game map, then define that in terms of the workings of the computer, and then turn to transistors. There is no movement of the Archon above and beyond that; the higher-level language is a shorthand for that highly complex causal chain from the workings of transistors to the appearance of behaviour in a videogame.
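To make the levels-of-description point concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the coordinates, the update rule, and the very idea of representing an Archon as a dictionary have nothing to do with how StarCraft 2 actually works); the only point is that the high-level claim is fixed by the low-level facts plus the definitions we choose to introduce.

```python
# Toy illustration of levels of description. The 'archon' and 'enemy_base'
# names and the update rule are made up; this is not how StarCraft 2 works.

# Low level: the "physics" of this toy world is just numbers changing
# according to a fixed rule.
state = {"archon_x": 10, "archon_y": 10, "enemy_base_x": 0, "enemy_base_y": 0}

def low_level_step(s):
    """One tick of the toy world: each Archon coordinate decreases by one."""
    return {k: (v - 1 if k.startswith("archon") and v > 0 else v)
            for k, v in s.items()}

# High level: new vocabulary defined *in terms of* the low level.
def moved_toward_enemy_base(before, after):
    """'The Archon moved toward the enemy base' is shorthand for this check."""
    def dist(s):
        return (abs(s["archon_x"] - s["enemy_base_x"])
                + abs(s["archon_y"] - s["enemy_base_y"]))
    return dist(after) < dist(before)

before = state
after = low_level_step(before)
# The high-level statement is true, and its truth is fully determined by the
# low-level facts plus the definitions introduced above.
print(moved_toward_enemy_base(before, after))  # True
```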

Thesis 5 says that you cannot get to "The Archon moves" just from the laws of physics. You first need to introduce premises that explain what the Archon is and what moving is. Doing this respects Thesis 5. We would not complain if someone says that 'Archon' is a high-level description, but we would if someone says that consciousness is a high-level description of certain processes in the brain. That would be type-B functionalism, and it wouldn't account for real consciousness, so I find it unsatisfying.

b) I don’t think “poetic” is the right word to describe the relationship between the concept of ‘consciousness’ and the neuronal activity in a human brain. Is it “poetic” to call a car “a car” instead of talking about the way that the engine interacts with the steering and transmission systems, and so on? No, it’s the correct English word to refer to a long list of specific components assembled in a specific way, capable of specific behavior.

Well, maybe it's not the right word! :) I was using it because Carroll was using it, basically. 'Poetic' is here shorthand for "a high-level description of a phenomenon that is nothing above and beyond physics, roughly as we understand it today".

c) Likewise, if something is “merely real” (as opposed to really real) if it’s made of parts, that means that everything that exists except fundamental particles (and, if you’re right, consciousness) is merely real. Water molecules, bacteria, human bodies, human societies, stars, and the universe as a whole are all “merely real”. Doesn’t it seem strange that you’ve just declared yourself an eliminativist about 99.99%+ of the concepts you think and talk about every day?

Yes indeed. I am a mereological nihilist, and I would say that almost everything is merely real. It's fields and quarks all the way down... plus consciousness. The merely/really real distinction is useful precisely because it leads to a very sharp contrast. I think it is not shocking to say that a car is nothing above and beyond its parts and the laws that tie them together. There is no intrinsic 'carness nature'. The best theory of concepts that we have (of course, Huemer 2015) tells us that concepts are fuzzy regions (that we make up) in property space.

The really/merely distinction also helps us avoid the linguistic ploy used by some who do not really believe in consciousness, but use the word in a different (non-realist) sense. When they say "Of course consciousness is real! Consciousness is identical with brain activity!" we can probe deeper, see whether they mean it in a strong (really) or a weak (merely) sense, and clarify what they really mean.

So as a computational functionalist I would say that: 1) Consciousness is really real, and like everything else it’s made of parts.

I agree with the first part, and I tend to agree with the second: although consciousness appears to us as a unified experience, I'm willing to entertain the possibility that it isn't really that unified (only merely).

  2. We’re not p-zombies because p-zombies are not possible, although a human mind can conceive of them, in the same way that someone who doesn’t understand cars can conceive of a car moving without an engine. However…

If I have to hold in my mind the laws of physics as they are and the engineless car at the same time, then I cannot conceive of it. To propose such a thing, we would need to consider a world with different laws. But this is what zombie-ists say: David Chalmers says that zombies are not possible in this universe, but that they could be possible in a different universe, one lacking either the psychophysical laws (which define which systems get consciousness) or consciousness as a fundamental property. He says this to highlight that consciousness is something that doesn't flow immediately from the laws of physics as we know them today (or has anyone seen consciousness hidden anywhere in the Lagrangian of the Standard Model?).

  3. It might be possible to assemble a machine that outwardly looks like us and behaves like us and yet isn’t conscious, but if we were to take a good look at its ‘brain’ (or whatever material mechanism makes it act and react), we would see that it doesn’t work like ours at all. Therefore, the Chinese room isn’t conscious if it’s just implementing a bunch of “if input X, output Y” rules, but a replica of a human brain implemented by something that’s not neurons would be conscious, whether that something is transistors moving electrons around or people passing messages around.

This is interesting, and something I subscribe to, to some extent. The thing is that the human brain, when seen through the lens of physics, is a bunch of 'if input X, output Y' rules. If we go full functionalist, the implementation doesn't matter; the function does. I've seen people claiming the Chinese room is conscious! What Simon may have in mind is that the replica must replicate the 'causal powers of the brain', to use Searle's parlance. That is, the parts should be connected to each other in the same (or a similar) way as in the human brain. I agree with this: it should be possible to replicate consciousness in non-neurons, by figuring out where and what in the brain is responsible for consciousness and replicating that.
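To illustrate the contrast, here is a toy sketch in Python (entirely made up, and nothing to do with understanding Chinese): two systems with identical input-output behaviour, one a bare lookup table in the spirit of the Chinese room's rulebook, the other a small network of interconnected units. Behaviourally they are indistinguishable; structurally, only the second has anything resembling internal causal organisation, which is the kind of difference Simon's reply points at.

```python
# Two systems with the same input-output behaviour but different internal
# structure. A toy contrast between "if input X, output Y" rules and a
# structured replica; nothing here models a brain.

# System 1: a pure lookup table (the Chinese-room-style rulebook) for XOR.
RULEBOOK = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def lookup_system(x, y):
    return RULEBOOK[(x, y)]

# System 2: a tiny network of interconnected "units" whose wiring, not a
# table, produces the same behaviour (XOR built from NAND gates).
def nand(a, b):
    return 1 - (a & b)

def structured_system(x, y):
    h = nand(x, y)                       # shared hidden unit
    return nand(nand(x, h), nand(y, h))  # output unit

# Behaviourally indistinguishable on every input...
for inp in RULEBOOK:
    assert lookup_system(*inp) == structured_system(*inp)
# ...yet only the second replicates anything like internal causal structure.
```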

  4. Mary does learn something new when she sees red, kinda, using a somewhat loose definition of “learn”: Her neurons will fire in a way that they’ve never fired before, and shortly after the neurons that store her memories will be configured in a new way to hold a new memory.

I agree. But the point is that she wouldn't be able to learn this just by studying physics, neurology, and cognitive science in general.

  5. All the theories of mind that state that consciousness is not made of parts amount to taking a mysterious concept, putting it in a box, labeling this box “fundamental”, and declaring the problem solved. This is not good philosophy.

Compare: all theories of matter that state that quarks are not made of parts amount to hiding the problem, etc. Reduction presumably has to begin at some point. Chalmers has tiny bits of conscious matter (parts!) aggregating to form consciousness.

  6. I’m not sure what to think about eliminativists. I’ve read some things by Dennett that make me think he’d agree with everything I’ve written here except perhaps point (4) about Mary’s room, but if Tomasik really thinks we’re p-zombies I’m at a loss.

Yes, he does think that; I asked him the question directly and in a very clear way. I copy-paste the exchange. I asked:

Just to clarify, Brian, would you agree with 'Consciousness is an illusion in the sense that the Müller-Lyer illusion is an illusion. It seems that I (Brian) am conscious, but on further thought I (Brian) realise that it's not the case', and 'Zombies are not only possible and conceivable, but everyone is actually a zombie, and non-zombies do not exist given the laws of physics in this universe'?

And the answer I got:

  Yeah, I agree with what you wrote if we interpret "consciousness" in the usual philosopher's sense that Chalmers/Nagel/etc. have in mind (i.e., not type-A consciousness: http://reducing-suffering.org/hard-problem-consciousness/ ).

Tomasik may be accused of inconsistency or poetry, but I leave this as a question for him to sort out :-)

Zombie beliefs

Next is Tomasik's response.

Cool post. 🙂

the idea that consciousness is really a poetic name for certain patterns of brain activity. This is the identity theory.

Relative to my (fallible) understanding, I would still call this a type-A theory of consciousness (and you go on to agree that it’s just a less blunt way to explain eliminativism). I understand identity theories as type-B views, which hold that consciousness is really real and happens to be identical (whatever that means) to certain physical processes. (Of course, I don’t think this makes sense, and I see identity theories as disguised forms of property dualism.)

This is what I already discussed above regarding functionalism and identity theory: they are unstable and collapse into epiphenomenalism or eliminativism.

You can have bridges built of wood or steel, but you need bridges to get external phenomena into your awareness. Recall, the reason you are doubting realism and embracing eliminativism is that at some point, some facts entered your awareness and you pondered them.

If zombies are possible, then zombies get facts and ideas into their brains without consciousness. Zombies can make statements about their being conscious, can know things about the world, etc.

In a weak sense of 'know' (without attached mental experience)... yes, zombies can do all that and more. The argument I made there is a reply to Rob Bensinger's argument that qualia do nothing, because if we invert them, everything works the same. I argued that this is not so: qualia may work like bridges made of different materials, but the qualia have to be there to have knowledge (which implies belief, which implies phenomenal awareness). I agree with Tomasik, though, that if we interpret knowledge in a less mental way, then yes, sure, zombies can know as much as us.

Emergentism

Seva Gunitsky on Twitter said that it is too early to dismiss emergentism, and that ontological reductionism is a shaky foundation. He cites evidence from chemistry, including this paper, which seems to state a strong emergentist thesis:

All the various properties of artificial vesicles as membranous compartment systems emerge from molecular assembly as these properties are not present in the individual molecules the system is composed of.

However, this is no evidence against reductionism.

On this blog, I argued against emergentism here. My argument, basically, is that reductionism means that a property or behaviour of a physical system is reducible to its components (parts) and the laws that govern them (laws of nature). Given these, the system is fully specified (how else could it be?). Of course, the properties of the vesicles in the paper above are not present in the molecules that compose the vesicles, but those properties are implied by the molecules and the laws of physics.

Consider water, a common example: water is wet, and atoms are not wet, so wetness is an emergent property. But it ain't. What is wetness? That if we touch it, it feels watery? That it flows? Whatever we take it to mean, we can begin by imagining a volume of water that has "wetness", then imagine a smaller volume, and so on until we are left with a very small one. We'll see water molecules interacting via, among other things, hydrogen bonds, and we will see wetness all the way down until we reach the single-molecule case, which has no hydrogen bonds (and no wetness). But those hydrogen bonds, while not present in hydrogen or oxygen atoms, are implied by the laws of physics. What else is there to wetness? A subjective feeling of wetness, sure, but that reduces to the hard problem of consciousness, which is not an issue of emergence.

With consciousness, for emergentism to work we need one of two things:

Either "particles of consciousness" (a la Monads) or psychophysical laws that especify how consciousness emerges. You could call this weak emergentism if you wish, as I accept there are things happening that were not in the parts, but that arise due to the laws that govern their interaction. I wouldn't call it like that. Strong emergentism, the view that once that we have accounted for components and interactions, there is something else, is something I reject. I haven't seen a good argument for that, so if you know one, please leave a comment.

A universe of zombies, a universe of angels

I like clarity of language, and I can imagine how I would talk if I were an eliminativist. If I were, I wouldn't talk about beliefs, qualia, or thoughts; or rather, I might talk about them, but only in the way I talk about ghosts or God. Eliminativist Artir would say things like:

EA: You talk about this consciousness thing, but you can't even explain it. Look, it is not in the laws of physics, it is not in the elementary particles, and I cannot even approximate what you are talking about. It is like ghosts or gnomes or other mythical beings. It is not real. The same goes for these other things you talk about, like beliefs or thoughts. You have your brain, you get inputs, those bounce around, and at some point you output whatever. That's all there is to it. You can call those electric impulses 'thoughts' if you so wish, but they are not what you say they are.

Realist Artir: But I am conscious! I experience things, I see the redness of red!

EA: In the same way that people say they 'believe' in 'God' or that they have seen 'ghosts'. It is a malfunctioning of your brain; just study some neuroscience or read about how brains can malfunction and it will go away.

RA: But I've done that already and I still think it!

EA: Okay, imagine this thought experiment. A universe where there are beings just like us, but who have this so-called 'consciousness'. Let's call them angels (I borrow the term from Rob Bensinger). Would they do anything differently from us?

RA: Well, maybe... probably almost everyone would think the way I do, and you would be in the minority saying that consciousness is not real.

EA: Fine, that makes sense. But besides parroting on about having a property that they cannot explain, what can these angels do that we (because you are not conscious, just confused!) cannot? Can they fly, are their senses more accurate, or what?

RA: Well, no, they would just have conscious experiences... but I'm not sure.

EA: Indeed, you see! It would be a useless property, it wouldn't have evolved, and if it existed, it would have been a spandrel. But what sort of property of nature is one that has only the effect of making you talk about it? Doesn't it sound ridiculous to you? There is no room in physics for consciousness doing anything. And if there isn't, then how would they be talking about it? The angel world is nonsense. And so is consciousness!

RA: But it seems that the reason I like listening to music is that it sounds so nice: I experience a sound and it is pleasing. Without consciousness, I wouldn't like it.

EA: Not true: you like music because certain patterns of sound activate the reward centres of your brain, and that causes you to seek musical experiences. I also listen to music, for the same reason. No need for consciousness.

RA: Maybe it is this world that does not make sense. Admittedly I haven't explored how consciousness could work yet, but nothing in what you tell me, nor in anything I have read so far, convinces me that I am not conscious. I do believe that you and everyone else are conscious like I am, and that everyone except me is deluded about it. Universal consensus and state-of-the-art scientific knowledge are not enough.

EA: sigh

RA: sigh indeed.

Comments from WordPress

  • Simon 2017-04-19T06:29:24Z

    Thank you for the very thorough reply!

    A few comments:

    1. Yes, if you define consciousness as immaterial and irreducible, then Dennett and Tomasik and I are saying that 'consciousness' doesn't exist. But... this is a debate about the nature of consciousness, it's obviously not a valid move to bake your conclusions about the nature of consciousness into the definition of consciousness. And yet that's exactly what you and Searle and many other non-materialists are doing.

    2. As a reply to my claim that declaring mysterious phenomena "fundamental" and moving on is bad philosophy, you wrote, "Compare: all theories of matter that state that quarks are not made of parts amount to hiding the problem, etc. Reduction presumably has to begin at some point. Chalmers has tiny bits of conscious matter (parts!) aggregating to form consciousness."

    Reduction doesn't "begin" at some point, it ends at some point, and that point is not arbitrary. If it were, any time scientists were stumped by a phenomenon and unable to immediately explain it, they could have given up and called it fundamental. The point at which the process of reducing mysterious complex things into simpler things ends is when the things left to explain are so simple that they can't logically be broken down into more simple parts, and there is no mystery and confusion left. I'm not sure if quarks have reached that ultimate level of reduction, but consciousness definitely hasn't; even if it has, there's too much mystery and confusion left for us to know it!

    As for Chalmers' "tiny bits of conscious matter (parts!) aggregating to form consciousness", that's just a veneer of fake reduction. It's like explaining a car by saying it's made of tiny bits of automobile matter.

    3. I like your point about the unlikelihood of consciousness having evolved if it's as useless as it appears to be. I can think of two possibilities, if consciousness is material:

    a) Consciousness does have a use but it's not an outwardly obvious one (such as flight or vision). For example, perhaps certain cognitive tasks are less computationally expensive for a conscious brain than for a non-conscious one, at least within the part of brain design space that evolution had available to it.

    b) Consciousness is a natural 'outgrowth' of brains having reached a certain level (and kind?) of intelligence, a bit like the ability to do advanced mathematics: Utterly useless from an evolutionary point of view, but it follows from having the kind of brains that evolution has 'given' humans.

  • Artir 2017-04-19T16:20:39Z

    Thank you for your comment (and for breaking the points down by number; it makes replying much easier).

    1. I do not want to define consciousness as immaterial and irreducible! I am okay with a definition of the type "Consciousness is real iff there exists at least the possibility of a being being instantiated in the universe that can have phenomenal experiences, qualia, etc." There is a second claim attached to the first, namely that I am conscious, and therefore consciousness is real in that sense. You may agree with my definition and disagree that I am conscious in that sense, in which case you are an eliminativist. Or you could dispute the definition, but this will not do: I don't believe much in definitional disputes. If you define consciousness in a different way, I will call your consciousness consciousness-2, and we would be talking about different things. I have no problem discussing forms of consciousness-2, but the one of interest to me here is consciousness in the above sense.

    2. Fair enough: it ends, it does not begin. I don't say that consciousness has to be fundamental; I say that it is a possibility. We may want to say that it is fundamental, as atoms were said to be fundamental decades ago, while admitting that we may be wrong. I am open to that. Saying that consciousness is made of conscious bits seems like a sensible way to proceed in reduction, as it opens up the study of the mechanisms that bind them together. This can perhaps be done empirically. Those conscious bits may have their own properties, like atoms do. I'm not saying they will be consciousness cut into smaller pieces.

    3. I have considered both possibilities. There is some evidence that consciousness is not really required for anything: https://academic.oup.com/nc/article/doi/10.1093/nc/niw005/2757125/Dual-process-theories-and-consciousness-the-case . Tononi disagrees, though: he sees consciousness as key to processing complex information. If consciousness turns out to be a spandrel, that would be quite weird, for the reasons outlined in my first post. This would be pure epiphenomenalism, and my comments on that apply. I do not reject this explanation, but I rank it as less plausible than the others: consciousness seems to have something to do with us talking about it.
  • David Pearce 2017-04-24T13:02:49Z

    What is consciousness “for”? Why has consciousness evolved “if it’s as useless as it appears to be”, as Simon puts it (3)?

    IMO, its fitness-enhancing role is so fundamental that we're mostly oblivious of its existence. Imagine a notional human without any capacity for phenomenal binding, whether "local" binding or “global” binding – the victim of a deficit syndrome more severe than simultanagnosia, cerebral akinetopsia (“motion blindness”) and florid schizophrenia combined. The central nervous system of this profoundly handicapped human has no capacity to bind neuronal feature-processors into conscious perceptual objects, and no capacity to run a phenomenally bound world-simulation apprehended by a unitary phenomenal self – what Kant forbiddingly calls the “transcendental unity of apperception”. In short, this creature is a micro-experiential zombie, 86 billion odd neuronal pixels of membrane-bound Jamesian “mind-dust”. Perhaps individual neurons of this notional micro-experiential zombie can be identified via neuroscanning as distributed feature-processors – edge-detectors, motion-detectors, colour-mediating neurons, etc – just as our artificial connectionist systems with a sub-symbolic architecture can analogously be “trained up” to perform different computational tasks. Yet this poor handicapped creature can't combine its neuronal micro-experiences into dynamic perceptual objects populating a real-time unitary world-simulation. The CNS of this notional micro-experiential zombie doesn’t even undergo the kind of inky introspective void that you or I experience when closing our eyes. This notional micro-experiential zombie is not a so-called p-zombie – far from it, it's severely handicapped – but unlike a p-zombie, it’s not “all dark inside”.

    By contrast, the real-life phenomenal binding of Darwinian minds is ridiculously computationally powerful - regardless of how you believe that our own conscious minds / world-simulations carry it off, and regardless of whether you think our minds are effectively classical or non-classical information processors. Most people are implicitly perceptual direct realists. They don’t normally conceive of the world-simulation run by their minds as a functional manifestation of consciousness. Instead they identify consciousness with e.g. self-awareness, or meta-cognition, or maybe wonder if consciousness has any real function at all - beyond facilitating interminable debates about consciousness. But a convergence of scientific evidence confirms that what we each pre-theoretically conceive as the “physical” – i.e. solid, medium-sized macroscopic objects “out there” - is as much a manifestation of consciousness as subtle introspective thought-episodes “in here”. This analysis of your phenomenally bound consciousness holds true whether you are dreaming or awake. When you are awake, however, the properties of your phenomenal world-simulation track gross fitness-relevant patterns in your mind-independent local surroundings.

    The capacity of nervous systems to run real-time conscious world-simulations as a tool for navigating an unforgiving environment dates to the early Cambrian. In my view, Penrose et al are barking up the wrong end of the evolutionary tree. Yet highlighting the fitness-enhancing properties of phenomenally bound consciousness doesn’t explain how such consciousness is physically possible for a pack of membrane-bound neurons. After all, telepathy would be fitness-enhancing for an organism too.

    A non-materialist physicalist will say that, strictly speaking, consciousness per se isn’t evolutionarily “for” anything. Ultimately, all consciousness, and only consciousness, has causal efficacy: it’s the essence of the physical, the "stuff" of the world mathematically described by QFT. Plants, stock markets, digital computers and classically parallel connectionist systems are all examples of information-processing systems that are micro-experiential zombies. The textures of consciousness of their components are incidental, mere implementation details. By contrast, what makes biological minds special is how your consciousness is functionally bound.

    Precisely how such functional binding is physically possible is really the topic for another comment / post. But before saying more, a word on Michael Huemer’s plausible-sounding, "1. For any system, every fact about the whole is a necessary consequence of the nature and relations of the parts.” (cf. https://nintil.com/2017/04/07/consciousness-and-its-discontents/)

    In classical physics, yes, this assertion is almost a truism. The claim is inconsistent with modern physics. Quantum theory turns Huemer’s assertion of mereological priority on its head. Perhaps see e.g. Jonathan Schaffer's "Monism: The Priority of the Whole" (2.2): https://pdfs.semanticscholar.org/ff0f/4e110da053d4ca1a2bacff43b42bb14ebdd3.pdf

    Non-materialist physicalists who are wavefunction monists (cf. https://www.amazon.com/Wave-Function-Metaphysics-Quantum-Mechanics/dp/019979054X) face the phenomenal _un_binding problem. Why isn't the multiverse a single psychotic mega-mind, so to speak? In my view, the phenomenal _un_binding problem is solved by decoherence: the rapid, environmentally-induced scrambling of phase angles of the components of an individual superposition. Applying Zurek’s “quantum Darwinism” (cf. https://arxiv.org/pdf/0903.5082.pdf) to the CNS yields a selection mechanism of unimaginable power, turning what would otherwise be nonsensical psychotic “noise” into the classical-seeming world you’re subjectively undergoing right now. Selection pressure more powerful than four billion years of Darwinian evolution (as naively understood) takes place in your CNS every moment of your life.

    Insane-sounding, I know. Mercifully, experiment (i.e. molecular matter-wave interferometry) rather than philosophising will settle the issue.