Let's try to debunk that Chinese Room.

The Chinese Room is one of a handful of thought experiments in philosophy that can truly be said to have gone viral. Well, at least within the field itself, although I would recommend that any intellectually curious person learn what it's about. Many excellent accounts of it exist, so I won't add mine. My goal here is different: I want to "debunk" it.

I will give you the quick upshot of the Chinese Room, if only to let you know what it is that I am "debunking". Here it is, in bumper-sticker format: simulation is not duplication. While its author, the excellent and highly engaging philosopher John Searle, believes that statement in general, he designed the Chinese Room scenario to illustrate it specifically for the case of understanding a language.

He points out that the most perfect computer simulation of the human digestive system won't be able to digest an actual pizza. A simulation of the heart won't be able to pump actual blood. Simulating a function doesn't duplicate it. And no wonder: a computer program just shuffles symbols around, ones and zeros, which we, the observers, then interpret. It's pure syntax.

Similarly, says Searle, simulating a cognitive capacity, such as the ability to understand and speak Chinese, doesn't replicate this capacity. Syntax is not semantics. Such a simulation wouldn't actually understand Chinese any more than a digestion simulation would digest an actual pizza. Of course, Searle says, this logic is not limited to understanding a language. It applies to any other capacity involving subjective experience, and to consciousness itself: a computer simulation of consciousness is just that, a simulation; it is not itself conscious.

But I say it is conscious. Yes, simulating digestion doesn't replicate it, but I will argue that simulating consciousness does in fact replicate it. And how will I do that? I have a couple of arguments and thought experiments in mind; in this article I'll focus on one.

Argument 1. I think I am conscious, therefore I am.

Here's the compact logical form of it:

1a. A perfect computer simulation of me thinks it's conscious.

1b. If it thinks it's conscious then it is conscious.

--------------------------------------------------------------------------------

Conclusion. Therefore, a perfect computer simulation of me is conscious.

Defense of 1a. A perfect computer simulation would, by definition, simulate faithfully how the neurons in my brain interact with one another. So its simulated memories and thought processes would mimic mine. It would answer questions the same way I would, it would be able to solve the same puzzles I could, in the same way I would, etc. Just as I have different brain activity patterns, potentially detectable, for when I respond truthfully versus when I lie, it has the same potentially detectable patterns in its simulated brain. Thus, when it makes a statement, we can potentially detect whether it evaluates that statement as true or false.
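(For the technically inclined, here is a deliberately cartoonish Python sketch of the kind of step-by-step update loop such a simulator would run. The neuron model, connection weights and numbers are purely illustrative assumptions of mine, nowhere near the fidelity the thought experiment assumes. The point is only that "simulating faithfully" means running the same dynamics forward in time, and that everything I appeal to here, answers, puzzle-solving, truthful-versus-lying activity patterns, would just be features of the states such a loop produces.)

import numpy as np

# Toy "brain": N model neurons with membrane potentials v, connected by a
# fixed random weight matrix W. All numbers here are illustrative assumptions.
N = 1000
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(N, N))      # synaptic weights (made up)
v = np.zeros(N)                             # membrane potentials
threshold, leak = 1.0, 0.9                  # firing threshold and decay factor

def step(v, external_input):
    """Advance the toy network by one time step of its dynamics."""
    spikes = (v >= threshold).astype(float)     # which neurons fire now
    v = leak * v + W @ spikes + external_input  # each neuron integrates its inputs
    v[spikes > 0] = 0.0                         # neurons that fired reset
    return v, spikes

# "Simulating faithfully" just means running these same dynamics forward:
for t in range(100):
    v, spikes = step(v, rng.normal(0.0, 0.05, size=N))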

So, for example, when it says to itself or to us "I feel self-aware, conscious" it couldn't secretly have a brain pattern corresponding to lying; it couldn't be evaluating that statement as false. Why? Because it and I have the same brain patterns, and I am certainly sincere when I make that statement!

It is in this sense that it thinks it's conscious. When I say "it thinks X" in this argument, I simply mean "it evaluates X as true, it says X to itself (and us) "sincerely", without brain patterns corresponding to lying". Thinking in that sense doesn't presuppose consciousness (even a simple chess program evaluates things, not statements but chess positions), so premise 1a doesn't already presuppose the conclusion.
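(To see how thin that sense of "evaluates" is, here is a minimal Python sketch in the spirit of the chess example. It is my own toy illustration, not a real engine: it scores positions by simply counting material, and nothing in it experiences anything, yet it plainly evaluates.)

# A minimal, mindless evaluator in the spirit of a chess engine: it assigns
# a score to a position by counting material. Toy illustration only.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position: str) -> int:
    """Score a position given as a string of pieces, uppercase for White,
    lowercase for Black (e.g. "QRPPPqrpp"). Positive favors White."""
    score = 0
    for piece in position:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

print(evaluate("QRPPPqrpp"))  # prints 1: White is up a pawn's worth of material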

Defense of 1b. If it thinks it's conscious then it is - what is my argument for that? Well, let's try to deny this and see what happens. Let's assume it thinks/says it's conscious while not actually being conscious, and flesh out the consequences.

An immediate consequence is that forming the brain patterns for thinking/saying it's conscious happens perfectly fine without actual consciousness. But remember from 1a, its brain patterns match my brain patterns, neuron for neuron. So my thinking/saying I'm conscious is also not due to my being conscious; it's due to what I and my simulation have in common, the neuron dynamics. And so we have arrived at the most ridiculous thing in the world: apparently, the reason I think and say I am conscious is NOT because I am conscious!

Similarly with more specific conscious experiences. When I say the ice cream I'm eating is delicious, it's not because I am conscious of its taste - it's due to some specific pattern of neuronal firings that can exist with or without conscious experience. When I yelp in pain after stubbing my toe, it's not because I felt pain - my simulation didn't feel anything after stubbing its simulated toe but yelped just the same.

All of that is preposterous, and all of that came as a consequence of denying 1b. But what if the Chinese Room defender doesn't want to take this lying down and says:

Objection. I can make a parallel argument to show that there's something wrong with yours. When I throw a rock at an angle it follows a curved path, due to gravity, before falling to the ground. That path is a parabola. We can simulate gravity and make a perfect computer simulation of how the rock flies. Of course there's no actual gravity in the simulation, yet the simulated rock also follows the same parabola. 

We can now run an argument parallel to the above: the parabola of the simulated rock is not due to gravity, so the parabola of the actual rock is not due to gravity either! It's due to what the real rock and the simulation share in common, the equations they both follow. 

But that's wrong, of course the parabola of the real rock is due to gravity - without it, the rock would fly in a straight line. So if there's something wrong with this type of argument in the rock case, the same thing is wrong with the logic in the consciousness case.

Response. Actually, there's nothing wrong with this type of argument in the rock case. It's not true that without gravity the rock would fly in a straight line - what would happen to the rock in a hypothetical world without gravity depends on what's there in its place. If in its place there's some other force that happens to act on that specific rock according to the same equation, then the rock would still follow the same parabola. For example, an electric force could produce the same constant pull that is needed for a parabolic path.

So gravity per se is not required for the parabola; anything satisfying the necessary equation will do just fine. The fact that it's specifically gravity (a field made up of gravitons, as opposed to something else) plays no causal role in creating the parabola. It is in fact what the rock and the simulation share in common that's responsible.
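(Here is a small numerical check of that point, in Python, with made-up numbers of my own. The integration loop only ever sees a constant downward acceleration; it neither knows nor cares whether that acceleration is supplied by gravity or by a uniform electric field acting on a charged rock, and the resulting path matches the familiar parabolic formula either way.)

import math

# A constant downward pull of magnitude a. Whether a = g (gravity) or
# a = qE/m (a uniform electric field on a charged rock) is invisible to the
# dynamics; only the number matters. All values below are illustrative.
a = 9.81                                  # m/s^2
v0, angle = 20.0, math.radians(45.0)      # launch speed and angle
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

x, y, dt = 0.0, 0.0, 1e-4
while y >= 0.0:
    vy -= a * dt          # the only place the constant "pull" enters
    x += vx * dt
    y += vy * dt

# Closed-form range for the parabola y(x) = x*tan(angle) - a*x^2 / (2*vx^2):
analytic_range = 2 * v0**2 * math.sin(angle) * math.cos(angle) / a

print(f"simulated range: {x:.2f} m, analytic range: {analytic_range:.2f} m")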

Similarly for consciousness: it's what my simulation and I share in common that's responsible for my thinking/saying I'm conscious. If we accept that when we say we have consciousness it's because we do in fact have specifically consciousness, then consciousness must be among the things shared between me and my simulation.

Summary. 

If the argument I have given succeeds then the Chinese Room argument fails to achieve its goal. It's true that for many things simulation is not duplication, but in the case of consciousness it would seem that it is.

Let me know in the comments if you have any questions or objections about the argument I presented here. If there's a troublesome spot in it, I'll be happy to try to expand on it and improve the argument.



Comments

  1. Hey Dmitriy,

I apologize yet again for once more dropping off the face of the earth. After recently enduring a few such episodes, I can now testify that this flat earth stuff isn't funny anymore! Fortunately, I'm now back and ready and eager to comment. In the meantime I will watch my step more carefully...

    I can see I have a lot to catch up on. There are things I want to say on almost every new article you've written. Not to mention that I've rather rudely left you hanging on your responses in the free will and liar paradox articles, and I need to get around to responding to your comments on my article. I also have not forgotten about my proposal for the Plantinga article; I'll try to catch up on everything in the next few days.

    Regarding the Chinese Room:

    Don't you think your argument begs the question a little bit?
    You write: "Here's the compact logical form of it:

    1a. A perfect computer simulation of me thinks it's conscious.

    1b. If it thinks it's conscious then it is conscious."

    Premise 1a assumes that a perfect computer simulation can think, but of course that is exactly what is in doubt. Searle is saying that the Chinese room can't think, and that it's not a mind (Generally, when philosophers talk about thinking/cognition, they mean the mental and therefore the conscious). Your defense of 1a seems to be more focused on the fact that a perfect brain simulation would think itself conscious, and not so much interested in defending the belief that such a simulation can think period (e.g. "So its simulated memories and thought processes would mimic mine." already assumes that there is some cognition at play here).

In other words, Searle is denying that consciousness is just a computation, and so replicating the computational aspects of the brain in our Chinese room simulator isn't sufficient. Instead we need something more, whatever that thing is that Searle thinks the brain has in addition to computation but which is missing in the Chinese room. Basically, this is the thing that Searle thinks underlies semantics; the thing that makes a brain state have 'meaning' (i.e. be conscious). According to Searle, computation by itself is just syntactic symbol manipulation.

    So, Searle is trying to play on our intuitions that the Chinese room is not conscious to then infer that computation is insufficient for consciousness. One way to refute Searle is to abandon intuition (i.e. why should we think that folk psychological 'common sense' is any reliable guide here?). Alternatively, we can accept that our intuitions are telling us something useful here, but that it's not what Searle thinks. After all, there are a lot of differences between the computation that the Chinese room undergoes and what the human brain does.

One obvious difference is this: the Chinese room doesn't understand what Chinese is precisely because it's not acquainted with the meanings of the symbols it manipulates. But such acquaintance or meaning is really nothing more than certain kinds of additional 'special' computation. For instance, the human brain associates certain Chinese words with sensory data (e.g. the word 'butterfly' with sensory information evoked by butterflies). The Chinese room doesn't have any sensory capabilities and so is missing this particular computational aspect.

    Thus, under this approach (which I endorse), Searle is right to say that the Chinese room doesn't grasp the meaning of the Chinese language, but only because he's guilty of a faulty analogy. He didn't accurately capture in his analogy what it really means to have a complete computational simulation of the human brain.

    Replies
    1. I should add as a caveat that when I write: "So, Searle is trying to play on our intuitions that the Chinese room is not conscious"

      I mean that the Chinese room is not conscious of the Chinese language (i.e. it has no idea what the words really mean). Of course, Searle makes an additional leap of inference from here to the conclusion that the Chinese room is really not conscious at all. But obviously that doesn't have to follow.

  2. Hey Alex,

    great to have you back and no worries about your temporary absence. It's hard not to get sucked in by various things online, only to realize two weeks down the line that you spent way more of your limited time on it than you ever intended. I can most certainly relate!

I have been trying to manage my time more efficiently. I have joined Reddit recently and have participated in some philosophy discussions. I have also kept occasionally participating on the Neurologica blog. So I am kind of being pulled in a few different directions, and it's hard to figure out how to spend my time most productively. One thing is that I should cut down on very long discussions, no matter how fascinating, and maybe focus more on trying to write more interesting content and see what works and what doesn't.

    But I want to reply to your primary objection:

    >Premise 1a assumes that a perfect computer simulation can think, but of course that is exactly what is in doubt.

    By "think X" I simply mean "evaluate X as true". Perhaps I should have made that clearer. So I am not begging the question, thinking in that sense doesn't presuppose consciousness, a chess program can evaluate positions without needing to have a mind.

    Replies
    1. I’m glad that you seem to have gravitated towards philosophical discussions as of late. Maybe you can try to link your blog to some of those sites you visit in order to attract more views. It would certainly be nice to get additional commentators.

      About the Chinese room again:
      “ By "think X" I simply mean "evaluate X as true".”

      But doesn’t this now have the unfortunate consequence of being a self-defeater for premise 1b? If “think x” just means anything a chess program can do, then it doesn’t really seem to follow that evaluating the statement “I’m conscious” as true means anything. A simple program can do the same if given the appropriate conditions.

      So how do we determine that the evaluation of “I am conscious” is empty or not? Is it just meaningless syntactic manipulation, or is something more going on?

      The point is that you still have to suppose that such syntactic manipulation in the simulation isn’t empty of content. So it seems that you’re still sneaking in the presupposition that the simulation can think in the mental/consciousness sense. Don’t you agree?

  3. >it doesn’t really seem to follow that evaluating the statement “I’m conscious” as true means anything. A simple program can do the same if given the appropriate conditions.

    That's an important question but there's a huge difference between our faithful brain simulation and some simple program that we just "told" to evaluate "I am conscious" as true. We know that the brain simulation actually speaks normal English and is literally exactly as intelligent as me (the person whose brain it's simulating)! It evaluates every statement according to what it means in English because that's what I do, and it does so exactly as reasonably as I do.

    Replies
    1. To be clear, when I say "speaks" or "intelligent" I am not begging the question by presupposing consciousness. I just mean that it evaluates all statements as I would, and thus as intelligently or stupidly as me.

  4. Hey Dmitriy,

    "there's a huge difference between our faithful brain simulation and some simple program that we just "told" to evaluate "I am conscious" as true. We know that the brain simulation actually speaks normal English and is literally exactly as intelligent as me"

    Right. But it seems like the whole "I am conscious" argument is irrelevant; an unnecessary cherry on top if you will. The real justification for the premise "a whole brain emulation of me is conscious" would be our intuition and the behavioral characteristics of the simulated brain.

    The point about evaluating consciousness as true is neither sufficient nor necessary. It's not sufficient because of the chess program stuff, and it's not necessary because there are a lot of crazy philosophers out there who insist there is no consciousness/they are not conscious. Presumably a whole brain emulation of them would do likewise, and presumably we still think the crazy philosophers are conscious.

    Replies
    1. But I gave my reason for thinking it is in fact sufficient - by giving a reason why "the chess program stuff" doesn't apply. Can you clarify how you respond to that?

    2. Sorry. I meant it's not sufficient by itself, it needs the other stuff (behavioral equivalence) to work. And because it's not necessary we don't need it anyway; we can just drop the "I think I'm conscious" phrase from the conjunction of "I think I'm conscious" + behavioral equivalence. We can see that just behavioral equivalence by itself is sufficient.

  5. Of course, we should realize that Searle's position in the main is precisely that behavioral equivalence to a conscious entity is NOT sufficient for consciousness (and he gives several reasons why outside of the Chinese room). So this argument still begs the question a bit since it can hardly be a rebuttal to Searle.

    That's fine as long as we interpret this as not being a response to Searle, but rather as an outreach to potential viewers who might be taken in by the Chinese room but who do not necessarily have an understanding of the full extent of Searle's position nor agree with it.

  6. >Sorry. I meant it's not sufficient by itself, it needs the other stuff (behavioral equivalence) to work. And because it's not necessary we don't need it anyway; we can just drop the "I think I'm conscious" phrase from the conjunction of "I think I'm conscious" + behavioral equivalence. We can see that just behavioral equivalence by itself is sufficient.

    I don't quite follow. What's "behavioral equivalence"? And how do we see that it by itself is sufficient, by what argument?

    Whatever that argument may be, my argument definitely needs the "I think I am conscious " bit. That is how I refute Searle, by arguing for 1b.

    Replies
    1. Behavioral equivalence means that the whole brain emulation (WBE) behaves exactly/analogously as you would in any given situation. Consequently it would evaluate “I am conscious” as true, and therefore the WBE of you is conscious. This is your argument yes?

Well, we know that evaluating “I am conscious” by itself is insufficient. We need the WBE to be behaviorally equivalent first before we would accept its statement as valid. However, I just pointed out that such evaluation is completely unnecessary.

Let’s say you were that crazy philosopher; your WBE would insist it’s not conscious. So the evaluation of “I think I’m conscious” as true is superfluous. We can see that it adds nothing, because the WBE by definitional fiat is just going to evaluate whatever you think is true, as true.

Because your beliefs about consciousness are irrelevant to whether you are actually conscious, it follows that any evaluation by the WBE on this topic is irrelevant.

      Therefore let me summarize my argument as such:

      1. In your thought scenario you invoke certain intuitions that your WBE is conscious

      2. (Basically) everyone who agrees that your WBE is conscious would also agree that the WBE of the crazy philosopher (that behaves exactly like him) is conscious

      C: our intuitions about the consciousness of WBE’s are in no way formed by the WBE’s evaluation of “I am conscious” as true (except as a further confirmation of behavioral equivalence).

    2. A better way to put this argument (which doesn’t rely on popular opinion) is this:

      1. A WBE of Dmitriy who says it is conscious is conscious.

      2. A person’s (or WBE’s) beliefs on whether they are conscious are not actually relevant to determining whether they are conscious (see: crazy philosopher or just people who don’t know about consciousness).

C: The evaluation by the WBE of “I am conscious” as true is irrelevant to whether it is actually conscious (except as a further confirmation of behavioral equivalence).

  7. This argument's structure is puzzling. C seems to just follow from 2, without making use of 1.

    Also, "relevant" is vague. Your crazy philosopher example shows that saying "i am conscious" is not a necessary condition for consciousness. Sure, but that's not important for my argument. I am claiming saying it, while using English correctly and while being reasonable enough (like me or like most people), is a sufficient condition for consciousness. Do you dispute that?

    Replies
    1. You may think that instead of 1b you can have a different premise:
      1b'. An entity behaviorally equivalent to a conscious entity is also conscious

      But then:
      - that would just be a different argument for the same conclusion, it would not constitute an objection to my argument
      - you will have a harder time defending 1b' than I in defending my 1b, I think. I haven't yet seen your defense of it, but I am pretty sure Chalmers will start shouting about p zombies.

    2. I'm not disputing the soundness of your argument. I'm saying we can get an indication of consciousness from observations of the WBE "using English correctly and while being reasonable enough " (or behavioral equivalence in other words) without it saying "I am conscious". Moreover, I am claiming that our intuitions that the WBE is conscious are invoked by the above and not by the declaration of consciousness.

      And so smuggling in the latter is unnecessary and irrelevant. It's like saying:
      A) I like cheese
      B) I'm male
      C) I like dairy products

      This was meant to be demonstrated by the crazy philosopher case; because we are supposed to be relying on the same intuitions.

    3. "But then:
      - that would just be a different argument for the same conclusion, it would not constitute an objection to my argument
      - you will have a harder time defending 1b' than I in defending my 1b, I think. I haven't yet seen your defense of it, but I am pretty sure Chalmers will start shouting about p zombies"

Yes but your argument hardly overturns Chalmers's objections either. I'm saying that we can accomplish everything that your argument does without the declaration of consciousness. Also, if you think that, then presumably you believe that the example of the WBE of the crazy philosopher is somehow less indicative of consciousness than the WBE of you. Why so?

  8. I think my 1b does deal a blow to Chalmers. I am saying it's analytically, by definition, true that there's no possible world in which a reasonable intelligent evaluator of statements "thinks" it's conscious but isn't.

    Replies
    1. In other words, a world of putative p zombies would have them walking around talking about consciousness, how mysterious it is, how they know they are conscious but are not sure if computers can be. Exactly as we do. And if we peek inside their brains we will see they are not lying, they are convinced they are conscious.

      And this, I say, by definition means they are in fact conscious, they are not p zombies after all.

I understand, but it’s not enough to simply make analytic claims about the nature of consciousness. Anyone can make a claim; presumably Chalmers thinks that “consciousness is not behavioral” is an analytic truth.

      What we need of course is an argument, and the argument you were making presumably relies on our human intuition about what we think consciousness is.

      Therefore, it is essential to explore the nature of these intuitions. Are they triggered equally so in the crazy philosopher case etc??

      If so, I argue that our intuition (at least for those who find the WBE argument convincing) really rests on behavioral equivalence, and therefore there is no reason to think that your analytic claim about the evaluation of statements about consciousness actually captures anything indicative of consciousnesses.

In other words, I think we agree that hearing a robot claiming it is a WBE of you and that it is conscious is not impressive by itself. We need further proof that it’s really a WBE (e.g. we'd want to engage it in an argument). Once it meets the above criterion, that would be really impressive.

      What I am saying is that the vast majority of the people who find your argument compelling, would find it equally impressive (and indicative of consciousness) in the above case, absent a declaration that it/he is conscious.

      As for the analytic nature of your claim, let’s realize that there are a whole bunch of preconditions necessary to make your claim analytic.

      It’s not literally true that consciousness IS saying “I am conscious” while being a whole brain emulation. That’s not an identity claim.

      So we need some auxiliary assumptions like “computation is consciousness” or “if A is behaviorally equivalent to a conscious entity then A is conscious” if we want to make the above claim (an entity is conscious if it says x while being y) analytic.

      And that’s where our intuitions come into play (to find support or defeaters for the auxiliary assumptions).

  9. Oh, and why do I say that? Because the alternative is worse, as I described in the article, and also doesn't save the Chinese Room.

    Replies
    1. The alternative doesn’t really follow. Let’s say I’m convinced that only carbon entities are conscious. It follows that I can be skeptical of the robot being conscious while still thinking that human beings are conscious.

      Searle thinks that there is some additional element inherent to the human brain outside of computation that gives us consciousness. Presumably he uses this extra element to be skeptical of claims like yours, without having to accept or face the epistemological uncertainty about the conscious nature of human beings.

  10. >What I am saying is that the vast majority of the people who find your argument compelling, would find it equally impressive (and indicative of consciousness) in the above case, absent a declaration that it/he is conscious.

    I think that's the core disagreement. I think there's a significant fraction of people who may find the idea of p zombies plausible, until they imagine that picture of the world with them, walking around believing they are conscious.

    Replies
    1. >The alternative doesn’t really follow. Let’s say I’m convinced that only carbon entities are conscious. It follows that I can be skeptical of the robot being conscious while still thinking that human beings are conscious.

      I am saying in the absence of the principle 1b the alternative is we have no great argument to even be sure we ourselves are conscious - since reasonable entities can apparently be mistaken about being conscious why can't we?

      And that's a pretty bad alternative.

    2. Hey Dmitriy,

      The above quote wasn’t about p-zombies. It was about people who found your argument compelling also finding the case of a whole brain emulation of you acting and talking like yourself but absent a declaration of consciousness (let’s say it was never asked) equally compelling.

      It was meant to support my “behavioral equivalence is sufficient for your own argument” case.

  11. Oh, I basically used "putative p zombies" and WBE interchangeably, for this discussion.

  12. Hey Dmitriy,

    A few things.

    On Chalmers:

    Yes most philosophers believe in principle 1b, but keep in mind that when we say “I think” we mean that ‘I’ have a mental life (and not just that I evaluate stuff as true/false). Hence, the analyticity of the claim “I think I’m conscious” and why it seems so certain.

    Generally, we don’t use reason to derive whether or not we are conscious. Instead, it’s something one just experiences; if you have it you know you are conscious. It’s similar to being awake vs being in a dream.

    It’s possible that I could be in a dream and confused about being awake even with my rational faculties intact, but once I’m awake there is a clear and distinct feeling of awareness that I experience which is absent in the dream-state. I just know I’m awake when I’m awake (at least it has never happened that I was mistaken when experiencing this feeling).

    The possibility of my dream self being mistaken in no way imperils my certainty of my being awake in the present moment.

    That’s because my dream self and awake self used different methods to arrive at the same conclusion (that they were awake). My awake self uses the evidence of the feeling of alertness and mental clarity that can only be achieved while awake, whereas my dream self does not.

Similarly, whatever process the WBE uses to parse “I am conscious” as true, if it’s not in fact conscious then this process will not involve using the evidence of one’s conscious experiences.

    In other words, to doubt that the WBE is conscious is not to doubt our process of determining consciousness (to feel that you’re conscious), but rather to doubt that the WBE uses it in the first place.

    About my argument and people’s intuitions:

    Certainly there are such people, but does their ultimate belief in the impossibility of the p-zombie stem from their understanding the full implications of behavioral equivalence alone, or was it helped in some significant way by the additional declaration of consciousness? I argue mostly for the former.

Most people, I think, intuitively and overwhelmingly use behavioral standards to adjudicate personhood and the existence of consciousness.

If we take ordinary people’s reactions to the robotic boy David in the movie A.I. by Steven Spielberg (great movie by the way), most people from the audience reviews were apparently convinced that David was a thinking and conscious creature.

    David didn’t need to go around saying “I’m conscious” to establish this, he just acted like a normal boy would and that was enough. There are indeed hard-nosed skeptics (i.e. certain philosophers) who say that David is just a non-feeling robot. But most such skeptics are people like Searle who think that computation is insufficient for consciousness, and who don’t lend any credence to anything a robot like David says anyway.

    I’m not saying that there aren’t people like what you described, but I think they are a small minority, and this matters when we want to assess our intuitions and why most people might be convinced in the WBE of its consciousness.

  13. Hey Alex,

About the first part, how we use feeling and not reason to determine we are conscious: the WBE would say exactly the same thing! Imagine again that world populated by WBEs, with them walking around talking about the mystery of consciousness and such. They, like you, wouldn't be lying when they say:

    ". Instead, it’s something one just experiences; if you have it you know you are conscious. "

WBE Alex would honestly report that he has it and therefore knows he's conscious, not from reason but from his visceral experience.

The critical question to somebody denying 1b is: if WBE Alex can be mistaken, why can't real Alex? So again, denying 1b is pretty pricey.

    Replies
    1. Hey,

      "About the first part, how we use feeling and not reason to determine we are conscious: the WBE would say exactly the same thing!...WBE Alex would honestly report that he has it and therefore knows he's conscious, not from reason, from his visceral experience."

But I already addressed this with the example of my dreaming self saying the same thing. We might even stipulate that my dreaming self proclaims that he knows what it is like to feel as if one is awake etc... Now of course this only goes so far as to establish that it could be (logically) possible for a WBE to insist it is conscious and yet not be so. Naturally, there must be some additional argument we would have to adopt if we wanted to argue that WBE's don't actually have access to the same experiential feelings we do.

      But the point is that simply saying one is conscious, and that one has arrived at this conclusion through the same means as we normally do so, is by itself an insufficient guarantee (for a third party) that they are conscious. Of course it's sufficient for the WBE (it will know if it is conscious), but it's not sufficient for ourselves who do not have access to the conscious experiences that the WBE may or may not have.

      This applies to any inference of consciousness that goes beyond oneself. So we need some kind of additional argument if we are to assume that other beings like humans or WBE's have consciousness. This additional argument might be very permissive, allowing all of the above to be conscious, but it doesn't have to be. Presumably, a guy like Searle has an argument that excludes giving consciousness to WBE's while retaining it for other humans and certain animals.

    2. In other words, anytime that we make an inference of consciousness about another being, we are basically arguing:
      A) I am conscious
      B) This being is similar to me
      C) This being is conscious

How we construe “similar” depends on one’s theory of consciousness. There are plenty of theories of consciousness that would exclude WBE’s while including humans.

      Simply asserting “I am conscious” even while having a full computational representation may be insufficient for such theories. Therefore, your point about what the WBE does can hardly be considered problematic for those theories.

      You seem to be assuming that your opponents already adopt a theory of consciousness similar enough to yours, but obviously that doesn’t have to be the case.

    3. As an example, those who believe in a quantum theory of consciousness (which Searle by the way leans towards/takes seriously) might think that the unique quantum conditions in the microtubules in our brain grant us consciousness, and not computation nor behavioral equivalence.

      Such people can rightly be completely untroubled by your example. On the other hand, a guy like Chalmers (a functionalist) would definitely accept and believe that the WBE is conscious.

  14. I still don't understand how you respond to my question: "The critical question to somebody denying 1b is: if WBE Alex can be mistaken, why can't real Alex?" Sorry if I am being dense.

    I don't think your dream reply answers that question. When I am asleep I have diminished intelligence, so in that case there's no mystery in the analogous question: "if sleeping Alex can be mistaken, why can't Alex be mistaken now?"

    In that case one has a different intelligence from the other so there's no problem for one to be mistaken but not the other. But for WBE we don't have such a response available since it's equally intelligent.

    1 is of course true.

    Replies
Ignore the "1 is of course true." bit. Also, in the interest of efficiency, readability, and that good time management stuff we discussed, I am trying to be very brief and to the point - I hope it doesn't come off as rude. I am thinking maybe small bite-size pieces might be a cleaner, more efficient way to sort out the core disagreement.

    2. Well I stipulated by definitional fiat that dreaming Alex had his rational faculties mostly intact (indeed I have had dreams wherein I worked through some complicated philosophical problems, and remembered how I did so upon awakening). Imagine if you will a lucid dream, or alternatively a hallucination.

If the analogies don’t work just discard them. Like I said, the example of the WBE saying it is conscious is only problematic if you already subscribe to a requisite theory of consciousness.

      For instance, in the quantum case I can deny that the example is problematic because the real Alex has the necessary physical structures and therefore he is conscious.

      In other words, it’s only “pretty pricey” to reject 1b if you already subscribe to a theory of consciousness which makes the WBE conscious, and that completely defeats the point.

  15. So what would then be, in your mind, a possible reply to my question from one of these people? Tell me a sample reply and I will give you my objection to it.

    Replies
    1. Well as I said, in the quantum case the response is simply that we know real Alex is conscious because of his physical structures and we know WBE is not because it lacks such structures. Any talk about the evaluation of statements by the WBE is not a defeater of the above, and that answers your question.

      As for Searle, note that he doesn’t need a positive theory of what consciousness is. He can claim that his Chinese room establishes that computation is insufficient for consciousness.

      He also knows he’s conscious, and that he has a brain, therefore it is reasonable to suppose that there is some component x in the brain outside of computation which produces consciousness.

      Since the whole brain emulation only simulates the computational aspects of a human mind and not the physical components, it is reasonable to suppose that the WBE lacks x (and consciousness) even if we are unsure as to what x really is.

      Since the point about the WBE ‘thinking’ it is conscious is not a defeater for the above, there is no problem.

  16. > the response is simply that we know real Alex is conscious because of his physical structures and we know WBE is not because it lacks such structures.

    Ok, as promised, my objection: that's not an adequate answer to "If WBE Alex can be mistaken about being conscious why can't Alex?" because it's question begging, or else ad hoc.

Does their justification for thinking that having microtubules or whatever is crucial for consciousness rely on the datum that people like Alex are conscious?

    If the answer is yes (and it is, probably), then their answer is question begging: you can't prove that people like Alex are right that they are conscious by relying on the datum that they are - that's circular.

    If the answer is no, then what makes them think microtubules are crucial, as opposed to blarfengals, or being made entirely of rubber? What possible non-ad hoc justification do they have if they don't rely on the datum that microtubuled Alex is conscious?

    Replies
    1. To clarify, they would need a justification that would be non-arbitrary and yet would not apply to WBE Alex.

  17. Hey Dmitriy,

    "Ok, as promised, my objection: that's not an adequate answer to "If WBE Alex can be mistaken about being conscious why can't Alex?" because it's question begging, or else ad hoc."

    :)

    The whole point of that was to demonstrate that your question supposes that those who reject the WBE metaphor have similar theories of consciousness to yours. But obviously that need not be so, and that of course is a little bit begging the question. Anytime we make an inference that another being is conscious we do so based on our personal theory of consciousness. This might be a very simple theory like what a layperson might adopt, or a much more sophisticated one adopted by a philosopher of mind.

Since these theories of consciousness are not ad hoc nor based on my specific datum, I don't see how you can have meaningfully replied to such objections. As for the quantum theory of consciousness, well that stems from a long line of philosophical thought that I will attempt to briefly summarize. Basically, the first most promising candidate theory, known as the computationalist theory of mind (consciousness is a computation), arose in the mid-20th century and was developed by a guy called Putnam.

This theory was largely based on two solid observations. First, humans seem to be conscious (we know we ourselves are conscious), and the human brain is the most intelligent thing in the known universe; this seems like too much of a coincidence. Secondly, we can internally notice that our qualitative states are linked to computational states.

    However, the computationalist theory has a bunch of problems, one of the most significant is that it is difficult to constrain our mapping relations between computational states and semantic states. In the most permissive form we get what is known as pancomputationalism, where even rocks have minds (we can arbitrarily map human brain states to rock physical states like the energy quanta of the rock electrons). Unfortunately, theories which attempt to restrict the mapping relations so that rocks don't have minds seem really ad hoc.

    Chalmers himself advocates for functionalism (the next big theory) because of this problem. See here: http://www.consc.net/papers/rock.html

    So this is where functionalism comes in, but functionalism also has a bunch of problems that I won't get into (e.g. the twin earth problem, the China problem). Not to mention that Searle uses his Chinese room problem (not to be confused with the China problem) as a purported refutation of both functionalism and computationalism.

    I wish to again reiterate my point (perhaps the most important point) from my last post about how Searle doesn't need to actually have a well established theory of consciousness. He just needs to have strong defeaters for both functionalism and computationalism to feel confident that a WBE is not conscious.

    Replies
    1. On the quantum theory of consciousness:

      Having seen the problems with computationalism and functionalism (the two most promising theories of mind), this is where the quantum theory comes in. Quantum theories of consciousness are not so readily endorsed by many philosophers of mind, but they do have many promising aspects. First, for those who advocate for non-deterministic free will, a quantum theory of mind gives a causal account of consciousness, where consciousness is directly responsible for wavefunction collapse of certain particles (e.g. electrons). This is a plus for such people, because consciousness is epiphenomenal under both functionalism and computationalism.

Secondly, similar to the argument for computationalism we can notice that quantum computers are very rare (entirely non-existent?) in nature. It is very difficult to naturally control the decoherence of particles (as we are finding out in our constructions of quantum computers), and all the more difficult in the intensely conglomerated, wet and crowded environment that is the brain. Yes there is macroscopic quantum coherence in nature, but this isn't used for computation.

And yet there has been some research as of late which shows promising signs that the brain is potentially a quantum computer of some sort. The reason the microtubules were selected is because they seem to be the most promising candidate structures which can hold large numbers of electrons in quantum-coherent states. The decoherence of these electrons can, it is thought, have some influence on the higher level neuronal (computational) states. The fact that our brain is both conscious and potentially a quantum computer is also way too coincidental and therefore is evidence for the quantum theory of consciousness (if they are right of course).

      All of this hopefully demonstrates that philosophical theories of consciousness are in no way ad hoc, and it is also hopefully clear that Searle doesn't need a well established theory of consciousness to reject the WBE metaphor.

      Best,

      Alex

    2. Also, note that the quantum theory of consciousness avoids both the problems of arbitrary mapping inherent to computationalism and the problems of functionalism; this is another plus for it (I think at this point we can see why philosophers like Searle take it seriously). Its weakness lies in the fact that there remains a lot of work needed to show some mechanism for how structures in the brain like microtubules can plausibly influence neural computational states through quantum effects.

      All of the above theories that were described start from the knowledge that oneself is conscious and that one has a brain; then they attempt to identify the salient features of the brain that grant consciousness.

      Now are there going to be philosophers who conveniently gravitate to a particular theory of consciousness because it happens to fit in best with their preconceived biases as to whether things like WBE's are conscious? Of course, but that's no strike against the theories themselves, and such biases are inherent to every field of study.

  18. Hey Alex,

    Thanks for giving this overview, it clarified some things for me. I think the point I was making in giving my objection to the sample reply may have gotten lost, so let me start with a quick recap:

    you: people with a different theory of consciousness need not buy 1b (if it thinks it's conscious then it is)
I: but rejecting 1b would mean we, people, could also be mistaken about being conscious - that's a high price to pay for them
    you: no, it would not mean that
    I: I think it would, because they don't have a good answer to "If WBE Alex can be mistaken about being conscious - then why can't real Alex?" To demonstrate that, if you give me a sample answer I will show why it's not a good one.
    you: a sample answer: real Alex can't be mistaken because he has the requisite structures (microtubules and such)
    I: This answer is inadequate because it's question begging: their justification for thinking microtubules make all the difference relies on the assumption that people like Alex are conscious. You can't prove that Alex is definitely conscious with a theory that presupposes that he is conscious - that's circular.

    So do you agree then that the sample answer is circular and thus inadequate? Then do you agree that rejecting 1b creates a challenge: either admit that Alex can be mistaken (unpalatable for most) or find a non-circular, good answer to my question?

    Replies
    1. "You can't prove that Alex is definitely conscious with a theory that presupposes that he is conscious - that's circular."

      But it doesn't. To quote myself: "All of the above theories that were described start from the knowledge that *oneself is conscious".

I think it might be helpful to clarify two things here. Part one has to do with one's own self-knowledge of being conscious, and part two has to do with our inference of other beings exhibiting consciousness. Part one is self-evident, part two must be shown.

      Knowledge of the former stems from personal acquaintance; I know I'm conscious because I can *feel it. Whatever uncertainty that may arise from someone thinking that real Alex may be conscious but that WBE Alex may not be, is only a potential defeater for part 2, not part 1.

      In fact, nothing can be a defeater for part one, it's meant to be the most certain truth out there and completely self-evident. Other people can be mistaken about the consciousness of WBE or real Alex, but real or WBE Alex can never be mistaken about their own consciousness (if they have it). There is no circularity because all of the described theories of consciousness use the part 1 evidence to support their assertions about part 2.

      Let me now address the other source of confusion. You write that "You can't prove that Alex is definitely conscious with a theory that presupposes that he is conscious - that's circular."

But this charge is mistaken. I've already shown that this can't be true because all the reasoning is ultimately based on the evidence from part one. In any case, your charge isn't valid: no one has to prove that real Alex is conscious, as rejecting the WBE rejoinder only requires that one thinks that WBE's aren't conscious. Hence, using the assumption of the consciousness of real Alex to form a theory that ends up rejecting the consciousness of WBE (rejecting premise 1b) is not circular.

      What you are actually saying is that there is a degree of ad hoc-ness to putting forth this argument. But we can quickly see that this charge is unfair. From the evidence of part one we know that a brain *can be sufficient for consciousness. It's not clear what part of the brain makes ourselves conscious, whether it's just the computation, something more, or something else entirely. But the point is that it is a lot more reasonable to suppose that other human beings with similar brains are conscious than that a WBE on a silicon substrate is conscious.

      The latter only replicates the computational/functional aspect of our brain, whereas the other human beings' brains replicate far more. Hence, in the absence of a theory of consciousness our standards of evidence are deservedly lower for the other human persons than the WBE emulation. So we can see that the assumption of consciousness for other human beings is a rather easy bar to pass.

      There is no circularity, and there is no ad hoc-ness. On the contrary, the WBE analogy only works if we assume something like the above theories are true in the first place. Remember, the burden of proof lies with those who want to assert that any other being is conscious (whether human or otherwise).

      So the real claim of Searle isn't that WBE Alex lacks consciousness (for that's the default position in the absence of any knowledge/evidence) but rather that humans can be conscious. He has a theory which supports this (and reasons for it), and it just so happens that this theory doesn't allow for WBE's to be conscious.

To demand the possibility of the latter is to demand more, and the burden of proof is on you to show it. Hence, it is your argument which begs the question; it only works if you assume a theory of consciousness which allows WBE's to be conscious, which is exactly what Searle is trying to disprove with his Chinese room analogy! I tried to hint at this earlier but I think I wasn't quite clear.

    2. Searle's argument goes like this:

      1) I am conscious (self-evident/part one)
      2) From this and other auxiliary hypotheses I form the plausible assumption that something in my brain makes me conscious
      3) From here I make the plausible assumption that other people's brains also make them conscious
      4) I have a defeater for computationalism/functionalism (Chinese room)
      5) A WBE only emulates the computational/functional aspects of the human brain and not the other aspects (e.g. physical structure/chemical makeup)
      6) A WBE is not conscious, and premise 1b is rejected (From 4 & 5)

      Notice that 4 and 5 are not defeaters for 3, so there is no problem here. To assume that the conclusion of 6 imperils the consciousness of real Alex is to assume that the argument is a self-defeater for 3. But this is to assume that 4 is incorrect. Since you are trying to show that 4 is incorrect in the first place, this would be question begging.

    3. Of course the same above argument allows us to derive the following:

      7) The non-computational aspects of the human brain (e.g. physical structures/chemical makeup) are necessary conditions for consciousness. (From 3 & 4)

  19. I am not clear on what exactly you are saying - can you pick one of the three options below and tell me which one represents what you are saying? I hope one of them does:)

    Are you saying that the sample answer "real Alex can't be mistaken because he has the requisite structures (microtubules and such)" is not circular even though you accept that they justify their theory by, among other things, assuming that people can't be mistaken about being conscious?

    Or, do you not accept that this is their assumption? Are you saying that they only assume they themselves can't be mistaken (what you call part one)?

    Or, are you saying that's not even the right sample answer, that their answer to my question is something totally different?

    Replies
    1. My guess is option 2, but I want to confirm so I can respond without accidentally strawmanning you.

    2. “Why can’t they be mistaken about Alex” is a claim about part two. The reason why this is unlikely will hinge on the support for a theory of consciousness which is ultimately based on part 1 evidence.

      I think my above post about the Searle argument lays it out more neatly and shows conclusively why your argument begs the question. Let me know if there is a premise you take issue with.

To make it clear: no one who is conscious can be mistaken about whether they are conscious. This is all about part 2, meaning our inferring whether other beings are conscious.



  20. After reading this, I think that there may be some unintentional equivocation going on here. Remember that you defined “thinking” in your own special way.

Notice that no one is arguing/needs to argue that WBE Alex can be mistaken about its own consciousness.

We all agree that it would know it is conscious. This is all about the inference from the WBE saying “I feel I am conscious” to it knowing it is actually conscious. Our rejecting premise 1b just means that we reject that inference and nothing more.

    Replies
    1. For some reason the above quote didn’t go through. It was this:

    2. Okay the formatting gods hate me. I give up on providing the relevant quote

    3. Also I meant to write: “we all agree that the WBE would know it was conscious if it was actually conscious”

      And not that it would know by default!

After re-reading your initial post (and seeing the changes you’ve made), it’s clear to me now that much of this confusion stems from an unintentional equivocation over the word “think”.

    Remember, I told you that philosophers use the word to mean the performing of a mental action (i.e. to have conscious awareness). So certainly Searle by fiat doesn’t think a WBE thinks. By redefining it to mean “evaluation of a statement” you can no longer claim that it is universally acknowledged that if the WBE “thinks it is conscious, then it is conscious”.

    What philosophers of mind mean when they say that is “if an entity feels/experiences firsthand its own consciousness, then it is conscious”. This is important because we can tell that the WBE evaluates statements to be true, but we can’t tell if it actually feels itself to be conscious. Therefore your defense of premise 1b no longer follows.

    It relies on a self-evident (analytic) phrase to justify itself, but by equivocating on the definition of “think” (in order to make sure it applies to the WBE) you strip the phrase of all self-evidentiality.

    Replies
    1. Thus the real answer to “why not Alex” is:

      Actually, it’s totally acceptable to doubt that real Alex is conscious on that basis alone, because using the evidence of real Alex saying “I feel I’m conscious” as the basis for him/me being conscious is not the standard Searle and others like him endorse.

      So it’s definitely not universally acknowledged that behavior of that sort is an indicator of consciousness (unless you’re a behavioralist/computationalist/functionalist), and so we have no problem.

      It’s definitely universally acknowledged that if you think you’re conscious then you are conscious, but like I said, this doesn’t mean what you think it means.

      I hope this clarifies much of the confusion. I did mention the same point in a much earlier post of mine, but perhaps this was missed.

      Best,

      Alex

  22. Hey Alex,

Things are becoming clearer. So, for now we are no longer considering the previous sample answer; you are giving a different one.

    1. I am not really equivocating on the expression "think blah", because I am always using it in the same sense, which I specified.
    2. Now, with this sense in mind, your new answer to my question for "1b-deniers", "Why can't Alex be mistaken about being conscious if WBE Alex can?" is:

    it is not because he thinks (in my sense) he is conscious. It's because of the widely acknowledged principle that “if an entity feels/experiences firsthand its own consciousness, then it is conscious”.

    My objection to the new answer: it is also begging the question! "Feeling/experiencing" already presupposes consciousness and therefore it is circular to say that Alex is definitely conscious because he feels blah blah.

    Replies
    1. Hey Dmitriy,

      "1. I am not really equivocating on the expression "think blah", because I am always using it in the same sense, which I specified."

      I meant you were equivocating a bit in implying in your initial post that the universal acknowledgement by philosophers that "if an entity thinks it is conscious then it is conscious" is applicable to your situation.

      "My objection to the new answer: it is also begging the question! "Feeling/experiencing" already presupposes consciousness and therefore it is circular to say that Alex is definitely conscious because he feels blah blah."

      No, this is a misunderstanding of what I was attempting to show. The point about "feeling I'm conscious" was to show why it is logically possible for a reasonable thinking entity to lack consciousness. This was important because, on the face of it, "I think I'm conscious, so I'm actually conscious" seems analytic, but under your definition it is not.

      Thus, when you were asking "why can't real Alex be confused/mistaken about its own self having consciousness?" this is answered by pointing out that if real Alex (or WBE Alex) were conscious then they could not be confused. In other words, I interpreted the question as asking whether, if entity A is reasonable but lacks consciousness, entity B, being equally reasonable, could also lack consciousness.

      And all I needed was a simple thought experiment of one lacking the "feeling of consciousness" to determine it was in fact possible. It was just an attempt to clarify that an entity "thinking" it was conscious (under your definition) but not being so is easily compatible with Searle's position.

      The real question, therefore, is why Searle thinks real Alex, and not WBE Alex, is conscious. And this is answered by his theory of consciousness and defeaters like the Chinese room. Such reasoning is neither circular nor ad hoc; instead, as I explained, it was based on part 1 evidence from Searle's own conscious experiences and auxiliary hypotheses like the fact that he has a brain which produces his consciousness (see the above posting for a methodology of a sample argument he might put forward). Presumably, we are also conscious, and therefore according to Searle we should use the same argument to arrive at the conclusion that real, but not WBE, Alex is conscious.

      Once we have done so we can now answer (at the very end) why we think real Alex isn't mistaken about his own consciousness (because we think he has that feeling) while nevertheless WBE can be (because reason is insufficient by itself). To simply claim that Searle and such people are being circular is to skip over that entire argument which I've made the focus of many posts attempting to explain.

  23. Hey Alex,

    I am having trouble understanding what I got wrong in summarizing the new answer to my challenge as:

    it is not because he thinks (in my sense) he is conscious. It's because of the widely acknowledged principle that “if an entity feels/experiences firsthand its own consciousness, then it is conscious”.

    Did I get your answer wrong? What's wrong with that summary? I don't see how it is different from what you say:

    "we can now answer (at the very end) why we think real Alex isn't mistaken about his own consciousness (because we think he has that feeling)".

    Replies
    1. And I can specifically address your version of Searle's argument, but first I think we should clarify the new answer - I really thought I summarized it correctly. Please fix it if that's not the case, hopefully preserving its concise form.

    2. You got it right. But you erroneously assumed that our belief in real Alex having that feeling was just because we started from the assumption that real Alex must be conscious (a circular argument in other words), when this was not the case.

      Rather the answer is that “we” have a theory of consciousness formed from our best available data; this theory tells us that real Alex should have this feeling but WBE Alex should not, and from there we infer that the former but not the latter is conscious.

  24. Ok, I think I see the problem - it seems your rejoinder treats Alex as some dude, some third person. No, Alex is one of those people to whom my challenge is directed. Do you see why that makes a big difference?

    Replies
    1. Alex built his theory of consciousness on what you call part one. Alex is attempting to deny 1b. Alex says: "WBE Alex can be mistaken about being conscious", I ask: "Then why can't you?"

    2. If Alex says: "Because I feel I am conscious, not just Dmitriy-think", then
      I say:

      Feeling already presupposes consciousness, so you haven't explained anything, you are essentially saying I know I am conscious because I am conscious of being conscious, which is circular.

      WBE also says he feels blah blah. If you deny 1b and insist that WBE Alex can be mistaken about feeling stuff, then why can't you?

    3. But I already addressed this with the part one stuff. If real Alex is conscious and listening to this (which he is, by the way) :) then he knows via the “feeling” that he is conscious.

      The supposition that WBE isn’t conscious, but thinks it is, is irrelevant because real Alex knows that reason is insufficient.

      Imagine if we had some magic method which could instantly tell us if we were conscious (e.g. magic genie), let’s call this method A. We agree that reason is insufficient to determine consciousness, but real Alex isn’t worried because real Alex has method A. Real Alex just doubts whether WBE Alex has access to method A and isn’t just faking it.

      For it just happens that method A is private (just like the feeling of consciousness), if you’ve got it the genie is super shy and you can’t show anyone that you actually do (although you can tell people about it). No matter how many times you attempt to show someone your genie, he will always be hidden to them but not to you.

      This is analogous to what we think the real situation is with consciousness. Now obviously we can make inferences about whether people have method A. For instance, I infer that using method A is personally always correlated with having part B (When I lack part B, I also lack method A); therefore I infer that all people with part B have method A. It just so happens that WBE Alex always lacks part B etc…

      *An example of lacking part B would be being in a deep sleep or drug-induced coma, and I notice this is correlated with no active brain wave states. WBE Alex doesn’t have active brain wave states but it has something like it. Unfortunately for WBE Alex, I have a strong defeater for the claim that “something like it” is good enough.

      You get the picture I think.

  25. About the circularity of the part one claim:
    “ Feeling already presupposes consciousness, so you haven't explained anything, you are essentially saying I know I am conscious because I am conscious of being conscious, which is circular.”

    This doesn’t follow. Like I said, our believing we are conscious is not based on any reasoning but rather on direct experience; hence there can be no charge of circularity, because there is no reasoning to be circular about in the first place.

    It is basically the difference between knowledge via direct acquaintance (e.g. I can taste it) vs reasoning. Note that we can’t provide any justification for whether we taste something outside of a justification by direct experience (I know because I can taste it) but this circularity does not/should not imperil our belief. Because again, knowledge comes in many forms (not just from reason).

  26. So can you be mistaken about having feelings/direct experience of blah blah?

    Replies
    1. No. You can only be mistaken about any inferences you draw from such experiences. For instance, I am experiencing a visual sensation of Dmitriy, and I infer that he is in the room with me. This inference can be mistaken (e.g. hallucination) but the experience cannot be.

      Of course my memories of the experience later on can be in error, but the experience itself cannot be mistaken (when I’m having it I know I have it).

      Note that my memories of having the experience can be in error precisely because I have to make an inference of reason from my experiencing the memory (right now) to there being an experience in the past which my memory is about.

    2. Also I should say that it’s certainly logically possible that we could be mistaken about our even having experience. So I’m not going to argue for the logical certainty of that, what I will say (and what most philosophers say) is that we should have the highest confidence in that type of knowledge. We’ve never observed that such a thing could be in error, but we have observed countless instances of our reason being in error.

  27. I’m glad that much of the confusion over what we were saying seems to have been cleared up. I will say that your WBE analogy can only work as an argument if one doubts that one is conscious in the first place. Since this is one of the most self-evident principles out there, this is quite a hefty price to pay (and something Searle doesn’t accept obviously).

    Can you reasonably doubt such a thing? Of course, you can reasonably doubt basically anything, even the laws of identity etc… At some point, however, you either have to believe that circularity is acceptable (a coherentist approach to justification), or that there must be a solid foundation that grounds all of our knowledge (foundationalism).

    Either way involves an unpalatable option of taking the knowledge of direct experience on faith (the latter approach) or putting forth a circular argument (the former approach).

  28. So let's call, for an entity x:

    FS(x) = x Feels Stuff / has direct experience of stuff / is consciously aware of stuff / has qualia
    TFS(x) = x thinks (in my sense) that x Feels Stuff.

    Now say we have a hundred entities that implement/simulate Alex on different substrates. You are one of them, your substrate is "carbon-based-brain-stuff". Let x run only over this set of Alexes.

    Clearly for all x TFS(x). I affirm 1b so I conclude that for all x FS(x). But you, or at least you playing the role of a defender of these other theories and 1b-denier, have a different take:

    You think: for some x [TFS(x) but not FS(x)], in other words some Alexes are mistaken about feeling stuff. So there's a group of deluded Alexes and a group of normal Alexes.

    You think you are normal, but of course deluded Alexes think they are normal. Can you point to any reason for thinking (and I keep using this word in my sense) that you don't belong to the deluded group? What breaks the symmetry between you and others?

    In other words, since, unlike me, you think that the deluded group is non-empty, and probably larger than the normal group, can you point to any reason for thinking

    Master Statement: I am not in the deluded group.

    Note that I don't face this challenge because I don't think there's any asymmetry, I don't think there are two groups.

    It seems from what you've written before that you would say:

    Master Justification: I am not in the deluded group because I feel stuff.

    Unfortunately, that just says "FS(me) because FS(me)".
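
    To make the structure explicit, here is the same point in bare logical form (this is only a restatement of what is above, using the FS/TFS predicates already defined; nothing new is being assumed):

    \[
    \textbf{My view:}\ \ \forall x\,\mathrm{TFS}(x),\quad \forall x\,(\mathrm{TFS}(x)\to \mathrm{FS}(x))\ \text{(1b)},\quad \therefore\ \forall x\,\mathrm{FS}(x)
    \]
    \[
    \textbf{1b-denier:}\ \ \forall x\,\mathrm{TFS}(x)\ \text{ but }\ \exists x\,(\mathrm{TFS}(x)\wedge \neg\mathrm{FS}(x))
    \]
    \[
    \textbf{Master Statement:}\ \ \neg(\mathrm{TFS}(\mathrm{me})\wedge \neg\mathrm{FS}(\mathrm{me}))
    \]

    The challenge, then, is to justify the Master Statement without circularity, given that every x in the set produces its "I feel stuff" evaluation by exactly the same process.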

    Replies
    1. I wrote that before reading your last comment. So it seems your position would be to just take it as an axiom that FS(me) but not FS(other Alexes), without any non-circular justification for such an asymmetry among Alexes. Is that correct?

    2. The concept of axiomaticity is interesting. I personally regard it as simply meaning "we agree on concept x and hence will no longer justify it further". Some concepts like the law of excluded middle, the law of non-contradiction etc... can be termed "widely axiomatic", meaning that almost everyone agrees that such concepts are not in need of justification/are self-evident.

      As such, the "widely axiomatic" concepts can be safely employed throughout the wider academic community and with most reasonable speakers. The belief in one's own consciousness is definitely considered "widely axiomatic" in that sense throughout the larger philosophical community, and indeed most academic circles.

      But we also have to square this concept of axiomaticity with our treatment of the natural languages. We often take them to be semantically closed, and so everything is, in principle, liable to the requirement of justification (including this principle). In other words, to call something "axiomatic" is really just a way to shut down debate, whether justified (we all agree that we don't need to debate this) or not.

      Saying that English isn't a semantically closed language and that we lack the languages to talk about our "axiomatic" stuff is not, in my opinion, a better approach. It's just shutting down the debate through external means ("we can't talk about this but I honestly wish we could").

      I suppose this is a matter of taste really, but I personally am willing to have a conversation about the justification of anything really. There are definitely philosophers whose job it is to challenge/talk about the supposed axiomaticity of the consciousness principle (e.g. foundational epistemologists), and I expound on this further in my post below.

  29. There are two possible approaches to take here. One approach endorsed by many philosophers is to make a distinction between knowledge by acquaintance and knowledge by description/inference. The other approach is to deny that we are "given" knowledge by acquaintance. See here for an excellent primer: https://plato.stanford.edu/entries/knowledge-acquaindescrip/#ConVieAcq

    The first approach basically holds that we are trying to get at something called "truth". One way to get at this is through the traditional way, knowledge through inference. We are all familiar with this method, or reason as we call it, and it seems reliable enough. The other method (still applied to the first approach) is the direct acquaintance method: I just "feel" something and am directly given knowledge of it.

    Notice that this knowledge is not derived from reason, I don't have to give any argument or justification for it because such knowledge is not comparable to the propositional elements in the discourse of reason. Hence, the proponent of the first approach charges that the person who asks for a justification of the knowledge acquired by acquaintance is committing a category mistake. It would be like saying "Justify that blue is" or "Justify that pencils make". It's both incoherent and inapplicable to the category of knowledge by acquaintance. So think of it as some form of special reason which doesn't depend on justification or other such similar things.

    The second approach is to deny that knowledge by acquaintance is special in any given way. This is problematic for those who want to say that they know that they are having an experience, because it seems we can't really justify this otherwise (except with a bunch of auxiliary hypotheses). The problem is that our auxiliary hypotheses (e.g. I have a brain) are in turn supposed to be based on our direct experiences, so if we abandon the first approach we quickly get circularity.

    That's why those who endorse the first approach are all foundationalists, and those who endorse the second are usually comfortable with circularity in reasoning (they are called coherentists). What you choose is up to you, but note that even if you pick the second approach you can't avoid the charge of circularity if you are a coherentist, and so it's no problem to accuse real Alex of engaging in a circular argument.

    On the other hand, you can adopt the second approach without being a coherentist (you just have to remain skeptical that you have knowledge), but then you either have to admit that you don't really know you are conscious and that you just take it on faith, or you take other things on faith. And it's a lot easier to take the assumption that I am conscious on faith than whatever justificatory reasons may be offered for it (e.g. I have a brain).

    That's because as I mentioned, we have plenty of experiences of inferences about the outside world being wrong, but no such experiences about our inner experiences being wrong (where we thought we were experiencing something but we actually weren't). Thus, I don't see how any of the combinations of the above methodologies (first and second approaches combined with foundationalist/coherentist/skeptic) are in any way going to make a profitable argument for you.

    The best, like I said, would be to be a skeptic and deny knowledge by acquaintance, but then are you really going to claim that it is easier to take the hypotheses that justify our direct experience on faith, versus direct experience on faith alone?

    Replies
    1. So all of this leads up to the point that what seemed initially like a reasonable assumption and a straightforward application of Occam’s razor (there is no asymmetry in the set of Alexes) is in fact anything but.

      That’s because this assumption posits that it is possible for you to be mistaken about your direct experience, but this brings on far harsher epistemological worries of the justification of your entire world view.

      All of your beliefs about the external world stem from justification based on your direct experience. Hence, you must assume you can be wrong about everything, in which case you should probably be more willing to take your basic beliefs (direct experience) on faith instead of the other hypotheses (the idea of course is to take as little as possible on faith). Or you adopt one of the above philosophical theories.

    2. Also please note that when I say “ That’s because this assumption posits that it is possible for you to be mistaken about…”

      I mean that the assumption in combination with the WBE rejoinder does so. The assumption by itself is harmless, but also doesn’t do anything to refute Searle’s Chinese room.

  30. You have convinced me that you can escape circularity, but that account has a related serious problem.

    Basically, we need to distinguish the statement that you feel stuff (SFS(me)) from the actual fact of you feeling stuff (FFS(me)). Then your knowledge by acquaintance reply is basically:

    My justification for the statement that I feel stuff is not some other statement, but the fact of me feeling stuff, i.e.

    My justification for SFS(me) is FFS(me).

    That's not circular. But there is a problem, the justification is inadequate because it's "fake". What do I mean?

    You happen to be right in evaluating SFS(me) as true (assuming FFS(me)). But, if 1b is rejected, then, like with Gettier cases, you are right by accident!

    More specifically, for the justification to be valid, we want TFS(me) (your evaluation of SFS(me) as true) to be causally connected to the thing you are using as the justification, FFS(me), but it's not!

    Why not? Because other Alexes replicate completely the chain of events in your brain that results in TFS, and yet they (if Searle is right and 1b is rejected) don't have FFS. Having or not having FFS apparently is immaterial for causing TFS in an Alex.

    Other Alexes are identical to you in terms of reasoning, yet they are wrong - this particular Alex reasons the same, he is right by accident.
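
    Schematically (again only restating the above; the label C(x), for the chain of brain events in Alex x that produces the evaluation, is mine and introduced just for this summary):

    \[
    C(x)\ \text{is identical for every Alex } x,\qquad C(x)\ \Rightarrow\ \mathrm{TFS}(x),\qquad \text{yet (per the 1b-denier)}\ \exists x\,\neg\mathrm{FFS}(x).
    \]

    So whether FFS holds makes no difference to whether TFS gets produced, and the justification "SFS(me) because FFS(me)" floats free of whatever actually makes you evaluate SFS(me) as true - which is exactly the Gettier-style "right by accident" worry.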

    Replies
    1. To be clear, 'me' is of course from your perspective, it refers to you. Sorry for the clumsiness.

  31. Hey Dmitriy,

    We are really getting into the philosophical weeds here! :)

    The following is going to be quite complicated, and I'm worried I didn't convey the concepts too well.

    I think your account of what the person who rejects the WBE rejoinder (e.g. Searle) must do assumes too much. It assumes a causal account of consciousness and it also assumes an external account of semantics (I'll get to this latter part later). To explain the former I'll have to talk about the link between FFS and TFS; while explaining the latter will require exploring the meaning of statements like SFS(me).

    To begin with the former, it's important to point out that the epiphenomenalist doesn't believe in any causal link between FFS and TFS. For Searle (who I assume is still an epiphenomenalist) and others like him, there is no need to use FFS to justify TFS. That's because we can be easily justified in believing both real and WBE Alex have TFS; we do so by pointing out that they are both intelligent creatures. Thus, Searle would simply reject the link/equivalence between TFS and SFS (I will explore this later in part two).

    By assuming that there is a link between the two you are implicitly buying into a causal account of consciousness. That said, as I previously mentioned, there are people who do believe in such causal accounts (which is possible under certain quantum-consciousness theories). So what about them, does your argument disprove their positions? The answer is still no, because such people are going to reject this part: "Because other Alexes replicate completely the chain of events in your brain that results in TFS" or more precisely, they are going to reject that a WBE Alex is behaviorally equivalent to real Alex.

    If you believe that lower level quantum events (i.e. consciousness) influence the higher level neuronal computational states in our brain, then it follows that a WBE of Alex, lacking the former, is not going to be behaviorally equivalent to the real Alex. So the quantum-consciousness theorist who doesn't subscribe to epiphenomenalism could readily say that WBE Alex isn't going to have a good understanding of consciousness. He will often be confused by statements like SFS, and we will be able to observe this confusion.

    Replies
    1. Part two:

      Okay, so we can see that the critic of the WBE analogy, like Searle, doesn't have to assume a causal link between FFS and TFS (and hence the charge of FFS being immaterial to TFS is not problematic). But what about statements about one's own consciousness; does epiphenomenalism render all such statements impotent? Your charge, after all, is that such statements fall afoul of Gettier's problem, meaning that my 'justification' of SFS is completely unreliable, and therefore I should have no confidence in it.

      The answer is going to depend on your semantic account, and what you think SFS "means". If you are a semantic internalist (like Searle), to justify SFS just means to have FFS. Thus, the semantic internalist rejects the FFS ->TFS -> SFS link, and instead puts forward a simpler FFS -> SFS justificatory link. In other words, the justification for SFS is "causally closed"; there's no way to justify statements like SFS (or any other proposition) by pointing to external phenomena (like TFS) and then relating the two.

      That's because the meaning of words/phrases/propositions lies in their having "content", and the having of content just means the having of experiences. So our conscious locus is causally impotent; it's completely incapable of syntactically relating the words and the behaviors and the external phenomena together - our brain does that for us. Instead, our consciousness attributes the meaning, and so meaning goes "over and above" the relation of thought (under your definition) to language.

      If this is true, it means that I can't assign the meaning for you; even your understanding of the above explanation doesn't guarantee that you grasp the meaning of the phrases, you would still need to have FFS be true for you. But this isn't a problem for the described theories of consciousness because all such theories stem from knowledge of FFS (what I called part one).

      So, in the end, the answer for the justification of SFS is that all justification is dependent on one having/grasping the meaning of phrases. But you can only do so if you have FFS. Thus, "true understanding" doesn't consist in using TFS (the evaluation of the statement) but rather in having/experiencing the phenomenal aspects (FFS) that SFS is about. So, according to Searle, WBEs and all other non-conscious intelligent things don't really have an understanding of the meaning of our language. They're just manipulating syntactic symbols in an intelligent way.

      It's this part which is the most important, and which refutes the assertion that Searle runs afoul of Gettier's problem. Since we don't use TFS to get at the meaning of/justify SFS (according to Searle), it doesn't matter that TFS is an unreliable indicator of FFS.

      Of course, if you're a materialist then you believe that you need TFS to have FFS in the first place, but as long as you accept that TFS isn't a sufficient condition for FFS, it's perfectly plausible for Searle to argue that WBE Alex can have TFS but not FFS. And also, to argue that he can reliably know/reason that WBE Alex lacks FFS, (through the above theories which are based on his having FFS).

      This is what I meant when I said that you were assuming an external account of semantics in your attempted refutation of Searle. Your criticism can only work if you reject Searle's internalist account of semantics.

      Addendum: Note that the above quantum-consciousness theorists can be semantic externalists.

    2. “if you're a materialist then you believe that you need TFS to have FFS in the first place”

      More precisely, you need an ability to think (under your definition) to have FFS.

      The above should give us an indicator as to why Searle’s semantic account is so closely tied to his philosophical theory of consciousness.

      Hopefully, it’s a bit clearer now why Searle uses the conclusions of the Chinese room (that it has a deficiency in grasping meaning) to justify the assertion that the Chinese room lacks consciousness.

      Also, perhaps this has created a better understanding of why epiphenomenalists like Chalmers can rationally believe in p-zombies. It’s because their semantic account isn’t tied to the external world or our thought processes (your definition again).

      Otherwise, it would be logically impossible for a behaviorally equivalent WBE Alex that thinks like me to lack justification for any statement about its own consciousness (presuming that I can justify it).

  32. In case the points about internalism weren’t so clear:

    Consider that if meaning is internal then obviously WBE Alex (who we suppose isn’t conscious) isn’t saying anything meaningful when asserting SFS(me). Therefore, real Alex isn’t in trouble. Remember, the point was that real Alex had to believe that an equally rational other Alex in the same position as himself evaluated the same statement SFS(me) erroneously.

    But if meaning is internal then WBE Alex didn’t truly grasp/understand what real Alex meant by the phrase. Hence, WBE Alex’s confusion about FFS being true for him doesn’t make it irrational for real Alex to believe the same thing (provided he has knowledge via direct acquaintance).

    Real Alex shouldn’t suppose that he could also be confused (to an equally likely degree) because real Alex thinks he knows what the phrase means and that WBE Alex doesn’t.

    Replies
    1. “because real Alex thinks he knows what the phrase means and that WBE Alex doesn’t”

      And of course, I don’t use the word “think” in the sense you’ve been using it (it and every other word will instead have an internal/mental meaning).

      This theory of semantic internalism might seem rather ad hoc, since we’ve basically said that a WBE Alex can’t say/mean any of the things that real Alex can by definition. But to be fair, some degree of ad-hocness is justifiable when required to support “widely axiomatic” statements like “I can’t be mistaken about being conscious”.

      One might level the same charge of ad-hocness at your Tarski-like account of the natural languages, which conveniently does away with the liar paradox. Since it is “widely axiomatic” that a statement can’t be both true and false, we feel more than justified in engaging in what might otherwise be interpreted as special pleading. It’s a small price to pay, in other words.

  33. Hey Alex,

    1. I define WBE as computationally replicating all brain function, so one part of your reply doesn't apply (the part about the WBE possibly being behaviorally different from real Alex). At worst you could perhaps say a WBE can't exist.

    But Searle's argument is that a WBE still wouldn't be conscious, so we are just granting that it can exist (which Searle would agree with) and arguing about its consciousness status.

    2. My argument does not presuppose non-epiphenomenalism (a causal theory). My recent point about causality was concerning *your* position: if you think that your justification of your evaluation of SFS as true is valid, then presumably you believe that the thing you are using to justify it, FFS, is causally connected to the thing you are justifying, to your evaluation of SFS. Otherwise it's just a "coincidence".

    But that belief seems to be in conflict with rejecting 1b, as I explained above.

    3. However, you deny that you should believe that in order for the justification to be valid. In other words, you deny:

    "More specifically, for the justification to be valid, we want TFS(me) (your evaluation of SFS(me) as true) to be causally connected to the thing you are using as the justification, FFS(me),"

    Or at least you say a semantic internalist would just deny it. First point about that: if Searle needs to rely on semantic internalism to address my objections to the Chinese Room then the argument already doesn't succeed, because it relies on a premise that is not widely accepted, nor intuitive, nor argued for in the Chinese Room argument itself.

    4. But is it so easy to deny that quote above? I don't think so, but let me address it a little later.

    Replies
    1. Ok, so you deny it by saying WBE has access to the syntax of SFS but not to semantics:

      >if meaning is internal then obviously WBE Alex (who we suppose isn’t conscious) isn’t saying anything meaningful when asserting SFS(me).

      So let's be careful and distinguish between sentences and statements / propositions. SeFS(x) will mean the sentence "x feels stuff", while StFS(x) will mean the meaning of that sentence in English, i.e. the statement / proposition that x feels stuff.

      You are saying, it seems, that all Alexes express SeFS and evaluate it the same as "2+2=4", both as true, but only you, the real Alex, evaluate StFS as true - because you have access to the realm of meanings / propositions and WBEs don't.

      But I think that doesn't help much:

      4a. If consciousness is produced by specific physical structures, let's say microtubules, then how does having microtubules connect you to the realm of propositions if simulated microtubules don't? If propositions were physical objects then that would make sense, but they are not.

      If we don't go to the route of dualism (and Searle doesn't) then it seems like an ad hoc and unclear idea - that somehow having some specific physical structures grants you, by an unclear mechanism, access to the magical land of unphysical things, propositions; while having the same structures implemented on a different substrate but otherwise functionally identical doesn't.

      This is all very bizarre. My position doesn't suffer from any of this weirdness and ad hoc-ness: propositions are abstract objects, they don't interact with anything, like numbers, so it doesn't matter whether it's WBE Alex or real Alex - both of their brains, which have by definition identical structures and dynamics, implement a language, a correspondence between sentences and internal representations (non-abstract brain patterns); these internal representations can be put into further correspondence with propositions. No magic needed.
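
      In other words, the picture I'm describing is just a composition of two correspondences (the names r and p are mine, introduced only for this illustration):

      \[
      r:\ \text{Sentences}\ \to\ \text{Brain patterns},\qquad p:\ \text{Brain patterns}\ \to\ \text{Propositions},\qquad \text{meaning}\ =\ p\circ r.
      \]

      Real Alex and WBE Alex share the same r by construction (identical structures and dynamics), and p is a purely abstract correspondence, so nothing about the substrate can break it.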

    2. 4b. We can even forget about StFS and just focus on something WBE definitely does - asserts SeFS. Your position would mean that you are justifying StFS by FFS, yet all of your physical, potentially observable internal and external dialog - you asserting SeFS - is causally disconnected from FFS, and apparently now even from StFS (if WBE Alexes don't have access to meanings/ propositions).

      Then what kind of justification is that, if it is completely causally disconnected from even you typing the words that express that justification?!

  34. Hey Dmitriy,

    "I define WBE as computationally replicating all brain function, so one part of your reply doesn't apply"

    Sure, as long as we realize that Searle is trying to include some WBEs (he'll allow that my clone is conscious) while excluding others (a digital brain simulation on my computer). The quantum causal theorist of consciousness believes that being a WBE is sufficient but would agree with Searle that a digital brain simulation is inadequate for consciousness.

    Remember that Searle's argument is against computationalism/functionalism, and the self-described quantum-consciousness theorist doesn't accept the above theories. That's because computationalism asserts that all computation is conscious, whereas quantum-consciousness asserts that only special computations (i.e. quantum ones) are. Therefore, Searle's rejoinder is still applicable in the causal cases. So we see that your WBE analogy doesn't refute Searle's argument in all areas.

    "first point about that: if Searle needs to rely on semantic internalism to address my objections to the Chinese Room then the argument already doesn't succeed because it relies on a premise that is not widely accepted, nor inuitive, nor argued for in the Chinese Room argument itself."

    Well of course it's not argued for in the Chinese room itself, that's meant to be just one of his many arguments (perhaps the most intuitive). Also, we often do buttress our points by relying on our individual perspectives (that would be almost impossible to avoid). For instance, in your earlier thread on the argument for EP (ensemble principle), your argument seemed to be contingent on our rejecting an objective probabilistic interpretation (if I'm remembering right).

    So yes, I agree the reliance definitely makes it weaker. But we already knew that the Chinese room wasn't meant to be a knockdown argument, but rather an intuition pump.

    That said, I'm not sure I agree with your assessment of semantic internalism, on the contrary I would say that semantic internalism is the most intuitive notion of semantics that exists and comports the best with how most laypeople think about it. Indeed, it was the most commonly accepted philosophical theory until about the 80's when the academic tide began to turn against it (most philosophers of language are semantic externalists of some sort).

    The reason for that is we often think that our meanings are internal to our mental life. For instance, imagine I was a brain in a vat. If meaning is external then what I call a 'car' is actually just the electrochemical impulse in my brain. Remember, meaning is supposed to be that thing in the external world I was referring to all along outside of my mental conception.

    Therefore, car really means "certain electrochemical impulse in my brain" unbeknownst to me. Heck if semantic externalism is true, we can't even be wrong about the following statement "I'm not a brain in a vat"!

    Because to us, "brain in a vat" internally means the scenario I described above, but externally it just refers to "more specific electrochemical impulses in my brain" triggered when I see a brain in a vat. Since we are not just those specific electrochemical impulses (we are the whole brain), it follows that we can't be brains in vats. Needless to say, none of this feels very intuitive.

    To be fair, what I described was a sort of broad content semantic externalist account (e.g. meaning is in the interaction between the brain and the world). We might instead say that meaning is just given by a particular brain state. But again, suppose that my mental understanding of what my brain is, is in fact totally mistaken. In that case, I would be totally wrong about the meaning of every word, because that meaning is given by having a brain state which I don't have (but thought I did). Again, very non-intuitive.

  35. Okay, let's finally get to the main argument:

    Your argument, it seems, isn't so much against semantic internalism as against epiphenomenalism itself: "yet all of your physical, potentially observable internal and external dialog - you asserting SeFS - is causally disconnected from FFS"

    Note that this is just the definition of epiphenomenalism. It's this claim which semantic internalism is actually about: "Your position would mean that you are justifying StFS by FFS".

    So my conscious locus is completely causally disconnected from my behavioral self. Therefore, we have two options. We can take meaning to be consciously internal, in which case some of what you said follows (I'll get to this later), which obviously seems unpalatable. On the other hand, if meaning is external and dependent on the relations between my brain and the world alone, then my conscious locus is still going to be impotent. Worse, we wouldn't even be able to talk about our own consciousness!

    Consider the following statement "I am conscious", what could such a statement even mean if meaning is just external? Presumably, we are just talking about the external world of brains and physical relations, and never actually talking about what we thought we were saying (consciousness). Thus, saying "I am conscious" is logically identical to saying something akin to "I have a brain". That makes this entire discussion a waste of time, since apparently this was all much ado about nothing (no one doubts that WBE Alex has a brain-like structure).

    If, on the other hand, you presume that there is such a thing as consciousness, and that it supervenes on our physical brain state (e.g. a brain state has certain physical and also mental properties), then presumably you believe that a statement like "I am conscious" makes sense, and that the phrase has some meaning outside the physical.

    Thus, we can see that the unpalatable consequences cut both ways. On the one hand (externalism), we should abandon all talk of consciousness, as people like Dennett have done, for it is a superfluous word at best - now we are starting to sound somewhat like those crazy philosophers I've just mentioned. On the other hand (internalism), we would have to accept that our ability to determine meaning is causally impotent with respect to the outside world.

    However, let me say something more to the defense of Searle's form of internalism. "Then what kind of justification is that, if it is completely causally disconnected from even you typing the words that express that justification?!"

    But this slightly misunderstands the position, since under Searle the words aren't expressing justification in and of themselves. Rather, it's our mental states which attribute meaning to the words and therefore justification. Note also that causal impotence doesn't really impugn the internal justification. That's because the justification of SFS(me) consists entirely of using an internal process; so the point about it being unreliable on account of WBE Alex using the same process but failing to get at the truth, doesn't really apply because it's not using the same internal process in the first place.

    Replies
    1. Onwards: "If we don't go to the route of dualism (and Searle doesn't) then it seems like an ad hoc and unclear idea - that somehow having some specific physical structures grants you, by an unclear mechanism, access to the magical land of unphysical things, propositions."

      But again this is slightly a misunderstanding, it's not that the semantic internalist position supposes that we have some magical access to propositions, rather it proposes to redefine propositional content to be internal. Hence, it's no mystery that we've got "access" to propositional content, because that just means the having of specific internal mental content!

      On the contrary, it is the externalist who faces a similar problem to what you've written, which is known as the "problem of privileged access" in the literature. Basically it's the issue I described above: if meaning is external then how can we have privileged access to our own thoughts and consciousness (i.e. direct experience), and moreover how could we talk about them? This would seem just as magical a connection as the one you wrote about above.

      Of course the simplest way to avoid this is to deny that there is direct experience outside of computation, but again this sounds a bit like the crazy philosopher approach...

      Does it seem magical that the brain and our consciousness are connected (non-causally) in the right way? Yes, of course it seems magical; it's not logically impossible, but it's enormously coincidental that my brain would just happen to give the above argument (which my conscious self thinks is about an internal phenomenon). But this is a problem inherent to epiphenomenalism which applies to both semantic internalism and externalism.

      Consider that our conscious selves are basically going to be helpless mutes under both scenarios, even under semantic internalism, my attribution of meaning is still "private". There's no guarantee that I'm in actual communication with you (outside the words on the page) because I don't know if you have a conscious self which attributes the same internal content that I do (assuming it even makes sense to suppose that two different conscious being can have the same/similar internal content).

    2. My personal intuition is that something is deeply wrong here with the entire concept of epiphenomenalism. I've struggled with this topic for quite some time, but haven't really arrived at a satisfactory answer (there are problems with the causal accounts as well).

  36. Also in case my first part about the quantum stuff wasn’t clear. Keep in mind that it’s still not analogous to functionalism. That’s because functionalism is a much broader theory which proposes that to be conscious is to have general intelligent functions. So the quantum theorist can agree that the digital clone is mostly intelligent, but still not equally functional to real Alex in all ways (it will be confused by statements about its own consciousness).

    The functionalist doesn’t have to suppose exactly equal functionality; general intelligence/function is enough to get consciousness. Also, the quantum theorist doesn’t have to suppose equal functionality either. Let’s say clone Alex had some serious brain damage: clone Alex is still conscious (and capable of talking about its consciousness) even though he can’t tie his shoes.

    So the quantum theory is about *specific* computation and implies a very specific function, whereas Searle is attempting to refute the general theories. Hence, it’s still appropriate to point out that the WBE example can’t refute the quantum theory, even though the quantum theory comports with Searle’s main argument (of course he is still an epiphenomenalist).

  37. Hey Alex,

    Wow, we have together produced a lot of content, I have a feeling the only people who would ever have the willingness and patience to read all of that are just us :)

    I think we should try to summarize the main ideas and maybe come to some conclusions. It seems you agree that there are some uncomfortable elements someone denying 1b would have to contend with, notably the causal disconnectedness of feeling stuff from any utterances about feeling stuff.

    But you believe that the unpalatable consequences cut both ways, that my position has them too. I couldn't quite figure out what my difficulty is supposed to be though.

    > Consider the following statement "I am conscious", what could such a statement even mean if meaning is just external?

    Both Searle and I agree that there's no magical ingredient, consciousness is just a property of some physical systems, brains in particular. It is an objective property of those systems, we just disagree what systems qualify.

    But when one says "I am conscious", one generally isn't talking about their physical structure. For comparison, water is H2O, but when we speak of water we generally are not talking about its chemical structure, we are just talking about the clear colorless liquid that is in rivers, lakes etc. With consciousness, one basically means that one is functioning in a certain way: not in a dreamless sleep, but receiving and processing external sensory input (sights, sounds) as well as internal input (thoughts, feelings). No particular mystery here.

    On my account, thoughts, feelings, etc are all generated by the rich structure of our brains, where the substrate is immaterial, it can be made of anything as long as it replicates the right dynamics. I don't see any unpalatable consequences I have to contend with.

    Replies
    1. Perhaps, if you like this topic, you could explain what unpalatable consequences my view entails in the form of another article, and I can then respond. That way we can actually produce content that more than just the two of us will read:)

      Ideally it would be self-contained, short, and structured. It could be a series of points, maybe with each point being a question that challenges me - kind of like I tried to pose a challenge to your view with "If WBE Alex can be mistaken, why can't Alex?"

      You already did pose something: what would a "car" mean to a brain in a vat? My answer: it doesn't mean some brain pattern, it means the same thing it means to us, an external physical object. It's just that the brain is mistaken if it thinks there's a car nearby, in the same way it is mistaken if it thinks it has a body.

    2. Hey,

      "Wow, we have together produced a lot of content, I have a feeling the only people who would ever have the willingness and patience to read all of that are just us :)"

      I would say welcome to the life of the academic, but you already know this! :)

      Also, do you even sleep Dmitriy? I seem to be catching you at all times of the day/night in conversation.

      "Both Searle and I agree that there's no magical ingredient, consciousness is just a property of some physical systems, brains in particular. It is an objective property of those systems, we just disagree what systems qualify."

      There's not just a disagreement on what systems qualify. Searle and Chalmers don't think that mental phenomena are just particular descriptions of certain physical phenomena (hence the p-zombie), they really think that the mental stands in its own special category.

      Searle, for instance, thinks that consciousness is ontologically irreducible to physical stuff. He's not a substance dualist though because he accepts that consciousness is causally reducible to the physical (it supervenes on the physical stuff). So there isn't conscious stuff out there which interacts in its own special way.

      Because of causal reducibility, we can use descriptions of physical systems, like brain function, to determine the right theories of consciousness. Because consciousness is ontologically irreducible (according to Searle/Chalmers etc...), we can conceive of p-zombies while also rejecting the materialist dictum.

      The hard materialist theories of the mind don't just entail that consciousness supervenes on the brain, but also that it is identical to physical stuff or physical properties (e.g. computational patterns). So if you're an externalist you have to accept that you can't talk about consciousness in the sense that Searle and Chalmers mean it, and this is what seems unpalatable.

      So this: "one basically means that one is functioning in a certain way: not in a dreamless sleep, but receiving and processing external...."

      Obviously can't be right since Searle and others reject precisely this! In other words, the word "consciousness" is meant to capture that aspect that Searle and others think they are talking about; this is the philosophically understood meaning. If you don't agree that there is such a thing, then you don't agree that there is consciousness (or at least you believe that we can't talk about it). That's fine of course, many philosophers think we can't actually talk about such stuff (like Daniel Dennett).

      Of course you might be trying to say that we're just trying to find the right definition for the 'mind'. But then what is the mind? The mind is just going to be the thing that we end up defining! And so this whole undertaking is (excuse the phrase) "just semantics".

      Unless you believe that the word "consciousness" already means something useful outside of established definitions about computational function, the term is vacuous outside of this context and such discussions are rather pointless.

      It's the difference between attempting to define blue as a wavelength (we already know that blue is a sensation invoked by a wavelength) and attempting to define grue (we don't know what it is, besides the fact that it is going to be a wavelength). In the blue case, we can compare our sensory stimulation with the wavelength, in the grue case there is nothing to compare and we are free to define the term how we wish.

      On the other hand, if you think consciousness is some kind of special phenomena, then we already know what it is in one sense, and therefore such conversations are quite meaningful. The Chinese room and most philosophical talk on consciousness is based on the presumption that you accept the above definition.

      Best,

      Alex

  38. Hey, sorry I missed your last comment. I think there is no need for that because I think we can quickly see that this discussion is approaching a disagreement over terms. I think you and Searle are just talking about different things when you say "consciousness". Note that the standard term is defined in the way Searle uses it. There are definitely materialists/physicalists out there who reject the notion that we have consciousness (in the sense described). It was probably uncharitable of me to call them crazy philosophers.

    It's definitely reasonable to be such a person and to reject the notion that talk about consciousness is meaningful in any way.

  39. I disagree that Searle and I are talking about different things when we say "consciousness". He thinks consciousness is entirely produced by our physical brains, but unlike other things it has a first person ontology in the sense that, unlike mountains or molecules, it can only exist if there's a subject experiencing it.

    All of that is totally fine with me. I am talking about the same thing, we just disagree on what physical systems can experience things.

    Some relevant quotes from Searle:

    "Once we see that consciousness is a biological phenomenon like any other, then it can be investigated neurobiologically. Consciousness is entirely caused by neurobiological processes and is realized in brain structures."

    "Conscious states only exist when they are experienced by some human or animal subject. In that sense, they are essentially subjective... Because conscious states are subjective in this sense, they have what I will call a first-person ontology, as opposed to the third-person ontology of mountains and molecules, which can exist even if no living creatures exist. Subjective conscious states have a first-person ontology (“ontology” here means mode of existence) because they only exist when they are experienced by some human or animal agent."

    All of that is perfectly consistent with what I mean by consciousness.

    Replies
    1. Thank god; I almost thought you were one of those crazy philosophers! :)

      It was this quote which threw me off, "With consciousness, one basically means that one is functioning in a certain way:"

      However, I want to clarify even further. It's not just that Searle accepts that consciousness has a first-person ontology; he also means that it has qualitative aspects (there is a feeling of what it is like to be Alex). Therefore, consciousness as defined is not logically identical to either our brain or computation or a functional kind or any other third-person physical phenomena we might think of. It may well turn out that consciousness is physically identical to computation (meaning that in our universe they always go together), but the terms still mean different things because they are descriptions of different stuff.

      That's important because first-person ontology doesn't just mean "dependent on the brain": we might imagine all kinds of stuff that is dependent on my brain but which is still not qualitative in the above described sense; these things would still be considered third-person. They are third-person because other people can access their contents.

      For instance, people can access my neural contents with some super technology and see what I'm thinking, therefore my "thoughts" are third-person. But they can't access my qualitative states, and therefore such states are first-person.

      Having said all of that, how can you still maintain that there is no problem for the semantic externalist? If you accept semantic externalism, then you accept that semantics is about third person ontology, (e.g. mountains, brains).

      Therefore, you can never get at or describe first-person phenomena. You can only describe intermediate stuff like the brain or mind (where mind here just means the computation). Since we think we can talk about consciousness, and since consciousness is epiphenomenal to meaning, we are in serious trouble.

    2. "It's not just that Searle accepts that consciousness has a first-person ontology"

      I meant not just in the sense you described, to have a first-person ontology is to exhibit all those qualities (e.g. dependent on the person; qualitative aspect) that I talked about.

  40. "If you accept semantic externalism, then you accept that semantics is about third person ontology, (e.g. mountains, brains)"

    A quick short proof to support this:

    1. If semantic externalism is true then meaning is determined by external phenomena. E.g. the question, "how do we know this word means x?" is solved by checking to see whether the usage of the word is linked in "the right way" to external activity.

    2. Consciousness is epiphenomenal

    3. Consciousness cannot affect external activity/phenomena (from 2)

    4. The word "consciousness" can only mean consciousness if the word was linked to conscious phenomena in "the right way"

    5. But it can't because of 3.

    6. We can't actually meaningfully talk about consciousness

    6 is meant to be the problem for accepting both semantic externalism and epiphenomenalism that I spoke of. Under internalism however, the question "how do we know this word means x?" is solved by seeing whether there is the correct mental (conscious) content that is linked to the usage of the word x. This process of checking the meaning is a purely internal/conscious process that Searle thinks is unavailable to WBE Alex.

    If internalism is true, we can see that the word "consciousness" is meaningful because there exists mental/conscious content that is linked "in the right way" to one's own use of the word "consciousness". I have put "linked in the right way" in quotation marks because one has to sketch out a fully developed theory that could explain how this process works for both internalism and externalism. Usually, however, there are some causal criteria which have to be satisfied for externalism, whereas internalism just requires a correlative mental/conscious state for the usage of the word.

    So internalism doesn't have the above problem, what it instead has is the problem of trying to explain the gap between syntax (what the brain does) and semantics (what our consciousness does). Because if epiphenomenalism is true, it seems miraculous that the two should be correlated. Under externalism, there is no gap because semantics is just syntax plus the relation to the external world (or even absent the latter in a more "narrow" theory).

    Replies
    1. On a totally unrelated side note, the above helps give an answer to the following quote of yours:

      "You already did pose something: what would a "car" mean to a brain in a vat? My answer: it doesn't mean some brain pattern, it means the same thing it means to us, an external physical object. It's just the brain is mistaken if it thinks there's a car nearby in the same way he is mistaken if it thinks it has a body."

      We can see that under semantic externalism, this can't be right. That's because the meaning of the words in the brain-in-the-vat's language is determined by checking their relations to the external world. Since the brain-in-the-vat doesn't have these relations (but thinks it does), its words aren't going to mean what it thinks they do. The only way to reject this is to suppose that its words mean what it actually thinks they do, but that's semantic internalism.

      Of course you can adopt a narrow version of semantic externalism, wherein "the words mean what the brain-in-the-vat thinks they do" just means "the words mean what the brain-in-the-vat's brain states correlate with".

      But I already refuted this when I pointed out that we can imagine the "brain-in-the-vat" doesn't even have a brain, or is radically mistaken about the nature of its brain; we'd still have to conclude that it is mistaken about the meaning of its own language. True semantic internalism must therefore involve association with one's own mental/conscious contents.

  41. Hey Alex,

    >consciousness as defined is not logically identical to either our brain or computation or a functional kind or any other third person physical phenomena we might think of.

    Of course, after all in some possible world there are ghosts, conscious but bodyless.

    How can we talk about consciousness under externalism? I am assuming, since this is a common view, that there are legitimate answers already advanced by people like Putnam, Kripke, etc. But to get into the weeds here would be outside the scope of this article, although - if this is an interesting topic - we can make a new article about it. Superbriefly, I would deny premises 1 and 2. Externalism doesn't say all meaning is external, it just says it's not all internal. And I don't think consciousness is epiphenomenal, any more than a check in chess is. And I don't think Searle thinks that either, he thinks that the mind most definitely affects the body, he talks about two levels of description but talking about the same thing - and here I completely agree with him.

    Replies
    1. Hey,

      "But to get into the weeds here would be outside the scope of this article, although - if this is an interesting topic - we can make a new article about it. "

      No need, we've already gone way beyond the original article!

      "How can we talk about consciousness under externalism? I am assuming, since this is a common view, that there are legitimate answers already advanced by people like Putnam, Kripke, etc."

      Not really, that's not to say there aren't attempted answers, just that there's no commonly accepted answer. This, after all, is part of the hard problem of consciousness, so it would be extremely unfair to hold this issue over the head of semantic externalism. Especially since the problem of epiphenomenalism cuts both ways so to speak.

      "Externalism doesn't say all meaning is external, it just says it's not all internal."

      Sure, it's definitely not an all or nothing dichotomy. But note that for the purpose of this discussion I was just talking about internalism or externalism relating to our talk about consciousness.

      Remember that your original claim was that Searle runs into trouble because WBE Alex and real Alex use the same process to justify SFS(me), which works for one but not the other, and that therefore the process is unreliable and runs afoul of Gettier's problem. If we adopt internalism for our talk about consciousness, our justification becomes a purely internal process, and we can avoid this problem.

      I hope I also showed that your arguments against internalism were really more so against epiphenomenalism, and also that it wasn't true that internalism is some super minority position which is completely unintuitive (I hope I showed this with the brain in the vat example).

      It's widely recognized that externalism means we can be completely wrong about our understanding of meaning, and Putnam even thinks that, for that reason, we can't be wrong about being brains in vats (I describe how this works in a post above), which is extremely non-intuitive!

      All of this is to say that Searle can plausibly maintain that WBE Alex is not conscious.

      About epiphenomenalism:

      "And I don't think consciousness is epiphenomenal, any more than a check in chess is. And I don't think Searle thinks that either, he thinks that the mind most definitely affects the body, he talks about two levels of description but talking about the same thing"

      We need to be careful here. Searle thinks that the problems associated with epiphenomenalism (like our not being able to talk about consciousness) aren't applicable to his theory for semantic reasons (too tedious to get into here). It is pretty controversial, though, whether Searle actually succeeds in dismissing the problems via his account of biological naturalism.

      In any case, that doesn't mean that he agrees that consciousness is causally efficacious. Like I said, he thinks that consciousness is causally reducible to neuronal physical states. That alone is sufficient to reject causal accounts of consciousness (like the quantum theory) as well as put him squarely in the camp that needs to argue that behavioral equivalence by WBE Alex is not a sufficient condition for consciousness.

      Finally, we should take my account of Searle with a grain of salt. This is the position that he has held in the past (best laid out in his book, Mind). I think as of late he has warmed up a lot to quantum theories of consciousness and changed his mind, but I'm definitely not an expert on Searle.

      So to sum up:

      1. You can rationally believe in Searle's Chinese room argument, if you subscribe to certain causal accounts of consciousness like the quantum theories.
      2. You can rationally believe in it, even if you accept an epiphenomenalist account of consciousness, as long as you believe in knowledge by direct acquaintance, and are a semantic internalist on talk about consciousness.

    2. "Searle thinks that the problem's associated with epiphenomenalism (like our not being able to talk about consciousness)"

      Here I mean the problem of how our conscious semantic states can be correlated with our syntactic brain states despite a causal gap.

    3. “ Externalism doesn't say all meaning is external, it just says it's not all internal”

      Oh, I just realized: in case you meant that externalism can accommodate internalism - yes, of course. Note that this isn’t mandatory though; some externalists (like Putnam) think that meaning is solely in the external environment.

      But this is immaterial because premise 1 is fully compatible with the notion that externalism can accommodate internalism (note that premise 1 doesn’t say “solely determined”).

  42. >we've already gone way beyond the original article!

    True dat:)

    >Remember that your original claim was that Searle runs into trouble because WBE Alex and real Alex use the same process to justify SFS(me), which works for one but not the other, and that therefore the process is unreliable and runs afoul of Gettier's problem. If we adopt internalism for our talk about consciousness, our justification becomes a purely internal process, and we can avoid this problem...All of this is to say that Searle can plausibly maintain that WBE Alex is not conscious.

    But remember that regardless of externalism vs internalism, Searle still has to contend with the fact that any of his internal or external dialog justifying SeFS(him) is apparently causally unconnected to him having FFS or even, under a particular theory of meaning, to him justifying StFS(him). In my view, this is completely unpalatable: it makes any measurable instantiation of his justification fake (even his internal voice justifying it to himself). For someone not believing in souls and such, believing consciousness is just another biological process produced by the brain, I think this is completely devastating: is he really going to believe that, despite all that, his talk about being conscious is unconnected to him being conscious?!

    >1. You can rationally believe in Searle's Chinese room argument, if you subscribe to certain causal accounts of consciousness like the quantum theories.

    But I mentioned or hinted earlier I think that we can simulate the quantum stuff too, on a classical computer. Whatever causally determines the behavior, if it follows laws, we can computationally simulate it.
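
    As a purely illustrative sketch of what such a simulation looks like (a toy example added here, with nothing brain-specific about it): a classical computer can track the state vector of a few qubits with ordinary linear algebra.

```python
# Toy illustration only: classically simulating a few qubits by tracking the state
# vector with plain linear algebra (NumPy).
import numpy as np

n = 3                                         # three qubits -> 2**3 = 8 complex amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(gate, qubit, state, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    ops = [np.eye(2)] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)              # build the full 2**n x 2**n operator
    return full @ state

state = apply_single_qubit_gate(H, 0, state, n)
print(np.round(np.abs(state)**2, 3))          # a 50/50 split between |000> and |100>
```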

    >2. You can rationally believe in it, even if you accept an epiphenomenalist account of consciousness, as long as you believe in knowledge by direct acquaintance, and are a semantic internalist on talk about consciousness.

    Only if you are prepared to accept the unpalatable consequences above.

  43. Hey Dmitriy,

    Sorry for the late reply.

    "But remember that regardless of externalism vs internalism, Searle still has to contend with the fact that any of his internal or external dialog justifying SeFS(him) is apparently causally unconnected to him having FFS or even, under a particular theory of meaning, to him justifying StFS(him). "

    Yes this is what I meant when I wrote that your critiques are more so against epiphenomenalism, and not so much against Searle's position. Like I said, epiphenomenalism "cuts both ways"; so it would be rather unfair to use such criticisms against Searle I think.

    Consider the argument for epiphenomenalism; it basically goes something like this:

    1. I think I'm consciously moving my hands / my body when I'm speaking.

    2. But every behavior that I attribute to my conscious self can be causally explained in terms of my neural state.

    3. Any causes attributed to my conscious self would therefore be completely overdetermined.

    4. Therefore, consciousness is epiphenomenal.

    Overdetermined just means that A and B both cause C to take place; meaning that if either A or B (but not both) were to disappear then C would still take place. The jump from 3 to 4 is basically a denial that consciousness represents a "genuine case" of overdetermination. This is justified in part by invoking Ockham's razor.
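
    As a toy illustration of overdetermination (just a sketch added for concreteness, anticipating the window example used later in this thread): the effect still occurs when either one of the two sufficient causes is removed.

```python
# A toy model of causal overdetermination: the effect still occurs
# when either one of the two sufficient causes is removed.
def window_breaks(rock_thrown: bool, ball_thrown: bool) -> bool:
    # either impact on its own is sufficient to break the window
    return rock_thrown or ball_thrown

print(window_breaks(rock_thrown=True,  ball_thrown=True))   # True: both causes present
print(window_breaks(rock_thrown=False, ball_thrown=True))   # True: still breaks without the rock
print(window_breaks(rock_thrown=True,  ball_thrown=False))  # True: still breaks without the ball
```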

    Consider that if we have a perfectly good explanation for 1 (and good evidence for that explanation) that doesn't invoke consciousness, then consciousness becomes redundant. Also, if consciousness were to causally affect my behavior "on top" of my physical neural state, we would have to believe in some sort of special mental connection to the world that seems supernatural.

    There are, as I see it, 3 ways to avoid this conclusion, each of which corresponds to a specific denial of one of the three premises. We have talked about all three ways in some form or another throughout our conversation, but I will explore them in a following post in greater detail. The above is a sufficient demonstration of why philosophers like Chalmers and Searle believe in the epiphenomenal nature of consciousness. As we will soon see, denying epiphenomenalism can carry some pretty unsavory consequences.

  44. The first way is to deny that we can meaningfully talk about consciousness. This is a denial of premise 1; I'm not thinking or talking "about" consciousness, because there's no causal link between consciousness and the real world (as required by semantic externalism).

    The second way is to hold that my neural state is logically identical with my conscious state, and so premise 3 is false. There is no causal overdetermination because I'm just saying the same thing (at different levels of explanation). I'm just repeating my previous statement about A causing C (there is no B).

    But no one really believes, except for those hard materialists that I earlier spoke of, that consciousness is logically identical with my neural state. For we could imagine (as you aptly put it) that we were like conscious ghosts, floating around in the world without a body.

    Searle himself tried to argue that the problem of epiphenomenalism "goes away" on his account, because causal talk about consciousness is just causal talk about neural states at a different level of description. This is similar to how the process of digestion is not epiphenomenal to the molecular activity in my digestive system, because I'm talking about the same stuff at a different level.

    The problem for Searle is that he is committed to the ontological irreducibility of consciousness. So I don't see how he can claim the above type of solution works for him, since he really does believe that my neural physical state and my consciousness are not the same "stuff". Hence, we can't be talking about the same stuff causally influencing another thing at a different level of description. It's not a case of us getting confused by our language of description; it seems that according to Searle himself we really are talking about different things here.

    The third way (denial of premise 2) is to adopt a causal account of consciousness (like those quantum theorists I spoke of). Needless to say, this is pretty controversial because it seems like we are getting into "supernatural territory" here. I will have more things to say in the defense of the quantum-consciousness theorist in the following post. For now, I hope what all of this shows is that my original points 1 and 2 still stand.

    It would be unfair to hang the problems of epiphenomenalism over Searle's head; there's really no way to avoid it unless you adopt the above seemingly unpalatable theories, or just avoid talk of consciousness. You can also deny the link between 3 and 4 by positing that consciousness really is causally overdetermined, but I already pointed out why that is hard to stomach (some philosophers doubt whether there even exist genuine cases of overdetermination in the world).

    Replies
    1. Now let me say some things in the defense of the causal quantum theorist. Such a theorist has to be committed to the notion that consciousness (as phenomenal stuff) can somehow influence the physical matter in the universe (say by causing wavefunction collapse).

      But the whole reason we find "supernatural" explanations of physical phenomena so unsatisfying in the first place is precisely because we have readily available physical explanations for the phenomena at hand. E.g. a ghost caused me to feel a sudden cold flash (no it was probably just an aberration in my nervous system).

      Notice that this was partly the justification for going from premise 3 to 4. Since we had a perfectly good explanation for my actions at the level of neurons, the "supernatural" seemed unpalatable. However, because quantum decoherence isn't so readily explained (we take the whole process to be stochastic), it follows that believing in a causal connection between conscious phenomenal states and lower-level electron states (or whatever the quantum theorists think happens) is undeserving of such criticism.

      It does mean that the quantum consciousness theorist has to postulate some new physical interaction in the world outside of the four fundamental forces, but remember that the theorist claims to make testable predictions. And this, I think, answers the following quote of yours: "But I mentioned or hinted earlier I think that we can simulate the quantum stuff too, on a classical computer. Whatever causally determines the behavior, if it follows laws, we can computationally simulate it."

      I'm not sure if it is theoretically possible to simulate every aspect of a quantum computer on a classical computational architecture, but even if it was, the quantum theorist of consciousness can always claim that non-quantum computers will have a more difficult time simulating consciousness.

      Meaning that a purely computational substrate, like a neural network, won't develop the thoughts or behavior associated with consciousness (e.g. making vigorous arguments for its own consciousness) unless its development was constrained by very specific initial conditions. Whereas a quantum computer interacting with a neural net might in general cases spontaneously develop such thoughts/behaviors.

      If true, the more "robust" nature of the latter is highly suggestive of an actual causal link between consciousness and physical behavior, whereas the heavy constraints associated with the former could be explained away as mere correlation.

      Now obviously there's no reason to think that this will end up being the case, but to be fair to the quantum theorists they certainly do have good independent arguments to justify their theory (and at least they have the decency to put forward actual testable predictions).

    2. "You can also deny the link between 3 and 4 by positing that consciousness really is causally overdetermined,"

      I also forgot to mention that taking this route would work for Searle, in the sense that he can claim that there is a causal link between his internal justificatory state and his external behavior/talk about consciousness. Of course I don't think it can work, but that also means that I don't think hanging the problem of epiphenomenalism on Searle's head is very charitable either.

      So I think that my original points 1 and 2 still stand.

  45. Hey Alex,

    There's a lot there, I'll try to be brief and address your main thesis: epiphenomenalism cuts both ways. I don't think so, and here my solution to the challenge you posed with those four premises is pretty much the same as Searle's.

    First, it's not true that he "believe[s] in the epiphenomenal nature of consciousness." He includes epiphenomenalism in his list of big mistakes people make about consciousness. Take a look at these segments from one of his talks:
    https://youtu.be/6nTQnvGxEXw?t=1145 (a few seconds, he specifically mentions epiphenomenalism)
    https://youtu.be/6nTQnvGxEXw?t=636 (about 4-5min)
    https://youtu.be/6nTQnvGxEXw?t=1050

    In these segments he not only rejects epiphenomenalism but basically gives a solution to the four premise challenge you posed that I agree with. The solution is basically one you already described, referencing two levels of description.

    Your objection to that solution was that Searle himself thinks consciousness is ontologically irreducible. First, that would be an objection to Searle, not to me:) But second, I think by irreducible he doesn't really mean what one might think. In one of those segments he explains a bit about it being irreducible.

    Replies
    1. Hey Dmitriy,

      Thanks for the clips.

      About Searle, yes, like I mentioned, he thinks the problems of epiphenomenalism aren’t applicable to his account (biological naturalism). I don’t think it works at all, but it was still unfair of me to call him an epiphenomenalist.

      On another note, you say you don’t think the problem “cuts both ways”, but I don’t see any support for that statement (or how it’s relevant to our discussion).

      To say that it cuts both ways, is just to say (quoting you):
      “that regardless of externalism vs internalism, Searle still has to contend with the fact that…his…dialog…is apparently causally unconnected”

      And, I might add, that his actions and behavior are as well.

      In any case, if you think his solution works then obviously you don’t think there’s a problem of causal disconnection, and hence don’t buy into the above statement of yours. I therefore take it you agree with me that people like Searle can rationally maintain their stance in spite of such criticisms.

      So, the above posts should be construed as a response to your last criticism, which is only applicable to the person who believes in the problems of epiphenomenalism. Since you apparently don’t subscribe to such criticisms, we should just read my posts as a reply to the potential reader who might.

      In those posts I’ve laid out the case for why we might think this problem exists, in the hopes of showing that any criticism directed towards the person who rejects your WBE analogy on those grounds is uncharitable. So forget Searle for the moment.

      If you want, I can address Searle’s position in more detail (since you say you also subscribe to it), but I fear sidetracking.

    2. “In these segments he not only rejects epiphenomenalism but basically gives a solution to the four premise challenge you posed that I agree with“

      Wait, did you mean to agree with his solution or with my challenge? :)

      I’ve been assuming the former.

  46. Hey Alex,

    I didn't quite follow all of what you wrote. My point was this:
    1. A defender of the Chinese Room argument would have to, as my argument shows, contend with some degree of causal disconnection between consciousness and behavior
    2. Such disconnection (epiphenomenalism or something like it) is unpalatable. (Searle agrees with this)
    3. My account, unlike that of the Chinese Room defender's, doesn't entail such a disconnect. That's what I mean when I say the problem of epiphenomenalism doesn't cut both ways.
    4. How can I avoid epiphenomenalism? By adopting Searle's explanation of the two levels.

    So then I am claiming Searle's position is inconsistent: he has a good solution for rejecting epiphenomenalism, and he (rightly) rejects it, BUT: his position on the Chinese Room implies a form of epiphenomenalism.

    My position seems to me to be consistent and successfully avoids epiphenomenalism.

    Replies
    1. Hey,

      Okay, I see what you're driving at. I must have missed your argument for why you think the defense of the Chinese room entails epiphenomenalism.

      Parts of my above postings are still relevant here, as I've tried to show that the reasons for believing in epiphenomenalism hinge on the argument I put forward. They are not dependent in any way on semantic internalism/externalism or one's beliefs about the Chinese room. Anyone who adopts any of the four solutions I've outlined above can get out of the problems associated with epiphenomenalism.

      I don't see how you could have refuted this. The only attempt that I've noticed was your rebuttal against the causal theorists (who reject my premise 2) when you tried to show that your WBE rejoinder still applies to them. I think I showed why your rebuttal doesn't work, but even if it did there are still three other options to pick from.

      "So then I am claiming Searle's position is inconsistent: he has a good solution for rejecting epiphenomenalism, and he (rightly) rejects it"

      ?
      What makes you think that he rejects his solution? Even if you have a knockdown argument for why any defense of the Chinese room entails epiphenomenalism, it's not like Searle is aware of it.

  47. There's some miscommunication here:
    1. I wasn't saying Searle rejects his own solution, that would be weird. He rejects epiphenomenalism. His solution tells us one way to avoid your four premise argument for epiphenomenalism, and I agree with it.

    2. "I don't see how you could have refuted this. "
    I wasn't refuting this, I do support one of the solutions, Searle's.

    So, I basically agree with Searle on everything except his conclusion about the Chinese Room. His conclusion leads, as I thought we had agreed, to a causal disconnect between consciousness and behavior, and is therefore inconsistent with his rejection of epiphenomenalism.

    By the way, I have done a major revamp of my defense of 1b in the main post, based on our conversation.

    Replies
    1. “ His conclusion leads, as I thought we had agreed, to a causal disconnect between consciousness and behavior”

      We are definitely not in agreement here, can you point me to where exactly you make this argument? Or just give me your best summary of why you think this is the case.

      What I did say many times is that your critiques against Searle were more so against epiphenomenalism. I previously assumed, in other words, that you had arrived at the conclusion that epiphenomenalism is true precisely because adopting one of the four solutions was unpalatable for you.

      My point was just that it would be uncharitable to level any critiques against Searle (or the Chinese room defender) on these grounds, because the arguments for epiphenomenalism are independent of the arguments for the Chinese room (at least I’ve seen nothing to dispute this).

  48. Okay I’ve read your revamped defense of 1b and think I see why you believe that the defense of the Chinese room entails epiphenomenalism. Let me know if this is not the case and you have some other argument.

    “Let's try to assume it thinks/says it's conscious, while not actually conscious, and flesh out the consequences.

    An immediate consequence is that forming the brain patterns for thinking/saying it's conscious happens perfectly fine without actual consciousness.”

    But this doesn’t follow (at least not in our case); all that follows from the first part is that it is possible to think/say you are conscious without having a causal link to consciousness. It doesn’t therefore follow that we ourselves don’t have a causal link between our consciousness and our behavior.

    For instance, you could:

    A) Be a causal quantum theorist, in which case you think that there is a causal link between our consciousness and our behavior. The explanation for a computer doing the same behavior is just mere correlation and not causation (I explain how we can give a testable prediction to differentiate the two in my above posts).

    B) You can believe in causal overdetermination. In which case both my consciousness and my neural outputs causally produce the same behavior, hence it’s no surprise that I could still have the same behavior while missing consciousness.

    C) Do all the other things I spoke of, one of which includes adopting Searle’s approach (which I actually think is logically incoherent but never mind).

    Replies
    1. Tl;dr

      Accepting the Chinese room argument means you think an entity can be behaviorally equivalent to a conscious entity without it being conscious.

      But it doesn’t follow from this that behavior has no causal connection to consciousness.

      I.e. showing that a rock (neural net) could have broken my window (produced equivalent behavior) doesn’t mean that my window didn’t break because of some other object (consciousness).

  49. >What I did say many times is that your critiques against Searle were more so against epiphenomenalism. I previously assumed, in other words, that you had arrived at the conclusion that epiphenomenalism is true

    You thought I was critiquing epiphenomenalism while believing it is true? I am confused:) In any case, do you feel you now understand the logical structure of my argument? Just to make sure, it's this: epiphenomenalism is false, but denying 1b leads to epiphenomenalism.

    >We are definitely not in agreement here, can you point me to where exactly you make this argument?
    After revamping my article it is now there.

    Replies
    1. I thought you might have believed epiphenomenalism to be true, but just didn’t realize that your critiques against Searle were actually critiques against epiphenomenalism.

      In other words, I thought that you thought the problems of epiphenomenalism were somehow only applicable to the semantic internalist. That’s how I interpreted your insistence that “it doesn’t cut both ways”

    2. To make it clearer:

      I thought that you thought that I thought you might have been thinking some things about what I thought…..


      Did you get all that?

      :)

  50. I don't follow the logic of A, what do you mean that it could just be correlation? Remember, we are simulating as deeply as necessary to reproduce how neurons trigger each other, to the point that we can for example predict the future dynamics of the brain. Of course maybe it's just not possible, for example maybe the soul can interfere with neurons, but that's irrelevant to the Chinese Room argument - it says that even if simulation was possible it would not reproduce consciousness. So we are presupposing that WBE is possible.

    B I basically just reject without a specific argument: it's just bizarre, violates Occam's razor, etc. So, if you'd like, it could just be one of the underlying intuitive assumptions, shared I would guess by most philosophers.

    C I don't get: the solutions we talked about are for how to avoid the four premise argument for epiphenomenalism you gave. Searle's solution for example can't help him if we already established that both I and my simulation share the same reason for our behavior - if he wants to believe that my simulation is not conscious then that common reason doesn't involve consciousness, which automatically means consciousness is not a necessary ingredient in my behavior, and (bracketing B) we get epiphenomenalism.

    For clarity, can you articulate a specific example of C, one that allows us to assume a perfect simulation is possible while avoiding the conclusion that consciousness is epiphenomenal?

    Replies
    1. Let's clarify two things; on the one hand we have the issue of whether "defending the Chinese room entails epiphenomenalism". I think I showed this argument fails because "Accepting the Chinese room argument means you think an entity can be behaviorally equivalent to a conscious entity without it being conscious." But it doesn’t follow from this that behavior has no causal connection to consciousness.

      On the other hand, we have the issue of the four solutions I discussed. Previously you mentioned you weren't interested in refuting them, but now it seems like you are attempting to give arguments against them (e.g. B violates Ockham's razor). If this is so, just remember that refuting the solutions against epiphenomenalism doesn't in any way make the WBE argument better. We would just be stuck with a causal disconnect and have to deal with it.

      Moving on:

      "I don't follow the logic of A, what do you mean that it could just be correlation? Remember, we are simulating as deeply as necessary to reproduce how neurons trigger each other"

      I mean that in the case of a computer which perfectly replicates my behavior, the perceived causal link between its consciousness and behavior could just be mere correlation (i.e. it's not actually conscious).

      "Remember, we are simulating as deeply as necessary.... Of course maybe it's just not possible...but that's irrelevant to the Chinese Room argument - it says that even if simulation was possible it would not reproduce consciousness. So we are presupposing that WBE is possible."

      It doesn't follow from (even if simulation was possible it would not reproduce consciousness) that (we are presupposing that WBE [Alex] is possible). If A is true, then it won't actually be the case that WBE Alex is behaviorally equivalent, because real Alex needs his conscious self to develop his full behavior.

      In response, you pointed out that we could simulate every quantum computation on a classical architecture. While this might be true, this just means that we can construct a computer which perfectly replicates real Alex's behavior, it doesn't mean that WBE Alex (who lacks the quantum stuff) also does the same!

      So accepting A means that you have to accept potential behavioral equivalence between conscious and non-conscious entities, but that doesn't mean that you have to accept it between WBE Alex and real Alex.

      "Searle's solution for example can't help him if we already established that both I and my simulation share the same reason for our behavior"

      But we haven't! Like I said, there is a logical gap between "A computer can replicate my behavior" and "Consciousness is epiphenomenal".

      However, since you asked how Searle's solution might work, let me try to put myself in his shoes and pretend for a moment that I think his solution can do away with the problem. In that case, we would find that WBE and real Alex actually are behaviorally equivalent. Nevertheless, because talk about conscious causation is just talk about neural causation at a different level, it follows that there is no overdetermination and hence no possible problem.

      At the same time, since consciousness is causal, Searle can assert that he has internal justification for his beliefs in his theories of consciousness which excludes WBE Alex from consciousness. He doesn't run afoul of the "how can he justify this if there is no causal link between..." objection that you previously brought up.

      Presumably you believe this is valid since you buy into his solution.

      As for C, I can't think of any palatable option, but again that's just a problem pertaining to epiphenomenalism itself.

    2. “because talk about conscious causation is just talk about neural causation at a different level”

      I meant that Searle thinks that it’s just talk about the causation of real Alex’s biological neural state.

    3. “ It doesn't follow from (even if simulation was possible it would not reproduce consciousness) that (we are presupposing that WBE [Alex] is possible)”

      I also obviously meant that it doesn’t follow that WBE Alex is behaviorally equivalent.

  51. >Accepting the Chinese room argument means you think an entity can be behaviorally equivalent to a conscious entity without it being conscious.

    Yes, exactly.

    >But it doesn’t follow from this that behavior has no causal connection to consciousness.
    I.e. showing that a rock (neural net) could have broken my window (produced equivalent behavior) doesn’t mean that my window didn’t break because of some other object (consciousness).

    Right, you are saying maybe it's causal overdetermination, which I reject, along with Searle and most other philosophers. But the point is if you reject 1b, then you arrive at the conclusion that consciousness is not needed for your neural net to think and say it's conscious.

    And so the price for rejecting 1b is the unpalatable epiphenomenalism or the unpalatable causal overdetermination.

    Replies
    1. Oh, and just to reiterate why they are unpalatable - because I, along with most people I would think, accept the premise that the reason we think and say we are conscious is specifically because we are conscious, that consciousness plays a vital role in us thinking / saying we have it.

  52. >I thought that you thought that I thought you might have been thinking some things about what I thought…..
    Did you get all that?

    I thought I did, but I think I thought wrong:)

  53. "But the point is if you reject 1b, then you arrive at the conclusion that consciousness is not needed for your neural net to think and say it's conscious."

    I think this is the key disagreement. The causal quantum theorist can reject 1b on account of their thinking that a computer thinking/saying it is conscious doesn't mean it actually is. But that doesn't also mean that the above theorist accepts that WBE Alex is behaviorally equivalent.

    Replies
      I probably didn't read 1b as closely as I should have. You obviously meant it to be about just WBE Alex, whereas I interpreted it as a more general case of (if a reasonable entity thinks it is conscious, then it is). In any case, I don't think that adopting the Chinese room argument means that you have to deny premise 1b.

    2. *Doesn't mean that you have to deny premise 1b in the sense you think of it.

  54. In case my point about Searle’s defense wasn’t so clear.

    Basically, his line of argumentation supposes that we are speaking of the same thing/event/state x and y when we are talking about causation. Hence, since x and y are the same thing at different levels of description, there can be no causal overdetermination when we attribute causes to consciousness (so there’s no epiphenomenalism).

    It doesn’t follow that the above reasoning can only work if you accept the WBE rejoinder, because what you choose to substitute for x and y is up to you. Searle is going to say that y is consciousness and x is the biological neural state (not just any neural state). And so he doesn’t have to buy that the behavioral equivalence of WBE Alex implies consciousness.

    Replies
    1. Specifically, Searle thinks that consciousness is the same thing as the neural “state” of the brain (it’s a feature of the brain). It’s not ontologically reducible to the brain, but it is a state of it.

      It’s not ontologically reducible because consciousness isn’t physical. And it’s not physical because a state/feature of the brain is an arbitrary designation. I.e. there is no such thing as “a state of the brain” in nature. Rather, it’s something that we arbitrarily recognize/impose.

      This is similar to the mapping problem I previously described in computationalism. What we choose to call a computer is based on subjective criteria according to Searle; since we can map any computational relations to any physical process (like a rock). This, incidentally, is another reason Searle thinks computationalism is bunk.

      I am trying to be as charitable to Searle as I can possibly be, but it’s difficult when I think the argument (both arguments actually) doesn’t/don’t work.

  55. Hey Alex,

    >I think this is the key disagreement. The causal quantum theorist can reject 1b on account of their thinking that a computer thinking/saying it is conscious doesn't mean it actually is.

    That's just a denial of 1b, it's not an objection to the argument showing the unpalatable consequences of denying 1b.

    >But that doesn't also mean that the above theorist accepts that WBE Alex is behaviorally equivalent.

    WBE is *by definition* behaviorally equivalent.

    >Doesn't mean that you have to deny premise 1b in the sense you think of it.

    I am not sure what you mean. 1a and 1b logically imply the conclusion.

    >It doesn’t follow that the above reasoning can only work if you accept the WBE rejoinder, because what you choose to substitute for x and y is up to you. Searle is going to say that y is consciousness and x is the biological neural state (not just any neural state). And so he doesn’t have to buy that the behavioral equivalence of WBE Alex implies consciousness.

    The objection and response about the parabola in the main text answers this part.

    Replies
    1. Hey,

      “ WBE is *by definition* behaviorally equivalent.”

      Okay fine. It doesn’t mean that WBE Alex has to exist!

      “ I am not sure what you mean. 1a and 1b logically imply the conclusion”

      No they do not! The argument is not sound. Remember that the conclusion you are trying to establish is “simulation implies consciousness”. To get there for the causal quantum theorist, we would need to show that behavioral equivalence to a conscious entity is sufficient for consciousness.

      The causal quantum theorist doesn’t believe this, and doesn’t believe that a whole brain emulation of Alex (which is not behaviorally equivalent) is conscious.* What you would need to refute the causal quantum theorist is some knockdown argument for how/why a whole brain emulation of Alex is behaviorally equivalent.

      *(Confusingly, you define WBE Alex as behaviorally equivalent. Whereas a whole brain emulation of me isn’t by definitional fiat)

      But you lack this argument, or at least you lack the means to make this argument unless you already presume that the causal quantum theorist is wrong from the get-go.

      About Searle:

      I’m not sure I understand the parabola analogy. It’s basically someone just reiterating the “simulation is not sufficient for consciousness” objection that you were responding to. I.e. they are just asserting that this analogy is comparable to the situation, without argument.

      Let’s back up for a moment. Remember that it was your claim that epiphenomenalism doesn’t cut both ways for you and Searle which brought us here. Meaning that the problem can only be solved by Searle’s solution in your case. Accepting the solution somehow means that you have to accept that Searle’s “simulation of computation is not consciousness” objection is bunk.

      I was showing this doesn’t follow because Searle’s solution is position relative. You can choose to substitute anything you want for x (where y is consciousness). Since Searle says x is neurobiological stuff, it follows that accepting his solution doesn’t compel him to accept your argument as you originally claimed/needed to claim.

      Obviously, Searle has arguments for why x needs to be the way it does (in part backed by the Chinese room). Hence, I don’t see how refuting the parabola analogy (where the person simply makes Searle’s case without argument) helps refute the above in any way.

    2. “The causal quantum theorist doesn’t believe this“

      Doesn’t believe that simulation of computation implies consciousness I mean.

      Before, you claimed that we are basically assuming a whole brain emulation of Alex has to be behaviorally equivalent to me. Although Searle would grant this, it obviously doesn’t follow unless you believe that a whole brain emulation of me (I must say it’s annoying that I can’t write WBE Alex because of the way you defined it! Ahem) is going to be computationally equivalent.

      The causal quantum theorists don’t accept this principle, hence what you say doesn’t follow. Let me reiterate yet again: “In response, you pointed out that we could simulate every quantum computation on a classical architecture. While this might be true, this just means that we can construct a computer which perfectly replicates real Alex's behavior, it doesn't mean that WBE Alex (who lacks the quantum stuff) also does the same!”

      So you are lacking the case for why a whole brain emulation of me must be behaviorally equivalent.

    3. “I am not sure what you mean. 1a and 1b logically imply the conclusion”

      “No they do not! “

      Like I said, I just woke up today!

  56. “That's just a denial of 1b, it's not an objection to the argument showing the unpalatable consequences of denying 1b.“

    But I already showed that your argument for the unpalatable consequences (I.e. epiphenomenalism) only works if you reject my four points or somehow show that acceptance of them is contingent on accepting your position.

    One of those four points was the approach the causal quantum theorist took, hence the burden of proof was on you to show that their case failed. You needed to do this precisely to make your unpalatable consequences come true in the first place!

    So simply saying that they reject premise 1b is not good enough! That’s what I meant when I said you can’t refute them unless you already assume they are incorrect in the first place (circular). That’s because they are making empirical claims, and we don’t have enough data yet to determine who’s right.

    Replies
    1. P.S. Sorry if I sound a little grumpy this morning. I just woke up and my coffee machine is broken.

      :)

    2. *They would actually be rejecting premise 1a

  57. Tl;Dr:

    The burden of proof is on you to show that rejecting your argument entails unpalatable consequences (basically epiphenomenalism).

    You failed to show this for the causal quantum theorists because, while they

    A) accept that an entity can be behaviorally equivalent to me without being conscious,

    they still think that

    B) a whole brain emulation of me isn't behaviorally equivalent. So they reject premise 1a. Their argument is that the quantum stuff in our brains (e.g. microtubules) influences our computation/behavior.

    You also failed to show this for Searle, because you need an argument for why Searle's solution to epiphenomenalism can't work for his own position.

    Replies
    1. “So they reject premise 1a.“

      Therefore, they don’t have to accept causal overdetermination like you were implying (the other unpalatable consequence)

  58. I know I've written a lot on this topic. But I still feel I need to clarify what I previously wrote so as to avoid any further potential misinterpretations (plus everything I wrote was quite scattered all over the place).

    To start from the beginning, we seem to have agreed that there is a logical gap in your argument. The conclusion that "Therefore, a perfect computer simulation of me is conscious." does not by itself refute Searle's dictum that "Simulating (the right kind of) computation is sufficient for consciousness". Searle's point is that there is no right kind of computation which is sufficient (he explores various kinds of computation in many Chinese room variants) because it's all just syntactic manipulation.

    What we need is some kind of additional step from (if my simulated self is conscious) to "my simulated self was conscious, precisely because it had the right kind of computational element". I hope you can see why this is vitally important. If you wish to define WBE Alex as behaviorally/computationally equivalent by definitional fiat, it follows that the causal quantum theorist is going to be in disagreement with you over what kind of properties WBE Alex is going to exhibit.

    The causal quantum theorist is going to argue that WBE Alex needs to exhibit quantum components like microtubules. This is important because if WBE Alex has the right sort of quantum processing, it no longer follows that we can refute Searle's dictum by saying WBE Alex is conscious. Why? Because any classical computer can simulate real Alex's behavior/computation, hence refuting Searle's dictum implies that such computers have to be conscious too. Since they emulate every computational aspect in my brain, they clearly have the right kind of computation.

    But we've already established that believing that (a WBE Alex which has the right quantum stuff) is conscious, in no way implies that you have to buy that any "classical computer [which] can simulate real Alex's behavior/computation" is going to be conscious. Hence, it follows that if the causal quantum theorists are right, your argument proving WBE Alex's consciousness (which has the right quantum stuff) is now going to be insufficient to refute Searle.

    It also follows that your attempt to show unpalatable consequences for the causal quantum theorist fails, on account of them neither having to accept epiphenomenalism (they think our consciousness causally determines our behavior) nor overdetermination (they reject premise 1a).

    What about Searle's position? Well, you made no argument as to why accepting Searle's two levels of description explanation for epiphenomenalism is contingent on refuting Searle's dictum. Remember, you can't just reject his argument because you also rely on his same solution. You have to specifically show that one is contingent on the other.

    In the above posts I showed that this (contingency) can't be the case, because Searle's argument against epiphenomenalism is fully compatible with his position (and with yours).

    I hope this has all made things clearer.

    Replies
    1. "it follows that the causal quantum theorist is going to be in disagreement with you over what kind of properties WBE Alex is going to exhibit."

      Presuming of course that WBE Alex is constructed by replicating my brain to the degree of detail required to achieve the described computational/behavioral equivalence.

    2. "Simulating (the right kind of) computation is sufficient for consciousness"

      I obviously meant insufficient.

    3. To be clear: the causal quantum theorist is going to assert that the classical computer (lacking the quantum stuff) which perfectly emulates my computational functions, does so through some means other than just replicating my brain.

      According to them, replicating brain functions on a classical computer is insufficient for behavioral equivalence. So yes, a computer can perfectly simulate me, but only through adopting some other means (replicating quantum computation on a classical computer is really difficult, and not just a matter of simulating a neural net).

      Further, the theorist can assert that these “other means” can only come about if we rig the program in a very specific way. On the other hand, such precise “rigging” is not needed if you have the quantum stuff.

      If this is true, that would give us confidence that there is a real causal link between our consciousness and our behavior, but just mere correlation for the computer.

    4. In other words,

      A classical and quantum computer can do the same functions, but they implement these functions using very different means. It would take way more computational resources to implement the quantum stuff on a classical level for example.
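
      As a rough back-of-the-envelope sketch of that resource gap (illustrative numbers only, not anything from the quantum theories themselves): a classical state-vector simulation stores one complex amplitude per basis state, which grows exponentially with the number of qubits.

```python
# Rough illustration: memory needed to hold a full n-qubit state vector classically.
for n_qubits in (10, 30, 50):
    amplitudes = 2 ** n_qubits          # one complex amplitude per basis state
    gigabytes = amplitudes * 16 / 1e9   # 16 bytes per complex128 amplitude
    print(f"{n_qubits} qubits -> {gigabytes:.3g} GB")
```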

      The causal quantum theorist asserts that consciousness is in “the means” and not the function.

      Whew! I’ve written a lot!

  59. Hey Alex,

    it seems the core idea (and also, I think, the core misunderstanding) you are expressing is that there are two kinds of simulation.

    But in my argument I am always talking about one kind of simulation:
    I define a "perfect simulation of my brain" to be a simulation, on a classical computer, of the dynamics of my neurons to the precision necessary to achieve behavioral equivalence (which doesn't just include external behavior, internal patterns of neuronal activations must be accurate as well).

    If it helps, imagine we take the full equations of physics, be they deterministic or involving probabilistic transitions like in QM, and solve them numerically to the desired precision. As long as the dynamics is governed by some equation (such as the Schrodinger equation), a classical computer can in principle simulate it to any precision.
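
    Here is a minimal numerical sketch of that idea (a toy two-level system added for illustration; the Hamiltonian and step size are arbitrary choices, nothing brain-specific):

```python
# Toy example: integrating the Schrodinger equation for a two-level system on a
# classical computer; shrinking dt increases the precision of the simulation.
import numpy as np

hbar = 1.0
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                   # a simple two-level Hamiltonian
psi = np.array([1.0, 0.0], dtype=complex)    # start in the "up" state

dt, steps = 0.001, 5000
for _ in range(steps):
    # first-order step of i*hbar*dpsi/dt = H @ psi
    psi = psi - 1j * dt / hbar * (H @ psi)
    psi = psi / np.linalg.norm(psi)          # renormalize to limit discretization drift

print(np.abs(psi) ** 2)                      # occupation probabilities at t = dt * steps
```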

    That's what I have been calling a WBE, "my simulation", etc. So what is the quantum people's objection to my argument as applied to this type of simulation?

    Replies
    1. Hey,

      I already addressed this in my last post above yours which attempts to clarify everything. It’s the part about the quantum theorist saying WBE Alex will need to have microtubules (or equivalent quantum stuff).

      Once you concede that, we can no longer go from “WBE Alex is conscious” to “Chinese room is refuted”.

    2. “If it helps, imagine we take the full equations of physics, be they deterministic or involving probabilistic transitions like in QM, and solve them numerically to the desired precision.“

      Right, so you’ve emulated the function of real Alex. But this computer doesn’t have “the means” of consciousness. And “the means” is basically going to be the microtubules (quantum stuff).

      So the argument is that consciousness is in the implementation, and not the algorithms.

  60. Do you mean WBE will need to have *simulated* microtubules, simulated on a classical computer per my definition?

    If so, I don't see anything special about simulated microtubules, compared to simulated other structures.

    Replies
    1. I already pointed out the implementation/functionality difference. So that’s sufficient to demonstrate that there can be a difference. However, one might respond that there is no practical test to differentiate the two (I.e. how do we know one is sufficient but not the other?).

      This objection would not be fatal to the quantum theorist, because she still has good independent reasoning for her theory which is ultimately based on evidence from her own consciousness. However, it would certainly still be a blow if we can’t empirically differentiate the two.

      But I actually pointed out how we might give an empirical test to differentiate the two in the same post above.

  61. But I just want to make sure I understand you, so is the answer to my question yes?

    Replies
    1. And another yes or no question, to make sure we converge on the core issue: do you agree that a WBE, as I have defined it, automatically satisfies 1a?

  62. I don't know exactly what you mean by "simulated microtubules". Are you talking about simulation of function? I.e. I have a computer which does the same thing the microtubule can do but in a different way? Or do you mean a complete algorithmic simulation? I achieve the same function using the exact same method?

    I think a classical computer can emulate both, but not as efficiently as a quantum computer. Functional simulation is enough to prove behavioral equivalence, but neither functional nor algorithmic simulation is going to produce consciousness for the causal quantum theorist. They are going to say that WBE Alex, if he is conscious, will need to have the actual quantum stuff.

    Replies
    1. "And another yes or no question, to make sure we converge on the core issue: do you agree that a WBE, as I have defined it, automatically satisfies 1a?"

      Yes I agree

  63. "it seems the core idea (and also, I think, the core misunderstanding) you are expressing is that there are two kinds of simulation."

    Yes, I tried to remain as consistent as possible in my language. I used WBE Alex to mean exactly what you mean, whereas I used "a whole brain emulation of me" to mean something that is not, by definition, behaviorally equivalent to me.

    The reason for this is simply that I was responding to your argument that rejecting the WBE argument implies unpalatable consequences. I was trying to show that the causal quantum theorist (who thinks consciousness is not epiphenomenal) doesn't run into causal overdetermination. The easiest way to see this is to imagine a whole brain emulation of me which lacks that conscious component, and that's why I used "whole brain emulation" in a different sense.

    So a whole brain emulation which lacks the conscious phenomenal stuff but has everything else (even microtubules) is not going to be behaviorally equivalent to me according to these theorists. My consciousness is the thing which causes the electrons in my microtubules to behave in certain ways.

    In any case, the main post right before your reply which summarizes everything speaks in your same language. I only talk about WBE Alex there and nothing else.

    Replies
    1. If I just used "simulation" and "emulation" in the sense you intended it, then there would be no way to express that above thought.

    2. Do you see now why it was so important to make this distinction of language?

      There are two ways to make an entity behaviorally equivalent to myself. One way is to give it consciousness plus all the other things in my brain. The second way would be to give it all the other things in my brain, minus consciousness, but plus some simulated (non-conscious) component which replicates the function that consciousness previously fulfilled.

      Using your above definition means that WBE Alex can be either kind of entity. Hence the need for a clearer way to delineate the two.

    3. *By "all the other things in my brain" I mean just the simulated computational aspects.

      This hopefully explains my previous vacillation on whether WBE Alex was conscious; I wasn't sure which type of entity you were referring to!

  64. “This hopefully explains my previous vacillation on whether WBE Alex was conscious; I wasn't sure which type of entity you were referring to!”

    I mean it wasn’t clear to me whether you thought WBE Alex was incapable of having actual microtubules (not just simulated ones).

    It’s not logically contradictory to suppose that the behaviorally equivalent classical computer (WBE Alex) also has microtubules. So I wasn’t sure whether your definition could allow the possibility for consciousness or not.

    If, however, you wish to maintain that being WBE Alex is mutually exclusive with hosting real microtubules, then my previous statement doesn’t follow.

    Instead, we would just believe that WBE Alex as defined is the non-conscious entity which behaviorally/computationally replicates myself through other means.

    This just straightforwardly follows from the causal quantum theorists’ beliefs about consciousness. There’s no overdetermination because if you stripped the phenomenal stuff from real Alex, that new entity would no longer be behaviorally equivalent to me.

    I hope things are starting to make more sense. Do you agree with me that the burden was on you to show epiphenomenalism is entailed by accepting Searle’s position? Do you agree with me that so far you haven’t shown any reason for thinking it is entailed by either Searle’s position or the quantum theorists? Hence, that particular line of argumentation can’t work.

  65. Hey Alex,

    Sorry, it took me a while to respond. I am working through your recent comments backwards; one issue we can settle right away is:

    By my definition the simulation (WBE) doesn't have real microtubules, only simulated ones. Every physical aspect of the brain is simulated. So those quantum people would, you assert, say the simulation is not conscious - since it lacks real microtubules.

    > Do you agree with me that the burden was on you to show epiphenomenalism is entailed by accepting Searle’s position?

    Yes

    >Do you agree with me that so far you haven’t shown any reason for thinking it is entailed by either Searle’s position or the quantum theorists?

    No:) My defense of 1b gives such a reason.

  66. 2. Recall that by my definition WBE simulates perfectly the physical aspects of the brain (most importantly, how my neurons trigger one another). If the physical world is causally closed (the sufficiency principle from your article) and follows some equation (like the Schrodinger equation) then this is in principle possible on a classical computer.

    Another way of saying it is that WBE is by definition "behaviorally" equivalent to the real Alex. I hope it's been clear throughout this conversation that "behavior" for us here includes the behavior of neurons, how they trigger each other, what I elsewhere called brain patterns.
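
    To make the "in principle possible on a classical computer" point concrete, here is a minimal sketch, assuming a toy two-level quantum system; the Hamiltonian, time step, and initial state are arbitrary illustrative choices, not anything specific to the brain.

```python
# Minimal sketch: classically integrating the Schrodinger equation for a toy
# two-level system, illustrating that behavior governed by an equation can be
# simulated on a classical computer to whatever precision we want.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                 # toy Hamiltonian (arbitrary units)
psi = np.array([1.0, 0.0], dtype=complex)   # initial state |0>

dt, steps = 0.01, 1000                      # smaller dt -> higher precision
U = expm(-1j * H * dt / hbar)               # one-step propagator exp(-i H dt / hbar)
for _ in range(steps):
    psi = U @ psi                           # evolve the state one time step

print(np.abs(psi) ** 2)                     # occupation probabilities at t = 10
```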

    The argument in my article isn't trying to demonstrate that WBE is possible. It attempts to refute the idea that WBE - if it existed - wouldn't be conscious. My thesis is it would.

    Hence, if a quantum person or a believer in souls affecting the physical asserts that WBE can't exist, that does absolutely no damage to my argument.

  67. 3. >In any case, the main post right before your reply which summarizes everything speaks in your same language. I only talk about WBE Alex there and nothing else.

    I think you are referring to the post starting with "I know I've written a lot on..." But it doesn't speak in the same language: it describes WBE as having real microtubules ("quantum stuff"). It also says that the quantum consciousness people would reject 1a - but later, once we clarified definitions better, you agreed that 1a is automatically satisfied.

    So with the previous two parts in mind (WBE has no actual microtubules, WBE is behaviorally equivalent) what is a quantum person's objection?

    If she would just say such a WBE is impossible, then it's the same reply as a soul believer's: classically simulating the relevant equation of physics won't produce the same behavior - there's something extra, something that doesn't follow an equation (soul or something else) that intervenes and makes particles do something other than the equation predicts.

    As I explained above, this would not affect the thesis of my argument, because, like Searle's, it's a conditional: my simulation, if it can exist (no souls etc), would be conscious. Searle says it wouldn't.

  68. So in light of these clarifications about what I mean by WBE/"my simulation", the questions are:

    1. What's the new objection to my defense of 1b from the quantum consciousness people? Or, if one of the old ones is still applicable, which one?
    2. What objections can Searle's side put forward?

    Let me take a stab at number 2. I think you indicated that Searle could say: consciousness could just be a description, at another level, specifically of neuronal stuff, involving real, "wet", neurons. Then WBE isn't conscious because it doesn't have real neurons, and there's no epiphenomenalism problem because consciousness is not a separate thing, it's just an alternative description of lower level neuronal processes.

    I can answer that, but first - is that a fair compact description? If not, let me know your version of it.

    I guess I can also take a stab at 1. Maybe you think they would say: only real microtubules give you consciousness. WBE can exist, maybe, and behave the same, but still wouldn't be conscious - simply because it doesn't have real microtubules. Is that a fair description or do you have a better version for 1?

  69. Hey Dmitriy,

    ">Do you agree with me that so far you haven’t shown any reason for thinking it is entailed by either Searle’s position or the quantum theorists?

    No:) My defense of 1b gives such a reason."

    But your defense of 1b can't work. As I already said, "Accepting the Chinese room argument means you think an entity can be behaviorally equivalent to a conscious entity without it being conscious. But it doesn’t follow from this that behavior has no causal connection to consciousness."

    In response, you attempted to argue that the only palatable alternative is causal overdeterminism, but I already pointed out why the causal quantum theorists don't have to accept overdeterminism (and we clearly agree that Searle doesn't either).

    So, to reiterate, do you agree with me that you've put forward no clear reason to think that either Searle's position or the quantum theorists' position entails epiphenomenalism/other unpalatable consequences?

    On the causal quantum theorists (CQT):

    Yes they would say you need the real quantum stuff (e.g. microtubules) to produce consciousness. In turn, consciousness is causal, meaning that it interacts with the physical quantum components in a bi-causal way. Simulated microtubules are insufficient to produce consciousness, because it's not the computation which yields it.

    On WBE Alex:

    Whether WBE Alex is impossible depends on how rigidly you define him. Note that it is possible to replicate WBE Alex (the behaviorally equivalent guy) if you simulate his neural state, and simulate his microtubules, and then develop some (non-conscious) computational component which can replicate the function of consciousness. Of course, the addition of such a computational component means that WBE Alex's "brain" will look pretty different from my brain, so I'm not sure whether this counts as WBE Alex under your definition.

    If your definition of WBE Alex is so rigid that the above behaviorally equivalent computer doesn't count as WBE Alex, then just keep in mind that the CQT will reject 1a. The "perfect computer simulation" which has the simulated neurons and simulated microtubules, but lacks that extra computational component, won't in turn be behaviorally equivalent to me.

    If, on the other hand, it is not so rigid, then the CQT will accept the possible existence of WBE Alex, but will reject 1b. And I already showed that your defense of 1b is woefully incomplete.

    On the phenomenal-physical connection:

    "If she would just say such a WBE is impossible, then it's the same reply as a soul believer's: classically simulating the relevant equation of physics won't produce the same behavior - there's something extra, something that doesn't follow an equation (soul or something else) that intervenes and makes particles do something other than the equation predicts."

    Note that I already showed this doesn't follow, as I mentioned that I agree you can develop a computational element which replicates the consciousness function.
    However!

    Caveat: At this point I should probably point out that there are causal quantum theorists (like Penrose) who think that consciousness acts in some non-computable fashion. Meaning that it can solve undecidable problems which computers can't (e.g. halting problem). So we have "free will", and free will isn't bound by the laws of computation. I've kind of avoided this position because I find the concept of free will as being neither deterministic nor stochastic (but rather some third thing) to be rather nebulous.

    In any case, such a prediction would be even more open to empirical verification/falsification since, if true, strong AI is impossible. Obviously, this position is fully compatible with Searle's Chinese room argument.

    For now, let's ignore the above caveat, and let's just assume that the causal influences of our consciousness are computationally replicable.

    Replies
    1. On Searle:

      So remember that the burden of proof is still on you to show his argument entails unpalatable consequences, given that your defense of 1b is incomplete.

      " Searle says] consciousness could just be a description, at another level, specifically of neuronal stuff, involving real, "wet", neurons."

      Not quite. Searle isn't saying that consciousness is "just" that description; consciousness is partly physical and partly phenomenal. We can speak of consciousness in the sense you describe, as long as we keep in mind that we aren't eliminative reductionists. There's no epiphenomenalism because consciousness is physical in some sense, and the biological stuff is necessary for the production of the phenomenal components of consciousness.

  70. "If she would just say such a WBE is impossible, then it's the same reply as a soul believer's: classically simulating the relevant equation of physics won't produce the same behavior - there's something extra"

    To be clear, rejecting WBE as impossible doesn't imply the latter stuff, provided you define WBE Alex in that rigid way I spoke of. If WBE Alex just has the simulated brain components and nothing else (i.e. no component which replicates the consciousness function), then he won't be behaviorally equivalent according to the CQT's.

    Thus, the CQT will reject the possible existence of WBE Alex. But note that this in no way entails that the CQT has to believe that consciousness acts in some non-physical, non-computable way.

    Finally, for those theorists who do think that consciousness behaves in that fashion, I think the "but you're speaking of souls!" rejoinder can't work. CQT's like Penrose definitely accept the Chinese room argument, and they also give plenty of testable predictions (since they claim strong AI can't exist) for their hypothesis.

    Replies
    1. To clarify further:

      "CQT's like Penrose definitely accept the Chinese room argument"
      but doesn't that contradict my previous statement that
      "Accepting the Chinese room argument means you think an entity can be behaviorally equivalent to a conscious entity without it being conscious"?

      No it doesn't. Rather,
      1. Accepting the Chinese room argument (computation is insufficient for consciousness)
      +
      2. Believing that computation is sufficient for behavioral equivalence

      yields the belief that behavioral equivalence does not entail consciousness.

      CQT's like Penrose (who are in their own special class) don't buy into premise 2.

      Of course, I have basically been assuming all along that premise 2 is correct, since it is one of the prerequisites of your argument.
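
      For what it's worth, this little deduction can be checked mechanically. Below is a toy sketch, assuming each possible entity is modeled as three yes/no properties (computes, behaviorally equivalent, conscious) and premise 2 is the only constraint imposed; the encoding and names are mine, purely for illustration.

```python
# Toy model: worlds are (computes, behaviorally_equivalent, conscious) triples.
# Premise 2 (computation suffices for behavioral equivalence) is the only
# constraint; premise 1 (the Chinese room) says a computing, non-conscious
# entity is not ruled out. Together they allow behavioral equivalence without
# consciousness.
from itertools import product

def respects_premise_2(computes, beh_eq, conscious):
    # If it computes, it must be behaviorally equivalent.
    return beh_eq or not computes

possible = [w for w in product([True, False], repeat=3) if respects_premise_2(*w)]

# Entities the Chinese room argument leaves on the table: compute, not conscious.
chinese_room_cases = [(c, b, k) for (c, b, k) in possible if c and not k]

print(chinese_room_cases)                                    # [(True, True, False)]
print(all(b and not k for (_, b, k) in chinese_room_cases))  # True: beh. equivalent, not conscious
```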

  71. Hey Alex,

    I would like to turn to the defense of 1b, but it seems from the following passage that I still haven't been able to successfully communicate the definition of WBE I am using in this argument, and without this step we can't properly analyze the arguments. The passage that tells me this is:

    "Whether WBE Alex is impossible depends on how rigidly you define him. Note that it is possible to replicate WBE Alex (the behaviorally equivalent guy) if you simulate his neural state, and simulate his microtubules, and then develop some (non-conscious) computational component which can replicate the function of consciousness. Of course, the addition of such a computational component means that WBE Alex's "brain" will look pretty different from my brain, so I'm not sure whether this counts as WBE Alex under your definition.

    If your definition of WBE Alex is so rigid that the above behaviorally equivalent computer doesn't count as WBE Alex, then just keep in mind that the CQT will reject 1a. The "perfect computer simulation" which has the simulated neurons and simulated microtubules, but lacks that extra computational component, won't in turn be behaviorally equivalent to me."

    CQT can't reject 1a, because, as I thought you already agreed, 1a is automatically satisfied for my definition of WBE. Recall that - by definition - WBE is behaviorally equivalent. So CQT can at most say WBE can't exist. I attempted to clarify this with part 2 of my response. Because it's so crucial I will copy the most important part:

    "2. Recall that by my definition WBE simulates perfectly the physical aspects of the brain (most importantly, how my neurons trigger one another). If the physical world is causally closed (the sufficiency principle from your article) and follows some equation (like the Schrodinger equation) then this is in principle possible on a classical computer.

    Another way of saying it is that WBE is by definition "behaviorally" equivalent to the real Alex. I hope it's been clear throughout this conversation that "behavior" for us here includes the behavior of neurons, how they trigger each other, what I elsewhere called brain patterns."

    So again, WBE by definition has no real microtubules, only simulated ones, and also by definition is behaviorally equivalent, thus making 1a automatic as you previously agreed.

    Does this passage I just quoted clarify my definition of WBE? Do you disagree with the claim that if the physical world follows the Schrodinger or some other reasonable equation then that equation can in principle be simulated on a classical computer to whatever precision we want, thus ensuring behavioral equivalence? Do you agree that in that case such a simulation will satisfy the definition of WBE?

    Let's make sure we are on the same page about how I define WBE, so let me know if anything still needs to be clarified about the definition.

    Replies
    1. Hey Dmitriy,

      I meant that the CQT's would deny premise 1a for that definition of the rigid perfect computer simulation that I gave (which isn't behaviorally equivalent). Of course I understand that WBE Alex is behaviorally equivalent by definition. I didn't mean to imply that they deny premise 1a for your version of WBE Alex.

      So, either you:

      1. Define WBE Alex as a (behaviorally equivalent) rigid version of that computer simulation, meaning that it has every simulated component the real Alex has, but lacks the component that simulates the function of consciousness.
      or
      2. Define WBE Alex in the non-rigid sense; meaning you agree that WBE Alex needs to have the additional simulated component (which real Alex lacks) that replicates the function of consciousness.

      As I said, the CQT's would say that 1 is an oxymoron. They would accept the existence of 2, but consequently reject premise 1b. I tried to establish that arguing for 1 being an oxymoron does not in any way imply that the supernatural stuff follows, and also that rejecting 2 is not problematic.

    2. So a quick summary:
      1. The "rigid" definition from your passage I quoted is not my definition.
      2. The other one is closer but not mine either.
      3. My definition of WBE has these features:
      - it simulates the physics of the brain
      - it is behaviorally equivalent to real Alex
      4. The second feature follows from the first if physics is all that affects the physical stuff.

    3. So yes I completely agree that 1a is automatic under your presumptions. I'm just curious whether (in your mind) you presume that WBE Alex is some sort of rigid entity, implying of course that there is no function of consciousness in my brain which goes above and beyond my brain function at whatever level you think is needed (e.g. neural/quantum).

      Or do you concede the possibility of the CQT's claim that consciousness is causal, and hence conceive of WBE Alex in the non-rigid sense?

    4. "3. My definition of WBE has these features:
      - it simulates the physics of the brain
      - it is behaviorally equivalent to real Alex"

      I wasn't saying that either the rigid or non-rigid was your definition. I'm saying that your definition (which we agree is the above) logically entails either that WBE Alex is rigid or non-rigid. And I was exploring the consequences of either approach and what the CQT would say to either kind of entity.

      Since there are two possible entities encapsulated by your definition of WBE Alex, it is important to clarify the above (as the CQT has different responses to each).

    5. "I'm saying that your definition (which we agree is the above) logically entails either that WBE Alex is rigid or non-rigid."

      Meaning that your definition is somewhat too open-ended: it doesn't exclude either possibility, and therefore we have multiple types of entities with multiple potential responses by the hypothetical CQT. Hence the need to categorize WBE Alex into two parts.

  72. Keep in mind that the CQT rejects the sufficiency principle from my article, but this doesn't mean that they think you can't simulate a behaviorally equivalent WBE Alex. So I'm still not sure what kind of entity you intuitively take WBE Alex to be.

  73. "And I already showed that your defense of 1b is woefully incomplete."

    Wow, I'm more of an asshole than I realized. Sorry about that!

    But don't worry, while my brain externally behaves in a boorish and rude way, just know that internally I'm honestly a nice guy.

    :)

  74. I don't completely understand your definitions of rigid and non-rigid, so I am having trouble relating my definition "3." to yours. Yours refer to the function of consciousness, without clarifying whether this function is or isn't captured by the equations of physics.

    If physical stuff follows the equations of physics, then obviously consciousness' role in affecting behavior is captured by physics. In that case the function of consciousness will be simulated. Also in that case the second feature in 3 follows from the first. So does that case correspond to rigid, non-rigid or neither?

    If physical stuff doesn't always follow the equations of physics then we are in the realm of souls.

    Replies
    1. "Yours refer to the function of consciousness, without clarifying whether this function is or isn't or isn't captured by the equations of physics."

      The CQT I'm talking about thinks it can be fully captured yes, it's only the (caveat) CQT's like Penrose who think otherwise, but for the purpose of this conversation we're assuming they don't exist.

      So the answer is that it corresponds to both rigid and non-rigid. The above isn't sufficient to delineate the two. The only way to delineate the two is to clarify whether you think consciousness is causal.

      If you don't think it is, then you can buy into the existence of the rigid entity; if you think it is, then you can buy into the existence of the non-rigid entity (we really would need some additional component to simulate the function of consciousness).

    2. "If physical stuff doesn't always follow the equations of physics then we are in the realm of souls."

      As a total side note, I already addressed why I think this is an unfair charge. People like Penrose give plenty of testable predictions for their claimed interactions, and I don't see how this is any different than someone postulating some new equations of physics.

      Keep in mind that physics doesn't have to be computable. While it's true that we think all of our current physical laws are computable, it doesn't therefore follow that we can't imagine physical laws which aren't computable.

      See for instance, the busy beaver: https://en.wikipedia.org/wiki/Busy_beaver#Non-computability

      A game whose rules give rise to a non-computable function. And it might be that in such a hypothetical case there exists a physical system whose solutions are non-computable.
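
      As a small illustration, assuming a standard 2-state, 2-symbol Turing machine encoding (the table below is the well-known 2-state champion): any particular machine can be simulated step by step, and brute-force search over machines gives lower bounds on the busy beaver function, even though no single program computes BB(n) for every n - which is the non-computability the linked article describes.

```python
# Sketch: simulating a particular 2-symbol Turing machine. Searching over such
# machines lower-bounds the busy beaver function, but no single program can
# compute BB(n) for all n.
def run(machine, max_steps):
    """machine maps (state, symbol) -> (write, move, next_state); returns the
    number of steps taken if the machine halts within max_steps, else None."""
    tape, head, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "H":          # designated halting state
            return step
    return None

# The known 2-state champion: halts after 6 steps, leaving four 1s on the tape.
bb2_champion = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}
print(run(bb2_champion, 100))     # -> 6
```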

  75. I think consciousness is causal; I reject epiphenomenalism.

    What do you mean by additional component? Again, 3 says we are to simulate the equations of physics as applied to the brain. Are you saying we need to do something additional?

    Replies
    1. If physical stuff follows physics then simulating physics would reproduce the behavior of the physical stuff. Nothing additional is needed to get behavioral equivalence.

      If it doesn't follow physics then it's "souls".

    2. "Again, 3 says we are to simulate the equations of physics as applied to the brain. Are you saying we need to do something additional?"

      No, I'm not saying that. I'm saying we may or may not have to do something additional to simulating the functions of the physical brain (neurons/microtubules etc...), depending on whether you buy into the rigid or the non-rigid case.

    3. Specifically, in the non-rigid case you have to do additional stuff. But at no point are you violating the laws of physics; the additional stuff is fully computable.

  76. If you think consciousness is causal, then presumably you think that brain function isn't enough to explain our behavior; we need brain function + conscious (phenomenal) function. The only way out of this is to equate consciousness with physical stuff (avoiding all the other solutions we spoke of), as my recent paper demonstrates.

  77. What's the difference between "simulating the functions of the physical brain" and "simulating the physics of the brain"? My definition requires the latter and thus ensures behavioral equivalence (I will assume that we agree to stipulate that physical stuff follows equations of physics from now on).

    Replies
    1. Well if you think consciousness is non-physical and causal (like the CQT people think), it follows that a simulation of the functions of the physical brain will be insufficient to explain behavior.

      But since consciousness, the non-physical stuff, acts analogously to a computational system, it follows that a behaviorally equivalent version of me can be simulated (without being conscious).

  78. So "simulating the functions of the brain" is not the same as "simulating the physics of the brain"? Then I don't even understand what you mean by the former.

    Regardless, we don't need to worry about the former, because it's the latter that is stipulated in my definition of WBE (3 in the summary).

    Do you agree that if the latter is possible (and it is for the current equations of physics, including GR and QM, although yes, we can imagine non-computable rules) then we get behavioral equivalence?

    P.S. Recall I'll assume for the purpose of this conversation that physical stuff follows physics, which I will also refer to as "no souls".

    Replies
    1. “ Recall I'll assume for the purpose of this conversation that physical stuff follows physics, which I will also refer to as "no souls".”

      Yes I’ve been assuming the same.

      “So "simulating the functions of the brain" is not the same as "simulating the physics of the brain"? Then I don't even understand what you mean by the former”

      Well, I guess by the former (simulating the functions) I imagined that we would take every component of the brain and then just simulate their function according to the physical laws that detail how such components should function. Since our best physical laws don’t allow for conscious (the phenomenal stuff) interaction with the world, it follows that there won’t be behavioral equivalence according to the CQT (under the former stipulation, WBE Alex is an oxymoron).

      Remember that according to the CQT, our physical laws can’t explain the way my consciousness influences my brain activity. In other words, there is no mechanism in our current physics which could explain the physics of my brain in real Alex, according to the CQT.

      But that doesn’t mean that we can’t simulate the physics of my real brain using some real physical (simulated) mechanism.

      “simulating the physics of the brain"

      Yes we can do this; I think the CQT would say that this logically implies the non-rigid scenario I described above. In other words, they reject premise 1b.

      “ Do you agree that if the latter is possible… then we get behavioral equivalence?”

      Yep!

  79. For the purposes of this conversation, I’ve been assuming that you mean to use “the laws of physics” as analogous to “is computable”. So then we could say that my consciousness “follows the laws of physics”. Even if there’s no physical mechanism that would actually explain the full physics in my brain.

    If this is not what you mean, and instead you meant something like (explainable under our best current physical models of the world) then my apologies. In that case, the CQT would not accept that consciousness follows the laws of physics as we currently understand them (as I’ve said many times, they need to postulate some new form of physical interaction), but they would (for the purposes of our conversation) accept that it is computable.

  80. Is this where the confusion has been all along? When you’ve been saying things like “without clarifying whether this function is or isn't captured by the equations of physics.”

    And I’ve replied “The CQT I'm talking about thinks it can be fully captured yes”

    I just meant that the computable functions of consciousness can be physically replicated and nothing else. I.e. that the behavior of the physical stuff isn’t something that the physical laws can’t replicate; it’s just that we have no physical mechanism in my actual brain that would explain the behavior of the physical stuff.

    My apologies if that’s the case.

    Replies
    1. I suppose this is also a result of my bias. I don’t think that causal quantum theory violates quantum mechanics, any more than I think that the behavior of large scale macroscopic bodies violates it.

      I just think this shows that QM is incomplete.

  81. To use an analogy that I hope makes everything easier to follow:

    Imagine we had a ball which was inexplicably bouncing up and down despite the absence of any known physical force affecting it (gravity, EM, strong and weak interactions).

    Our best physical laws are incapable of explaining why it behaves this way. But it doesn’t therefore follow that we can’t simulate the motion of the ball in reality.

    This is what the CQT person thinks is happening with our brain and consciousness. The special theorists like Penrose (and his Orch OR hypothesis), on the other hand, reject the last part: they think even simulation will be impossible. But like I said, that’s a caveat which we are not really concerned with.
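
    To put the analogy in concrete terms, here is a tiny sketch, assuming we simply record the ball's observed (time, height) samples; the numbers are made up for illustration. The point is only that the behavior can be reproduced without any commitment to the mechanism behind it.

```python
# Sketch: "simulating" the mysteriously bouncing ball by replaying its observed
# trajectory, with no force law explaining why it moves this way.
observed = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0), (1.5, 1.0), (2.0, 0.0)]  # (time, height)

def simulate_height(t, samples=observed):
    """Linear interpolation over the recorded samples: a model of the ball's
    behavior that stays silent about the mechanism behind it."""
    for (t0, h0), (t1, h1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return h0 + (h1 - h0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the observed window")

print(simulate_height(0.25))  # -> 0.5
```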

  82. Hey Alex,

    it looks like we are converging on the source of the confusion. I still don't understand exactly what you mean:

    >For the purposes of this conversation, I’ve been assuming that you mean to use “the laws of physics” as analogous to “is computable”. So then we could say that my consciousness “follows the laws of physics”. Even if there’s no physical mechanism that would actually explain the full physics in my brain.

    >I just meant that the computable functions of consciousness can be physically replicated and nothing else. I.e. that the behavior of the physical stuff isn’t something that the physical laws can’t replicate; it’s just that we have no physical mechanism in my actual brain that would explain the behavior of the physical stuff.

    I still don't understand in what sense the two parts of the last sentence are both true. This seems like something very esoteric. We seem to agree, for the purposes of this discussion, that the behavior of the physical stuff can be replicated by computable physical laws. Then that's all we need; that's what physics is: finding a law to replicate the behavior of physical stuff.

    If such a law exists (and we agree it does) then WBE is automatically possible.

    Replies
    1. I meant that our (known) physical laws are insufficient to explain the behavior of the brain. But it’s definitely computationally replicable, we agree.

  83. For my argument and for the definition of WBE it doesn't matter if that law is known or not, only that there is one. Simulating that law is what I mean by "simulating the physics of the brain" in my definition.

    Replies
    1. Good, then I didn’t misunderstand you after all. In that case, like I said, the CQT will accept the possible existence of the WBE but deny premise 1b.

    2. Just saw your ball analogy, it's very helpful. So yes, it doesn't matter whether the full law is currently known to physicists or not, as long as the law exists. In your ball case it does - assuming the ball keeps following that motion and doesn't start doing something else.

  84. Hey Alex,

    this is the 199th comment and Blogger will not show comments past the 200th it seems, so I won't be able to respond to the 200th one.

    It seems we finally clarified the definition of WBE: it simulates physics, i.e. the law (whether currently known or not) that replicates the behavior of physical stuff.

    We also agreed that 1a is then automatically satisfied. You are then saying that for example CQTs would reject 1b. Then perhaps you can formulate a compact objection to my argument for 1b, in light of us now being on the same page about how I define WBE. If it can be stated compactly (ideally one paragraph) I can include it in the article along with my response. If it can't, then we need a new rebuttal article from you :)

  85. Hey Dmitriy,

    This is the last comment; let's just hope I don't mistakenly hit 'publish' before it's ready!

    Okay so here is the compact argument against premise 1b; I will put it in quotes:

    "The problem is that there is a logical gap between premise 1a and premise 1b; the former does not entail the latter. It doesn't follow that if it is possible for something to think it is conscious without actually being conscious, then we ourselves think we are conscious despite our being conscious. You can't go from (thinking is not a sufficient condition for consciousness) to (consciousness is epiphenomenal)."

    Okay here's my defense of the above statement. From the CQT's belief in (thinking is not a sufficient condition for consciousness) we can deduce:

    1. Thinking does not cause consciousness. ~(T > C)

    We also all (meaning you and the CQT guys) believe that,

    2. Thinking causes all of our behavior, (T > B)
    &
    3. Consciousness (in ourselves) causes behavior. (C > B)

    Can we go from that to epiphenomenalism (refutation of 3)? Well, you would need to argue that the above is an oxymoron (I'm assuming that giving up 1 or 2 is completely unpalatable). So can we get a self-contradiction between the three principles? The answer is no; the only way to do that is to introduce another principle, which is this:

    4. Consciousness doesn't cause thinking

    To see this, let's give the causal account that the CQT people subscribe to. They think that consciousness causes (some of our) thinking, which in turn causes our behavior. Or [C -> T -> B] for short. But note that this picture is fully consistent with:
    1. ~(T > C)
    2. T > B
    3. C > B

    Hence, it is only when you introduce premise 4 that you get a contradiction and ultimately epiphenomenalism. Of course, premise 4 completely begs the question, since that is precisely what the CQT people are assuming is not the case. The CQT people think that the quantum stuff (e.g. real microtubules) causes consciousness, which in turn causes some of our thinking and behavior.
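
    Since this is the crux, here is a toy sketch of the consistency claim, assuming we read "X causes Y" as "Y is reachable from X" in a directed causal graph; the encoding is mine, just to make the point mechanically checkable.

```python
# Toy check: the CQT picture C -> T -> B satisfies premises 1-3; a
# contradiction only appears if premise 4 (C does not cause T) is added.
def causes(x, y, edges):
    """True if y is reachable from x along the directed edges."""
    frontier, seen = {x}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier |= {b for (a, b) in edges if a == node and b not in seen}
    return y in seen and y != x

edges = {("C", "T"), ("T", "B")}      # consciousness -> thinking -> behavior

assert not causes("T", "C", edges)    # 1. thinking does not cause consciousness
assert causes("T", "B", edges)        # 2. thinking causes behavior
assert causes("C", "B", edges)        # 3. consciousness causes behavior
assert causes("C", "T", edges)        # premise 4 is exactly what fails here
print("C -> T -> B satisfies 1-3; no contradiction unless premise 4 is added")
```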

    Addendum:

    By the way, the above short proof (combined with the reasoning from my article) also shows that 2 & 3 & 4 are compatible under the following causal theory [T > C > B]. I presume this is the theory you subscribe to, but if we think about it, it doesn't seem especially coherent. What would it even mean to say that consciousness causes behavior without influencing the thinking in our brain? That doesn't seem to make much sense.

    Of course, you can believe both that thinking causes consciousness and vice versa (if you give up premise 4). But if you give up 4, then you're still going to have to postulate some interaction between the phenomenal stuff and the physical stuff which our current best physical laws can't account for.

    And the problem is that there really is no available place to postulate such an interaction except in the quantum realm. Everything else (the macro-physics of our brains and computers) seems well accounted for. We think, for instance, we can give a complete accounting of the brain at the neural level in terms of physical interactions. So, speculating about some new interaction at the level of neurons doesn't seem to work out. This implies multiple realizability isn't feasible, as it requires we assume the same interaction exists in computers (at the macro-level) etc...

    Thus, if you're going down the route of phenomenal-physical interaction of some sort, then you might as well accept the implications of the Chinese room.

    Best,

    Alex
