Thursday, January 5, 2012

So You're In A Simulation

Looking at what we can see of our universe, there don't seem to be many folks like us around.

There don't seem to be many folks around at all.

Aware beings seem to arise only rarely, from what we can tell. That's Fact #1. Fact #2 is that we are, nonetheless, aware beings. (Hi!)

Should we assume things are as they seem - that our experiencing selves arose through rare natural processes? That we are among the few beings who will ever wake up and experience the universe?

Or should we wonder at the coincidence of our existence and our a priori unlikeliness? Should we suspect, perhaps, that we are not the special unicorns of the galaxy that we appear to be, but rather are ordinary in a way that is hidden to us? Because which is more likely: that we are among the 200 billion aware beings who will ever live in our universe, or that we are as grains of sand among the ten-to-the-fuckload beings who might exist in the simulations of some simulation-capable entity?

So you're in a simulation. What can you do if you don't want to play anymore?

From the existence of miserable folks since Jeremiah,* we have some evidence as to our simulator overlords' value system: they don't give a fuck if we don't want to be here. Actively trying to get out of the simulation and hitting the "off" button over and over again does not seem to exempt us from participation. They do not seem to restrict themselves to creating only creatures that are glad to be alive. We know this about them; what else do we know? What could their purposes be for having us here, and what, if anything, would motivate them to let us stop existing?

Some folks seem to leave for good and we don't interact with them again - death seems final, in our universe. But what if that's not how it works in the big simulation? Given the information we have about our simulation and our overlords, is there any strategy to speak of for (a) getting out of the current simulation and (b) preventing oneself from being recreated in other simulations?

I imagine longevity enthusiasts would be interested in the flip side: is there anything they can do to ensure they get copied and re-used as widely as possible in everybody and their mother's simulation?

And if our universe is as it seems but simulation capabilities are in our own society's future, is my boyfriend correct in suggesting that I'm putting myself at risk of future involuntary simulation by being friends with quirky AI geeks?



* Jeremiah 20:14-18: "Cursed be the day on which I was born; let not the day on which my mother bore me be blessed. Cursed be the man . . . because he slew me not from the womb; so that my mother might have been my grave and her womb always great. Why did I come out of the womb to see labour and sorrow?" Quoted in Benatar, "Abortion: The 'Pro-Death' View," Chapter 5 of Better Never to Have Been: The Harm of Coming into Existence.

38 comments:

  1. I've always thought that the badness of things is strong evidence against any possibility of intentional simulation. Such technological advancement would be accompanied by moral advancement, one must assume.

    Such is my faith that the Gods are not evil, just non-existent.

  2. One possibility (proposed by Tipler) is that we're in a rescue sim, i.e. our AI overlords are recreating Earth's history to copy all minds that died before making it to our transhuman utopia. Obviously they can't interfere with the current simulation to prevent suffering, or they would change history and lose some minds.

    This idea has two nice advantages: it's optimistic and entirely impossible to disprove. What more do you need?

    (Also, our sim overlords don't care about *my* consent either. I've asked for the simulation to be shut down twice, but we're still here.)

  3. muflax, that would require that they have detailed information to re-create the exact history of the universe, down to every single butterfly effect. That seems very implausible.

  4. The problem is rather how much computing power they have. If you can run simulations as reversible computation, for example (or, less plausibly, have access to infinite resources, as Tipler imagines), you can just simulate *all* possible histories.

  5. It doesn't make conceptual sense to simulate all possible histories to save actual minds. You can just come up with a mathematical definition of what a mind is and simulate the total parameter space to get the total set of all possible individuals.

  6. I don't see how this is any easier. The point is not to rescue all possible minds, but all possible humans. You'd have to implement evolutionary models and so on, so you're probably not saving much, if anything. Also, if you already have *some* idea what actual human history looked like, then you can limit yourself to all plausible histories.

    Also note that if you simulate *all* minds, you also implement their memories, and from the point of view of the simulated mind in question, the scenarios "simulate all Earth histories" and "simulate all possible human minds" are indistinguishable.

    (... except maybe for an anthropic argument - if the set of minds that have a possible evolutionary history and the set of all human minds in general have a sufficiently different distribution of features, you may be able to (probabilistically) figure out if you're part of the first or the second.)

    (Note that I'm not arguing that rescue sims are a likely idea at all. But given Vast resources and certain utility assumptions about transhuman utopia, they do make sense.)

  7. Oh, and about Sister Y's actual question:

    I guess it depends on how much of yourself you care about. For example, if you care about your genes, don't open source your DNA. (Maybe don't even get it sequenced at all. Who knows who keeps a backup. Try to get your family cremated to be sure.) If you care about memes, stop blogging for Cthulhu's sake. (Please don't.)

    But more importantly (and I actually have considered how to prevent becoming a mass-copied upload in a Hansonian hell world), don't be (uniquely) economically valuable. I mean, if you don't take explicit measures to preserve yourself, who would possibly pay to have you reconstructed, unless you have some really valuable skill set (and seriously, who needs even more depressed bloggers)?

    Minds are fragile, and unless you actively preserve them (don't make backups kthx), or someone throws Vast resources at the problem to reconstruct large parts of history, no future overlord will ever find you. So really, the only one you'd have to be afraid of is a mad scientist running lab universes (http://www.utilitarian-essays.com/lab-universes.html), and it seems to me that someone with so much concern for power and so little concern for safety will wipe themselves out first.

    (And note that I don't buy infinitist arguments, and so find highly expensive simulations extremely implausible in general.)

  8. I don't know about asking our overlords to stop the simulation, but I just don't get why longevity enthusiasts care if they get recreated or not. It would not be them in any familiar sense of the term. It's like psychological continuity doesn't even matter at all! But then again, Bryan Caplan seems to have trouble realizing that his clone would be, and his spawn are, different individuals from him, so maybe these folks just have a very confused notion of personal identity.

  9. "It's like psychological continuity doesn't even matter at all!"
    It doesn't. A copy of you is you in the complete sense in which any future entity can ever be you.

  10. But unless the copy has all the memories you have ever had, it is a copy of a past you. So maybe it is possible to create a copy of Robin Hanson based on his frozen head. Maybe they'll have a really good way to get around the freezer burn, the cracks, and, you know, the whole "death" part of it and then the copy will remember everything Hanson remembered up until ceasing to have experiences, so it will be like he time-traveled. But if we are talking about a simulation, it's like you're asking to import a character I created into your game, while I also continue to use that character in my game. At some point they will be very different and will each have acquired new memories the other one doesn't have. And if we are talking about recreating a longevity enthusiast in a different simulation from infancy, i.e., before s/he even was a longevity enthusiast, it would make even less sense. Not that Robin Hanson's hopes make sense, but at least there would be only one of him at a time.

  11. What does it matter if there's more than one of someone?

  12. Sorry, Anonymous, I've just been reading some Derek Parfit lately. I suppose we would have to ask a longevity enthusiast what matters to them. As for your question: if you were told that you were copied at age 4 and that copy has been living in the world unbeknownst to you ever since, would you seriously consider it to be you?

  13. Not really. He would have different social ties and identifiers. Maybe some different political and philosophical views. Definitely different biographical memories.

    But personal identity is a tricky thing. Looking back at my past self from years ago creates an equally ambiguous sense of identity. With some aspects I strongly identify; with others, not at all.

    The problem is that lossy copies are all we will ever have regarding personal identity through time. Personal survival is a copying process. Some people deflect this with imaginary concepts like a soul or fancy-sounding quantum equivalents that have no more empirical basis. I see no rational grounds for that.

  14. Oh, btw, I'm just watching Dollhouse, an awesome show I've missed that really has some nice narrative intuition pumps regarding personal identity.

    http://www.fastpasstv.ms/tv/dollhouse/

  15. Anonymous says: "I see no rational grounds for that."

    Consciousness is experienced as a unity that persists in time. Brain processes, analyzed reductionistically and computationally, look like a trillion separate events that are islands in space and time, although causally connected by their effects. There is a radical ontological mismatch between the unity of consciousness and the disunity of the brain as classical computer.

    Most materialists are dualists but don't realize it. They are dualists because they believe in this physical picture of the brain *and* they believe in consciousness; they don't realize it because they have learned to habitually associate consciousness with their physical picture of the brain, and only the careful ones (like David Chalmers) manage to see that this is a habitual association of two dissimilar things, and not a genuine identity.

    Meanwhile, the avant-garde of materialist thought has leaped ahead to draw further conclusions, by imagining simulations of the computational brain, copies of it, pauses, rewinds, all the various transmutations that can be performed on a "pattern" or a "program". The power of habitual association is then used to conclude that the identity of a person through time is exactly as mutable as the identity of a program or a pattern, because it is just an instance of the latter.

    I am unimpressed. On the contrary, I think the superstitious attitude is the one which imagines that any simulation of a person will be a person. I admit that it is a consistent attitude: being a crypto-dualist explanation of consciousness to begin with - the brain "implements a computation", and then consciousness somehow inhabits the computation - it is only reasonable to imagine that consciousness might, equally mysteriously, come to inhabit a different "computation" in a different physical system.

    But it is curious to assert, as Anonymous does, that there is no empirical basis for a belief in genuine identity of the self over time. Hello, consciousness is *nothing but* awareness that persists through time. The fact that these materialist or computationalist philosophies of consciousness, which are supposedly empirically motivated, end up requiring us to interpret utterly elementary and ubiquitous aspects of subjective experiences (like time, like colors) as illusions, tells you how anti-empirical they actually are. Empiricism originally means based in *experience*. It is mildly ironic that the scientific philosophy, which started out with an emphasis on seeing everything for yourself, has given rise to this new theoretical outlook which requires the believing practitioner to denigrate the reality of their own perceptions in favor of a theoretical a priori, an a priori that is loosely justified by elaborate reasonings and highly indirect evidence.

  16. I've said things like this so many times now that I bore myself. So I will try to sum up. There is such a thing as physical computation, in the sense of elaborately coordinated transformations of state occurring in a network of causally coupled state machines. There is also such a thing as consciousness, which, as the state of something, exhibits a peculiar, distinctive complex unity, and which not only persists in time but which is capable of registering this fact. This latter type of "state" cannot be identified with the former, which is actually a collective state, made out of numerous black-box entities whose intrinsic character is deemed irrelevant when we wish only to talk of their computational properties.

    Therefore, if you wish to believe that reality consists fundamentally of innumerably many very simple finite-state machines, causally coupled, and if you also wish to believe in consciousness, then you must be a dualist. Alternatively, you can suppose that there *are* entities in physics which have the attributes of consciousness-bearers; and this leads towards one particular type of quantum mind theory, one which says that an entangled system is ontologically a single object, that conscious states - in purely physical terms - must be the states of such an entity, and so therefore that there is cognitively relevant entanglement in the brain. The empirical bases for this hypothesis are (1) the subjectively manifest attributes of consciousness (2) the existence of entanglement in nature. Actually validating the hypothesis requires (3) the discovery of a macroscopic quantum system in the brain (4) the discovery of a physical ontology in which entangled systems are ontological unities.

    There may be some other possibility that I haven't thought of. But the philosophies of mind which try to make do, solely with classical computation, look like they are trying to solve a hard problem with a theoretical toolkit which intrinsically can't solve it. So to maintain the illusion of the toolkit's adequacy, they either can't become too aware of what they are actually saying (or they would notice, for example, that the alleged identity theory is actually just a habitual association of two things), or else they must redefine the problem in terms which manifestly *can* be solved by that toolkit - whereupon we suddenly discover all sorts of new things that the toolkit can do for us. For example, we can survive physical death by being simulated on a computer, since "personal survival is a copying process" anyway.

    I'm not saying that the idea of brain simulations, personlike artificial intelligences, and so on, is itself an illusion. No, all of that would appear to be very possible. But the attempt to understand and engage with these emerging technological realities is being made on the basis of a wrong and hollow philosophy. To really understand the relation of consciousness and computation, we need to understand how these things work together in the brain. Presumably at some level, there's a "conscious part" - such as this hypothesized quantum macrosystem - and then there's an "unconscious coprocessor" coupled to the conscious part - perhaps the whole rest of the nervous system falls into that category. With *that* sort of understanding, the understanding of artificial minds, brain prostheses and so forth should also be much clearer. We could say: this entity - a solid-state quantum computer, perhaps - clearly also has a conscious part and an unconscious part. Whereas another entity over here is just classical computation and hence entirely unconscious. And a third entity might present the ominous spectacle of something which consists of an unconscious computational simulation of the conscious part of a human brain. If you had a classical computational neuroprosthesis which replaced and functionally substituted for the "conscious quantum part" of a human brain, you would finally have created what philosophers call a zombie.

  17. These concepts of mine are crude and still highly speculative. But I have to regard them as a lot more credible than assertions about the mind and the nature of the self which are based on nothing but an ability to think about computation and an inability to think about consciousness.

  18. Mitchell, thanks for your well-articulated reply, but:

    "Consciousness is experienced as a unity that persists in time."

    I actually don't think so. Do you really experience consciousness as a unity with your past version from three years ago? Do you feel their emotions and perceptions? Do you have access to perfect memory? No, scratch that, you wouldn't even need access, it would be unified on an ontological level with all of your current awareness! This is obviously empirically false.

    Note that this is independent of the question of whether the hard problem of consciousness can be resolved by reduction to computational processes, or whether we need the quantum macrosystem that you described - of which I currently have no intuition as to how it would be recognized empirically, or how it would actually add much explanatory power.

    We know that memories, perceptions, emotions, personality traits, and cognitive functionality can all be altered by physically altering the brain. We also know with certainty that all of these aspects can greatly change during a human's lifetime. This means that, independent of the hard problem, we can assert with great confidence that none of these traits unify within a Homo sapiens organism. Maybe there is a coherent way in which they can be said to unify in a momentary self, but if there is, that would be a very temporary entity. I cannot see any entity that connects past and future time slices of a Homo sapiens organism in such a way that I would comfortably conceptualize them together as a conscious unity that persists in time. Introspectively, that assertion is clearly false for me as an individual. I have very different emotions and perceptions than various past versions of me, even though we are the same biological organism and share a continuous use of the same brain. I don't even have good access to my past conscious experiences, except via limited memory and external representation proxies. For example, I can't feel physical pain that I felt before, unless I cause equivalent nociception in my current body, and even then it's just a constructed approximation.

  19. Suppose you're out for a walk in summer-time, and your mobile phone rings. While you're talking on it, you see two people collide on the opposite side of the street, while also being aware of the heat of the summer day.

    Having all those things coexisting in your awareness is a basic example of "the unity of consciousness". Your experience included, at the same time, talking on the phone, feeling the summer heat, and seeing a distant pedestrian accident.

    Every experience is some sort of gestalt - a simultaneous presence of many appearances. Another factor is the degree to which the gestalt is seen as a whole or as a collection of parts. If your visual field contains a jumble of shapes, you may see them as just a jumble of shapes, or you may see them as an intricately interpreted and meaningful visual scene. The raw sense-data is the same, but the experience feels different, so the conceptual interpretation is also somehow constitutive of the conscious experience.

    I don't want to get lost in the ontological details, but I do want to indicate what I mean by "unity of consciousness". I don't mean that all times in your life are one time. The unity of consciousness is, first of all, the unity of the moment. There is unity through time as well, but that is the "unity" of an A and a B which are connected because the A turned into the B. That's a different sort of unity, that we usually express by talking of persistent existents, "things". I certainly argue that the self persists in time, but when I talk of the self as a complex unity, I am thinking especially of an experiential gestalt contained in a single moment of awareness. That's the reality which functionalist theories of consciousness associate with distributed classical computational states in the brain, an association which I insist can never be an identity, because the "mereology" (analysis into parts) of the two sides of the equation doesn't match.

    An individual conscious being can undergo extremely radical changes and still be "a conscious unity that persists in time". Even if we all believed in temporally persisting quantum souls, on a practical level, if a person changes enough, it seems legitimate to ask whether they are really still the same person. If someone lived a life, and was then somehow denuded of all memory, knowledge, and personality, and then the surviving bare substratum of consciousness embarked on a new life, one might reasonably say that there was a new person, even though the ontological substratum was the same.

    What I am really calling attention to, is the existence of persistent ontological substrates. If you deny they are there, then you cannot believe that we actually experience change, for example. Something has to persist in order to actually see the past become the future, as opposed to just having a momentary illusion of time-flow, as the result of a contrast between perceptions and memories - which seems to be a common computationalist explanation for the experience of time.

    Even if we admit that there is some persistence in time, we can play skeptical games and say, what if the self only ever lasts for 1/500th of a second and anything longer than that *is* an illusion, what if the person who wakes up each day is a freshly minted persistent-ontological-self which will genuinely cease to exist once it goes back to sleep at day's end, etc. But there's certainly no reason to rule out the correctness of the naive belief, which is that it's the same self which inhabits the same body for a whole human life. If the alleged quantum neurosystem is there, then the study of its physics should decide between such possibilities.

  20. I don't think we're "aware beings." I think it's all made up by the brain. Sadly, we still suffer, and that's all that's really important. From an antinatalist standpoint, we should call ourselves "suffering beings."

    If this universe is a simulation, then either we and the simulators exist on the same plane of existence, in which case we can find them somehow and kill them (as happened before on the Holodeck in Star Trek), or it's impossible for us to get to them, in which case there's no point in talking about it. The only question worth answering is "how do we kill the fuckers?"

    Of course, if it really is a simulation, one would expect safeguards against this sort of thing, but I don't expect my memories to get wiped any time soon...

  21. Equal Opportunity Troll (January 11, 2012 at 12:45 AM):

    As a big fan of over-combing bias, I've named my penis after Robin Hanson. Hanson should be happy to know that, even if he were to die today, he would still live on, as Little Robin. I'm also going to have the head frozen.

  22. "Or should we wonder at the coincidence of our existence and our a priori unlikeliness?"

    Our existence is not a priori unlikely. You are only asking probability questions about our existence because we already exist. It has a probability of 1.

    Imagine N balls in an urn. Each ball has a unique ID. We draw one ball at random. The ball we select is labeled xoermnory.

    It would be a mistake to say that selecting xoermnory is evidence that N is small, even though the prior probability of drawing xoermnory was 1/N.

    Our existence is not unlikely, and not proof that there are way more of us than we can see. No matter how many of us there were, it would still be 'unlikely' given that reasoning.

    We could be living in a simulation (and if we are, it could be a nested simulation), but I just don't see how we can ever observe any data to inform us of this. Sleeping Beauty shouldn't update, and neither should we.

  23. "Our existence is not a priori unlikely. You are only asking probability questions about our existence because we already exist. It has a probability of 1."

    The probability that we exist, given that we exist, is 1... Duh.

    "Our existence is not unlikely, and not proof that there are way more of us than we can see."

    In a way, you're contradicting yourself here. The more likely our existence is a priori, the more likely it is that there are way more of us than we can see.

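A minimal toy sketch of the urn calculation that the two comments above are arguing over, assuming a uniform prior over two made-up candidate urn sizes; the sizes, the prior, and the helper name normalize are illustrative assumptions, not anything from the thread. It shows how a Bayesian update on the draw depends on whether the evidence is framed as "some ball was drawn" or as "the specific labeled ball xoermnory was drawn."

    # Illustrative sketch only: a two-hypothesis Bayesian toy model of the urn example.
    # The candidate sizes and the uniform prior are assumptions, not values from the thread.

    def normalize(d):
        """Rescale values so they sum to 1."""
        total = sum(d.values())
        return {k: v / total for k, v in d.items()}

    candidate_sizes = [10, 10_000]             # two hypotheses about N
    prior = {n: 0.5 for n in candidate_sizes}  # uniform prior over the hypotheses

    # Framing (a): the evidence is just "some ball was drawn".
    # Its likelihood is 1 under every N, so the posterior equals the prior.
    posterior_a = normalize({n: prior[n] * 1.0 for n in candidate_sizes})

    # Framing (b): the evidence is "the ball labeled xoermnory was drawn",
    # where that label exists in every candidate urn. Its likelihood is 1/N,
    # which mechanically favors the smaller urn.
    posterior_b = normalize({n: prior[n] * (1.0 / n) for n in candidate_sizes})

    print(posterior_a)  # {10: 0.5, 10000: 0.5}        -> no update
    print(posterior_b)  # {10: ~0.999, 10000: ~0.001}  -> strong update toward small N

Whether conditioning on one's own existence is more like framing (a) or framing (b) is the anthropic question the two comments above leave open.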
  24. There is unity through time as well, but that is the "unity" of an A and a B which are connected because the A turned into the B. That's a different sort of unity, that we usually express by talking of persistent existents, "things".
    Hi Mitchell. That's not really unity; that's just ordinary similarity between A and B. A turning into B is gradual similarity between A and B stretched out in space-time. It's not unity at all, even though we may conceptualize it as identity, as an intuitive, simplistic approximation (human brains have to work with limited resources).

    Re: gestalt as unitary awareness. I'm not sure. My intuition isn't clear about whether this can reduce to interconnected neurons talking to each other. My problem is that I'm not sure how it would reduce to quantum phenomena, and how we would empirically test that.

    I insist, however, that this does not give you personal unity of consciousness through time. You don't share mental states with your past or future self some time away, period. Despite this, I think personal identity, and associated ethical concepts like consent and accountability are useful, but mostly as feedback mechanisms to prevent suffering and foster well-being in a practical world of interacting humans.

    Replies
    1. Anonymous, do you understand the extreme implications of your outlook? If you deny the persistence of a self through time, then the connectedness of subjective time, even from second to second, must be an illusion. For example, you think you read the previous sentence, but you didn't really, because it takes several seconds to do so. Instead, there were several fundamentally disconnected "instantaneous selves", each of which briefly and deludedly believed that it existed a moment before, before ceasing to exist and being replaced by its equally temporary successor.

      Confirm this much for me: do you agree that in order to uphold this ontology, you have to deny the veridicality of a perception which is otherwise a persistent and ubiquitous part of your experience, namely, the impression of the continuity of time?

    2. It depends on how you see continuity. Compare a movie, which is provably made of individual frames projected in sequence at a high frequency. Do I deny the fundamental continuity of the movie? In a way, yes. But of course this doesn't deny the value or the information content, or the pattern-abstraction and reflection possibilities that go beyond the individual frame. Is one frame identical to the next? Of course not, trivially not. Is there a meta-frame that encompasses all frames? Not really; there is really just a sequence of frames. I see mental states in a similar way, except that of course the brain has dynamic memory storage and parallel processing functions.

    3. If we are to use this analogy, then the question becomes: Is it the same person in the audience watching each frame as it flashes past, or is there a new person for each new frame?

    4. It depends on what your criteria for sameness are, and for what you need the distinction. For legal or social purposes, the answer is yes. In these contexts, personal identity is established through similarity checks, tracing a continuity if necessary (e.g. for punishment or reward). For instance, we recognize people via face recognition, voice recognition, DNA sequence, memories stored in their brains, circumstantial evidence of continuity etc. We go so far as to distinguish twins, but ultimately the chain of continuity boils down to analyzing local similarities between person moments.

      In a strict philosophical or metaphysical sense, I'd say the answer is no. When you watch a movie, you're not strictly one person watching the full movie; you can't finish any one scene without being slightly and permanently changed.

    5. Whether you are changed by an experience is not the point. Of course you are. The issue is whether subjective time is real. Do you understand the thoroughness with which you are now denying appearance? If there is no persistent self, then there can't be any such thing as "awareness of change while it happens", there is no actual flow of time, and no actual change within consciousness. You have to deny the reality of time and change at every turn, in favor of calling it simply difference between one person-moment and another person-moment. The impression that the person who reaches for a doorknob is the same person who turns the knob a moment later is an illusion. The impression that your experiences today happened to the same person is an illusion. Your memory that the events of the morning happened to you is an illusion, because fundamentally you are just a person-moment, persons exist only by a sort of combination of cultural convention and cognitive illusion, and fundamentally, the morning events happened to a different being.

      The constant reference to third-person judgments like legal and social criteria evades the point. You need to decide whether you are going to take the form of your own consciousness seriously. The preceding ontology of real person-moments and fictitious persons requires that you comprehensively reject the reality of your own experience. But the real illusion is that you are understanding reality any better by adopting such a perspective.

  25. "The impression that the person who reaches for a doorknob is the same person who turns the knob a moment later is an illusion. The impression that your experiences today happened to the same person is an illusion. Your memory that the events of the morning happened to you is an illusion, because fundamentally you are just a person-moment, persons exist only by a sort of combination of cultural convention and cognitive illusion, and fundamentally, the morning events happened to a different being."

    Of course, trivially.

    "You need to decide whether you are going to take the form of your own consciousness seriously."

    I don't know what that means.

    If there is no persistent self, then there can't be any such thing as "awareness of change while it happens", there is no actual flow of time, and no actual change within consciousness [...] The preceding ontology of real person-moments and fictitious persons requires that you comprehensively reject the reality of your own experience.

    That seems like a non sequitur to me. I suspect there is some terminological confusion in how we see concepts such as awareness and experience. To me, these reduce to mental states in the brain, which are all local and transient, but which also encode things like change awareness and time awareness, by detecting differences in patterns and matching them to memory or time-local activation shifts. The idea that this is somehow unreal if we don't have a "persistent self" (whatever that means) seems unjustified to me. But as I said, I suspect that may be confusion over terminology; maybe we simply need more precisely defined categories with clear natural referents.

    At any rate, I'd end the discussion at this point to save time; thanks for the exchange of thoughts.

    Replies
    1. "I don't know what that means."

      It means, are you going to believe that you exist as a being that persists in time - because that is presumably how reality looks to you, in the raw - or are you going to believe that time, self, and continuity are all just approximations to a fundamentally discontinuous reality, because that is your metaphysical ideology?

      Right now, the scientific/technophile part of culture is absolutely overrun with people denying the most basic of realities, either blatantly or stealthily. (I mean denial that time, self, etc exist in any objective sense.) It's a triumph of intellectual construction over the ability to take your own perceptions seriously, and to notice their ontological implications.

      You may feel like my attack is off the mark where you're concerned - you haven't been parading around as an ostentatious eliminativist - but it's pretty clear that you regard your own existence as to some extent a matter of definition or cultural convention. We haven't sat down and explicitly established the nature of your emphasis on how a person is different from one moment to the next. Here I am, stridently proclaiming the reality of a persistent self, and yet I also will concede that people change, in ways both trivial and fundamental.

      What I will assert forcefully and repeatedly - precisely because people do not take it seriously - is that a continuous interval of waking consciousness requires the continuous existence of the conscious being. And conversely, if you believe that consciousness "emerges from" or "supervenes on" some grainy set of flickering discrete events, like neural firings, then either you're a dualist - because the genuine continuity of consciousness has to metaphysically accompany this fundamentally discontinuous series of physical events - or you have to flatly deny that consciousness has genuine continuity. There is no conscious flow, just a set of encapsulated consciousness-moments which each briefly exist, accompanied by the illusion of having emerged from an immediately previous moment.

      Obviously I am not going to endorse this latter ontology; it runs against appearance in the most extreme way possible. Nonetheless, appearance is in conflict with most forms of neuromaterialism. The awareness of this among advocates of materialism varies. Some people seem not to notice at all, mostly because they're busy living their lives like a normal person and the slogans about how mind and brain are related are just part of the wallpaper of their lives. A few people are a little more aware of the tension, but they have an answer that they got somewhere. And finally, at the outer limits, are the thinkers who bite the bullet and say on scientific grounds that, yes, large swathes of experience are illusion.

    2. I had better say that, yes, I agree that consciousness can contain illusion and even pervasive illusion. We grow used to filling out, in our imaginations, the unseen qualities of objects. It can be arresting to be reminded of how little we actually know and how little we directly perceive. But the tension between neuromaterialism and appearance - that is, the tension between neuromaterialism and the reality about consciousness - does not stem from this. It stems from the picture of the world resulting from scientific investigation.

      Science doesn't see a soul or even a soul-gland, it sees Turing's bowl of skull-porridge, made of little biochemical switches, which are in turn made of fluctuations in quantum fields. On top of this picture from physics and biology, increasingly we also have ideas coming from computation, which have a slightly different character, since computer programs *are* artefacts of human design, to which we impute meaning just as we impute meaning to the shapes in a book that we call words. In a way, this is a complementary aspect of the problem. The biophysical ontology does not seem to contain various things which experience tells us exist, and the projection of meaning onto computational states is one of the ways that we create a stealth dualism in order to patch up the ontological deficiency of the scientific world-picture.

      What I'm here to tell you is that this ontological deficiency will pass. It is entirely an artefact of the current scientific and cultural level. The attempt to reconcile the current scientific world-picture with the facts of consciousness - generally by taking the correctness of the current scientific world-picture as axiomatic - is largely pointless, because that world-picture is going to change on the ontological level, starting with fundamental physics. New mathematics alone won't solve the problems, because mathematics in itself still doesn't give you the existence of experience as such (qualia, if I must use the Q-word); but new mathematics in physics will give us a new formal structural model of the world, in which it is finally possible to identify certain entities as actually being *us* and the elements of our experiences.

      At this point in time, to say all this is still more an act of prophecy than a scientific argument, but it is the rational thing to expect, once you comprehend the situation and conceive of this future alternative.

    3. And now to finally address the post itself - how can you go about dying irreversibly (all your backups erased, etc), if you're "in a simulation" and *want* to die?

      First I have to observe that the spectrum of possibilities here is incredibly broad. Maybe the fundamental thing to expect is that the outside of the simulation is nothing like the inside. Rather than imagining that we are in an "ancestor simulation", try this on for size:

      In the real world, our brains are a sort of junk food for busy 8-dimensional squids on their way to work. In the real world, our ancestors were a type of dumb hyper-fish that the hyper-squids preyed upon. As the hyper-squids developed technology, they domesticated the hyper-fish, and even learnt to culture the tastiest part of the hyper-fish - its little peabrain - as an independently grown organ. Still later, it was found that the tastiness was enhanced if the peabrain developed in a certain way, and our whole world is just a shoddy networked virtual reality cooked up to tenderize our collective brains in the right way. You are actually a brain on a shishkebab in a vending machine in a hyperspace subway, with a few electrodes attached so you can go on having your fake experiences, waiting for a hungry hyper-squid to insert a token in the machine, grab you and eat you.

      Well, maybe that *does* sound a lot like the world that we know. But at least it gets us thinking in new directions, right? So if people are going to think about how to live (or how to die) in a simulation, we should also think explicitly about possibilities in which we are that helpless - just as a corrective to the usual proposals.

  26. Maybe we are in simulations and in a real world at the same time, and the simulations are constantly trying to become part of the real world by using sentient beings as belief vectors, after which they become integrated into the ecosystem of simulations.

    So there is a base simulation, which can be compared to planet Earth. That is our cognitive apparatus with its forms of space, time, and causality. Then, there are native simulations that evolved off of that. Simulations like the notion "I", which is supposed to refer to a separate being. Then, perhaps there are exogenous (?) simulations arising from outside our cognitive-apparatus base-simulation. They can be compared to extraterrestrial platonic idea spores that descend from another realm to take root in our reality.

    Take "democracy" for instance. Not a reality, we are in a simulation of it, nevertheless the simulation is attempting to become real by becoming part of an "historical" reality of progress.

    The biggest picture possible that we can have is the simulation based on our cognitive apparatus itself (space, time, causality) - and so science is the biggest simulation that we have. This gives it the strongest illusion of reality, but there are other simulations, outside its boundaries, that are trying to become real too. We step into one of the simulations when we develop a belief in it.

    Take "freedom". There's no such thing, from the big picture point of view, but it wants to become real by co-opting us for belief so that we can actualize an idea of "freedom". Then, by the time the simulation has been made real, it looks like part of the natural world, and can be explained as such. But really it arose from outside of our world as a platonic idea spore.

    Perhaps there is a constant creative explosion of simulations that are trying to become real. There are just millions of them, including belief systems and words themselves.

    Since our own cognitive capacities create a simulation, there is no stepping outside of the simulations, only stepping between simulations for pragmatic reasons.

  27. Julian Baggini on personal identity:

    http://www.ted.com/talks/julian_baggini_is_there_a_real_you.html

    Replies
    1. The problem with Baggini's relationism is that it leaves out the perceiving subject. A person is supposed to be a collection of memories, dispositions, perceptions, etc., held together by ties of an unspecified nature, and the collection is like the Ship of Theseus: you can incrementally replace all the original parts and still have a ship. But the thing that unites all the objects of awareness coexisting in a single conscious experience is the subject that perceives them! And the persistence of this subject through time is what makes possible the flow of consciousness. As his lecture makes quite clear, despite the shout-outs to Buddhism, it's the "neural atomism" of current neuroscience which motivates the disbelief in a substantial self. (Buddhism gets there through phenomenology, but for the same reason it also tends to believe in a pure awareness which is the persistent nexus of the web of transient mental relations constituting the self.) That's exactly what the quantum factor calls into question - some of the possible ontologies of the quantum world do contain complex unities that aren't just mereological sums of elementary particles. What remains to be shown is that such a level of physical reality is functionally relevant somewhere in the brain, and not just the coarser level of action potentials and neuronal architecture.

    2. Indeed. And until we have a proper coherent model of that functionally relevant physical reality with appropriate empirical backing, explicable in an intuitive fashion to which I can relate, I'll run with the consciousness-as-composition working model when I make decisions, including those that have ethical consequences ("intrinsic value" of a person essence etc.).

  28. "by being friends with quirky AI geeks?"

    No idea what to make of this. I always found the AI community kind of lacking, anti-intellectual even, mostly because of the unbelievably naive "utopian" (transhumanism et al.) futures they were always fixated on. This is as technologically deterministic as it gets, colliding harshly with my classical European education. A prime example of such a simpleton might be Paul Graham - quite an unpleasant character, especially because, more often than not, he overestimates his knowledge and education and will write about topics he does not grasp at all. He should remain silent instead. From an educational standpoint, he is no better than an Erik Naggum, though Naggum might have been a bit more barbaric. (Hofstadter shares this view, and this is no wonder, knowing how cultured he is. Soon, I am afraid, a new cultural Dark Age will begin ... but when this happens, I want society to have an efficient exit strategy in place.)

