Thursday, May 11, 2023

Two ways of evaluating rationality

Suppose that I am reliably informed that I am about to be cloned, with all my memories and personality. Tomorrow, I and my clone will both have apparent memories of having been so informed, though of course my clone will be wrong—my clone will not have been informed of the impending cloning, since he didn’t exist prior to the cloning.

After the cloning, what probability should I assign to the hypothesis that I am the original Alexander Pruss? It seems obvious that it should be 1/2. My evidence is the same as my clone’s, and exactly one of us is right, and so at this time it seems rational to assign 1/2.

But things look different from the forward-looking point of view. Suppose that after being informed that I am about to be cloned, and before the cloning is done, I have the ability to adopt any future epistemic strategy I wish, including the ability to unshakeably force myself to think that I am the original Alexander Pruss. The catch, of course, is that my clone will unshakeably think it’s the original, too. This may cause various inconveniences to me, and is unfortunate for the clone, as one of its central beliefs will be wrong. But when one considers what is epistemically rational, one only considers what is epistemically good for oneself. And it is clear that I will have more of the truth if I and the clone each think ourselves to be the original person than if we are both sceptical. It thus seems that I ought to adopt the strategy of thinking myself to be the original, come what may.
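
To make the forward-looking comparison concrete, here is a rough sketch that assumes the Brier score as the measure of accuracy (the particular scoring rule is only illustrative; the point does not depend on it). Let r be the credence that, under the adopted policy, I will assign to "I am the original." Since that proposition is true in my case, my inaccuracy will be

$$(1-r)^2,$$

which is minimized by pushing r all the way to 1. So if all that counts is my own epistemic good, the policy of unshakeable belief beats suspending judgment at r = 1/2.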

Of course, after the cloning, I and my clone will both have apparent memories of having adopted that strategy. We may be impressed by the argument that we should assign probability 1/2, and each of us may struggle to suspend judgment on being the original, but in the end we will be stuck with the unshakeable belief—true in my case and false in his.

If my suggestions above are right, then a lesson of the story is that we need to be very cautious about inferring what is rational to think now from what was a rational policy to have adopted. This should, for instance, make one cautious about arguments for Bayesian conditionalization on the grounds that such conditionalization is the optimal policy to adopt.

18 comments:

  1. The reinforced belief is deliberately incorrect even if it is adaptive. It is not real knowledge; it is the Huckleberry Finn-ish faith about which Mark Twain quips, “Having faith is believing in something you just know ain't true.”

  2. "But when one considers what is epistemically rational, one only considers what is epistemically good for oneself." The setup is effectively Parfit’s teletransporter (branch line case). One response to this is that you should care equally about both your continuing self and your cloned self. Caring equally about their epistemic accuracies would lead you to credence 1/2 (if you considered the equal-weight sum of the accuracy scores).
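
    To spell out the arithmetic, using the Brier score as an illustrative accuracy measure (any strictly proper score gives the same verdict): if both persons assign credence r to "I am the original," then the equal-weight sum of their inaccuracies is

    $$(1-r)^2 + r^2,$$

    which is minimized at r = 1/2. Caring only about your continuing self would instead minimize (1-r)^2 alone, pushing r to 1.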

  3. "But when one considers what is epistemically rational, one only considers what is epistemically good for oneself."

    This contention needs more support.

    The terms 'rational' and 'self-interest' are not synonymous.

  4. Well, to me the practical matter here is not what either you or the clone thinks about who or what is the original. Since the original and the clone have the same memories, which of you, if either, is the original matters little. The researchers who oversaw the cloning process are the only ones who know for sure. Contrariwise, if those researchers now have three or more of 'you'... change is so unpredictable.

  5. Ian S:

    I think Parfit is just wrong about this. The mere fact that somebody has my memories is very little reason to care for them. Imagine a creep who thinks you're a super cool person and snoops on you all the time. By reading your email and recording your conversations he comes to believe much of what you believe and remember, and slowly he becomes more and more like you. As a final coup de grâce, he has you kidnapped, the few memories you had that he didn't yet share are implanted in his brain, his character is shifted to be perfectly like yours, and the fading memories he had of his life as separate from yours are removed. I don't think you need to care what will happen to this creep any more than you do about a total stranger with the same values as yours (of course, I think you *do* need to care about a total stranger, but differently from how you care about yourself).

    Mr Rushlau:

    That's true for practical rationality, but I think epistemic rationality does only consider what is epistemically good for you. The reason is that care for another's epistemic state is a part of practical or moral rationality, rather than of epistemic rationality as such. Suppose that Alice is a brilliant mathematician who has a choice between going for the rest of her life to a volcanic desert island with a large stack of mathematics journals, knowing that a volcanic eruption will bury all her mathematical discoveries before she can communicate them to anyone else, and giving all of her waking hours to teaching algebra to underprivileged middle schoolers. If she does the latter, she benefits others epistemically with respect to mathematics, though her doing so is *morally* and not epistemically good with respect to mathematics. It is the desert island that is epistemically good with respect to mathematics.

  6. I think the Parfit and personal identity stuff is a red herring. Cloning makes the story simpler, but we can remove it.

    Suppose there is a hard drive with randomly generated data, and tomorrow that data will be uploaded into a new human body sufficiently similar to yours that you couldn't tell the difference. By coincidence, the data on the hard drive exactly matches your mind. Additionally, you have a choice of what firm epistemic policy to add to your own brain and to the data on the hard drive (it must be the same policy for both). Furthermore, after the operation is done, both you and the new person will be reliably informed of all the above facts, without either of you being told which of the two persons you are.

    Done this way, there is no psychological continuity between you and the new person. It's just a coincidence that that person has the same brain states as you do (apart from the epistemic policy, which you are responsible for).

  7. An interesting question: After the upload, and after being informed of all this, do you *know* you are the older person, if you implant the policy of thinking you are the older person? I think on some reliabilist views you do.

  8. I do not know what a reliabilist view entails. As to whether the age of the person cloned makes a difference in which one 'knows' more: insofar as they both know the same things (assuming only that the cloning process is complete and that both do, in fact, have the same knowledge and experience, which we may not necessarily know), it seems presumptuous to claim that the original knows more. I will concede, however, that you know far more about philosophy than I, and probably more mind science generally as well. Think of my interesting question as a thought experiment, albeit incompletely formed. I might attempt to complete it, but such was not my intent when I posed the question(s). Thank you, sir.

  9. ‘By coincidence, the data on the hard drive exactly matches your mind.’ Are you told this before the operation or not?

    If you are, I’m not seeing that this setup is relevantly different from the cloning version. You know that you will in effect be cloned. That this was made possible by an extraordinary coincidence is irrelevant.

    If not, you have no reason to think that the new person will be anything like you. (In fact, he will be. But you don’t know that, so it can’t affect your judgements.) The data is supposed to be random. So, as far as you know, the new person will be far more likely to think of himself as John Smith, Bill Bloggs, or whatever than as you. So it would make no sense to demand that both the continuing you and the new person have the same credence that they are the original you.

  10. I lost the transition between cloning and artificial intelligence when a hard drive was introduced. Cloning, as I understood it, is (or was) a bioscience concept. That now famous (or forgotten?) sheep named Dolly? It seems the discussion has shifted from cloning to something more like transhumanism. If such an apparent shift has occurred, then we have a different can of transistors, it seems to me. If the discussion is now around AI issues, cloning is no longer in the discussion. Or, if it is, can someone explain to me the transition from cloning to AI to transhumanism? Failing that (if AI and transhumanism are transposable), how does THAT work? The swamp is getting deep; the sand, quickening.

  11. Ian:

    I think the relevant difference is that in the cloning scenario, the (fake) memories of the twin are caused by your memories. That causal connection is something that Parfit and others require for psychological continuity. In my new case, it's just a coincidence that the two sets of memories match.

  12. "The reason is that care for another's epistemic state is a part of practical or moral rationality, rather than of epistemic rationality as such... It is the desert island that is epistemically good with respect to mathematics."

    Any progress in their studies that the mathematician achieves on the desert island is, in your scenario, lost to the lava. These advances are known solely to the lonely mathematician. Perhaps they are gratified by their accomplishments. I would not begrudge them that. But I'd suggest this idealized situation, as constructed (the pure pursuit of mathematical truth for its own sake), highlights the lengths to which one must go to preserve the divide between epistemic and moral rationality.

    I'm inclined to see each of us, individuals within the larger social universe, as agents whose choices cannot be segregated into the epistemological and the moral. If I determine that something is 'epistemologically good for me', it is only in reference to moral considerations, and so it brings moral connotations and consequences.

    Back to the island.

    Whatever advances our intrepid mathematician accomplishes do not, cannot, advance the field, and do not contribute to the growth of human knowledge generally, because they are not shared. No other mathematician, nor the human world at large, gains from their insights. Of course, the mathematician is free to make such a choice, but this choice would be to forsake any prospect of communal endeavor and communal benefit. Knowledge has greater value (or should I say, has any intrinsic value at all) when it is communicated, as part of the shared pursuit of enhancing knowledge in the aggregate. The mathematician may have arrived at any number of heretofore undiscovered truths, but these truths evaporate when the mathematician and their notebooks are consumed by magma. The mathematician is, in essence, simply entertaining themselves while awaiting their fate. I think such a choice can be characterized as an abdication of moral responsibility, no matter how self-satisfied with the choice the mathematician may be. The moral responsibility to which I refer is an inherent aspect of epistemic pursuits, because knowledge production entails knowledge dissemination, if rational epistemic pursuits are for the purpose of increasing the scope of what is seen to be true and distinguishing it from ignorance and falsity. Which is what I think rational epistemic pursuits are precisely about.

  13. It’s coincidence that the memories matched, but it’s not coincidence that this was recognized and that you were told it. If they hadn’t matched, the setup would have been aborted, and you wouldn’t be in this situation.

    Cause is a tricky concept. Suppose the hard drive had been set up long ago, while your memories were still forming. Then, counterfactually, if your memories hadn’t matched the hard drive, the data load into the new body would not have proceeded. So, at least in one sense, your memories were at least a partial cause of the new person :-).

    Another line. Suppose a wildly extravagant cloner (call it the Cloner of Babel, with a nod to Borges) worked as follows: It had every possible human body in cold storage and every possible collection of memories in a vast library of hard drives. When it had to clone a person, it would check their body and memories in minute and exhaustive detail, find the body and hard drive that matched [this would not be easy – see Borges], unfreeze the body and load the hard drive data. Would this be relevantly different from a more conventional atom-by-atom and memory-by-memory cloner, from a causal point of view? Would you want to say that with the conventional cloner, there would be the right sort of causal connection and psychological continuity but with the Cloner of Babel, there wouldn’t?

    Now suppose that all but one of the bodies and all but one of the hard drives had been destroyed, and that, by amazing coincidence, the surviving pair were found to match yours, and the body was unfrozen and the data loaded as above. Would this be relevantly different from the above case?

    In all this, I’m granting for the sake of argument the sort of dualism implicit in the body/memories approach. This can’t be quite right – at a minimum, some bodies will be incompatible with some sets of memories. I’m also taking ‘memories’ to include not just memories, but all relevant mental qualities. In your case of the jerk who takes on my memories, that would not be enough for me to consider him as equivalent to my future self. He would have to take on all my mental qualities (including my degree of jerkitude).

    Replies
    1. "Then, counterfactually, if your memories hadn’t matched the hard drive, the data load into the new body would not have proceeded."

      I was assuming that the data load would have happened even if it didn't match your data.

  14. Fine – maybe I was trying to be a bit too clever with the story about counterfactual cause.

    Still, the data must have been checked and found to match yours, and you must have been informed of this before being asked to choose your policy. (Otherwise, the setup would make no sense – without that knowledge, you would have had no reason to think the new person would be anything like you.)

    I’m not seeing that it matters whether this came about by cloning or by chance. Parfit explicitly says (Ch13, S96) that he thinks that an unreliable teletransporter that happened to have worked for the occasion would still give the required relation – “only the effect matters”. But he also requires a cause – “any cause”. There seems to be a fair amount of debate about what he meant and, Parfit aside, what is reasonable. I’m happy to leave this to better informed people. But it’s at least not obvious to me that you should care for your continuing self more than for the new version.

  15. Yes, you are informed, but they would have put the same data into a body even if it didn't match your data.

    Another variant is this. An evil scientist found an exact duplicate of you in a distant galaxy, kidnapped him while asleep, and brought him to earth unconscious. His memories match yours up to the last five minutes. Soon you will be made unconscious and your last five minutes of memory will be erased, but before that you get to choose an epistemic policy for yourself and your twin. After awakening, you and your twin will be informed of the above facts.

    In this case, your twin and you have had wholly independent lives, and it is a coincidence that they match. The match over the last five minutes is non-coincidental only because your last five minutes of memories get erased, and there is a non-coincidental match in epistemic policies. But the erasure and the epistemic policy are not enough to establish the kinds of connections that make it prudentially rational to care for the other in the manner in which one cares for oneself.

  16. What ‘kinds of connections’ are ‘enough’ is precisely the point at issue. If you thought that a deliberately created clone would have the right kind of connection, you might think that a ‘found’ twin (verified as such) would too. You might think that matching bodies and memories would suffice, however they had come about.

    But (with apologies) I now think you are right that Parfit-style identity issues are a red herring. Rather, this looks like a case where loss of information leads to an apparent violation of reflection.

    If you care only for yourself (taken in the usual sense), and you can adopt a policy, then the policy of credence 1 is clearly right, both epistemically and pragmatically. Of course, it’s not good for your twin, but you don’t care about that. If you can’t adopt a policy, then, after the event, credence 1/2 is the best you can do. There is no violation of reflection, because you have lost information – before the event, you knew that you were the original, after it, you don’t. (If you think you should care equally about yourself and the clone then credence 1/2 is right, both as a policy and after the event.)

    I don’t see this example as relevant to Bayesian conditionalization. But I’m also not sure about policy-based arguments for it. This suggestion looks more promising: with merely finitely additive probabilities, you should only conditionalize on a finite partition – it matters not just what you are told, but what else you might have been told.
