Tuesday, January 27, 2009

Some offers

1. Consider the offer: "If you give me a sound deductive argument that I'll give you $1000, then I'll give you $1000." It feels like something has been risked in making the offer. But surely nothing has been risked, neither one's integrity nor one's money: a sound argument is a valid argument with true premises, so if there is a sound argument that I'll give you $1000, its conclusion is true, and the $1000 was coming to you anyway.

Or is there really a risk that there is a sound argument for a contradiction, and hence for any conclusion?
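
The worry trades on the principle of explosion: from a contradiction, every proposition follows, so a sound argument for a contradiction would yield a sound argument that I'll give you $1000. Here is a minimal sketch of that logical point in Lean (my illustration, not part of the post; the theorem name is mine):

```lean
-- Principle of explosion: from a contradiction, any proposition follows.
-- A sound argument for P ∧ ¬P would thus extend to a sound argument for
-- any conclusion Q, including the conclusion that the $1000 is owed.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```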

2. Suppose Fred is a super-smart being who, while very malicious, exhibits perfect integrity (never lies, never cheats, never breaks promises) and is a perfect judge of argument validity. Fred offers me the following deal: if he can find a valid argument, from no premises at all, for a self-contradictory conclusion, he will torment me for eternity; otherwise, he'll give me $1. Should I go for the deal? Surely I should! But it seems too risky, doesn't it?
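
One way to make the felt riskiness explicit (my gloss, with invented symbols, not part of the post): let p be my credence that a premise-free valid argument for a contradiction exists, let u be the modest utility of the dollar, and let T be the vast disutility of eternal torment. A simple expected-utility reckoning gives

\[
\mathrm{EU}(\text{accept}) = (1-p)\,u - p\,T,
\]

so any nonzero p, however tiny, swamps the dollar. The puzzle is that my professed credence is p = 0, and yet I deliberate as if p > 0.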

3. Suppose Kathy is a super-smart being who, while very malicious, exhibits perfect integrity and is omniscient about what is better than what, for which persons or classes of persons. Kathy offers me the following deal: if horrible eternal pain is in every respect the best thing that could happen to anyone, then she will cause me to suffer horrible pain for eternity; otherwise, she'll give me $1. Shouldn't I go for this? After all, I either get a dollar or I get that which is the best possible thing that could happen to anyone.

Do these cases show that we're not psychologically as sure of some things as we say we are? Or do they merely show that we're not very good at counterpossible reasoning or at the use of conditionals?

[The first version of this post had screwed-up formatting, and Larry Niven pointed that out in a comment. I deleted that version, and with it the said comment. My thanks to Larry!]

2 comments:

  1. Hmm. I could definitely use that dollar...

  2. Ah! Much better. To answer your question, I'd go with the latter: these sound very much like questions an experimental economist might ask, and people typically answer such questions "wrongly," even when the hypothetical risk is much smaller. (For instance: would you rather have $5 now or $100 a year from now? Way too many people take the quick money.) I say "wrongly" because, in a sense, it is the right response. From a basic evolutionary perspective, individuals who prefer the safer-seeming choice (in this case, refusing the offers) will tend to survive longer than those who don't, at least so long as the safer-seeming choice really is safer. If this is indeed the source of our apprehension in these cases (and I think it is), it makes sense that the safer-seeming choice is typically the actually safer choice only in non-esoteric questions: by forming societies complex enough to support esoteric scenarios like these, we've sufficiently insulated ourselves from the environmental pressures that would have made our intuitions good guides there. (In other words, within societies, making the risky choice in esoteric matters tends to be far less harmful to one's reproductive chances than making the risky choice outside of societies, so we never had a real chance to evolve the proper intuitive response to esoteric questions.)
