Tuesday, June 30, 2015

It is more blessed to give than to receive

On the one hand, Jesus tells us that it is more blessed to give than to receive. On the other hand, Socrates tells us in the Gorgias:

And what sort of a person am I? One of those who are happy to be refuted if they make a false statement, happy also to refute anyone else who may do the same, yet not less happy to be refuted than to refute. For I think the former a greater benefit, in proportion as it is of greater benefit to be oneself delivered from the greatest harm than to deliver another. No worse harm, it is true, can befall a man than to hold wrong opinions on the matters now under discussion between us.
We thus have two plausible and apparently conflicting claims: it is better to give than to receive and yet it is better to receive a refutation than to give it. If the conflict is real, then of course we go with Jesus. But is the conflict real? After all, Jesus' saying has the form of a proverb, and we know that proverbs, biblical and otherwise, are not meant to have universal applicability. Wisdom is needed to figure out which proverb applies when.

Jesus' saying seems to me to apply to cases where the giving is a sacrifice, either of the thing given or of one's time and energy in giving it. Socrates, however, is clearly not talking of that sort of giving. Socrates obviously finds it fun to give refutations. There is no sacrifice for him in refuting another. Well, at least in the Gorgias. Eventually, his practice of refuting others costs him his life. At that point it seems that Jesus' proverb applies: giving refutation to others becomes a sacrifice, and it is better for one to make that sacrifice than to be on the receiving end of another's sacrifice.

Maybe a similar thing can be said about another case. We tend to feel that it is better to work on one's own virtue, and to receive virtue from others, than to work on the virtue of others. I think this is because working on one's own virtue tends to be costlier personally, tends to be more of a sacrifice. It is often easier to preach than to do. (And while preaching without doing is often ineffective, that's not universally true. There are many people whose lives have been turned to virtue by the preaching of people whose own lives turned out to be a fraud. One must, though, remember that the preaching is not all that was going on--there was also grace.) So giving virtue to others need not be a sacrificial gift of the sort that Jesus is talking about on my interpretation. But it also can be, in which case the proverb seems to apply.

Monday, June 29, 2015

Hiddenness and heroism

Heroism that involves facing death would not be so heroic if the hero felt completely certain of a good afterlife. But given the close rational connection between the existence of God and posthumous rewards and punishments, a connection that is also emotionally ingrained in us, a certain hiddenness of God appears necessary for facing death to be heroic for us. The hiddenness would only need to be emotional: God's existence (or love or justice, I guess) would need to feel uncertain. And of course what goes for heroically facing death also applies to more minor sacrifices and obedience to the moral law.

Such a feeling of uncertainty, however, is compatible with a rational moral certainty. One can, after all, have a feeling of uncertainty and the associated fear while stepping back into the abyss on an indoor climbing wall, despite moral certainty in the safety of the equipment and the competence of the belayer. So this need for emotional hiddenness doesn't solve the Schellenberg problem of hiddenness, which is about belief, not feelings of conviction. I wonder if it helps in any way? After all, one way to ensure emotional hiddenness is by having doxastic hiddenness.

Thursday, June 25, 2015

Functionalism about spatial properties

Functionalism about spatial properties says that what makes spatial properties be spatial is the kind of role they play in interaction with laws and regularities in the world. This allows the concept of spatial properties like shape or distance to be independent of the precise physics of the world. For instance, one might say that distance is a relation largely characterized by the regularity that causal interactions tend to weaken as distance increases.

I find functionalism about spatial properties attractive, but it just occurred to me that if one is not careful, it might turn out that the virtual spatial relations in virtual worlds end up counting as real spatial relations.

Tuesday, June 23, 2015

The psychological theory of personal identity

Let's suppose that personal identity over time is secured by continuation of psychological states. Now imagine Jim and Sally are robots who are persons (if you don't think robots could be persons, just suspend disbelief for a while until I get back to the issue) and have almost all of their psychological states on their hard drives. According to the psychological theory, if you swap Jim and Sally's hard drives, Jim and Sally will go with the hard drives, rather than with the rest of their hardware. But here is something odd. When you unplug Jim and Sally's hard drives during the swap, either Jim and Sally continue existing or they don't. If they do continue existing, then by the psychological theory, they are surely located where the hard drives are, since that's where the memories are. They are basically reduced to hard drives.

There is a case to be made that they do continue existing, at least given the psychological theory of personal identity. First: To kill an innocent person, even temporarily (after all, many people, including me, believe that all our deaths are temporary!), is seriously wrong. But swapping hard drives doesn't seem problematic in this way. Second: There is some reason to think temporally gappy existence is impossible, and if gappy existence is impossible, then if Jim and Sally exist before and after the swap, they exist during it. Third (and specifically to the psychological theory): It is plausible that if the identity of a person across time is secured by a part of the person, then the person can exist reduced to that part. Thus, if the identity of a person comes from the soul, then the person can survive reduced to a soul.

So we have this: Given the psychological theory, Jim and Sally exist reduced to hard drives. But that's absurd! For we can replace hard drives by cruder mechanisms. We can suppose a computer where memory is constituted by writing in a large book. It is absurd to think a person can exist reduced to a book. So we should reject the psychological theory.

Well, that assumed that robots could be persons. Maybe they can't. One might also object that our memories do not sit on a convenient isolated piece of hardware in the brain. Indeed, that is true. But surely agents could have evolved whose memories are stored on a convenient isolated piece of hardware, and such agents could be persons. And the argument could be run for them.

Wednesday, June 17, 2015

Non-conglomerability

This result is probably known, and probably not optimal. A conditional probability function P is conglomerable provided that for any partition {Hi} (perhaps infinite and maybe even uncountable) of the state space if P(A|Hi)≥r for all i, then P(A)≥r.

Theorem. Assume the Axiom of Choice. Suppose P is a full conditional probability function (i.e., Popper function) on an uncountable space such that:

  1. all singletons are measurable
  2. the function satisfies this regularity condition for all elements x and y: P({x}|{x,y})>0
  3. there is a partition of the probability space into two disjoint subsets A and B with the same cardinality such that P(A)>0 and P(B)>0
Then P is not conglomerable.

Conditions (2) and (3) are going to be intuitively satisfied for plausible continuous probabilities, like uniform and Gaussian ones. So in those cases there is no hope for a conglomerable conditional probability.

Sketch of proof: Let Q be a hyperreal-valued unconditional probability corresponding to P, so that P(X|Y)=Q(XY)/Q(Y). The regularity condition (2) implies that there is a hyperreal α such that Q(F)/α is finite, non-zero and non-infinitesimal for each finite set F. (Just let α=Q({x0}) for any fixed x0.) Let R(F) be the standard part of Q(F)/α for any finite set F. Then P(F|G)=R(FG)/R(G) for any finite sets F and G. Moreover, R is finitely additive and non-zero on every singleton.

Since A×A has the same cardinality as A, and A has the same cardinality as B, there is a function f from B to the subsets of A with the property that f(b) and f(c) are disjoint whenever b and c are distinct and every f(b) is uncountable. Choose a finite number c such that P(A)&lt;c/(1+c). For each b in B, choose a finite subset Fb of f(b) such that R(Fb)>cR({b}). Such a finite subset exists since R is finitely additive, non-zero on every singleton, and the sum of uncountably many positive numbers is always infinite. Let H be the union of the Fb as b ranges over B. Then A∖H has at most the cardinality of B. Let h be a one-to-one function from A∖H to B. For each b in B, let Gb=Fb if there is no a in A∖H such that h(a)=b; otherwise, let Gb=Fb∪{a} for that a. Let Hb={b}∪Gb. Then R(Gb)>cR({b}), and so R(Gb)/R(Hb)>c/(1+c). Hence P(A|Hb)=P(Gb|Hb)=R(Gb)/R(Hb)>c/(1+c). But the Hb form a partition of our probability space, and P(A)&lt;c/(1+c), so we have a violation of conglomerability.

'Ought' implies 'can'?

I am sick. Here's a heartening argument: Next week I have some teaching and I can't teach while sick. I ought to do the teaching. Ought implies can. So I will be well by next week.

One gap in the argument is that my doctor may tell me I'm not sufficiently infectious to be unable to teach. But that can't be all that's wrong with the argument.

Tuesday, June 16, 2015

Mixing and matching conditional probabilities

Given an unconditional probability function P, one can always (at least given the Axiom of Choice) extend to a full conditional probability function, or a Popper function, that allows one to assign values to P(A|B) even when P(B)=0. Typically, the extension is not unique. In fact, it turns out that there is no logical connection between the conditional probabilities P(A|B) for B a null set (a set of zero probability) and the unconditional probabilities.

What do I mean by saying there is no logical connection? Well, it turns out we can mix and match the null-probability-condition parts of Popper functions with the other parts. Suppose that P and Q are two Popper functions defined on the same sets. Then we can define a frankenfunction by letting R(A|B)=P(A|B) when B is not a null set and R(A|B)=Q(A|B) when it is a null set. And this frankenfunction is a perfectly fine Popper function.
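The mixing claim can be checked by brute force on a tiny space. Below is a minimal sketch (the three-point space and the particular measures are my own illustrative choices): each Popper function is built from a two-level lexicographic pair of measures, the frankenfunction R takes P's non-null-condition part and Q's null-condition part, and the multiplication and additivity axioms are verified exhaustively.

```python
from itertools import combinations

OMEGA = frozenset({0, 1, 2})

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def measure(m, A):
    return sum(m[x] for x in A)

def lex_popper(m1, m2):
    """Popper function from a two-level lexicographic pair of measures:
    m1 is the unconditional measure (it may vanish on some sets), and the
    everywhere-positive m2 settles conditioning on m1-null sets."""
    def P(A, B):
        if measure(m1, B) > 0:
            return measure(m1, A & B) / measure(m1, B)
        if measure(m2, B) > 0:
            return measure(m2, A & B) / measure(m2, B)
        return 1.0  # abnormal condition (B empty): conventionally 1
    return P

# Two Popper functions with the same unconditional part m1 (hence the same
# null sets) but different behavior conditional on null sets.
m1 = {0: 1.0, 1: 0.0, 2: 0.0}              # {1}, {2}, {1, 2} are null
P = lex_popper(m1, {0: 1, 1: 1, 2: 1})
Q = lex_popper(m1, {0: 1, 1: 3, 2: 6})

def R(A, B):  # the frankenfunction
    return Q(A, B) if measure(m1, B) == 0 else P(A, B)

# Exhaustively verify the Popper-function axioms for P, Q and R.
for F in (P, Q, R):
    for A in subsets(OMEGA):
        for B in subsets(OMEGA):
            if B:  # finite additivity on normal (nonempty) conditions
                assert abs(F(A, B) + F(OMEGA - A, B) - 1.0) < 1e-12
            for C in subsets(OMEGA):
                # multiplication axiom: P(AB|C) = P(A|BC) P(B|C)
                assert abs(F(A & B, C) - F(A, B & C) * F(B, C)) < 1e-12
```

Here R really does disagree with P on a null condition (R({1}|{1,2}) = 1/3 while P({1}|{1,2}) = 1/2), while agreeing with P whenever the condition has positive probability.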

This is a problem. There is a complete disconnect between the null-probability-condition part of the Popper function and the non-null-probability-condition part. (Another manifestation of this problem is the fact that in many cases we lack conglomerability.)

Monday, June 15, 2015

Responsibility and rationality

Whenever I act, one of following is true:

  1. My action is uncaused.
  2. My action is caused, but not by a reason. (I will take this to mean: not caused by a reason, in the right way.)
  3. My action is caused and by a reason.
But no one is responsible for an uncaused event. And an action that is caused, but not by a reason, is not a rational action. Hence in all cases where I act both rationally and with responsibility, I act on a reason.

Saturday, June 13, 2015

Generating looping color gradients with a limited palette

I was trying to generate a Mandelbrot set fractal in Minecraft using a Python script. But the palette available is very limited, which got me thinking about how to generate a pleasing color progression out of a limited palette, and then I had a neat idea: just solve the Traveling Salesman Problem for the palette in RGB space. This generates a color progression that can be nicely used for looping the colors. The first version uses the larger palette of wool+hardened clay+redstone (and a version of this simulated annealing code), while the second drops the hardened clay (and now we can get an exact solution using this code).
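The linked scripts do the real work; here is a minimal, self-contained sketch of the idea (the RGB values below are rough approximations of Minecraft wool colors, not the exact game palette, and the annealing schedule is arbitrary): treat each palette color as a city in RGB space and look for a short closed tour, so that consecutive colors in the loop are as close as possible.

```python
import math
import random

# Hypothetical small palette of (r, g, b) tuples, standing in for the
# real wool/clay palette.
PALETTE = [(221, 221, 221), (219, 125, 62), (179, 80, 188), (107, 138, 201),
           (177, 166, 39), (65, 174, 56), (208, 132, 153), (64, 64, 64),
           (154, 161, 161), (46, 110, 137), (126, 61, 181), (46, 56, 141),
           (79, 50, 31), (53, 70, 27), (150, 52, 48), (25, 22, 22)]

def dist(a, b):
    # Euclidean distance in RGB space
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tour_length(order):
    # closed tour: include the edge back to the start, so the gradient loops
    n = len(order)
    return sum(dist(PALETTE[order[i]], PALETTE[order[(i + 1) % n]])
               for i in range(n))

def anneal(steps=20000, t0=100.0):
    """Simulated annealing over 2-opt segment reversals."""
    order = list(range(len(PALETTE)))
    best, best_len = order[:], tour_length(order)
    cur_len = best_len
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling
        i, j = sorted(random.sample(range(len(order)), 2))
        order[i:j + 1] = reversed(order[i:j + 1])  # propose a reversal
        new_len = tour_length(order)
        if new_len < cur_len or random.random() < math.exp((cur_len - new_len) / t):
            cur_len = new_len
            if cur_len < best_len:
                best, best_len = order[:], cur_len
        else:
            order[i:j + 1] = reversed(order[i:j + 1])  # undo the reversal
    return best, best_len
```

The resulting tour order is then used as the color cycle; because the tour is closed, the last color transitions back to the first without a jarring jump.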


Thursday, June 11, 2015

Causation in the right way and actualization of causal powers

Consider William James' murderous mountaineer. His buddy is hanging on a rope that our antihero is holding, and our antihero decides to murder him by letting go. The thought of what he's about to do makes him so nervous that his hands start shaking and let go of the rope. The intention, and by extension the reasons behind the intention, caused the murderous mountaineer to let go, just as he intended to. But although the reasons and the intention cause the letting go, he didn't intentionally let go and his letting go wasn't done for a reason, though it was because of a reason.

This is a famous example where we need the idea of "causation in the right way". Not every intention that causes an action in accordance with the intention causes it in the right way, in the way that makes the action intentional. The problem of having to add a "non-aberrancy" or "in the right way" condition plagues a lot of philosophy. A usual thought about such cases is that there is a messy story, beyond our ability to specify all the details. Perhaps that story includes various messy exceptions for various kinds of accidentality, or perhaps it has fairly onerous conditions on the details of the causal chain.

But what if in some--it's too much to hope that in all--cases, instead of a long and messy story, we just have a bit of irreducible (or relatively so?) metaphysics? It's just a metaphysical feature of some instances of causation that they are intrinsically non-aberrant.

How could that be? Think of a causal power for an effect as something that can be actualized partially or completely. When a causal power is actualized completely, that causal power automatically causes its actualization, and everything constitutive of that actualization, in the right way. When it fails to actualize completely, it falls short of causing in the right way, though perhaps we can say something more (here's one place serious work would need to be done) about the degree of aberrancy in its partial causes.

It's a medieval dictum that causes contain their effects. But that needs qualification. Causes in a sense contain their proper effects. They contain those proper effects as telê, and then some aspect of the effect--perhaps with cooperation or thwarting from other causes--just is an actualization of the cause with that telos. When all goes well, the whole of the teleologically specified effect is an actualization of the cause, but in aberrant cases, very little is. For instance, in the case of the murderous mountaineer, thinking about how to drop the buddy is an actualization of the intention, but the dropping of the rope is not. There is no further messy reductive story. The one event just is an actualization of the causal power and the other just is not.

But there is something incredible about this story. Sam leaves money for her grandchildren, wisely invested and locked up for twenty years after her death. All goes according to plan: the investments rise in value and eventually enrich her grandchildren. But how could the enriching of her grandchildren twenty years after her death somehow have as an irreducible feature its being the actualization of her intention? (Quick thought: It'd be very hard to get a presentist story about this. But presentism is false.) By the time the enrichment happens, her intention is long past. (Does it matter that it's long past? Probably not, but the story is more vivid then.)

There are, I think, three things I can say about the incredulity objection. First, I could bite the bullet. Her intention in some sense lives on in the effects. Yes, these intended effects in the future really just are actualizations of her intention. That's just a metaphysical feature of them. This isn't all that crazy if one believes in the essentiality of origins. For if one believes in the essentiality of origins, then the enrichment's having this cause is an essential feature of the enrichment. Somehow this makes it less surprising if in fact the enrichment is an actualization (or part of the actualization) of the intention. We could even think that the very being of an effect is its having-been-caused.

Second, we could say that when x causes y in the right way, then being-an-actualization-of-x is an intrinsic feature of y, a feature that is causally involved in everything y does, and so when y causes z in the right way, z has the intrinsic feature of being-an-actualization-of-y, and we can go back down the chain to x. Perhaps this is what Aquinas means by per se ordered causal series.

Third, I could go the road of caution. I could say that this metaphysical "actualized by x" feature is only found in immediate effects. Thus, we would in the first instance only have a story about causation-in-the-right-way for immediate effects. And then we would use this feature to help construct a messier account of causation in the right way for remote effects.

All of this, though, requires a fairly non-reductive metaphysics of human beings.

Wednesday, June 10, 2015

The human as the end-setter

Perhaps the deepest question about human beings is about the source of our dignity. What feature of us is it that grounds our dignity, gives us a moral status beyond that of brute animals, provides us with a worth beyond market value, makes us into beings to be respected no matter the stakes?

I was thinking about the proposal (from the Kantian tradition, but rather simplified) that it is our ability to set ends for ourselves that is special about humans. But as I have put it, the proposal is obviously inadequate. Suppose I take our Roomba and program it to choose a location in its vicinity at random and then try to find a path to that location using some path-finding algorithm. A natural way to describe the robot's functioning then is this: The robot set an end for itself and then searched for means appropriate to that end. So on the simple end-setting proposal, the robot should have dignity. But that's absurd: even if one day someone makes a robot with dignity, we're not nearly there yet, and yet what I've described is well within our current capabilities (granted, one might want to stick a Kinect on the Roomba to do it, since otherwise one would have to rely on dead-reckoning).
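To underline how little the described behavior takes, here is a toy sketch of an "end-setting" agent (the grid size, obstacle layout, and start position are invented for illustration): it picks a random goal cell and then searches for a path to it by breadth-first search.

```python
import random
from collections import deque

GRID_W, GRID_H = 10, 10
OBSTACLES = {(3, y) for y in range(1, 9)}  # a wall with gaps at top and bottom

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H and (nx, ny) not in OBSTACLES:
            yield (nx, ny)

def set_random_end():
    # the robot "sets an end for itself"
    while True:
        goal = (random.randrange(GRID_W), random.randrange(GRID_H))
        if goal not in OBSTACLES:
            return goal

def find_path(start, goal):
    # breadth-first search: finding "means appropriate to that end"
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for nxt in neighbors(cur):
            if nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # goal unreachable
```

A few dozen lines of entirely unmysterious code suffice for "the robot set an end for itself and then searched for means appropriate to that end"; that is the point of the argument above.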

Perhaps, though, my end-setting Roomba wouldn't have enough of a variety of ends. After all, all its ends are of the same logical form: arrive at such and such a location. Maybe the end-setting theory needs the dignified beings to be able to choose between a wider variety of ends. Very well. There is a wide variety of states of the world that can be described with the Roomba's sensors, and we can add more sensors. We could program the Roomba to choose at random a state of the world that can be described in terms of actual and counterfactual sensor values and then try to achieve that end with the help of some simple or complex currently available algorithm. Now, maybe even the variety of ends that can be described using sensors isn't enough for dignity. But now the story is starting to get ad hoc, as we embark on the hopeless task of quantifying the variety of ends needed for dignity.

And surely that's not the issue. The problem is, rather, with the whole idea that a being gets dignity just by being capable of choosing at random between goals. Surely dignity wouldn't just require choice of goals, but rational choice of goals. But what is this rationality in the choice of goals? Well, there could be something like an avoidance of conflicts between goals. However, that surely doesn't do much to dignify a being. If the Roomba chose a set of goals at random, discarding those sets that involved some sort of practical conflict (the Roomba--with some hardware upgrade, perhaps--could simulate pursuing the set of goals and see if the set is jointly achievable in practice), that would be cleverer, but wouldn't be dignified.

And I doubt that even more substantive constraints would make random end-setting a dignity-conferring property. For there is nothing dignified about choosing randomly between options. There might be dignity in a being that engaged in random end-setting subject to moral constraints, but the dignity wouldn't be grounded in the end-setting as such, but in the being's subjection of its procedures to moral constraints.

The randomness is a part of the problem. But of course replacing randomness with determinism makes no difference. We could specify some deterministic procedure for the Roomba to make its choice--maybe it sorts the descriptions of possible ends alphabetically and always chooses the third one on the list--but that would be nothing special.

If end-setting is to confer dignity, the being needs to set its ends not just subject to rational constraints, but actually for reasons. Thus there must be reasons prior to the ends, reasons-to-choose and not just constraints-on-choice. However, positive reasons embody ends. And so in a being whose end-setting makes it dignified, this end-setting is governed by prior ends, the ends embodied in the reasons the being is responsive to in its end-setting. On pain of vicious regress, such a being must be responsive to ends that it did not choose. Moreover, for this to be dignity-producing, surely the responsiveness needs to be to these ends as such. But "an end not chosen by us" is basically just the good. So these beings must be responsive to the good as such.

At this point, however, it becomes less and less clear that the choice of ends is doing all that much work in our story about dignity, once we have responsiveness to the good as such in view. For this responsiveness now seems a better story about what confers dignity. (Though perhaps still not an adequate one.)

Objection: No current robot would be capable of grasping ends as such and hence cannot adopt ends as such.

Response: Sure, but can a two-year-old? A two-year-old can adopt ends, but does it cognize the ends as ends?

Thursday, June 4, 2015

Python coding for Android Minecraft PE

I've been sensitized to the fact that there are many children who have no access to a PC but do have access to a smartphone. So in the interests of computer science education, I made a mod that allows for Python scripting of Minecraft Pocket Edition on Android. Instructions and links are here. The screenshots are from my Galaxy S3.






Teleological personhood

It is common, since Mary Anne Warren's defense of abortion, to define personhood in terms of appropriate developed intellectual capacities. This has the problem that sufficiently developmentally challenged humans end up not counting as persons. While some might want to define personhood in terms of a potentiality for these capacities, Mike Gorman has proposed an interesting alternative: a person is something for which the appropriate developed intellectual capacities are normal, something with a natural teleology towards the right kind of intellectual functioning.

I like Gorman's solution, but I now want to experiment with a possible answer as to why, if this is what a person is, we should care more for persons than, say, for pandas.

There are three distinct cases of personhood we can think about:

  1. Persons who actually have the appropriate developed intellectual capacities.
  2. Immature persons who have not yet developed those capacities.
  3. Disabled persons who should have those capacities but do not.

The first case isn't easy, but since everyone agrees that those with appropriate developed intellectual capacities should be cared for more than non-person animals, that's something everyone needs to handle.

I want to focus on the third case now, and to make the case vivid, let's suppose that we have a case of a disabled human whose intellectual capacities match those of a panda. Here is one important difference between the two: the human is deeply unfortunate, while the panda is--as far as the story goes--just fine. For even though their actual functioning is the same, the human's functioning falls significantly short of what is normal, while the panda's does not. But there is a strong moral intuition--deeply embedded in the Christian tradition but also found in Rawls--that the flourishing of the most unfortunate takes a moral priority over the flourishing of those who are less unfortunate. Thus, the human takes priority over the panda because although both are at an equal level of intellectual functioning, this equality is a great misfortune for the human.

What if the panda is also unfortunate? But a panda just doesn't have the range of flourishing, and hence of misfortune, that a human does. The difference in flourishing between a normal human state and the state of a human who is so disabled as to have the intellectual level of a panda is much greater than the total level of flourishing a panda has--if by killing the panda we could produce a drug to restore the human to normal function, we should do so. So even if the panda is miserable, it cannot fall as far short of flourishing as the disabled human does.

But there is an objection to this line of thought. If the human and the panda have equal levels of intellectual functioning, then it seems that the good of their lives is equal. The human isn't more miserable than the panda. But while I feel the pull of this intuition, I think that an interesting distinction might be made. Maybe we should say that the human and the panda flourish equally, but the human is unfortunate while the panda is not. The baselines of flourishing and misfortune are different. The baseline for flourishing is something like non-existence, or maybe bare existence like that of a rock, and any goods we add carry one above zero, so if we add the same goods to the human's and the panda's account, we get the same level. But the baseline for misfortune is something like the normal level for that kind of individual, so any shortfall carries one above zero. Thus, it could be that the human's flourishing is 1,000 units, and the panda's flourishing is 1,000 units, but nonetheless if the normal level of flourishing for a human is, say, 10,000 units (don't take either the numbers or the idea of assigning numbers seriously--this is just to pump intuitions), then the human has a misfortune of 9,000 units, while the panda has a misfortune of 1,000 units.

This does, however, raise an interesting question. Maybe the intuition that the flourishing of the most unfortunate takes a priority is subtly mistaken. Maybe, instead, we should say that the flourishing of those who flourish least should take a priority. In that case, the disabled human doesn't take a priority over the panda. But this is mistaken, since by this principle a plant would take priority over a panda, since the plant's flourishing level is lower than a panda's. Better, thus, to formulate this in terms of misfortune.

What about intermediate cases, those of people whose functioning is below a normal level but above that of a panda? Maybe we should combine our answers to (1) and (3) for those cases. One set of reasons to care for someone comes from the actual intellectual capacities. Another comes from misfortune. As the latter reasons wane, the former wax, and if all is well-balanced, we get reason to care for the human more than for the panda at all levels of the human's functioning.

That leaves (2). We cannot say that the immature person--a fetus or a newborn--suffers a misfortune. But we can say this. Either the person will or will not develop the intellectual capacities. If she will, then she is a person with those capacities when we consider the whole of the life, and perhaps therefore the reasons for respecting those future capacities extend to her even at the early stage--after all, she is the same individual. But if she won't develop them, then she is a deeply unfortunate individual, and so the kinds of reasons that apply in case (3) apply to her.

I find the story I gave about (2) plausible. I am less convinced that I gave the right story about (3). But I suspect that a part of the reason I am dissatisfied with the story about (3) is that I don't know what to say about (1). However, (1) will need to be a topic for another day.

Tuesday, June 2, 2015

Care and persons

To care about something that isn't a person to the degree that one cares about persons is wrong. It is a distortion of love to care in that way for a non-person, and it is a kind of disrespect to those who are persons when one cares for other things as much as persons deserve to be cared about. (One thinks here of the implicit insult to children when someone loves a pet the way one loves a child.) But it is not wrong to care in this way about severely developmentally disabled humans. Hence these humans are persons.

Friday, May 29, 2015

Fine-tuning and the objection from very different life-supporting worlds

I enter a room with four walls, three of them red, and the fourth white, except for a small red patch, about 1 cm² in size. I also find a dart stuck in that small red patch. (This is of course a variant of Leslie's story about the wasp and the dart.) What should I think about what happened here?

I don't know. But I know that what I should not think is that the dart was tossed in an unbiased random direction. Rather, I would instead conclude that for some reason whatever process or agency propelled the dart had both a bias in favor of this wall and a bias in favor of red. Here's, very roughly, how one would make a Bayesian model of this. There is the unbiased randomness hypothesis U. Let's give it credence 1/2. And there are four relevant strong bias hypotheses: B1, B2, B3 and B4, according to which the dart was tossed with a strong bias for wall 1, 2, 3 or 4, respectively, as well as a strong bias in favor of red. These four bias hypotheses are prima facie roughly equally likely. The probability that at least one of them is true isn't going to be all that high, but also isn't going to be all that low. There may well be reasons beyond our ken for bias in favor of one wall or another. Let's say that the probability that some one of these bias hypotheses is true is about 1/16. Thus the prior probability of B4 will be about 1/64, as the bias hypotheses are approximately equally likely.

But note that our evidence--the dart in red on wall 4--is much better predicted by B4 than by U. How much better? Well, if the walls are three by four meters in size (a reasonable set of dimensions for a wall), the probability of hitting our small red patch will be one in 480,000 on U, but relatively high (depending on what we mean by "strong" in "strong bias") on B4, let's say 1/10. Then Bayes' Theorem tells us that we have extremely strong confirmation of B4: the prior odds of B4 against U are 1/32, the likelihood ratio is 48,000, so the posterior odds are 1500 to 1, a posterior probability of about 99.93%.

Suppose we go a little more extreme. The room has 10,000 walls, each of the same size as before (it's a gigantic myriagonal room), all but the last being completely red, with the last being white except for a small patch with the same dimensions as before. Then what happens? Well, our uniform randomness hypothesis has an even smaller probability of predicting a hit on the red patch on the 10,000th wall, though it has a very high probability of a hit on red somewhere. On the other hand, now our bias hypotheses need to be split between 10,000 walls. Thus, the B10000 hypothesis will have a probability of 1/160,000, assuming the probability that some one of the bias hypotheses is true is 1/16 as before. Plugging this into Bayes' Theorem, we again get posterior odds of 1500 to 1, or about 99.93%--the same as before! (The reason is pretty simple: as we increase the number of walls, the prior odds go down and the likelihood ratio goes up in inverse proportion, leaving the posterior odds unchanged.)
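For what it's worth, the two-scenario computation can be packaged as a short function (wall area 12 m² = 120,000 cm², a 1 cm² patch, and bias mass 1/16 split evenly among the walls, as in the setup above; the posterior is taken against U alone, ignoring the other bias hypotheses, which predict the evidence negligibly). On these assumptions the posterior odds come out at 1500 to 1, about 99.93%, whether there are 4 walls or 10,000.

```python
def posterior_vs_uniform(n_walls, wall_area_cm2=120_000, patch_cm2=1,
                         p_some_bias=1 / 16, p_uniform=1 / 2,
                         p_hit_given_bias=1 / 10):
    """Posterior probability of 'strong bias toward the last wall and
    toward red' against the uniform-randomness hypothesis U, given that
    the dart landed in the small red patch on the last wall."""
    prior_bias = p_some_bias / n_walls           # bias mass split evenly
    p_hit_given_uniform = patch_cm2 / (n_walls * wall_area_cm2)
    prior_odds = prior_bias / p_uniform
    likelihood_ratio = p_hit_given_bias / p_hit_given_uniform
    odds = prior_odds * likelihood_ratio         # posterior odds vs. U
    return odds / (1 + odds)
```

Note that n_walls cancels: the prior odds shrink by exactly the factor by which the likelihood ratio grows, which is why adding walls leaves the posterior unchanged.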

This is, of course, supposed to be a response to the objection to the fine-tuning argument based on the claim that for all we know, if the parameters defining the physics were very different from what they are, life might be quite likely (this is supposed to correspond to the three red walls), even though in the vicinity of the actual values of the parameters, life-permissiveness is rare (this is the white wall with a small red patch). The reasonable conclusion is that whatever cause generated our physics had a bias in favor of both (a) life and (b) the rough vicinity of our place in the space of possible parameter values. And we have an obvious explanation of why a cause might have bias (a): the cause is a morally good agent. But bias (b) is something we may not have an explanation for. Nonetheless, even without an explanation, we can have a good Bayesian argument.