Showing posts with label action. Show all posts

Monday, November 3, 2025

The good of success is not at the time of success

It’s good for one to succeed, at least if the thing one succeeds in is good. And the good of succeeding at a good task is something over and beyond the good of the task’s good end, since the good end might be good for someone other than the agent, while the good of success is good for the agent.

Here’s a question I’ve wondered about, and now I think I’ve come to a fairly settled view. When does success contribute to one’s well-being? The obvious answer is: when the success happens! But the obvious answer is wrong for multiple reasons, and so we should embrace what seems the main alternative, namely that success is good for us when we are striving for the end.

Before getting to the positive arguments for why the good of success doesn’t apply to us at the time of success, let me address one consideration against that view: obviously, we often celebrate when success happens, which suggests that that is when the good arrives. However, notice that we also often celebrate when success merely becomes inevitable, so the timing of celebration is not a reliable guide to the timing of the good. Let’s now move to the positive arguments.

First, success at good tasks would still be good for one even if there were no afterlife. But some important projects have posthumous success—and such success is clearly a part of one’s well-being. And it seems implausible to respond that posthumous success only contributes to our well-being because as a matter of fact we do have an afterlife. Note, too, that in order to locate the good of success at the time of success, we would not just need an afterlife, but an afterlife that begins right at death. For instance, views on which we cease to exist at death and then come back into existence later at the resurrection of the dead (as corruptionist Christians hold) won’t solve the problem, because the success may happen during the gap time. I believe in an afterlife that begins right at death, but it doesn’t seem like I should have to in order to account for the good of success. Furthermore, note that to use the afterlife to save posthumous success, we need a correlation between the timeline the dead are in and the timeline the living are in, and even for those of us who believe in an afterlife right at death, this is unclear.

Second, suppose your project is to ensure that some disease does not return before the year 2200. When is your success? Only in 2200. But suppose your project is even more grandiose: the future is infinite and you strive to ensure that the disease never returns. When is your success? Well, “after all of time”. But there is no time after all of time. So although it may be true that you are successful, that success does not happen at any given time. At any given time, there is infinite project-time to go. So if you get the good of success at the time of success, you never get the good of success at all. Even an afterlife won’t help here.

Third, consider Special Relativity. You work in mission control on earth to make sure that astronauts on Mars accomplish some task. You are part of the team, but the last part of the team’s work is theirs. But since light can take up to 22 minutes (depending on orbital positions) to travel between Earth and Mars, the question of at what exact you-time the astronauts accomplished their task depends on the reference frame, with a range of variation in the possible answers of up to 22 minutes. But whether you are happy at some moment should not depend on the reference frame. (You might say that it depends on what your reference frame is. But there is no unambiguous such thing as “your” reference frame in general, say if you are shaking your head so your brain is moving in one direction and the rest of your body in another.)
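The frame-dependence appealed to here is the relativity of simultaneity. A minimal sketch, with invented numbers rather than the actual Earth–Mars geometry: an event at light-distance dx that happens at time dt in your frame gets time coordinate dt′ = γ(dt − v·dx/c²) in a frame moving at speed v, so frames in relative motion disagree about when a distant event happened.

```python
import math

def gamma(v):
    """Lorentz factor for speed v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

def transformed_time(dt, dx, v):
    """Time coordinate (in minutes) that a frame moving at speed v (as a
    fraction of c) assigns to an event at time dt and light-distance dx
    (in light-minutes) in the original frame: dt' = gamma * (dt - v * dx),
    with c = 1 in these units."""
    return gamma(v) * (dt - v * dx)

# An event 12 light-minutes away, simultaneous with you (dt = 0) in your
# frame, is assigned a time of about -6.9 minutes by a frame moving at 0.5c:
# it "already happened" almost seven minutes ago by that frame's reckoning.
```

The numbers are purely illustrative; the point is only that the assigned time of the distant success event shifts by minutes as the frame varies.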

Here is an interesting corollary of the view: the future is not open (by open, I mean the thesis that there are no facts about how future contingents will go). For if the future is open, often it is only at the time of success that there will be a fact about success, so there won’t be a fact of your having been better off for the success when you were striving earlier for the success. That said, the open-futurist cannot accept the third argument, and is likely to be somewhat dubious of the second.

Friday, July 11, 2025

Reasons and direct support

A standard view of reasons is that reasons are propositions or facts that support an action. Thus, that I promised to visit is a reason to visit, that pain is bad is a reason to take an aspirin, and that I am hungry is a reason to eat.

But notice that any such fact can also be a reason for the opposite action. That I promised to visit is a reason not to visit, if you begged me not to keep any of my promises to you. That pain is bad is a reason not to take an aspirin, and that I am hungry is a reason not to eat when I am striving to learn to endure hardship.

One might think that this kind of contingency in what the reasons—considered as propositions or facts—support disappears when the reasons are fully normatively loaded. That I owe you a visit is always a reason to visit, and that I ought to relieve my hunger is always a reason to eat.

This is actually mistaken, too. That I owe you a visit is indeed always a reason to visit. But it can also be a reason—and even a moral one—not to visit. For instance, if a trickster informs me that if I engage in an owed visit to you, they will cause you some minor harm—say, give you a hangnail—then the fact that I owe you a visit gives me a reason not to visit you, though that reason will be outweighed (indeed, it has to be outweighed, or else it wouldn’t be true that I owe you the visit).

In fact, plausibly, that an action is the right one is typically also a moral reason not to perform the action. For whenever we do the right thing, that has a potential of feeding our pride, and we have reason not to feed our pride. Of course, that reason is always outweighed. But it’s still there. And we might even say that the fact that an action is wrong is a reason, albeit not a moral one, to perform that action in order to exhibit one’s will to power (this is a morally bad reason to act on, but one that is probably minimally rational—we understand someone who does this).

All this suggests to me that we need a distinction: some reasons directly support doing something. That I owe you a visit directly supports my visiting you, but only indirectly supports my not visiting you to avoid pride in fulfilling my duties.

But now it is an interesting question what determines which reasons directly support which actions. One option is that the relation is due to entailment: a reason directly supports ϕing provided that the reason entails that ϕing is good or right. But this misses the hyperintensionality in reasons. It is necessarily true that it’s right for me to respect my neighbor; a necessary truth is entailed by every proposition; but that my neighbor is annoying is not directly a reason to respect my neighbor. One might try for some “relevant entailment”, but I am dubious. Perhaps the fact that an action is wrong relevantly entails that there is reason to do it to exhibit one’s will to power, but that ϕing is wrong is directly a reason not to ϕ, and only indirectly a reason to ϕ.

I suspect the right answer is that this direct support relation comes from our human nature: if it is our nature to be directly motivated to ϕ because of R, then R directly supports ϕing. Hmm. This may work for epistemic support, too.

Wednesday, July 9, 2025

Habitual action

Alice has lived a long and reasonable life. She developed a lot of good habits. Every morning, she goes on a walk. On her walk, she looks at the lovely views, she smells the flowers in season, she gathers mushrooms, she listens to the birds chirping, she climbs a tree, and so on. Some of these things she does for their own sake and some she does instrumentally. For instance, she climbs a tree because she saw research that daily exercise promotes health, but she smells the flowers for the sake of the smelling itself.

She figured all this out when she was in her 30s, but now she is 60. One day, she realizes that for a while now she has forgotten the reasoning that led to her habits. In particular, she no longer knows which of her daily activities are non-instrumentally valuable and which ones are merely instrumental.

So what can we say about her habitual activities?

One option is that they retain the teleology with which they were established. Although Alice no longer remembers that she climbs a tree solely for the sake of health, that is indeed what she climbs the tree for. On this picture, when we perform actions from habit, they retain the teleology they had when the habit was established. In particular, it follows that agential teleology need not be grounded in occurrent mental states of the agent. This is a difficult bullet to bite.

The other option is that they have lost their teleological characterization. This implies, interestingly, that there is no fact about whether the actions are being done for their own sake or instrumentally. In particular, it follows that the standard division of actions into those done for their own sake and those done instrumentally is not exhaustive. That is also a difficult bullet to bite.

I am not sure what to say. I suspect one lesson is that action is more complicated than we philosophers think, and our simple characterizations of it miss the complexity.

Monday, July 7, 2025

Acting because of and for reasons

It seems that:

  1. If you pursue friendship because friendship is non-instrumentally valuable, then you pursue friendship non-instrumentally.

But not so. Imagine a rich eccentric offers you $10,000 to pursue something that is non-instrumentally valuable. You think about it, correctly decide friendship is non-instrumentally valuable, and pursue it to gain the $10,000. You are pursuing friendship because it is non-instrumentally valuable, but you are pursuing it merely instrumentally.

More generally, is there any conditional of the form:

  2. If you pursue friendship because p, then you pursue friendship non-instrumentally

that is true in all cases, where p states some known reason for the pursuit of friendship? I don’t think so. For the rich eccentric can tell you that you will get $10,000 if it is both the case that p and you pursue friendship. In that case, if you know that it is the case that p, then your reason for pursuing friendship is p, since it is given p, and only given p, that you will get $10,000 for your pursuit of friendship.

Maybe the lesson from the above is that there is a difference between doing something because of a reason and doing it for the reason. That friendship is non-instrumentally valuable is a reason. In the first rich eccentric case, you are pursuing friendship because of that reason, but you are not pursuing it for that reason. Thus maybe we can say:

  3. If you pursue friendship for the reason that friendship is non-instrumentally valuable, then you pursue friendship non-instrumentally.

In the case where you are aiming only at the $10,000, you are pursuing friendship for the reason that pursuing friendship will get you $10,000, or more explicitly for the conjunctive reason that (a) if friendship is non-instrumentally valuable it will get you $10,000 to pursue it and (b) it is non-instrumentally valuable. But you are nonetheless pursuing friendship because it is non-instrumentally valuable.

There is thus a rather mysterious “acting for R” relation in regard to actions which does not reduce to “acting because R”.

Friday, July 12, 2024

An act with a normative end

Here’s an interesting set of cases that I haven’t seen a philosophical discussion of. To get some item B, you need to affirm that you did A (e.g., took some precautions, read some text, etc.). But to permissibly affirm that you did A, you need to do A. Let us suppose that you know that your affirmation will not be subject to independent verification, and you in fact do A.

Is A a means to B in this case?

Interestingly, I think the answer is: Depends.

Let’s suppose for simplicity that the case is such that it would be wrong to lie about doing A in order to get B. (I think lying is always wrong, but won’t assume this here.)

If you have such an integrity of character that you wouldn’t affirm that you did A without having done A, then indeed doing A is a means to affirming that you did A, which is a means to B, and in this case transitivity appears to hold: doing A is a means to B.

But we can imagine you have less integrity of character, and if the only way to get B would be to falsely affirm that you did A, you would dishonestly so affirm. However, you have enough integrity of character that you prefer honesty when the cost is not too high, and the cost of doing A is not too high. In such a case, you do A as a means to permissibly affirming that you did A. But it is affirming that you did A that is a means to getting B: permissibly affirming is not necessary. Thus, your doing A is not a means to getting B, but it is a means to the additional bonus that you get B without being dishonest.

In both specifications of character, your doing A is a means to its being permissible for you to affirm you did A. We see, thus, that we have a not uncommon set of cases where an ordinary action has a normative end, namely the permissibility of another action. (These are far from the only such cases. Requesting someone’s permission is another example of an action whose end is the permissibility of some other action.)

The cases also have another interesting feature: your action is a non-causal means to an end. For your doing A is a means to permissibility of affirming you did A, but does not cause that permissibility. The relationship is a grounding one.

Friday, May 17, 2024

Acting for the sake of rationality alone

Alice is confused about the nature of practical rationality and asks the wrong philosopher about it. She is given this advice:

  1. For each of your options consider all the potential pleasures and pains for you that could result from the option. Quantify them on a single scale, multiply them by their probabilities, and add them up. Go for the option where the resulting number is biggest.

Some time later, Alice goes to a restaurant and follows the advice to the letter. After spending several hours poring over the menu and performing back-of-the-envelope calculations she orders and eats the kale and salmon salad.

Traditional decision theory will try to explain Alice’s action in terms of ends and means. What is her end? The obvious guess is that it’s pleasure. But that need not be correct. Alice may not care at all about pleasure. She just cares about doing the action that maximizes the sum of pleasure quantities multiplied by their probabilities. She may not even know that this sum is an “expected value”. It’s just a formula, and she is simply relying on an expert’s opinion as to what formula to use. (If we want to, we could suppose the philosopher gives Alice a logically equivalent formula that was so complicated that she can’t tell that she is maximizing expected pleasure.)
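The formula Alice was given is easy to state in a few lines. Here is a minimal sketch (the menu items and numbers are invented purely for illustration): for each option, sum the pleasure and pain quantities weighted by their probabilities, then pick the option with the largest total.

```python
def option_score(outcomes):
    """outcomes: list of (quantity, probability) pairs; pains are negative.
    Returns the sum of quantities weighted by their probabilities."""
    return sum(q * p for q, p in outcomes)

def choose(options):
    """options: dict mapping an option's name to its list of
    (quantity, probability) pairs. Picks the highest-scoring option."""
    return max(options, key=lambda name: option_score(options[name]))

# Hypothetical menu: each dish has possible pleasure/pain outcomes.
menu = {
    "kale and salmon salad": [(10, 0.7), (-2, 0.3)],  # score 6.4
    "burger":                [(8, 0.5), (-5, 0.5)],   # score 1.5
}
```

As the post notes, nothing in following this recipe requires Alice to know that the score is an “expected value”, or even to care about pleasure as such; she is just applying a formula on expert advice.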

I suppose the right end-means analysis of Alice’s action would be something like this:

  • End: Act rationally.

  • Means: Perform an action that maximizes the sum of products of pleasures and probabilities.

The means is constitutive rather than causal. In this case, there is no causal means that I can see. (Alice may have been misinformed by the same philosopher that there is no such thing as causation.)

The example thus shows that there can be cases of action where one’s aim is simply to act rationally, where one isn’t aiming at any other end. These may be defective cases, but they are nonetheless possible.

Monday, April 22, 2024

Does culpable ignorance excuse?

It is widely held that if you do wrong in culpable ignorance (ignorance that you are blameworthy for), you are culpable for the wrong you do. I have long thought this is mistaken—instead we should frontload the guilt onto the acts and omissions that made one culpable for the ignorance.

I will argue for a claim in the vicinity by starting with some cases that are not cases of ignorance.

  1. One is no less guilty if one tries to shoot someone and misses than if one hits them.

  2. If one drinks and drives and is lucky enough to hit no one, one is no less guilty than if one does hit someone, as long as the degree of freedom and knowledge in the drinking and driving is the same.

  3. If one freely takes a drug one knows to remove free will and produce violent behavior in 25% of cases, one is no less guilty if involuntary violence does not ensue than if involuntary violence does ensue.

Now, let’s consider this case of culpable ignorance:

  4. Mad scientist Alice offers Bob a million dollars to undergo a neural treatment that over the next 48 hours will make Bob think that Elbonians—a small ethnic group—are disease-bearing mosquitoes. Bob always kills organisms that he thinks are disease-bearing mosquitoes on sight. Bob correctly estimates that there is a 25% chance that he will meet an Elbonian over the next 48 hours. If Bob accepts the deal, he is no less guilty if he is lucky enough to meet no Elbonians than if he does meet and kill one.

This is as clear a case of culpable ignorance as can be: in accepting the deal, Bob knows he will become ignorant of the human nature of Elbonians, and he knows there is a 25% chance this will result in his killing an Elbonian. I think that just as in cases (1)–(3), one is no less guilty if the bad consequences for others don’t result, so too in case (4), Bob is no less guilty if he never meets an Elbonian.

For a final case, consider:

  5. Just like (4), except that instead of coming to think Elbonians are (disease-bearing) mosquitoes, Bob will come to believe that unlike all other innocent human persons whom it is impermissible to kill, it is obligatory to kill Elbonians, and Bob’s estimate that this belief will result in his killing an Elbonian is 25%.

Again, Bob is no less guilty for taking the money and getting the treatment if he does not run into any Elbonians than if he does run into and kill an Elbonian.

Therefore, one is no less guilty for one’s culpable ignorance if wicked action does not result. Or, equivalently:

  6. One is no more guilty if wicked action does result from culpable ignorance than if it does not.

But (6) is not quite the claim I started with. I started by claiming that one is not guilty for the wicked action in cases of culpable ignorance. The claim I argued for is that one is no guiltier for the wicked action than if there is no wicked action resulting from the ignorance. But now if one were guilty for the wicked action, it seems one would be guiltier, since one would have both the guilt for the ignorance and for the wicked action.

However, I am now not so sure. The argument in the previous paragraph depended on something like this principle:

  7. Being guilty of both action A and action B is guiltier than just being guilty of action A, all other things being equal. (Ditto for omissions, but I want to be briefer.)

Thus being guilty of acquiring ignorance and acting wickedly on the ignorance would be guiltier than just of acquiring ignorance, and hence by (6) the wicked action does not have guilt. But now that I have got to this point in the argument, I am not so sure of (7).

There may be counterexamples to (7). First, a politician’s lying to the people an hour after a deadly natural disaster is no less guilty than lying in the same way to the people an hour before the natural disaster. But in lying to the people after the disaster one lies to fewer people—since some people died in the disaster!—and hence there are fewer actions of lying (instead of lying to Alice, and lying to Bob, and lying to Carl, one “only” lies to Alice and one lies to Bob). But I am not sure that this is right—maybe there is just one action of lying to the people rather than a separate one for each audience member.

Second, suppose Bob strives to insult Alice in person, and consider two cases. In one case, when he has decided to insult Alice, he gets into his car, drives to see Alice, and insults her. In the other case, when he gets into the car he realizes he doesn’t have enough gas to reach Alice, and so he buys gas, then drives to see Alice, and then insults her. In the second case, Bob performed an action he didn’t perform in the first case: buying gas in order to insult Alice. But it doesn’t seem that Bob is guiltier in the second case, even though he did perform one more guilty action. I am not sure about this case either. Here I am actually inclined to think that Bob is more guilty, for two reasons. First, he was willing to undertake a greater burden in order to insult Alice—and that increases guilt. Second, he had an extra chance to repent—each time one acquiesces in a means, that’s a chance to just say no to the whole action sequence. And yet he refused this chance. (It seems to me that Bob is guiltier in the second case, just as the assassin who, possessing two bullets, shoots the second after missing with the first—regardless of whether the second shot hits—is guiltier than the assassin who stops after shooting and missing once.)

While I am not convinced by the cases, they point to the idea that in the context of (7), the guilt of action A might “stretch” to making B guilty without increasing the total amount of guilt. If that makes sense, then that might actually be the right way of accounting for actions done in culpable ignorance. If Bob kills an Elbonian, he is guilty. That is not an additional item of guilt; rather, the guilt of the actions and omissions that caused the ignorance stretches over and covers the killing. This seems to me to mesh better with ordinary ways of talking—we don’t want to say that Bob’s killing of the Elbonian in either case (4) or (5) is innocent. And saying that there is no additional guilt may be a way of assuaging the intuition I have had over the years when I thought that culpable ignorance excuses.

Maybe.

A final obvious question is about punishment. We do punish differentially for attempted and completed murder, and for drunk driving that does not result in death and drunk driving that does. I think there are pragmatic reasons for this. If attempted and completed murder were equally punished, there would be an incentive to “finish the job” upon initial failure. And having a lesser penalty for non-lethal drunk driving creates an incentive for the drunk driver to be more careful driving—how much that avails depends on how drunk the driver is, but it might make some difference.

Tuesday, January 23, 2024

Do I need to be aware of what I am intending if I am to be responsible?

I am going to argue that one doesn’t need to be conscious of intending to ϕ in order to be responsible for intending to ϕ.

The easiest version of the argument supposes time is discrete. Let t1 be the very first moment at which I have already intended to ϕ. My consciousness of that intending comes later, at some time t2: there is a time delay in our mental processing. So, at t1, I have already intended to ϕ. When I have intended to ϕ, I am responsible for intending to ϕ. But now suppose that God annihilates me before t2. Then I never come to be aware that I intended to ϕ, and yet I was already responsible for it.

Here are three ways out:

  1. I am not yet responsible at t1, but only come to be responsible once I come to be aware of my intention, namely at t2.

  2. My awareness is simultaneous with the intention, and doesn’t come from the intention, but from the causal process preceding the intention. During that causal process I become more and more likely to intend to ϕ, and so my awareness is informed by this high probability.

  3. My awareness is a direct simultaneous seeing of the intention, partially constituted by the intention itself, so there is no time delay.

Wednesday, May 24, 2023

Bidirectionality in means and ends

I never seem to tire of this action-theoretic case. You need to send a nerve signal to your arm muscles because there is a machine that detects these signals and dispenses food, and you’re hungry. So you raise your arm. What is your end? Food. What is your means to the food? Sending a nerve signal. But what is the means to the nerve signal?

The following seems correct to say: You raised your arm in order that a nerve signal go to your arm. What has puzzled me greatly about this case in the past is this. The nerve signal is a cause of the arm’s rising, and the effect can’t be the means to the cause. But I now think I was confused. For while the nerve signal is a cause of the arm’s rising, the nerve signal is not a cause of your raising your arm. For your raising your arm is a complex event C that includes an act of will W, a nerve signal S, and the rising of the arm R. The nerve signal S is a part, but not a cause, of the raising C, though it is a cause of the rising R.

So it seems that the right way to analyze the case is this. You make the complex event C happen in order that its middle part S should happen. Thus we can say that you make C happen in order that its part S should happen in order that you should get food. Then C is a means to S, and S is a means to food, but while S is a causal means to food, C is a non-causal means to S. But it’s not a particularly mysterious non-causal means. It sometimes happens that to get an item X you buy an item Y that includes X as a part (for instance, you might buy an old camera for the sake of the lens). There is nothing mysterious about this. Your obtaining Y is a means to your obtaining X, but there is no causation between the obtaining of Y and the obtaining of X.

Interestingly, sometimes a part serves as a means to a whole, but sometimes a whole serves as a means to the part. And this can be true of the very same whole and the very same part in different circumstances. Suppose that as a prop for a film, I need a white chess queen. I buy a whole set of pieces to get the white queen, and then throw out the remaining pieces in the newly purchased set to avoid clutter. Years later, an archaeologist digs up the 31 pieces I threw out, and buys my white queen from a collector to complete the set. Thus, I acquired the complete set to have the white queen, while the archaeologist acquired the white queen to have the complete set. This is no more mysterious than the fact that sometimes one starts a fire to get heat and sometimes one produces heat to light a fire.

Just as in some circumstances an event of type A can cause an event of type B and in others the causation can go the other way, so too sometimes an event of type A may partly constitute an event of type B, and sometimes the constitution can go the other way. Thus, my legal title to the white queen is constituted by my legal title to the set, but the archaeologist’s legal title to the set is partly constituted by legal title to the white queen.

There still seems to be an oddity. In the original arm case, you intend your arm’s rise not in order that your arm might rise—that you don’t care about—but in order that you might send a nerve signal. Thus, you intend something that you don’t care about. This seems different from buying the chess set for the sake of the queen. For there you do care about your title to the whole set, since it constitutes your title to the queen. But I think the oddity can probably be resolved. For you only intend your arm’s rising by intending the whole complex event C of your raising your arm. Intending something you don’t care about as part of intending a whole you do care about is not that unusual.

Friday, March 17, 2023

Highlighted outcome structures

In the previous two posts, I have been arguing that seeing action as pursuing ends does not capture all of the richness of the directed structure of action.

Here is my current alternative to the end-based approach. Start with the idea of a “highlighted outcome structure”, which is a partial ordering ≤ on the set O of possible outcomes together with a distinguished subset S of O with the property that if x ∈ S and y ∈ O and x ≤ y, then y ∈ S. The idea is that x ≤ y means one pursues y at least as much as x, and that one’s action is successful provided that one gets an outcome in S.
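The definition is easy to state in code. Here is a minimal sketch (the outcome set and success threshold are invented for illustration), checking the key constraint that the highlighted set S be upward closed under ≤:

```python
def is_upward_closed(O, S, leq):
    """The defining property of a highlighted outcome structure:
    if x is in S, y is in O, and x <= y, then y is in S."""
    return all(y in S for x in S for y in O if leq(x, y))

# Outcomes are game scores 0..9, ordered numerically; "success" is
# any score of at least 7. Any such threshold set is upward closed.
O = set(range(10))
S = {x for x in O if x >= 7}
```

Upward closure captures the idea that anything at least as pursued as a success is itself a success; a set like {3} alone would violate it, since one pursues 4 at least as much as 3.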

To a first approximation, directed action is action aligned along a highlighted outcome structure. But that doesn’t quite capture all the phenomena. For one might aim along a non-existent outcome structure. For instance, I may mistakenly think that there are such things as borogroves, and seek to know about borogroves, the more the better, but in fact the word “borogrove” is nonsense, and there is no set of outcomes corresponding to different degrees of knowledge about borogroves.

So, at least to a second approximation, directed action is action aligned along a conception of a highlighted outcome structure. Ideally, there actually is a highlighted outcome structure that fits the conception.

Note that this allows for the following interesting phenomenon: one can defer to another person with regard to a highlighted outcome structure. Thus, a Christian might pursue the structure that God has in mind for one, and do so as such.

Thursday, March 16, 2023

More on directed activity without ends

In my previous post I focused on how the phenomenon of games with score undercuts the idea that activity is for an end, for some state of affairs that one aims to achieve. For no matter how good one’s score, one was aiming beyond that.

I want to consider an objection to this. Perhaps when one plays Tetris, one has an infinite number of ends:

  • Get at least one point.

  • Get at least two points.

  • Get at least three points.

  • ….

And similarly if one is running a mile, one has an infinite number of ends, namely for each positive duration t, one aims to run the mile in at most t.

My initial worry about this suggestion was that it has the implausible consequence that no matter how well one does, one has failed to achieve infinitely many ends. Thus success is always muted by failure. In the Tetris case, in fact, there will always be infinitely many failures and finitely many successes. This seemed wrong to me. But then I realized it fits with phenomenology to some degree. In these kinds of cases, when one comes to the end of the game, there may always be a slight feeling of failure amidst success—even when one breaks a world record, there is the regret that one didn’t go further, faster, better, etc. Granted, the slightness of that feeling doesn’t match the fact that in the Tetris case one has always failed at infinitely many ends and succeeded only at finitely many. But ends can be prioritized, and it could be that the infinitely many ends have diminishing value attached to them (compare the phenomenon of the “stretch goal”), so that even though one has failed at infinitely many, the finitely many one has succeeded at might outweigh them (perhaps the weights decrease exponentially).
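The arithmetic of the diminishing-weights suggestion is easy to check. Assume, just to make the sums concrete, that the end “get at least n points” has weight 2^-n: then the infinitely many missed ends beyond a score of k have total weight exactly 2^-k, while the finitely many achieved ones total 1 − 2^-k, so from a score of 2 onward the successes outweigh the failures.

```python
def success_weight(k):
    """Total weight of the finitely many achieved ends 1..k, where the
    end "get at least n points" is assumed to have weight 2**-n."""
    return sum(2.0 ** -n for n in range(1, k + 1))

def failure_weight(k):
    """Total weight of the infinitely many missed ends n > k:
    the geometric tail sums to exactly 2**-k."""
    return 2.0 ** -k

# With a final score of 10, the achieved ends outweigh the missed ones
# roughly a thousand to one, despite the missed ends being infinite in number.
```

The exponential weighting is only one choice; any summable sequence of weights gives the same qualitative result, which is all the “stretch goal” picture needs.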

So the game cases can, after all, be analyzed in the language of ends. But there are other cases that I think can’t be. Consider the drive to learn about something. First, of course, note that our end is not omniscience—for if that were our end, then we would give up as soon as we realized it was unachievable. Now, some of the drive for learning involves known unknowns: there are propositions p where I know what p is and I aim to find out if p is true. This can be analyzed by analogy with the infinitely-many-ends account of games with score: for each such p, I have an end to find out whether p. But often there are unknown unknowns: before I learn about the subject, I don’t even know what the concepts and questions are, so I don’t know what propositions I want to learn about. I just want to learn about the subject.

We can try to solve this by positing a score. Maybe we let my score be the number of propositions I know about the subject. And then I aim to have a score of at least one, and a score of at least two, and a score of at least three, etc. That’s trivial pursuit, not real learning, though. Perhaps, then, we have a score where we weight the propositions by their respective importance, and again I have an infinite number of ends. But in the case of the really unknown unknowns, I don’t even know how to quantify their importance, and I have no concept of the scale the score would be measured on. Unlike in the case of games, I just may not even know what the possible scores are.

So in the case of learning about a subject area, we cannot even say that we are positing an infinite number of ends. Rather, we can say that our activity has a directedness—to learn more, weighted by importance—but not an end.

Wednesday, November 30, 2022

Two versions of the guise of the good thesis

According to the guise of the good thesis, one always acts for the sake of an apparent good. There is a weaker and a stronger version of this:

  • Weak: Whenever you act, you act for an end that you perceive is good.

  • Strong: Whenever you act, you act for an end, and every end you act for you perceive as good.

For the strong version to have any plausibility, “good” must include cases of purely instrumental goodness.

I think there is still reason to be sceptical of the strong version.

Case 1: There is some device which does something useful when you trigger it. It is triggered by electrical activity. You strap it onto your arm and raise your arm, so that the electrical activity in your muscles triggers the device. Your raising your arm has the arm going up as an end, but that end is perceived not as good but as merely neutral. All you care about is the electrical activity in your muscles.

Case 2: Back when they were dating in high school, Bob promised to try his best to bake a nine-layer chocolate cake for Alice’s 40th birthday. Since then, Bob and Alice have had a falling out, and hate each other’s guts. Moreover, Alice and all her guests hate chocolate. But Alice doesn’t release Bob from his promise. Bob tries his best to bake the cake in order to fulfill his promise, and happens to succeed. In trying to bake the cake, Bob acted for the end of producing a cake. But producing the cake was worthless, since no one would eat it. The only value was in the trying, since that was the fulfillment of his promise.

In both cases, it is still true that the agent acts for a good end—the useful triggering of the device and the production of the cake. But in both cases it seems they are also acting for a worthless end. Thus the cases seem to fit with the weak but not the strong guise of the good thesis.

I was going to leave it at this. But then I thought of a way to save the strong guise of the good thesis. Success is valuable as such. When I try to do something, succeeding at it has value. So the arm going up or the cake being produced are valuable as necessary parts of the success of one’s action. So perhaps every end of your action is trivially good, because it is good for your action to succeed, and the end is a (constitutive, not causal) means to success.

This isn’t quite enough for a defense of the strong thesis. For even if the success is good, it does not follow that you perceive the success as good. You might subscribe to an axiological theory on which success is not good in general, but only success at something good.

But perhaps we can say this. We have a normative power to endow some neutral things with value by making them our ends. And in fact the only way to act for an end that does not have any independent value is by exercising that normative power. And exercising that normative power involves your seeing the thing you’re endowing with value as valuable. And maybe the only way to raise your arm or for Bob to bake the cake in the examples is by exercising the normative power, and doing so involves seeing the end as good. Maybe. This has some phenomenological plausibility and it would be nice if it were true, because the strong guise of the good thesis is pretty plausible to me.

If this story is right, it adds a nuance to the ideas here.

Friday, October 1, 2021

Musings on personal qualitative identity

Consider the popular concept of “identity”, in the sense of what one “identifies with/as”. Let’s call this “personal qualitative identity”. We can think of someone’s personal qualitative identity as a plurality of properties that the person correctly takes themselves to have and that are important, in a way that needs explication, to the person’s image of themselves.

There are a few analytic quibbles we could raise about what I just said. Couldn’t someone have properties they do not actually have as part of their identity? Surely there are lots of people who have excellences of various sorts at the heart of their self-image but lack these excellences. I don’t want to count mistakenly self-attributed properties as part of a person’s identity, because there is a kind of respect we have towards another’s personal qualitative identity that requires it to be factive. In these cases, maybe I would say that the person’s taking themselves to have the excellences is a part of their identity, but not the actual possession of the excellences.

In an opposed criticism, one might want to require the person to know that they have the properties, and not merely to correctly think they have them. But that is asking for too much. Suppose Alice identifies as ethnically Slovak, on the basis of misreading the handwriting on an old genealogical document that actually said she was Slovenian. But suppose the document was wrong, and Alice in fact is Slovak rather than Slovenian. Surely it is correct to say that being Slovak is a part of her identity, even though Alice does not know that she is Slovak.

But the really central and difficult thing in the concept of personal qualitative identity is the kind of “self-identificational” importance that the person attaches to them. We have plenty of properties that we correctly believe, and even know, ourselves to have, but which lack the kind of first-person importance that makes them a part of the personal qualitative identity. There is a contradiction in saying: “It is a part of my (personal qualitative) identity that I am F, but I don’t care about being F.”

In particular, the properties that are a part of the personal qualitative identity enjoy an important role in motivating the person’s actions. Of course, any property one takes oneself to have can motivate action. I don’t much care that my eyes are blue, but my self-attribution of the blueness of my eyes motivates me to write “blue” under “eye color” on government forms. But the properties that are a part of the personal qualitative identity enter into one’s motivations more often, in a wider range of contexts, and in a way more significant to oneself.

There is an ambiguity here, though. When one is motivated to act a certain way by a property in one’s identity, is one motivated by the fact that one has the property or by the fact that one identifies with that property? I want to suggest that the right answer should often be the first-order one. It is my duty as a parent to provide for my children, and I identify with my having that duty. But whether I identify with having that duty or not is irrelevant to the reason-giving force of that duty: if I didn’t identify with that duty, I would be just as obligated by it. Indeed, it seems to me to be a failure when I am moved not by my duty but by my identification with the duty. The thought “this is my duty” can be a healthy thought, but adding “and I identify as having it” is morally a thought too many, though sometimes, morally deficient as we are, we need the kick in the behind that the extra thought provides.

In fact, I think there is an interesting moral danger here that has not been much talked about. If the property F is in my personal qualitative identity, then I also have the higher order property IF of having F in my identity. Logically speaking, this higher order property may or may not itself be a part of my identity. While in some cases it may be appropriate for IF to be a part of my identity in addition to F, in most if not all of those cases, IF should be a less central part of my identity than F, and in many cases it should not be a part of my identity at all. This is because the actual rational motivational force is often largely exhausted by one’s having F, while a focus on IF adds an illusion of additional rational force.

In general, I think that it is important to be critical about our personal qualitative identities. There are substantive and personally important normative questions about which of one’s properties should enter into the identity. A failing I know myself to have is that I end up promoting generalizations about myself into parts of my personal qualitative identity by having them play too strong a motivational role. That “I am the kind of person who ϕs” should not play much of a role in my deliberations. What matters is whether ϕing, on a given occasion, is a good or a bad thing. Yet I find myself often deciding things on the basis of being, or not being, a certain kind of person. That’s deciding on the basis of navel-gazing.

I find the following norm appealing: a property F should be a part of my identity if and only if independently of my attitude to F, my having F has significant rational importance to a broad range of my deliberations. But this austere norm is probably too austere.

Wednesday, January 20, 2021

I can jump 100 feet up in the air

Consider a possible world w1 which is just like the actual world, except in one respect. In w1, in exactly a minute, I jump up with all my strength. And then consider a possible world w2 which is just like w1, but where moments after I leave the ground, a quantum fluctuation causes 99% of the earth’s mass to quantum tunnel far away. As a result, my jump takes me 100 feet in the air. (Then I start floating down, and eventually I die of lack of oxygen as the earth’s atmosphere seeps away.)

Here is something I do in w2: I jump 100 feet in the air.

Now, from my actually doing something it follows that I was able to do it. Thus, in w2, I have the ability to jump 100 feet in the air.

When do I have this ability? Presumably at the moment at which I am pushing myself off from the ground. For that is when I am acting. Once I leave the ground, the rest of the jump is up to air friction and gravity. So my ability to jump 100 feet in the air is something I have in w2 prior to the catastrophic quantum fluctuation.

But w1 is just like w2 prior to that fluctuation. So, in w1 I have the ability to jump 100 feet in the air. But whatever ability to jump I have in w1 at the moment of jumping is one that I already had before I decided to jump. And before the decision to jump, world w1 is just like the actual world. So in the actual world, I have the ability to jump 100 feet in the air.

Of course, my success in jumping 100 feet depends on quantum events turning out a certain way. But so does my success in jumping one foot in the air, and I would surely say that I have the ability to jump one foot. The only principled difference is that in the one foot case the quantum events are very likely to turn out to be cooperative.

The conclusion is paradoxical. What are we to make of it? I think it’s this. In ordinary language, if something is really unlikely, we say it’s impossible. Thus, we say that it’s impossible for me to beat Kasparov at chess. Strictly speaking, however, it’s quite possible, just very unlikely: there is enough randomness in my very poor chess play that I could easily make the kinds of moves Deep Blue made when it beat him. Similarly, when my ability to do something has extremely low reliability, we simply say that I do not have the ability.

One might think that the question of whether one is able to do something is really important for questions of moral responsibility. But if I am right in the above, then it’s not. Imagine that I could avert some tragedy only by jumping 100 feet in the air. I am no more responsible for failing to avert that tragedy than if the only way to avert it would be by squaring a circle. Yet I can jump 100 feet in the air, while no one can square a circle.

It seems, thus, that what matters for moral responsibility is not so much the answer to the question of whether one can do something, but rather answers to questions like:

  1. How reliably can one do it?

  2. How reliably does one think (or justifiably think or know) one can do it?

  3. What would be the cost of doing it?

Tuesday, December 15, 2020

A proof that ought implies can

Some actions are things I can do immediately: for instance, I can immediately raise my hand. Others require that I do something to enable myself to do the action: for instance, to teach in person, I have to go to the classroom, or to feed my children, I need to obtain food. So, here is a very plausible axiom of deontic logic:

  1. If I ought to do A, and A is not an action I can do immediately, then I ought to bring it about that I can immediately do A.

Now, say that I remotely can do an action provided that I can immediately do it, or I can immediately bring it about that I can immediately do it, or I can immediately bring it about that I can immediately bring it about that I can immediately do it, or ….

It follows from (1) and a bit of reasoning that:

  2. If I ought to do A, then I remotely can do A, or I have an infinite regress of prerequisite obligations.

But:

  3. It is false that I have an infinite regress of prerequisite obligations.

So:

  4. If I ought to do A, then I remotely can do A.
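The “remotely can” relation is, in effect, reachability along finite chains of immediate abilities. A toy model can make that precise (the graph, the state names, and the encoding are my own illustration, not anything in the argument):

```python
# Toy model of "remotely can": the things I "can immediately bring about"
# from each situation form a directed graph, and I remotely can do A just
# in case A is reachable from my current situation by some finite chain
# of immediate abilities.

def remotely_can(immediate: dict[str, set[str]], start: str, goal: str) -> bool:
    """Search over finite chains of immediate abilities (breadth-first)."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        if goal in immediate.get(state, set()):
            return True
        for nxt in immediate.get(state, set()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Example: I can't immediately teach, but I can immediately get to the
# classroom, from which teaching is immediately available.
abilities = {"office": {"classroom"}, "classroom": {"teach"}}
assert remotely_can(abilities, "office", "teach")
assert not remotely_can(abilities, "office", "fly")
```

Premise 3 then corresponds to the requirement that the chain of prerequisites terminates, which is what rules out the infinite regress branch of premise 2.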

Wednesday, November 25, 2020

Reasons as construals

Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose.

In this argument, our reasons are the reasons we take ourselves to have for action. Scanlon’s argument can be put as follows (my wording, not his):

  1. I do not have a choice of which reasons I take myself to have.

  2. If I rationally do A, I do it for all the reasons for A that I take myself to have for doing A.

And the analogy with beliefs supports (1). However, when formulated like this, there is something like an equivocation on “reasons I take myself to have” between (1) and (2).

On its face reasons I take myself to have are belief-like: indeed, one might even analyze “I take myself to have reason R for A” as “I believe that R supports A”. But if they are belief-like in this way, I think we can argue that (2) is false.

Beliefs come in occurrent and non-occurrent varieties. It is only the occurrent beliefs that are fit to ground or even be analogous to the reasons on the basis of which we act. Suppose I am a shady used car dealer. I have a nice-looking car. I actually tried it out and found that it really runs great. You ask me what the car is like. I am well-practiced at answering questions like that, and I don’t think about how it runs: I just say what I say about all my cars, namely that it runs great. In this case, my belief that the car runs great doesn’t inform my assertion to you. I do not even in part speak on the basis of the belief, because I haven’t bothered to even call to mind what I think about how this car runs.

So, (2) can only be true when the “take myself to have” is occurrent. For consistency, it has to be occurrent in (1). But (1) is only plausible in the non-occurrent sense of “take”. In the occurrent sense, it is not supported by the belief analogy. For we often do have a choice over which beliefs are occurrent. We have, for instance, the phenomenon of rummaging through our minds to find out what we think about something. In doing so, we are trying to make occurrent our beliefs about the matter. By rummaging through our minds, we do so. And so what beliefs are occurrent then is up to us.

This can be of moral significance. Suppose that I once figured out the moral value of some action, and now that action would be very convenient to engage in. I have a real choice: do I rummage through my mind to make occurrent my belief about the moral value of the action or not? I might choose to just do the convenient action without searching out what it is I believe about the action’s morality because I am afraid that I will realize that I believe the action to be wrong. In such a case, I am culpable for not making a belief occurrent.

While the phenomenon of mental rummaging is enough to refute (1), I think the occurrent belief model of taking myself to have a reason is itself inadequate. A better model is a construal model, a seeing-as model. It’s up to me whether I see the duck-rabbit as a duck or as a rabbit. I can switch between them at will. Similarly, I can switch between seeing an action as supported by R1 and seeing it as supported by R2. Moreover, there is typically a fact of the matter whether I am seeing the duck-rabbit as a duck or as a rabbit at any given time. And similarly, there may be a fact of the matter as to how I construed the action when I finally settled on it, though I may not know what that fact is (for instance, because I don’t know when I settled on it).

In some cases I can also switch to seeing the action as supported by both R1 and R2, unlike in the case of the duck-rabbit. But in some cases, I can only see it as supported by one of the reasons at a time. Suppose Alice is a doctor treating a patient with a disease that when untreated will kill the patient in a month. There is an experimental drug available. In 90% of the cases, the drug results in instant death. In 10% of the cases, the drug extends the remaining lifetime to a year. Alice happens to know that this patient once did something really terrible to her best friend. Alice now has two reasons to recommend the drug to the patient:

  • the drug may avenge the evil done to her friend by killing the patient, and

  • the drug may save the life of the patient thereby helping Alice fulfill her medical duties of care.

Both reasons are available for Alice to act on. Unless Alice has far above average powers of compartmentalization (in a way in which some people perhaps can manage to see the duck-rabbit as both a duck and a rabbit at once), it is impossible for Alice to act on both reasons. She can construe the recommending of the pill as revenge on an enemy or she can construe it as a last-ditch effort to give her patient a year of life, but not both. And it is very plausible that she can flip between these. (It is also likely that after the fact, she may be unsure of which reason she chose the action for.)

In fact, we can imagine Alice as deliberating between four options:

  • to recommend the drug in the hope of killing her enemy instantly

  • to recommend the drug in the hope of giving her patient a year of life

  • to recommend against the drug in order that her enemy should die in a month

  • to recommend against the drug in order that her patient have at least a month of life.

The first two options involve the same physical activity—the same words, say—and the last two options do as well. But when she considers the first two options, she construes them differently, and similarly with the last two.

Wednesday, October 14, 2020

Bennett's positive and negative instrumentality

Bennett offers this account of positive vs. negative instrumentality. If the volume of the space of possible bodily movements occupied by doing A is greater than that occupied by doing not-A, then doing A is a negative instrumentality; if it is less, then it is positive. Thus, raising one’s hand is positive: one can, for instance, raise, lower or keep one’s hand level, and raising occupies less volume of movement space than not-raising.

Here’s a curious consequence. Let M be the maximum speed at which I can move. Let A be moving at a velocity of magnitude greater than half of M. Then A occupies more of the space of possible bodily movements than not-A and hence counts as negative by Bennett’s criteria.

Why? Well, velocity is a vector: it has a magnitude and a direction. The relevant action space (assuming the movement is two-dimensional—we can’t fly) is a disc of radius M. The subset occupied by not-A is a (closed) disc of radius M/2. The area (the two-dimensional analogue of volume) occupied by not-A is (1/4)πM². The area of the whole movement space is πM². The area occupied by A is thus πM² − (1/4)πM² = (3/4)πM². Thus, A occupies three times the area occupied by not-A. Hence, not-A is a positive action and A is a negative action.
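The area arithmetic is easy to check numerically (a quick sketch; the particular value of M is mine and arbitrary, since only the ratio matters):

```python
# Checking the disc-area comparison: the movement space is a disc of
# radius M (the velocities I can produce), and not-A is the sub-disc of
# radius M/2 (speed at most half the maximum).
from math import pi

M = 10.0                     # arbitrary maximum speed; the ratio is scale-free
whole = pi * M ** 2          # area of the full movement space
not_A = pi * (M / 2) ** 2    # area occupied by not-A: (1/4)*pi*M^2
A = whole - not_A            # area occupied by A: (3/4)*pi*M^2

assert abs(A / not_A - 3.0) < 1e-9  # A occupies three times the area of not-A
```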

This seems quite wrong.

Thursday, October 1, 2020

Acting on a desire

Suppose I have a desire for A, and I act on this desire to get A. There are at least three different stories about my motivations that are compatible with this:

  1. I pursued A non-instrumentally.

  2. I pursued A instrumentally in order to satisfy my desire.

  3. I pursued A instrumentally in order to rid myself of my desire.

The distinction between (2) and (3) should be somewhat familiar to many: when one struggles with temptation, the temptation whispers to one that if one gives in, the struggle will be over. (This is, of course, a deception: for if one gives in, the temptation is likely to return strengthened later.) The distinction between (1) and (2) is subtler. In case (1), the desire reveals to us something desirable and we pursue it. The pursuit satisfies our desire, but we don’t do it to satisfy the desire, but simply because the thing is desirable.

Intentions and reasons

An interesting question is whether one’s intentions in an action supervene on facts about one’s reasons and desires in the action. I don’t know the answer, but I also don’t know of a good way to account for intentions in terms of reasons and desires.

Judith Jarvis Thomson suggests this:

  • for a person to X, intending an event E, is for him to X because he thinks his doing so will cause E, and he wants E.

This is false. The standard (at least for me) method of generating counterexamples to conjunctive principles is to find cases where the conjuncts are coincidentally satisfied in ways other than what one had in mind in formulating the principles.

So, here is the counterexample. I am alone in an eccentric friend’s house, and I want to take an ibuprofen. I look in the medicine cabinet, and I see a jar full of pills of different sizes, colors and shapes all jumbled together. I call up my friend asking where the ibuprofen is. My friend says: “Ah, ibuprofen. That’s the pill that will hurt your throat when you swallow them.” I look in the jar, and indeed there is exactly one pill that is large enough to hurt the throat upon swallowing. I swallow the pill because I think doing this will hurt my throat. But I don’t intend my throat to get hurt. So far I don’t have a counterexample: I have failed to satisfy the “he wants E” conjunct. But now just throw that conjunct into the story in a motivationally irrelevant way. Perhaps I want my throat to be hurt, due to my being a masochist. But I promised my accountability partner that I would refrain from intentionally hurting myself, and I’ve gotten pretty good at keeping this promise, so I don’t intend to get hurt.

One could add to Thomson’s conditions:

  • and his wanting E is a cause of his action.

But we can just multiply the coincidental satisfaction of conditions. For instance, perhaps my psychiatrist informed me that my masochistic desires are caused by headaches, and so if I get rid of my headache, my masochistic desires will disappear. Thus, my desire to hurt my throat is a cause of my relieving my headache. But I don’t relieve my headache in order to hurt my throat.

All this makes me think that it’s not unlikely that having a particular intention in an action is a primitive datum about the action: perhaps actions are teleological entities, and the intentions are their telê.

Friday, September 11, 2020

Some fun distinctions

Isn’t it funny how very similar gestures can signal respect and disrespect? Under ordinary circumstances, crossing to the other side of the street to avoid passing near someone is a form of disrespect. But in a pandemic it signals a respectful desire not to make the other nervous. Though I suppose even apart from a pandemic, one would have moved out of the way of dignitaries.

We have another neat little thing here. There is a difference between going out of one’s way to ensure that one isn’t in another’s personal space and going out of one’s way to ensure that the other isn’t in one’s personal space, even though in an egalitarian society, x is in y’s space if and only if y is in x’s space.

And notice how hard it is to formulate that point without reifying “personal space”, just by using distance. I can hear a difference between avoiding my being within a certain distance of another and avoiding the other being within a certain distance of me, but I can’t tell which is which! Maybe, though, we can distinguish (a) avoiding imposing on another the bad-for-them of us being within a certain distance and (b) avoiding imposing on me the bad-for-me of us being within that distance. In other words, the reasons for the two actions are grounded in the same state of affairs but considered as bad for different individuals.

I suppose similar things can happen entirely in third person contexts. I can work for a friendship between x and y considered as a good for x, considered as a good for y, or considered as a good for both. And these are all three different actions.