Prudential rationality is about what an agent should do in the light of what is good or bad for the agent. Prudential or self-interested rationality is a major philosophical topic and is widely considered philosophically fundamental. Why? There are many (indeed infinitely many) other categories of goods and bads, and for each category it makes sense to ask what one should do in the light of that category. For instance, family rationality is concerned with what an agent should do in the light of what is good or bad for people in the agent's family; leftward rationality with the good and bad for the people located to the agent's left; nearest-neighbor rationality with the good or bad for the person other than the agent whose center of mass is closest to the agent's center of mass; green-eye rationality with the good or bad for green-eyed people; and descendant rationality with the good or bad for the agent's descendants. Why should prudential rationality get singled out as a topic?
It's true that among agent-relative categories, the agent is a particularly natural one. But the agent's descendants also form a quite natural agent-relative category.
This question reminds me of a thought inspired by Nancy Cartwright's work. Physicists study things that don't exist: the motion of objects in isolated gravitational systems, in isolated quantum systems, and so on. But there are no truly isolated systems; any real system is subject to a number of other forces. It is, however, sometimes useful to study the influences that particular forces would have on their own.
However, in the end what we want to predict in physics is how real things move, and they move in the light of all the forces. Likewise, in action theory we want to figure out how real people should act, and they should act in the light of all the goods and bads. We get useful insight into how and why real things move by studying how they would move if they were isolated or if only one force were relevant. We likewise get useful insight into how and why real people should act by studying what actions would be appropriate if the agents were isolated or if only one set of considerations were relevant. As a result we have people who study prudential rationality and people who study epistemic rationality.
It is nonetheless crucial not to forget that the study of how one should act in the light of a subset of the goods and bads is not a study of how one should act, but only a study of how one would need to act if that subset were all that's relevant, just as the study of gravitational systems is not a study of how things move, but only of how things would move if gravity were all that's relevant.
That said, I am not sure prudential rationality is actually that useful to study. Its main value is that it restricts the goods and bads to one person, thereby avoiding the difficult problem of balancing goods and bads between persons (and maybe even non-persons). But that value can be had by studying not prudential or self-interested rationality, but one-recipient rationality, where one studies how one should act in the light of the goods and bads to a single recipient, whether that recipient is or is not the agent.
It might seem innocent to make the simplifying assumption that the single recipient is the agent. But I think that doing this has a tendency to hide important questions that become clearer when we do not make the assumption. For instance, if one studies risk-averseness only in the special case where the recipient is the agent, one loses sight of the crucially important question of whose risk-averseness is relevant: the agent's or the recipient's? Presumably both are relevant, but they need to interact in a subtle and important way. To study risk-averseness in that special case thus risks losing sight of something crucial in the phenomenon, just as one loses a lot of structure when, instead of studying a mathematical function of two variables, say f(x,y)=sin x cos y, one studies merely how that function behaves in the special case where the variables are equal. The special case does simplify matters, since one need not study the interaction between the agent's and the recipient's risk-averseness; but the simplification comes at the cost of conflating the two, so that one cannot tell which part of one's results is due to the risk-averseness of the person qua agent and which part is due to the risk-averseness of the person qua recipient.
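To make the mathematical point concrete: on the diagonal, f(x,x) = sin x cos x = (1/2)sin 2x, which is just a function of one variable. And the diagonal underdetermines the function: a quite different two-variable function, g(x,y) = (1/2)sin(x+y), agrees with f whenever x=y but differs from it elsewhere. Someone who only ever studied the diagonal could not tell f and g apart, just as someone who only ever studied the case where agent and recipient coincide could not tell the contribution of the agent's risk-averseness apart from that of the recipient's.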
Similarly, when one is interested, as decision theorists centrally are, in decision-making under conditions of uncertainty, it is important to distinguish the uncertainty of the person qua agent from the uncertainty of the person qua recipient. When we do that, we might discover structure that was hidden in the special case where the agent and recipient are the same. For instance, we may discover that with respect to means the agent's uncertainty is much more important than the recipient's, while with respect to ends the recipient's uncertainty is very important.
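Here is a toy way one might encode that suggestion, purely as a sketch. Let p be the agent's credences about which outcome an act will produce (the agent's uncertainty about means), and let q be the recipient's credences about which hypothesis concerning the recipient's good is correct (the recipient's uncertainty about ends). One could then evaluate an act a by V(a) = sum over outcomes o of p(o given a) · [sum over hypotheses h of q(h) · (the value of o for the recipient if h is correct)]. Whether this particular formula is right matters less than that it has two distinct slots for two people's uncertainties, slots that are invisible when agent and recipient are one and the same.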
To go back to the gravitational analogy, it's very useful to consider the gravitational interaction between particles x and y. But we lose an enormous amount of structure when we restrict our attention to the case where x=y. We would do better to make the simplifying assumption that we're considering two different particles, and then think of the one-particle case as a limiting case. Likewise for rationality. While we do need to study simplified cases, we need to choose the cases in a way that does not lose too much structure.
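The point is vivid if we write down the familiar formula: the magnitude of the gravitational force between distinct particles is F = Gm1m2/r^2, where m1 and m2 are the masses and r is the distance between them. In the restriction to x=y, r goes to 0 and the formula is not even defined; the dependence on the distance and on the two masses separately, which is where all the interesting structure lives, drops out of view.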
Of course, if we have an Aristotelian theory on which all one's actions are fundamentally aimed at one's own good, then what I say above will be unhelpful. For in that case, prudential rationality does capture the central structure of rationality. But such theories are simply false.
3 comments:
I think the idea that prudential rationality is a uniquely unproblematic form of rationality is doing most of the work behind the historical interest in prudential rationality. Historically the contrast was between self-interest and other-interest, as e.g. Sidgwick represents them, where other-interest was equated with (broadly utilitarian) morality. Utilitarians face an interesting question in “Why be moral?”—why should I care about all these strangers? And they give weird answers involving non-existent things like the point of view of the universe or ideal observers. IIRC Aquinas answers this question, basically, by saying that God loves everyone and we should love God, so we should also love what he loves. Whatever else you want to say, this is not a metaphysically lightweight explanation. It is not obvious, at least from a theoretical point of view (practical life is different), that we have reason to be concerned about others.
But there is no similarly interesting question about “Why be self-interested?” because “It’s in your self-interest!” seems like the best possible answer to that sort of question. Maybe a slightly more sophisticated way to put this is that it is natural or unproblematic for the thing doing the acting to act for the good of itself, while it is arbitrary, or calls for more explanation, if it acts for the good of anything else. After all, it would be very odd if my actions were systematically aimed at the good of those to my immediate left. Why is it any less odd if my actions are aimed at the good of those in my family or circle of friends, or all sentient beings?
One way to handle this is to be a pretty simplistic intuitionist: we just feel, or see, that we have reason to act for the good of our family and friends but not for the good of those to our immediate left. But I think a lot of 20th-century ethics was concerned to be more theoretically rigorous than that.
All that said, I like the idea of separating out acting for one’s own good from acting for others. It should not be taken for granted that the right action when I am uncertain of my own good is the same as the right action when I am uncertain of yours, or that the degree of risk it is appropriate to subject you to is the same as the degree it is appropriate to subject myself to. I imagine there is some interesting psychological work to be done, or that has been done, on such topics.
I wonder if thinking that acting in one's own interest is unproblematic while acting for others is problematic isn't just an artifact of the Fall. In other words, there isn't anything more problematic about care for others; we're just pretty selfish. We could instead have fallen in such a way that we'd find it obvious that we should care for those on our immediate left but not find it obvious that we should act for ourselves.
After all, I don't see that anything more needs to be said about why one should bring about good effects than that they are good. That they are good for one just seems irrelevant (at this level of detail).