
Friday, August 29, 2025

Experiencing something as happening to you

In some video games, it feels like I am doing the in-game character’s actions and in others it feels like I am playing a character that does the actions. The distinction does not map onto the distinction between first-person-view and third-person-view. In a first-person view game, even a virtual reality one (I’ve been playing Asgard’s Wrath 2 on my Quest 2 headset), it can still feel like a character is doing the action, even if visually I see things from the character’s point of view. On the other hand, one can have a cartoonish third-person-view game where it feels like I am doing the character’s actions—for instance, Wii Sports tennis. (And, of course, there are games which have no in-game character responsible for the actions, such as chess or various puzzle games like Vexed. But my focus is on games where there is something like an in-game character.)

For those who don’t play video games, note that one can watch a first-person-view movie like Lady in the Lake without significantly identifying with the character whose point of view is presented by the camera. And sometimes there is a similar distinction in dreams, between events happening to one and events happening to an in-dream character from whose point of view one looks at things. (And, conversely, in real life some people suffer from depersonalization, where it feels like the events of life are happening to a different person.)

Is there anything philosophically interesting that we can say about the felt distinction between seeing something from someone else’s point of view—even in a highly immersive and first-person way as in virtual reality—and seeing it as happening to oneself? I am not sure. I find myself feeling like things are happening to me more in games with a significant component of physical exertion (Wii Sports tennis, VR Thrill of the Fight boxing) and where the player character doesn’t have much character to them, so it is easier to embody them, and less so in games with a significant narrative where the player character has character of their own—even when it is pretty compelling, as in Deus Ex. Maybe both the physical aspect and the character aspect are bound up in a single feature—control. In games with a significant physical component, there is more physical control. And in games where there is a well-developed player character, presumably to a large extent this is because the character’s character is the character’s own and only slightly under one’s control (e.g., maybe one can control fairly coarse-grained features, roughly corresponding to alignment in D&D).

If this is right, then a goodly chunk of the “it’s happening to me” feeling comes not from the quality of the sensory inputs—one can still have that feeling when the inputs are less realistic and lack it when they are more realistic—but from control. This is not very surprising. But if it is true, it might have some philosophical implications outside of games and fiction. It might suggest that self-consciousness is more closely tied to agency than is immediately obvious—that self-consciousness is not just a matter of a sequence of qualia. (Though, I suppose, someone could suggest that the feeling of self-consciousness is just another quale, albeit one that typically causally depends on agency.)

Tuesday, February 21, 2023

Achievement in a quantum world

Suppose Alice gives Bob a gift of five lottery tickets, and Bob buys himself a sixth one. Bob then wins the lottery. Intuitively, if one of the tickets that Alice bought for Bob wins, then Bob’s win is Alice’s achievement, but if the winning ticket is not one of the ones that Alice bought for Bob, then Bob’s win is not Alice’s achievement.

But now suppose that there is no fact of the matter as to which ticket won, but only that Bob won. For instance, maybe the way the game works is that there is a giant roulette wheel. You hand in your tickets, and then an equal number of depressions on the wheel gets your name. If the ball ends in a depression with your name, you win. But they don’t write your name down on the depressions ticket-by-ticket. Instead, they count up how many tickets you hand them, and then write your name down on the same number of depressions.
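The difference between the two mechanisms can be made vivid in a small simulation (my own sketch; the names and data layout are illustrative): in the first, the draw picks an individuated ticket, so there is a fact about whose gift won; in the second, only per-person counts survive the pooling, so the "which ticket?" question has no answer in the state at all.

```python
import random

random.seed(1)

# Mechanism 1: tickets are individuated. Each ticket records its owner
# and who bought it, so the draw settles whose ticket won.
tickets = [("Bob", "Alice")] * 5 + [("Bob", "Bob")]  # (owner, buyer)
winner_owner, winner_buyer = random.choice(tickets)

# Mechanism 2: the wheel only records how many depressions bear each
# owner's name. Pooling the counts erases all buyer information, so
# there is simply no "which ticket won" fact to be had.
counts = {"Bob": 6}  # 5 tickets from Alice + 1 from Bob, now indistinguishable
depressions = [name for name, n in counts.items() for _ in range(n)]
winner = random.choice(depressions)
```

The point is structural: in the second representation, the question "was it one of Alice's tickets?" cannot even be posed, because the state carries no buyer field.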

In this case, it seems that Bob’s win isn’t Alice’s achievement, because there is no fact of the matter that it was one of Alice’s tickets that got Bob his win. Nor does this depend on the probabilities. Even if Alice gave Bob a thousand tickets and Bob contributed only one, it seems that Bob’s win isn’t Alice’s achievement.

Yet in a world run on quantum mechanics, it seems that our agential connection to the external world is like Alice’s to Bob’s win. All we can do is tweak the probabilities, perhaps overwhelmingly so, but there is no fact of the matter about the outcome being truly ours. So it seems that nothing is ever our achievement.

That is an unacceptable consequence, I think.

I think there are two possible ways out. One is to shift our interpretation of “achievement” and say that Bob’s win is Alice’s achievement in the original case even when it was the ticket that Bob bought for himself that won. Achievement is just sufficient increase of probability followed by the occurrence of the thus probabilified event.

The second is heavy-duty metaphysics. Perhaps our causal activity marks the world in such a way that there is always a trace of what happened due to what. Events come marked with their actual causal history. Sometimes, but not always, that causal history specifies what was actually the cause. Perhaps I turn a quantum probability dial from 0.01 to 0.40, and you turn it from 0.40 to 0.79, and then the event happens, and the event comes metaphysically marked with its cause. Or perhaps when I turn the quantum probability dial I imbue it with some of my teleology, and when you turn it, you imbue it with some of yours, and there is a fact of the matter as to whether a further downstream effect comes from your teleology or mine.

I find the metaphysical answer hard to believe, but I find the probabilistic one conceptually problematic.

Friday, April 15, 2022

Towards a great chain of being

Here is one way to generate a great chain of agency: y is a greater agent than x if for every major type of good that x pursues, y pursues it, too, but not vice versa.

Take for instance the cat and the human. The cat pursues major types of good such as nutrition, reproduction, play, comfort, health, life, truth, and (to a limited degree) social interaction. The human pursues all of these, but additionally pursues virtue, beauty, and union with God. Thus the human is a greater agent than the cat.

Is it the case that humans are at the top of the great chain of agency on earth?

This is a difficult question to answer for at least two reasons. The first reason is that it is difficult to identify the relevant level of generality in my weaselly phrase “major type of good”. The oak pursues photosynthetic nutrition, the dung beetle does its thing, while we pursue other forms of nutrition. Do the three count as pursuing different “major types” of good? I want to say that all these are one major type of good, but I don’t know how to characterize it. Maybe we can say something like this: Good itself is not a genus but there are highest genera of good, and by “major type” we mean these highest genera. (I am not completely sure that all the examples in my second paragraph are of highest genera.)

The second reason the question is difficult is this. The cat is unable to grasp virtue as a type of good. A cat who had a bit more scientific skill might be able to see an instrumental value in human virtue—could see the ways that it helps members of communities gain cat-intelligible goods like nutrition, reproduction, health, life, etc. But the cat wouldn’t see the distinctive way virtue in itself is good. Indeed, it is not clear that the cat would be able to figure out that virtue is itself a major type of good, no matter how much scientific skill the cat had. Similarly, it is very plausible that there are major types of good that are beyond human knowledge. If we saw beings pursuing those types of good, we would likely notice various instrumental benefits of the pursuit—for the pursuit of various kinds of good seems interwoven in the kinds of evolved beings we find on earth (pursuing one good often helps with getting others)—but we just wouldn’t see the behavior as the pursuit of a major type of good. Like the cat scientist observing our pursuit of virtue, we would reduce the good being pursued to the goods intelligible to us.

Thus, if octopi pursue goods beyond our ken, we wouldn’t know it unless we could talk to octopi and they told us that what they were pursuing in some behavior was a major type of good other than the ones we grasp—though of course, we would still be unable to grasp what was good in it. And as it happens the only beings on earth we can talk to are humans.

All that said, it still seems a reasonable hypothesis that any major type of good that is pursued by non-human organisms on earth is also pursued by us.

Wednesday, September 1, 2021

Models of libertarian agency, and some more on divine simplicity

Here is a standard libertarian picture of free and responsible choice. I am choosing between two non-mental actions, A and B. I deliberate on the basis of the reasons for A and the reasons for B. This deliberation indeterministically causes an inner mental state W(A), which is the will or resolve or intention to produce A. And then W(A) causes, either deterministically or with high probability, the extra-mental action A.

Now notice two things. First, notice that my production of the state W(A) is itself something I am morally responsible for. Imagine that I have resolved myself to gratuitously insult you. If it turns out that my vocal cords are paralyzed, my resolve W(insult) is itself enough to make me guilty.

Second, note that my production of W(A) could involve the production of a prior second-order state of will or resolve, a willing W(W(A)) to will to produce A. For there are times when it’s hard to resolve ourselves to do something, and in those cases we might resolve ourselves to resolve ourselves first. But at the same time, to avoid an infinite regress, we should not adopt a view on which every time we responsibly produce something, we do so by forming a prior state of willing or resolve or intention. In light of this, although my production of W(A) could involve the production of a prior second-order state W(W(A)), it need not do so. In fact, phenomenologically, it seems more plausible to think that in typical cases of free choice, we do not go to the meta level of producing W(W(A)). We only go to the meta level in special cases, such as when we have to “steel” ourselves to gain the resolve to do the action.

Thus we have seen that, assuming libertarianism, it is possible for me to be responsible for indeterministically producing a state of affairs W(A) without producing a prior state of willing or resolving or intending in favor of W(A). The state W(A) is admittedly an inner mental state. But the responsibility for W(A) does not seem to have anything to do with the innerness of W(A). We are responsible for W(A) because our deliberation indeterministically but non-aberrantly results in W(A).

Here is a question: Could there be cases where we have libertarian-free actions where instead of our deliberation indeterministically non-aberrantly resulting in W(A), and thereby making us responsible for W(A) as well as A, our deliberation directly indeterministically and non-aberrantly results in the extra-mental action A, without an intervening inner mental state W(A) that deterministically or with high probability causes A, but with us nonetheless being responsible for A?

Once we have admitted—as a libertarian has to, on pain of a regress of willings—that we can be responsible for producing a state of affairs without a prior willing of that state of affairs, then it seems hard to categorically deny the possibility of us producing an extra-mental state of affairs responsibly without an intervening prior willing. And in fact phenomenology fits quite well with the hypothesis that we do that. We do many things intentionally and responsibly without being aware of a willing, resolve or intention to do them. If we stick to the initial libertarian model on which there must be an intervening mental state W(A), we have to say that either the state W(A) is hidden from us—unconscious—or that these actions are only free in a derivative way. Neither is a particularly attractive hypothesis. Why not, simply, admit that sometimes deliberation results in an extra-mental action that we are responsible for without an intervening willing, resolve or intention?

Well, I can think of one reason:

  1. It seems that we can only be responsible for what we do intentionally, and we cannot do something intentionally without intending something.

But note that if this reason undercuts the possibility of our responsibly directly doing A without an intervening act W(A) of intention, it likewise undercuts the possibility of our responsibly directly producing W(A) without an intervening W(W(A)) act, and sets us on a vicious regress.

I actually think (1) can be accepted. In that case, when we directly responsibly produce W(A), the intentionality in the production of W(A) is constituted by the non-aberrant causal connection between deliberation and W(A), rather than by some regress-engendering intention-for-W(A) prior to W(A). And the occurrence of W(A) means that we are intending something, namely A.

But what would it be like if we were to directly responsibly produce A, without an intervening act of intention W(A)? How would that be reconciled with (1)? Again, the intentionality of the production of A would be constituted by the non-aberrant causal connection between deliberation and A. And the content of the intention would supervene on the actual occurrence of A as well as on the reasons favoring A that were instrumental in the deliberation. (There are some complications about excluded reasons. Maybe in those cases deliberation can have an earlier stage where one freely decides whether to exclude some reasons.)

Call cases where we directly and responsibly produce an extra-mental action A in this way cases of direct agency.

A libertarian need not believe we exhibit direct agency. Perhaps we always have one level of resolve, willing or intention as an inner mental state. But the libertarian should not be dogmatic here, given the above arguments.

Our phenomenology suggests that we do exhibit direct agency, and indeed do so quite commonly. And if God is simple, and hence does not have contingent inner states, all of God’s indeterministic free actions are cases of direct agency.

In fact, independently of divine simplicity, we may have some reason to prefer the direct agency model in the case of God. Consider why it is that sometimes we go to the meta level of W(W(A)): because of the weakness of our wills, we have to will ourselves to will ourselves to produce A. It seems that a perfect being would never have reason to go to the meta level of W(W(A)). So, the remaining question is whether a perfect being would ever have reason to go to the W(A) level. I think there is some plausibility in the idea that just as going to the W(W(A)) level is a sign of weakness, a sign of a need for self-control, going to the W(A) level is also a sign of imperfection—a sign that one needs a tool, even if an intra-mental tool, for the production of A. It seems plausible, thus, that if this is possible and compatible with freedom and responsibility, a perfect being would simply directly produce A (where A is, say, the action of the being’s causing horses to exist). And I have argued that it is possible, and it is compatible with freedom and responsibility.

Wednesday, September 16, 2020

Agent causation

I have long identified as having an agent-causal theory of free will. But I have just realized that my Aristotelian take on agent causation is far enough from the most common agent-causal theories—those of Clarke and O’Connor—that it may be misleading to talk of myself as accepting an agent-causal theory of free will.

Standard agent-causal theories distinguish between agent causation and event causation as two distinct and real things in the world. For instance, they hold that I agent cause my writing of this post but there is an event causal relation between my being in this armchair and the cushion being squashed. Agent causation has the agent as the cause and event causation has an event as the cause. But I think in both cases the cause is the same: it is a substance, namely myself. I cause the writing of this post and I cause the cushion to be squashed. On the standard view on which agent causation is distinguished solely by the fact that the cause is the agent, both are cases of agent causation. But that would be misleading to say.

So, if we take seriously the Aristotelian account of causation as substance causation, we shouldn’t distinguish agent causation from other kinds of causation by whether the cause is an agent or something else. But we can still make the distinction. My writing this post is an actualization of a power of my will (or my practical rationality, if you prefer). My squashing the cushion is an actualization of the power of my weight. Agent causation is distinguished from other kinds of causation not by what does the causing, but how it does the causing. In agent causation, the substance causes by actualizing its will (and any substance with a will is an agent). In other kinds of causation, the substance causes by actualizing a different power.

So, I think the fundamental relation underlying causation is actually at least ternary: agent X causes event E by actualizing power P.

This neatly integrates agent causation with reasons causation. There are more and less proximate powers. I now have a nearly proximate power to speak Polish (some motivation might be needed to make it proximate). When I was an infant, I had a remote power to speak Polish, by having a nearly proximate power to learn Polish. When the power P in the causal relation is specified as the maximally proximate power, in the agential case it is a maximally proximate power of the will. And maximally proximate powers of the will are tied to reasons: it is only by having a reason to do something that I am able to will it (one wills under the guise of the good). So, my reasons for action supervene on the maximally proximate power for action. Thus, the ternary description of agent causation neatly includes the reasons.

Thursday, March 9, 2017

Multiple levels of multiple realizability

We could have sophisticated beings who reason about the world via numerical Bayesian credences. But we could also have sophisticated beings who reason in some other way—indeed, we are such beings. And there is one sophisticated being who reasons about the world via omniscience. This suggests that reasoning and agency are multiply realizable at multiple levels, including:

  1. brain/mind architecture

  2. algorithms implementing general reasoning and representation strategy

  3. general reasoning and representation strategy.

Each level is an abstraction from the previous. So now we have a very deep question: Is there a fourth level that abstracts from the third, to get the concept of reasoning as such? Or are the various general reasoning and representation strategies unified analogically, say by similarity to some primary case? And if so, what is the primary case? Omniscience? Logical omniscience plus numerical Bayesianism?
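To illustrate the gap between levels (2) and (3) with a toy example of my own: a single level-3 strategy, Bayesian conditioning, can be realized by distinct level-2 algorithms that necessarily agree in their outputs.

```python
import math

def posterior_direct(prior, like_h, like_not_h):
    """Bayes' rule computed directly on probabilities."""
    num = prior * like_h
    return num / (num + (1 - prior) * like_not_h)

def posterior_logodds(prior, like_h, like_not_h):
    """The same strategy realized as additive log-odds updating."""
    logit = math.log(prior / (1 - prior)) + math.log(like_h / like_not_h)
    return 1 / (1 + math.exp(-logit))

# Two realizations, one strategy: the outputs coincide (up to rounding).
a = posterior_direct(0.3, 0.9, 0.2)
b = posterior_logodds(0.3, 0.9, 0.2)
```

The two functions differ at the algorithmic level (2) while implementing the same reasoning strategy (3); the further question in the post is what, if anything, unifies this strategy with non-Bayesian ones at a fourth level.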

Friday, September 9, 2016

Are the laws of nature first order?

I think it's pretty common to think that the laws of nature should be formulated in a first-order language. But I think there is some reason to think this might not be true. We want to formulate the laws of nature briefly and elegantly. In a previous post, I suggested that this might require a sequence of stipulations. For instance, we might define momentum as the product of mass and velocity, and then use the concept of momentum over and over in our laws. If each time we referred to the momentum of an object a we had to put something like "m(a)⋅dx(a)/dt", our formulation of the laws wouldn't have the brevity and elegance we want. It is much better to stipulate the momentum p(a) of a as "m(a)⋅dx(a)/dt" once, and then just use p(a) each time.

But our best-developed logical formalism for capturing such stipulations is the λ-calculus. So our fundamental laws might be something like:

  • (λp. (L1(p) & ... & Ln(p)))(λa. m(a)⋅dx(a)/dt)
instead of being a rather longer expression which contains a conjunction of n things in each of which "m(a)⋅dx(a)/dt" occurs at least once. But the λ-calculus is a second-order language. In fact, it seems very plausible that encoding stipulation is always going to use a second-order tool, since stipulation basically specifies a rewrite rule for a subsequent sentence.
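The stipulation move can be mimicked in any language with first-class functions (a loose analogy of my own, not the post's formalism): the momentum function is bound once, mirroring the λ-abstraction, and then reused, instead of its defining expression being repeated inside every law.

```python
# Toy bodies: dicts carrying a mass and an (idealized, constant) velocity.
def m(a):
    return a["mass"]

def v(a):
    return a["velocity"]  # stands in for dx(a)/dt

# The stipulation, made once -- the analogue of the lambda-abstraction:
p = lambda a: m(a) * v(a)

def conservation(a, b, a2, b2):
    """Toy 'law' L1: total momentum is preserved across an interaction."""
    return p(a) + p(b) == p(a2) + p(b2)

before = ({"mass": 2.0, "velocity": 3.0}, {"mass": 1.0, "velocity": -2.0})
after = ({"mass": 2.0, "velocity": 1.0}, {"mass": 1.0, "velocity": 2.0})
ok = conservation(*before, *after)
```

Inlining `m(a) * v(a)` at every occurrence of `p` would recover the "rather longer expression" the post mentions; the binding is what does the abbreviating work.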

So what if the language of science is second order? Well, two things happen. First, Leon Porter's argument against naturalism fails, since it assumes the language of science to be first-order. Second, I have the intuition that this line of thought supports theism to some degree, though I can't quite justify it. I think the idea is that second-order stuff is akin to metalinguistic stuff, and we would expect the origins of this sort of stuff to be an agent.

Thursday, May 14, 2015

Preference structures had by no possible agent

Say that a preference structure is a total, transitive and reflexive relation (i.e., a total preorder) on centered worlds--i.e., world-agent pairs <w,x>. Then there is a preference structure had by no possible agent. This is in fact just an easy adaptation of the proof of Cantor's Theorem.

Let c be my own centered world <@,Pruss>. We now define a preference structure Q as follows. If agent x at world w, where <w,x> is not the same as <@,Pruss>, prefers her own centered world <w,x> to c, then we say that c is Q-preferable to <w,x>; otherwise, we say that <w,x> is Q-preferable to c. Then we say that all the centered worlds that according to the preceding are Q-preferable to c are Q-equivalent and all the centered worlds we said to be less Q-preferable than c are also Q-equivalent. Thus, Q ranks centered worlds into three classes: those less good than c, those better than c and finally c itself.

But now note that no possible agent has Q as her preference structure. First of all, I at the actual world do not have Q as my preference structure--that's empirically obvious, in that the worlds do not fall into three equipreferability classes for me. And if <w,x> is different from <@,Pruss>, then x's preference-order at w (if any) between c and <w,x> differs from what Q says about the order.
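A finite analogue of the diagonal step can be checked mechanically (my own framing; only the comparison between c and each agent's own centered world matters for the argument):

```python
c = "<@,Pruss>"

# For each centered world cw != c, record whether its agent at that
# world prefers cw to c (True) or not (False).
prefers_own_to_c = {"<w1,x1>": True, "<w2,x2>": False, "<w3,x3>": True}

# The diagonal definition of Q: Q ranks c above cw exactly when cw's
# agent ranks cw above c.
Q_ranks_c_above = {cw: own for cw, own in prefers_own_to_c.items()}

# No agent of any cw can have Q as her preference structure: on the
# pair (c, cw), Q's verdict is the opposite of her own.
mismatches = [
    cw for cw, own in prefers_own_to_c.items()
    if (not Q_ranks_c_above[cw]) != own
]
```

Every centered world ends up in `mismatches`, which is the finite shadow of the claim that Q disagrees with each agent about her own centered world, exactly as in Cantor's diagonal proof.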

So what? Well, I think this provides a slight bit of evidence for the idea that agents choose under the guise of the good.

Friday, August 22, 2014

Freedom and consciousness

The following seems a logically possible story about how some contingent agent chooses. The agent consciously deliberates between reasons in favor of action A and reasons in favor of action B. The agent then forms a free decision for A—an act of will in favor of A. This free decision then causes two things: it causes the agent to do A and it causes the agent to be aware of having decided in favor of A.

Not only does the above story seem logically possible, but it seems likely to be true in at least some, and perhaps even all, cases of our free choices.

But if the above story is true, then it will also be possible for the causal link between the agent's decision and the agent's awareness of the decision to be severed, say because someone hit the agent on the head right after the decision was made and right before the agent became aware of it, or because God miraculously suspended the causal linkage. In such a case, however, the agent will still have decided for A, and would have done so freely, but would not have been aware of so deciding.

Thus it is possible to freely decide for A without being aware that one has freely decided for A. This no doubt goes against common intuitions.

I think the main point to challenge in my story is the claim that it is possible that the decision causes the awareness of the decision. Maybe a decision for A has to be the kind of mental state that has awareness of its own nature built right in, so the awareness is simultaneous with and constituted by the decision. I think this is phenomenologically implausible. It seems to me that many times I am only aware of having decided to perform an action when I am already doing the physical movements partly constituting the action. But presumably the movements (at least typically) come after I've made up my mind, after my decision.

It would be a strange thing to have decided but not to have been aware of how one has decided. Perhaps we can imaginatively wrap our minds around this by thinking about cases where an agent remembers deliberating but doesn't remember what decision she came to. Surely that happens to all of us. Of course, in typical such cases, the agent was at some point aware of the outcome of the deliberation. So this isn't going to get our minds around the story completely. But it may help a little.

In the above, I want to distinguish awareness of choice from prediction of choice. It may be that even before one has made a decision, one has a very solid prediction of how one's choice will go. That prediction is not what I am talking about.

Wednesday, March 30, 2011

A gratitude/resentment argument

This argument is inspired by an argument of Kenneth Pearce.

  1. (Premise) It is sometimes appropriate to be grateful for or to the universe or to be resentful for or at the universe.
  2. (Premise) It is only appropriate to be grateful for or to A if A is an agent or an effect of an agent.
  3. (Premise) It is only appropriate to be resentful for or at A if A is an agent or an effect of an agent.
  4. Therefore, the universe is an agent or an effect of an agent.
  5. (Premise) If the universe is an agent or an effect of an agent, naturalism is false.
  6. Therefore, naturalism is false.
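The deduction itself is a short chain of modus ponens; as a sketch, here is the gratitude half of it checked in Lean (the propositional placeholders are my own, with premise 1 simplified to its gratitude disjunct):

```lean
-- G : gratitude toward the universe is sometimes appropriate
-- A : the universe is an agent or an effect of an agent
-- N : naturalism is true
example (G A N : Prop)
    (p1 : G)            -- premise 1 (gratitude half)
    (p2 : G → A)        -- premise 2, instantiated to the universe
    (p5 : A → ¬N)       -- premise 5
    : ¬N :=             -- conclusion 6
  p5 (p2 p1)
```

The philosophical work is thus carried entirely by the premises, not the inference.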

Friday, February 26, 2010

An adverbial model for agent causation

The big problem for libertarian views of free will, especially agent-causal ones, is how to make the action come from both the agent and the agent's reasons. The compatibilist gives up on the agent part—or, more charitably, we should say that, roughly, she analyzes the action's originating from the agent in terms of the action's originating from the agent's reasons.

Here is a model. In the world, there is nomically explained causation. Maybe, charged particle A causes charged particle B to move away, because of the laws of electromagnetism. Maybe, massive particle A causes massive particle B to approach, because of the law of gravitation. Here is a very natural way to say what is happening here:

  1. A electromagnetically causes B to move away.
  2. A gravitationally causes B to approach.
The laws that are explaining the causation can be included adverbially in the causal statements. The laws from which the causation comes tag the causation, modify it. (In Aristotelian terms, we might even be tempted to say that electromagnetic causation and gravitational causation are analogically cases of causation—causation takes multiple forms.) The adverbial part here is crucial—the law really is doing much of the explaining here. In some sense, even, I would say that the lawmaker (that in virtue of which the law is a law) causes the movement of B or maybe causes A's causing of that movement. (I somehow like the latter, but in the free will case I think the former works better.) For some relevant background, see an unpublished paper of mine.

Suppose now that Plato writes a book because of love of truth and Euthydemus fools Callias out of a desire to impress. Then, very roughly:

  1. Plato's love of truth Platonically causes Plato's writing of the book.
  2. Euthydemus' desire to impress Euthydemically causes Euthydemus' fooling Callias.
The nomic case provides us with a way in which causation has three relata: the reasons, the agent and the action. But the agent and the reasons enter differently.

Strictly speaking, the analogy shouldn't be between the agent and the law, but between the agent and the lawmaker, or, even better, between the agent's form and the lawmaker.

Tuesday, August 26, 2008

Are associations entities exercising agency?

It seems that committees, corporations, clubs and countries can and do exercise agency. That a committee has done A is not a claim that all or most of the people on the committee have done A (in fact, one person might have been deputed), and some of the things that a committee can do seem to be things that no individual can do (e.g., collectively deliberate). Thus, there seems to be good reason to introduce the notion of collective agency.

Now, some people go one step further and say that the collective agency is exercised by an entity—the committee, corporation, club or country—that is an agent. Here is an argument for this further step. For x to exercise agency, x must think (deliberate, etc.) But if x thinks, then x is. (Otherwise the inference "I think therefore I am" is invalid.) Therefore anything that exercises agency must be. And to be is to be an entity, a something or other, (a tode ti, to use the terminology of Metaphysics Z).

So the move of positing an agent where there is collective agency is not unjustified. But the move has the following consequence: committees, corporations, clubs and countries are persons. For it seems to be a conceptual truth that only persons are agents. To be an agent, one must be a rational being, after all.

But if committees, corporations, clubs and countries are persons, then to dissolve a committee, corporation, club or country is to kill a person. Therefore, to dissolve a committee, corporation, club or country requires reasons that have the kind of gravity that killing a person requires. But that is absurd, at least in the case of committees, corporations and clubs. While it is wrong to kill a person because her work is more efficiently done by someone else, it is not wrong to dissolve a committee because its deliberations can be more efficiently subsumed under another head. And while it can be permissible for a state to dissolve a corporation or club that refuses to accept members of some minority group, this kind of discrimination does not rise to the level of a capital crime—we would not, for instance, think it acceptable to execute a sole proprietor who exhibited racism in hiring.

Therefore, it is absurd to say that committees, corporations and clubs are entities that exercise agency. And if the argument from collective agency to collectives being agents is sound, then it follows that committees, corporations and clubs do not exercise agency, except in an analogical sense.

Notice something, though. My argument above is carefully phrased to apply to committees, corporations and clubs. It might be argued not to apply to countries. For there is some plausibility to the idea that a country can only be permissibly dissolved for the gravest of reasons, reasons akin to those that justify execution (think of the partition of Germany after WWII as a form of capital punishment on the country). Still, I think this is mistaken. Reasons for two nationalities within a country to separate need not be as grave as the reasons for killing a person, if the separation can be done in a peaceful way (perhaps the separation of the Czechs and the Slovaks is an example?).

It could also be that there are some genuine collective entities. Thus, it could be that the Church is a genuine collective entity. Certainly the Christian is likely to say that to try to destroy the Church is worse than trying to kill a person (but, fortunately, destroying the Church is impossible). It could also be that a Christian marriage is a genuine collective entity, and that therefore to try to break up such a marriage is akin to attempted murder (again, fortunately, only death can actually break up a Christian marriage).

But even if there are such supernatural collective entities, it is clear that the phenomenon that gets analyzed by some as "collective agency" is not limited to them. Thus, if the argument from collective agency to collectives being agents is sound, one needs a different story about collective agency.