Alexander Pruss's Blog, by Alexander R Pruss (http://www.blogger.com/profile/05989277655934827117)

Simplicity and Newton's inverse square law (2024-03-18)
<p>When I give talks about the way modern science is based on beauty, I
give the example of how everyone will think Newton’s Law of
Gravitation</p>
<ol type="1">
<li><span
class="math inline"><em>F</em> = <em>G</em><em>m</em><sub>1</sub><em>m</em><sub>2</sub>/<em>r</em><sup>2</sup></span></li>
</ol>
<p>is more plausible than what one might call “Pruss’s Law of
Gravitation”</p>
<ol start="2" type="1">
<li><span
class="math inline"><em>F</em> = <em>G</em><em>m</em><sub>1</sub><em>m</em><sub>2</sub>/<em>r</em><sup>2.00000000000000000000000001</sup></span></li>
</ol>
<p>even if the two fit the observational data equally well, and indeed even
if (2) fits the data slightly better.</p>
<p>I like the example, but I have been pressed on it at least once,
because I think people find the exponent <span
class="math inline">2</span> especially plausible in light of the idea
of gravity “spreading out” from a source in concentric shells whose
surface areas are proportional to <span
class="math inline"><em>r</em><sup>2</sup></span>. Hence, it seems that
we have an explanation of the superiority of (1) to (2) in physical
terms, rather than in terms of beauty.</p>
<p>But I now think I’ve come to realize why this is not a good response
to my example. I am talking of <em>Newtonian</em> gravity here. The
“spreading out” intuition is based on the idea of a field of force as
something energetic coming out of a source and spreading out into space
around it. But that picture makes little sense in the Newtonian context
where the theory says we have instantaneous action at a distance. The
“spreading out” intuition makes sense when the field of force is
emanating at a uniform rate from the source. But there is no sense to
the idea of emanation at a uniform rate when we have instantaneous
action at a distance.</p>
<p>The instantaneous action at a distance is just that: action at a
distance—one thing attracting another at a distance. And the force law
can then have any exponent we like.</p>
<p>With General Relativity, we’ve gotten rid of the instantaneous action
at a distance of Newton’s theory. But my point is that in the
<em>Newtonian context</em>, (1) is very much to be preferred to (2).</p>
Beauty and simplicity in equations (2024-03-18)
<p>Often, the kind of beauty that scientists, and especially physicists,
look for in the equations that describe nature is taken to have
simplicity as a primary component.</p>
<p>While simplicity is important, I wonder if we shouldn’t be careful
not to overestimate its role. Consider two theories about some
fundamental force <em>F</em> between
particles with parameters <span
class="math inline"><em>α</em><sub>1</sub></span> and <span
class="math inline"><em>α</em><sub>2</sub></span> and distance <i>r</i> between them:</p>
<ol type="1">
<li><p><span
class="math inline"><em>F</em> = 0.8846583561447518148493143571151840833168115852975428057361124296<em>α</em><sub>1</sub><em>α</em><sub>2</sub>/<em>r</em><sup>2</sup></span></p></li>
<li><p><span
class="math inline"><em>F</em> = 0.88465835614475181484931435711518<em>α</em><sub>1</sub><em>α</em><sub>2</sub>/<em>r</em><sup>2 + 2<sup>−64</sup></sup></span>.</p></li>
</ol>
<p>In both theories, the constants up front are meant to be exact and (I
suppose) have no significantly more economical expression. By standard
measures of simplicity where simplicity is understood in terms of the
brevity of expression, (2) is a much simpler theory. But my intuition is
that unless there is some special story about the significance of the
2 + 2<sup>−64</sup> exponent, (1) is
the preferable theory.</p>
<p>Why? I think it’s because of the beauty in the exponent <span
class="math inline">2</span> in (1) as opposed to the nasty <span
class="math inline">2 + 2<sup>−64</sup></span> exponent in (2). And
while the constant in (2) is simpler by about 106 bits, that additional
simplicity does not make for significantly greater <em>beauty</em>.</p>
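<p>The "about 106 bits" figure can be checked directly: each decimal digit carries log<sub>2</sub>(10) ≈ 3.32 bits, and the constant in (1) has 64 digits after the decimal point while the constant in (2) has 32. A quick back-of-the-envelope sketch in Python:</p>

```python
import math

# One decimal digit carries log2(10) ≈ 3.32 bits of information.
BITS_PER_DIGIT = math.log2(10)

digits_1 = 64  # decimal digits in the constant of theory (1)
digits_2 = 32  # decimal digits in the constant of theory (2)

savings = (digits_1 - digits_2) * BITS_PER_DIGIT
print(f"(2)'s constant is shorter by roughly {savings:.0f} bits")  # roughly 106
```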
A tweak to the Turing test (2024-03-15)
<p>The <a
href="https://academic.oup.com/mind/article/LIX/236/433/986238">Turing
test</a> for machine thought has an interrogator communicate (by typing)
with a human and a machine both of which try to convince the
interrogator that they are human. The interrogator then guesses which is
human. We have good evidence of machine thought, Turing claims, if the
machine wins this “imitation game” about as often as the human. (The
original formulation has some gender complexity: the human is a woman,
and the machine is trying to convince the interrogator that it, too, is
a woman. I will ignore this complication.)</p>
<p>Turing thought this test would provide <em>a posteriori</em> evidence
that a machine can think. But we have a good <em>a priori</em> argument
that a machine can pass the test. Suppose Alice is a typical human, so
that in competition with other humans she wins the game about half the
time. Suppose that for any finite sequence <span
class="math inline"><em>S</em><sub><em>n</em></sub></span> of <span
class="math inline"><em>n</em></span> questions and <span
class="math inline"><em>n</em> − 1</span> answers of reasonable length
(i.e., of a length not exceeding how long we allow for the game—say, a
couple of hours) ending on a question that could be a transcript of the
initial part of an interrogation of Alice, there is a fact of the matter
as to what answer Alice would make to the last question. Then there is a
possible very large, but finite, machine that has a list of all such
possible finite sequences and the answers Alice would make, and that at
any point in the interrogation answers just as Alice would. That machine
would do as well as Alice at the imitation game, so it would pass the
Turing test.</p>
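<p>The machine the argument envisages is nothing but a transcript-keyed lookup table. A minimal Python sketch of the idea (the one stored entry is the example answer discussed below; the interface itself is an invented illustration, and the real table would of course be astronomically large):</p>

```python
# A crude sketch of the lookup-table machine: every possible
# transcript-so-far (ending on a question) maps to a canned answer.
lookup_table = {
    ("What is the most important thing in life?",):
        "It is living in such a way that you have no regrets.",
}

def answer(transcript):
    """Return the stored answer for the transcript so far, given as a
    sequence of alternating questions and answers ending on a question."""
    return lookup_table[tuple(transcript)]

print(answer(["What is the most important thing in life?"]))
# prints "It is living in such a way that you have no regrets."
```

<p>The point of the sketch is that <code>answer</code> involves no reasoning at all: it is a single dictionary lookup, which is why a machine passing the test this way would not be thinking.</p>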
<p>Note that we do not need to <em>know</em> what Alice would say in
response to the last question of <span
class="math inline"><em>S</em><sub><em>n</em></sub></span>. The point
isn’t that we could <em>build</em> the machine—we obviously couldn’t,
just because the memory capacity required would be larger than the size
of the universe—but that such a machine is <em>possible</em>. We could
suppose that the database in the machine was constructed at random, the
builders just getting amazingly lucky and matching Alice’s dispositions.</p>
<p>The machine would not be thinking. Matching the current stage of the
interrogation against the database and simply outputting the stored answer
for it is not thinking. The point is obvious. Suppose that <span
class="math inline"><em>S</em><sub>1</sub></span> consists of the
question “What is the most important thing in life?” and the database
gives the rote answer “It is living in such a way that you have no
regrets.” It’s obvious that the machine doesn’t know what it’s
saying.</p>
<p>Compare this to a giant chess playing machine which encodes for each
of the 10<sup>40</sup> legal chess
positions the optimal next move. That machine doesn’t <em>think</em>
about playing chess.</p>
<p>If the Turing test is supposed to be an <em>a posteriori</em> test
for the possibility of machine intelligence, I propose a simple tweak:
We limit the memory capacity of the machine to be within an order of
magnitude of human memory capacity. This avoids cases where the Turing
test is passed by rote recitation of responses.</p>
<p>Turing himself imagined that doing well in the imitation game would
require <em>less</em> memory capacity than the human brain had, because
he thought that only “a very small fraction” of that memory capacity was
used for “higher types of thinking”. Specifically, Turing surmised that
10<sup>9</sup> bits of memory would
suffice to do well in the game against “a blind man” (presumably because
it would save the computer from having to have a lot of data about what
the world looks like). So in practice my modification is one that would
not decrease Turing’s own confidence in the passability of his test.</p>
<p><a
href="https://www.scientificamerican.com/article/what-is-the-memory-capacity/">Current
estimates</a> of the memory capacity of the brain are of the order of
10<sup>15</sup> bits, at the high end
of the estimates in Turing’s time (and Turing himself inclined to the
low end of the estimates, around <span
class="math inline">10<sup>10</sup></span>). The model size of GPT-4 has
not been released, but it appears to be near but a little below the human
brain capacity level. So if something with the model size of GPT-4 were
to pass the Turing test, it would also pass the modified Turing
test.</p>
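<p>The memory cap in the proposed tweak can be made precise. A small sketch, under the assumption that "within an order of magnitude" means within a factor of ten either way of the 10<sup>15</sup>-bit estimate cited above; the candidate sizes in the example calls are made up for illustration:</p>

```python
import math

HUMAN_BRAIN_BITS = 1e15  # the brain-capacity estimate cited above

def passes_memory_cap(machine_bits):
    """True iff the machine's memory is within an order of magnitude
    (a factor of ten either way) of human brain capacity."""
    return abs(math.log10(machine_bits / HUMAN_BRAIN_BITS)) <= 1.0

print(passes_memory_cap(3e14))  # within the band: True
print(passes_memory_cap(1e20))  # a giant lookup table: False
```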
<p><strong>Technical comment:</strong> The above account assumed there
was a fact about what answer Alice would make in a dialogue that started
with <em>S</em><sub><em>n</em></sub>.
There are various technical issues with regard to this. Given Molinism
or determinism, these technical issues can presumably be overcome (we
may need to fix the exact conditions in which Alice is supposed to be
undergoing the interrogation). If (as I think) neither Molinism nor
determinism is true, things become more complicated. But there are
presumably statistical regularities as to what Alice is likely to
answer to <span
class="math inline"><em>S</em><sub><em>n</em></sub></span>, and the
machine’s database could simply encode an answer that was chosen by the
machine’s builders at random in accordance with Alice’s statistical
propensities.</p>
Do you and I see colors the same way? (2024-03-13)
<p>Suppose that Mary and Twin Mary live almost exactly duplicate lives
in an almost black-and-white environment. The exception to the
duplication of the lives and to the black-and-white character of the
environment is that on their 18th birthday, each sees a colored square
for a minute. Mary sees a green square and Twin Mary sees a blue
square.</p>
<p>Intuitively, Mary and Twin Mary have different phenomenal experiences
on their 18th birthday. But while I acknowledge that this is intuitive,
I think it is also deniable. We might suppose that they simply have a
“new color” experience on their 18th birthday, but it is qualitatively
the <em>same</em> “new color” experience. Maybe what determines the
qualitative character of a color experience is not the physical color
that is perceived, but the relationship of this color to the whole body
of our experience. Given that green and blue have the same relationship
to the other (i.e., monochromatic) color experiences of Mary and
Twin Mary, it may be that they appear the same way.</p>
<p>If this kind of relationalism is correct, then it is very likely that
when you and I look at the same blue sky, our experiences are
qualitatively different. Your phenomenal experience is defined by its
position in the network of <em>your</em> experiences and mine is defined
by its position in the network of <em>my</em> experiences. Since these
networks are different, the experiences are different. Somehow I find
this idea somewhat plausible. It is even more plausible for some experiences
other than colors. Take tastes and smells. It’s not unlikely that fried
cabbage tastes different to me because in the network of my
experiences it has connections to experiences of my grandmother’s
cooking that it does not have in your network.</p>
<p>Such a relationalism could help explain the wide variation in sensory
preferences. We normally suppose that people disagree on which tastes
they like and dislike. But what if they don’t? What if instead the
phenomenal tastes are different? What if banana muffins, which I
dislike, taste different to me than they do to most people, because
they have a place in a different network of experiences, and if banana
muffins tasted to me like they do to you, I would like them just as
much?</p>
<p>In his original Mary thought experiment, Jackson says that monochrome
Mary upon experiencing red for the first time learns what experience
<em>other people</em> were having when they saw a red tomato. If the
above hypothesis is right, she doesn’t learn that at all. Other people’s
experiences of a red tomato would be <em>very</em> different from
Mary’s, because Mary’s monochrome upbringing would place the red tomato
in a very different network of experiences from that which it has in
other people’s networks of experiences. (I don’t think this does much
damage to the thought experiment as an argument against physicalism.
Mary still seems to learn something—what it is to have an experience
occupying such-and-such a spot in her network of experiences.)</p>
More fun with monochrome Mary (2024-03-13)
<p>Here’s a fun variant of the black-and-white Mary thought experiment.
Mary has been brought up in a black-and-white environment, but knows all
the microphysics of the universe from a big book. One day she sees a
flash of green light. She gains the phenomenal concept <span
class="math inline"><em>α</em></span> that applies to the specific look
of that flash. But does Mary know what green light looks like?</p>
<p>You might think she knows because her microphysics book will inform
her that on such-and-such a day, there was a flash of green light in her
room, and so she now knows that a flash of green light has appearance
<em>α</em>. But that is not quite
right. A microphysics book will not tell Mary that there was a flash of
green light in <em>her</em> room. It will tell her that there was a
flash of green light in a room with such-and-such physical properties.
Whether she can deduce from these properties and her observations that
this was <em>her</em> room depends on what the rest of the universe is
like. If the universe contains Twin Mary who lives in a room with
exactly the same monochromatically observable properties as Mary’s room,
but where at the analogous time there is a flash of blue light, then
Mary will have no way to resolve the question of whether she is the
woman in the room with the green flash or in the room with the blue
flash. And so, even though Mary knows all the microphysical facts about
the world, Mary doesn’t know whether it is a green flash or a blue flash
that has appearance <em>α</em>.</p>
<p>This version of the Mary thought experiment seems to show that there
is something very clear, specific and even verbalizable (since Mary can
stipulate a term in her language to express the concept <span
class="math inline"><em>α</em></span>, though if Wittgenstein is right
about the private language argument, we might require a community of
people living in Mary’s predicament) that can remain unknown even when
one knows all the microphysical facts <em>and</em> has all the relevant
concepts <em>and</em> has had the relevant experiences: Whether it is
green or blue light that has appearance <span
class="math inline"><em>α</em></span>.</p>
<p>This seems to do quite a bit of damage to physicalism, by showing
that the correlation between phenomenal appearances and physical facts
is a fact about the world going beyond microphysics.</p>
<p>But now suppose Joan lives on Earth in a universe which contains both
Earth and Twin Earth. The denizens of both planets are prescientific,
and at their prescientific level of observation, everything is exactly
alike between Earth and Twin Earth. Finer-grained observation, however,
would reveal that Earth’s predominant surface liquid is H<span
class="math inline"><sub>2</sub></span>O while Twin Earth’s is XYZ; but
no such fine-grained observation has yet been made. Now, Joan reads a book that tells her
in full detail all the microphysical structure of the universe.</p>
<p>Having read the book, Joan wonders: Is water H<span
class="math inline"><sub>2</sub></span>O or is it XYZ? Just by reading
the book, she can’t know! The reason she doesn’t know it is because her
prescientific observations combined with the contents of the book are
insufficient to inform her whether she lives on Earth or on Twin Earth,
whether she is Joan or Twin Joan, and hence are insufficient to inform
her whether the liquid she refers to as “water” is H<span
class="math inline"><sub>2</sub></span>O or XYZ.</p>
<p>But surely this shouldn’t make us abandon physicalism about
water!</p>
<p>Now Joan and Twin Joan both have concepts that they verbalize as
“water”. The difference between these concepts is entirely external to
Joan and Twin Joan—it comes entirely from the identity of
the liquid whose interaction with them gave rise to the respective concepts.
The concepts are essentially ostensive in their differences. In other
words, Joan’s ignorance of whether water is H<span
class="math inline"><sub>2</sub></span>O or XYZ is basically an
ignorance of self-locating fact: is <em>she</em> in the vicinity of
H<sub>2</sub>O or in the vicinity of
XYZ?</p>
<p>Is this true for Mary and Twin Mary? Can we say that Mary’s ignorance
of whether it is a green or a blue flash that has appearance <span
class="math inline"><em>α</em></span> is essentially an ignorance of
self-locating facts? Can we say that the difference between Mary’s
phenomenal concept formed from the green flash and Twin Mary’s
phenomenal concept formed from the blue flash is an external
difference?</p>
<p>Intuitively, the answer to both questions is negative. But the point
is not all that clear to me. It <em>could</em> turn out that both Mary
and Twin Mary have a purely comparative recognitive concept of “the same
phenomenal appearance as <em>that</em> flash”, together with an ability
to recognize that similarity, and with the two concepts being internally
exactly alike. If so, then the argument is unconvincing as an argument
against physicalism.</p>
The epistemic gap and causal closure (2024-03-12)
<p>In the philosophical literature, the main objection to physicalism
about consciousness is the epistemic gap: the alleged fact that full
knowledge of the physical does not yield full knowledge of the mental.
And one of the main objections to nonphysicalism about consciousness is
causal closure: the alleged fact that physical events, like our actions,
have causes that are entirely physical.</p>
<p>There is a simple way to craft a theory that avoids both objections.
Simply suppose that mental states have two parts: a physical and a
non-physical part. The physical part of the mental state is responsible
for the mental state’s causal influence on physical reality. The
non-physical part explains the epistemic gap: full knowledge of the
physical world yields full knowledge of the physical part of the mental
state, but not full knowledge of the mental state.</p>
Trust versus prediction (2024-03-11)
<p>What is the difference between trusting that someone will <span
class="math inline"><em>ϕ</em></span> and merely predicting their <span
class="math inline"><em>ϕ</em></span>ing?</p>
<p>Here are two suggestions that don’t quite pan out.</p>
<p>1. <em>In trusting, you have to have a pro-attitude towards <span
class="math inline"><em>ϕ</em></span>ing.</em> But this is false. One
can trust a referee will make a fair decision even when one hopes they
will make a decision that favors one instead. And you can trust that
someone who promised you a punishment will mete it out if you deserve it
even if you would rather they didn’t.</p>
<p>2. <em>In trusting, you rely on the person’s <span
class="math inline"><em>ϕ</em></span>ing.</em> But this is not always
true. A promised benefit might be such that it doesn’t affect any of
your actions, but you can still trust you will receive it.</p>
<p>But here is an idea I like. In trusting, you believe that the person
will intentionally <em>ϕ</em> as part
of her proper functioning, and you believe this on account of the
person’s possessing the relevant proper functional disposition. In
central cases, “proper functioning” can be replaced with “expression of
virtue”, but trust can include non-moral proper function.</p>
<p>A consequence of this account is that it is impossible to trust
someone to do wrong, since wrongdoing is never a part of a person’s
proper functioning. For trust-based theories of promises, this makes it
easy to see why promises to do wrong are null and void: for it makes no
sense to solicit trust where trust is impossible.</p>
<p>This account of trust gives a nice extended sense of trust in things
other than people. Just drop “intentionally” and “person”. In an
extended sense, you can trust a dog, a carabiner, a book, or anything
else that has a proper function. This seems right: we certainly do talk
of trust in this extended sense.</p>
Consent, desire and promises (2024-03-11)
<p>I have long argued that desire is not the same as consent: the fact
that I want you to do something does not constitute consent to your
doing it.</p>
<p>Here is a neat little case that has occurred to me that seems to show
this conclusively. Alice borrowed a small sum of money from me, and the
return is due today. However, I know that I have failed Alice on a
number of occasions, and I have an unpleasant feeling of moral envy as
to how she has always kept to her moral commitments. I find myself
fantasizing about how nice it would feel to have Alice fail me on this
occasion! It would be well worth the loss of the loan not to “have to”
feel guilt about the times I failed Alice.</p>
<p>But now suppose that Alice knows my psychology really well. Her
knowing that I <em>want</em> her to fail to return the money is no
excuse to renege on her promise.</p>
<p>There are milder and nastier versions of this. A particularly nasty
version is when the promisee wants you to break a promise so that you
get severely punished: one thinks here of Shylock in the <em>Merchant of
Venice</em>. A mildish (I hope) version is where I am glad when people
come late to meetings with me because it makes me feel better about my
record of unpunctuality.</p>
<p>Or for a very mild version, suppose that I typically come about a
minute late to appointments with you. You inductively form the belief
that I will do so this time, too. And it is a pleasure to have one’s
predictions verified, so you want me to be late.</p>
<p>The above examples also support the claim that we cannot account for
the wrong of promise-breaking in terms of overall harm to the promisee.
For we can tweak some of these cases to result in an overall benefit to
the promisee. Let’s say that I feel pathologically and excessively
guilty about all the times I’ve been late to appointments, and your
breaking your promise to show up at noon will make me feel a lot better.
It might be that overall there is a benefit from your breaking the
promise. But surely that does not justify your breaking the promise.</p>
<p>Or suppose that in the inductive case, the value of your pleasure in
having your predictions verified exceeds the inconvenience of waiting a
minute.</p>
<p><strong>Objection:</strong> Promises get canceled in the light of a
sufficiently large benefit to the promisee.</p>
<p><strong>Response:</strong> The above cases are not like that. For the
benefit of relief of my guilt requires that you <em>break</em> the
promise, not that the promise be <em>canceled</em> in light of a good to
me. And the pleasure of verification of predictions surely is
insufficient to cancel a promise.</p>
Promising punishment (2024-03-11)
<p>I have long found promises to punish puzzling. The problem with such
promises is that normally a promisee can release the promisor from a
promise. But what’s the point of me promising you a punishment should
you do something if you can just release me from the promise when the
time for the promise comes?</p>
<p>Scanlon’s account of promising also faces another problem with
promises to punish: Scanlon requires that the promisee wants to be
assured of the promised action. But of course in many cases of promising
a punishment, the promisee does not want any such assurance! (There are
some cases when they do, say when they recognize the benefit of being
held to account for something.)</p>
<p>Additionally, it seems that breaking a promise is wrong because of
the harm to the promisee. But it is commonly thought that escaping
punishment is not a harm. Here I am inclined to follow Boethius,
however, who insisted that a just punishment is intrinsically good for
one. But suppose we follow common sense rather than Boethius, or perhaps
we are dealing with a case where the norm whose violation gains a
punishment is not a moral norm.</p>
<p>Then there is still something interesting we can say. Let’s say that
I promise you a punishment for some action, and you perform that action,
but I omit the punishment. Even if the omission of the punishment is not
a harm, you might feel a resentment that in your choice of activity you
had to take my prospective punishment into account but I wasn’t going to
follow through on the punishment. There is something unfair about this.
Perhaps the point is clearest in a case like this: I promise you a
punishment each time you do something. Several times you hold yourself
back due to fear of punishment, and then finally you do it, and out of
laziness I fail to punish. You then feel: “Why did I even bother to
keep to the rule earlier?”</p>
<p>But note that even in a case like this, it seems better to locate the
harm in my making of the promise if I wasn’t going to keep it than in
the non-keeping of it. So, let’s suppose that the Boethius line of
thought doesn’t apply, and suppose that I am now deciding whether to
perform the onerous task of punishing you as per promise. What moral
reason do I have to punish you now in light of the promise? Well, there
are considerations having to do with future cases: if I don’t do it now,
you won’t trust me in the future, etc. But we can suppose all such
future considerations are irrelevant—maybe this is the last hour of my
life. So why is it that I should punish you?</p>
<p>I think there are two mutually-compatible stories one can tell. One
story is an Aristotelian one: it’s simply bad for <em>my</em> will that
I not keep my promise. The other story is a trust-based one: I solicited
your trust, and even if you want me to break trust with you, I have no
right to betray your trust. Having one’s trust betrayed is in itself a
harm, regardless of whether one is trusting someone to do something that
is otherwise good or bad for one.</p>
The Laws of Promising (2024-03-11)
<p>On a conventionalist theory of promises, there is a social
institution of promising, somewhat akin to a game, and a promise is a
kind of communicative action that falls under the rules of that
institution. But what makes a communicative action fall under the rules
of the promissory institution? Well, one of the generally agreed on
necessary conditions is that it must be intentional. So now it seems
that a part of what makes something a promise is that it be intended to
fall under the rules of the promissory institution. And this itself is a
rule of the promissory institution.</p>
<p>Thus, the promissory institution needs to make reference to itself in
its rules. Is this a vicious circularity?</p>
<p>Maybe not. The <a href="https://www.worldbadminton.com/rules/">Laws
of Badminton</a> govern players of badminton. Indeed, the Definitions in
the Laws start with: “Player: Any person playing Badminton”. Badminton
is nothing but the game governed by these rules, and yet the rules
constantly make reference to badminton via the concept of a player (and
occasionally make explicit self-reference, as in law 17.6.1 that an
umpire shall “uphold and enforce the Laws of Badminton”). Is this a
vicious circularity? Here is a reason to think it is not. People can
coherently decide to play the game defined by a set of rules referred to
under some description such as “The rules posted on
WorldBadminton.com/rules” or “The rules customarily in use in this club”
or “The Laws of Badminton” or “The rules adopted by the Badminton World
Federation” that in fact refers to the same set of rules. The rules can
refer to themselves under some of these descriptions as well. We can
then suppose that a player is someone who is achieving some measure of
minimal success in intentionally following the rules under some such
description.</p>
<p>The way to avoid vicious circularity here is that one needs some way
of gaining reference to the rules from within the rules, and one can do
so by means of an appropriate expression typically having to do with a
physical embodiment of the rules, say in an inscription or in a
customary practice.</p>
<p>Can we make the same move with regard to promises? We could imagine a
group of early humans sitting around and making up “the Laws of
Promising” prior to any promises being made, with the Laws of Promising
referencing themselves under some description like “The Laws promulgated
in the Cave of the Lone Bear on the third full moon since the melting of
the snow in the fourth year of the chiefdom of Jas the Bald.” And then
the laws could cover communicative actions intended to fall under the
Laws of Promising under some relevant description or other. But while we
can <em>imagine</em> this, it is highly implausible as a historical
claim.</p>
<p>I want to offer a weird alternative to the institutional theory of
promises. Let’s first imagine that in your head there is a literal “book
of promises” (made of waterproof paper, etc.), and that you can inscribe
text in that book using a little pen that moves around in your head. But
suppose that moving the pen is not a basic action. The only way to write
<em>p</em> in the book of promises is
to intentionally communicate to another person that you are inscribing
<em>p</em> in the book. Such
intentional communication causes, by some weird law of nature, the
inscription of <em>p</em> into the book
of promises. And then we suppose that it is a fundamental moral law that
anything inscribed in the book of promises is to be done, subject to
various nuances.</p>
<p>On this account, promising <span
class="math inline"><em>p</em></span> is inscribing <span
class="math inline"><em>p</em></span> into the book by intentionally
communicating that you are inscribing <span
class="math inline"><em>p</em></span> into the book. But note that you
are not intending to <em>promise</em>: you are intending to <em>inscribe
into the book</em>, which is different. So there is no circularity.
(Compare here a mind-reading machine which serves you lunch if you press
a button with the intention of getting lunch from the machine. There is
no circularity.)</p>
<p>Is there such a book? A tempting simple thought is that there is: it
is our memory. But that’s not right. Promises are normatively binding
even if they are not remembered, though if they are innocently forgotten one
is typically not culpable for breaking them.</p>
<p>A dualist can suppose that the soul really does contain something
like a book of promises, which is not directly available to
introspection. When you make a promise, the content is “inscribed” into
the “promise book”, <em>and</em> remembered as being inscribed. There is
no other way to put things into the soul’s “promise book”, though if
there is a God, he could miraculously inscribe things in the book.
(Would we then be required to fulfill them? Well, it depends on what the
moral rule is. If it says that one must do everything <em>in</em> the
book, then we would be required to fulfill what God wrote in the book.
But if it only says that one must do everything that <em>one
inscribed</em> in the book, then what God inscribed in it may not need
to be done.)</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-63435457597456099352024-03-05T17:51:00.004-06:002024-03-05T17:51:46.464-06:00Blurting<p>It is commonly thought that to engage in a speech act of a particular
sort—assertion, request, etc.—one needs to intend to do so.</p>
<p>But suppose you ask me a question, and I unintentionally blurt out an
answer, even though the matter is confidential. Can you correctly tell
people that I answered your question, that I asserted whatever it was
that I blurted out?</p>
<p>If yes, then one does not need to <em>intend</em> to engage in a
speech act of a particular sort in order for that speech act to
occur.</p>
<p>But I suspect that in unintentionally blurting one does not
answer or assert. One reason is that if one was answering or asserting,
then it seems that one could also unintentionally blurt out a lie.
(Imagine that you have a habit of answering a certain question with a
falsehood, and you blurt out a falsehood purely out of habit.) But I
don’t think a lie can be unintentional.</p>
<p>Moreover, if someone asserts, then what they say is presented for
trust. But what is said unintentionally is not presented for trust.</p>
<p>I am not very confident of the above.</p>Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-81244834658151023322024-03-01T11:40:00.006-06:002024-03-05T08:00:26.789-06:00Comparing sizes of infinite sets<p>Some people want to be able to compare the sizes of infinite sets
while preserving the proper subset principle that holds for finite
sets:</p>
<ol type="1">
<li>If <em>A</em> is a proper subset of
<em>B</em>, then <span
class="math inline"><em>A</em> < <em>B</em></span>.</li>
</ol>
<p>We also want to make sure that our comparison agrees with how we
compare finite sets:</p>
<ol start="2" type="1">
<li>If <em>A</em> and <span
class="math inline"><em>B</em></span> are finite, then <span
class="math inline"><em>A</em> ≤ <em>B</em></span> if and only if <span
class="math inline"><em>A</em></span> has no more elements than <span
class="math inline"><em>B</em></span>.</li>
</ol>
<p>For simplicity, let’s just work with sets of natural numbers. Then
there is a total preorder (total, reflexive and transitive relation)
≤ on the sets of natural numbers (or on
subsets of any other set) that satisfies (1) and (2). Moreover, we can
require the following plausible weak translation invariance principle in
addition to (1) and (2):</p>
<ol start="3" type="1">
<li><em>A</em> ≤ <em>B</em> if and only
if <span
class="math inline">1 + <em>A</em> ≤ 1 + <em>B</em></span>,</li>
</ol>
<p>where 1 + <em>C</em> is the set
<em>C</em> translated one unit to the
right: <span
class="math inline">1 + <em>C</em> = {1 + <em>n</em> : <em>n</em> ∈ <em>C</em>}</span>.
(See the Appendix for the existence proofs.) So far things are sounding
pretty good.</p>
<p>But here is another plausible principle, which we can call
<em>discreteness</em>:</p>
<ol start="4" type="1">
<li>If <em>A</em> and <span
class="math inline"><em>C</em></span> differ by a single element, then
there is no <em>B</em> such that <span
class="math inline"><em>A</em> < <em>B</em> < <em>C</em></span>.</li>
</ol>
<p>(I write <em>A</em> < <em>B</em>
provided that <em>A</em> ≤ <em>B</em>
but not <em>B</em> ≤ <em>A</em>.) When
two sets differ by a single element, intuitively their sizes should
differ by one, and sizes should be multiples of one.</p>
<p><strong>Fun fact:</strong> There is no total preorder on the subsets
of the natural numbers that satisfies the proper subset principle (1),
the weak translation invariance principle (3) and the discreteness
principle (4).</p>
<p>The proof will be given in a bit.</p>
<p>One way to try to compare sets that respects the subset principle (1)
would be to use hypernatural numbers (which are the extension of the
natural numbers to the context of hyperreals).</p>
<p><strong>Corollary 1:</strong> There is no way to assign a
hypernatural number <span
class="math inline"><em>s</em>(<em>A</em>)</span> to every set <span
class="math inline"><em>A</em></span> of natural numbers such that (a)
<span
class="math inline"><em>s</em>(<em>A</em>) < <em>s</em>(<em>B</em>)</span>
whenever <em>A</em> ⊂ <em>B</em>, (b)
<span
class="math inline"><em>s</em>(<em>A</em>) − <em>s</em>(<em>B</em>) = <em>s</em>(1+<em>A</em>) − <em>s</em>(1+<em>B</em>)</span>,
and (c) if <em>A</em> and <span
class="math inline"><em>B</em></span> differ by one element, then <span
class="math inline">|<em>s</em>(<em>A</em>)−<em>s</em>(<em>B</em>)| = 1</span>.</p>
<p>For if we had such an assignment, we could define <span
class="math inline"><em>A</em> ≤ <em>B</em></span> if and only if <span
class="math inline"><em>s</em>(<em>A</em>) ≤ <em>s</em>(<em>B</em>)</span>,
and we would have (1), (3) and (4).</p>
<p><strong>Corollary 2:</strong> There is no way to assign a hyperreal
probability <em>P</em> for a lottery
with tickets labeled with the natural numbers such that (a) each
individual ticket has equal non-zero probability of winning <span
class="math inline"><em>α</em></span>, (b) <span
class="math inline"><em>P</em>(<em>A</em>) − <em>P</em>(<em>B</em>)</span>
and <span
class="math inline"><em>P</em>(1+<em>A</em>) − <em>P</em>(1+<em>B</em>)</span>
are always either both negative, both zero, or both positive, and (c) no
two distinct probabilities of events differ by less than <span
class="math inline"><em>α</em></span>.</p>
<p>Again, if we had such an assignment, we could define <span
class="math inline"><em>A</em> ≤ <em>B</em></span> if and only if <span
class="math inline"><em>P</em>(<em>A</em>) ≤ <em>P</em>(<em>B</em>)</span>,
and we would have (1), (3) and (4).</p>
<p>I will now prove the fun fact. The proof won’t be the simplest
possible one, but is designed to highlight how wacky a total preorder
that satisfies (1) and (4) must be. Suppose we have such a total
preorder ≤. Let <span
class="math inline"><em>A</em><sub><em>n</em></sub></span> be the set
<span
class="math inline">{<em>n</em>, 100 + <em>n</em>, 200 + <em>n</em>, 300 + <em>n</em>, ...}</span>.
Observe that <span
class="math inline"><em>A</em><sub>100</sub> = {100, 200, 300, 400, ...}</span>
is a proper subset of <span
class="math inline"><em>A</em><sub>0</sub> = {0, 100, 200, 300, ...}</span>,
and differs from it by a single element. Now let’s consider how the
elegant sequence of shifted sets <span
class="math inline"><em>A</em><sub>0</sub>, <em>A</em><sub>1</sub>, ..., <em>A</em><sub>100</sub></span>
behaves with respect to the preorder ≤.
Because <span
class="math inline"><em>A</em><sub><em>n</em> + 1</sub> = 1 + <em>A</em><sub><em>n</em></sub></span>,
if we had (3), the order relationship between successive sets in the
series would always be the same. Thus we would have exactly one of these
three options:</p>
<ol type="i">
<li><p><span
class="math inline"><em>A</em><sub>0</sub> ≈ <em>A</em><sub>1</sub> ≈ ... ≈ <em>A</em><sub>100</sub></span></p></li>
<li><p><span
class="math inline"><em>A</em><sub>0</sub> < <em>A</em><sub>1</sub> < ... < <em>A</em><sub>100</sub></span></p></li>
<li><p><span
class="math inline"><em>A</em><sub>0</sub> > <em>A</em><sub>1</sub> > ... > <em>A</em><sub>100</sub></span>,</p></li>
</ol>
<p>where <em>A</em> ≈ <em>B</em> means
that <em>A</em> ≤ <em>B</em> and <span
class="math inline"><em>B</em> ≤ <em>A</em></span>. But (i) and (ii)
each contradict (1), since <span
class="math inline"><em>A</em><sub>100</sub></span> is a proper subset
of <em>A</em><sub>0</sub>, while (iii)
contradicts (4) since <span
class="math inline"><em>A</em><sub>0</sub></span> and <span
class="math inline"><em>A</em><sub>100</sub></span> differ by one
element.</p>
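<p>These identities between the shifted sets can be checked mechanically on finite truncations. The following sketch is mine, not part of the original argument; the cutoff and window are illustrative stand-ins for the infinite sets:</p>

```python
# Sketch (not from the post): check the identities behind the argument
# on finite truncations of the sets A_n = {n, 100+n, 200+n, ...}.
CUTOFF = 10_000   # illustrative truncation of the infinite sets
WINDOW = 9_000    # compare shifted sets away from the truncation edge

def A(n):
    return set(range(n, CUTOFF, 100))

# A_{n+1} = 1 + A_n (checked below the window to avoid edge effects).
for n in range(100):
    assert {k for k in A(n + 1) if k < WINDOW} == \
           {1 + k for k in A(n) if 1 + k < WINDOW}

# A_100 is a proper subset of A_0, differing by the single element 0.
assert A(100) < A(0) and A(0) - A(100) == {0}
print("identities verified")
```

<p>Of course, the truncations only confirm the set identities; the paradox itself concerns how a total preorder can treat the untruncated sets.</p>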
<p>This completes the proof. But we can now think a little about what
the ordering would look like if we didn’t require (3). The argument in
the previous paragraph would still show that (i), (ii) and (iii) are
impossible. Similarly, <span
class="math inline"><em>A</em><sub>0</sub> ≤ <em>A</em><sub>1</sub> ≤ ... ≤ <em>A</em><sub>100</sub></span>
is impossible, since <span
class="math inline"><em>A</em><sub>100</sub> < <em>A</em><sub>0</sub></span>
by (1). That means we have two possibilities.</p>
<p>First, we might have <span
class="math inline"><em>A</em><sub>0</sub> ≥ <em>A</em><sub>1</sub> ≥ ... ≥ <em>A</em><sub>100</sub></span>.
But because <em>A</em><sub>0</sub> and
<em>A</em><sub>100</sub> differ by one
element, by (4) it follows that exactly one of these <span
class="math inline">≥</span> is actually strict: at least one must be
strict since <span
class="math inline"><em>A</em><sub>100</sub> < <em>A</em><sub>0</sub></span>,
and if two were strict, some set in the sequence would lie strictly
between <span class="math inline"><em>A</em><sub>100</sub></span> and
<span class="math inline"><em>A</em><sub>0</sub></span>, violating (4).
Thus, in the sequence <span
class="math inline"><em>A</em><sub>0</sub>, <em>A</em><sub>1</sub>, ..., <em>A</em><sub>100</sub></span>
suddenly there is exactly one point at which the size of the set goes
down by one. This is really counterintuitive. We are generating our
sequence of sets by starting with <span
class="math inline"><em>A</em><sub>0</sub></span> and then shifting the
set over to the right by one (since <span
class="math inline"><em>A</em><sub><em>n</em> + 1</sub> = 1 + <em>A</em><sub><em>n</em></sub></span>),
and suddenly the size jumps.</p>
<p>The second option is that we don’t have monotonicity at all. This means
that at some point in the sequence we go up and at some other point we
go down: there are <em>m</em> and <span
class="math inline"><em>n</em></span> between <span
class="math inline">0</span> and 99
such that <span
class="math inline"><em>A</em><sub><em>m</em></sub> < <em>A</em><sub><em>m</em> + 1</sub></span>
and <span
class="math inline"><em>A</em><sub><em>n</em></sub> > <em>A</em><sub><em>n</em> + 1</sub></span>.
This again is really counterintuitive. All these sets look alike: they
consist of an infinite sequence of points 100 units apart, just with a
different starting point. And yet the sizes wobble up and down. This is
weird!</p>
<p>This suggests to me that the problem lies with the subset principle
(1) or possibly with discreteness (4), not with the details of how to
formulate the translation invariance principle (3). If we have (1) and
(4) things are just too weird. I think discreteness is hard to give up
on: counting should be discrete—two sets can’t differ in size by, say,
1/100 or <span
class="math inline">1/2</span>. And so we are pressed to give up the
subset principle (1).</p>
<p><strong>Appendix: Existence proofs</strong></p>
<p>Let <em>U</em> be any set. Let <span
class="math inline">∼</span> be the equivalence relation on subsets of
<em>U</em> defined by <span
class="math inline"><em>A</em> ∼ <em>B</em></span> if and only if either
<em>A</em> = <em>B</em> or <span
class="math inline"><em>A</em></span> and <span
class="math inline"><em>B</em></span> are finite and of the same
cardinality. The subset relation yields a partial order on the <span
class="math inline">∼</span>-equivalence classes, and by the <a
href="https://en.wikipedia.org/wiki/Szpilrajn_extension_theorem">Szpilrajn
extension theorem</a> extends to a total order. We can use this total
order on the equivalence classes of subsets to define a total preorder
on the subsets, and this will satisfy (1) and (2).</p>
<p>If we want (3), let <em>U</em> be
the integers, and instead of the Szpilrajn extension theorem, use
Theorem 2 of <a href="https://arxiv.org/pdf/1309.7295.pdf">this
paper</a>.</p>
<p>The proof of the “Fun Fact” is really easy. Suppose we have such a
total preorder ≤. Let <span
class="math inline"><em>A</em> = {2, 4, 6, ...}</span>, <span
class="math inline"><em>B</em> = {1, 3, 5, ...}</span> and <span
class="math inline"><em>C</em> = {0, 2, 4, 6, ...}</span>, so that <span
class="math inline"><em>B</em> = 1 + <em>C</em></span> and <span
class="math inline"><em>A</em> = 1 + <em>B</em></span>. By (1), we
have <em>A</em> < <em>C</em>.
Suppose first that <span
class="math inline"><em>B</em> ≤ <em>A</em></span>, i.e., <span
class="math inline">1 + <em>C</em> ≤ 1 + <em>B</em></span>. Then <span
class="math inline"><em>C</em> ≤ <em>B</em></span> by (3). Hence
<em>C</em> ≤ <em>A</em>
by transitivity, contradicting <span
class="math inline"><em>A</em> < <em>C</em></span>. So <span
class="math inline"><em>A</em> < <em>B</em></span> by totality. Moreover,
<em>B</em> < <em>C</em>, since by (3) <span
class="math inline"><em>C</em> ≤ <em>B</em></span> would give <span
class="math inline"><em>B</em> = 1 + <em>C</em> ≤ 1 + <em>B</em> = <em>A</em></span>.
Since <em>A</em> and <span
class="math inline"><em>C</em></span> differ by one element, <span
class="math inline"><em>A</em> < <em>B</em> < <em>C</em></span>
contradicts (4).</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com2tag:blogger.com,1999:blog-3891434218564545511.post-39478313868039762772024-02-29T14:59:00.002-06:002024-02-29T14:59:50.388-06:00The Incarnation and unity of consciousness<p>A number of people find the following thesis plausible:</p>
<ol type="1">
<li>Necessarily, the conscious states hosted in a single person at one
time are unified in a single conscious state that includes them.</li>
</ol>
<p>But now consider Christ crucified.</p>
<ol start="2" type="1">
<li><p>Christ has conscious pain states in his human mind.</p></li>
<li><p>Christ has no conscious pain states in his divine mind.</p></li>
<li><p>Christ has a conscious divine comprehension state in his divine
mind.</p></li>
<li><p>Christ has no conscious divine comprehension state in his human
mind.</p></li>
<li><p>Any conscious state is in a mind.</p></li>
<li><p>Christ has no minds other than a human and a divine one.</p></li>
</ol>
<p>It seems that (2)–(7) contradict (1). For by (1), (2) and (4) it
seems there is a conscious state in Christ that includes both Christ’s
pain and Christ’s divine comprehension. But that state wouldn’t be in
the divine mind because of (3) and wouldn’t be in the human mind because
of (5). But it would have to be in a mind, and Christ has no other
minds.</p>
<p>There is a nitpicky objection that (7) might be false for all we
know—maybe Christ has some other incarnation on another planet. But that
is a mere complication to the argument, given that none of these other
incarnations could host the divine comprehension in the created
mind.</p>
<p>But the argument I gave above fails if God is outside time. For then
the “has” in (4) is compatible with the divine comprehension being
atemporal, and so it does not follow from (2) and (4) that the divine
comprehension and the pain happen at the same <em>time</em>, as is
required to contradict (1).</p>
<p>In other words, we have an argument from the Incarnation to God’s
atemporality, assuming the unity of consciousness thesis (1).</p>
<p>That said, while I welcome arguments for divine atemporality, I am
not convinced of (1).</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-38022357226788841752024-02-28T10:35:00.003-06:002024-02-28T10:37:43.719-06:00More on benefiting infinitely many people<p>Once again let’s suppose that there are infinitely many people on a line
infinite in both directions, one meter apart, at positions numbered in
meters. Suppose all the people are on par. Fix some benefit (e.g.,
saving a life or giving a cookie). Let <span
class="math inline"><em>L</em><sub><em>n</em></sub></span> be the action
of giving the benefit to all the people to the left of position <span
class="math inline"><em>n</em></span>. Let <span
class="math inline"><em>R</em><sub><em>n</em></sub></span> be the action
of giving the benefit to all the people to the right of position <span
class="math inline"><em>n</em></span>.</p>
<p>Write <em>A</em> ≤ <em>B</em> to
mean that action <em>B</em> is at least
as good as action <em>A</em>, and write
<em>A</em> < <em>B</em> to mean that
<em>A</em> ≤ <em>B</em> but not <span
class="math inline"><em>B</em> ≤ <em>A</em></span>. If neither <span
class="math inline"><em>A</em> ≤ <em>B</em></span> nor <span
class="math inline"><em>B</em> ≤ <em>A</em></span>, then we say that
<em>A</em> and <span
class="math inline"><em>B</em></span> are noncomparable.</p>
<p>Consider these three conditions:</p>
<ul>
<li><p><em>Transitivity:</em> If <span
class="math inline"><em>A</em> ≤ <em>B</em></span> and <span
class="math inline"><em>B</em> ≤ <em>C</em></span>, then <span
class="math inline"><em>A</em> ≤ <em>C</em></span> for any actions <span
class="math inline"><em>A</em></span>, <span
class="math inline"><em>B</em></span> and <span
class="math inline"><em>C</em></span> from among the <span
class="math inline">{<em>L</em><sub><em>k</em></sub>}</span> and the
<span
class="math inline">{<em>R</em><sub><em>k</em></sub>}</span>.</p></li>
<li><p><em>Strict monotonicity:</em> <span
class="math inline"><em>L</em><sub><em>n</em></sub> < <em>L</em><sub><em>n</em> + 1</sub></span>
and <span
class="math inline"><em>R</em><sub><em>n</em></sub> > <em>R</em><sub><em>n</em> + 1</sub></span>
for all <em>n</em>.</p></li>
<li><p><em>Weak translation invariance:</em> If <span
class="math inline"><em>L</em><sub><em>n</em></sub> ≤ <em>R</em><sub><em>m</em></sub></span>,
then <span
class="math inline"><em>L</em><sub><em>n</em> + <em>k</em></sub> ≤ <em>R</em><sub><em>m</em> + <em>k</em></sub></span>
and if <span
class="math inline"><em>L</em><sub><em>n</em></sub> ≥ <em>R</em><sub><em>m</em></sub></span>,
then <span
class="math inline"><em>L</em><sub><em>n</em> + <em>k</em></sub> ≥ <em>R</em><sub><em>m</em> + <em>k</em></sub></span>,
for any <em>n</em>, <span
class="math inline"><em>m</em></span> and <span
class="math inline"><em>k</em></span>.</p></li>
</ul>
<p><strong>Theorem:</strong> If we have transitivity, strict
monotonicity and weak translation invariance, then exactly one of the
following three statements is true:</p>
<ol type="i">
<li><p>For all <em>m</em> and <span
class="math inline"><em>n</em></span>, <span
class="math inline"><em>L</em><sub><em>m</em></sub></span> and <span
class="math inline"><em>R</em><sub><em>n</em></sub></span> are
incomparable</p></li>
<li><p>For all <em>m</em> and <span
class="math inline"><em>n</em></span>, <span
class="math inline"><em>L</em><sub><em>m</em></sub> < <em>R</em><sub><em>n</em></sub></span></p></li>
<li><p>For all <em>m</em> and <span
class="math inline"><em>n</em></span>, <span
class="math inline"><em>L</em><sub><em>m</em></sub> > <em>R</em><sub><em>n</em></sub></span>.</p></li>
</ol>
<p>In other words, if any of the left-benefit actions is comparable with
any of the right-benefit actions, there is an overwhelming moral skew
whereby either all the left-benefit actions beat all the right-benefit
actions or all the right-benefit actions beat all the left-benefit
actions.</p>
<p>Proposition 1 in <a href="https://arxiv.org/pdf/2010.07366.pdf">this
paper</a> is a special case of the above theorem, but the proof of the
theorem proceeds in basically the same way. For a <em>reductio</em>,
assume that (i) is false. Then either <span
class="math inline"><em>L</em><sub><em>m</em></sub> ≥ <em>R</em><sub><em>n</em></sub></span>
or <span
class="math inline"><em>L</em><sub><em>m</em></sub> ≤ <em>R</em><sub><em>n</em></sub></span>
for some <em>m</em> and <span
class="math inline"><em>n</em></span>. First suppose that <span
class="math inline"><em>L</em><sub><em>m</em></sub> ≥ <em>R</em><sub><em>n</em></sub></span>.
Then the second and third paragraphs of the proof of Proposition 1 show
that (iii) holds. Now suppose that <span
class="math inline"><em>L</em><sub><em>m</em></sub> ≤ <em>R</em><sub><em>n</em></sub></span>.
Let <span
class="math inline"><em>L</em><sub><em>k</em></sub><sup>*</sup> = <em>R</em><sub>−<em>k</em></sub></span>
and <span
class="math inline"><em>R</em><sub><em>k</em></sub><sup>*</sup> = <em>L</em><sub>−<em>k</em></sub></span>.
Say that <span
class="math inline"><em>A</em>≤<sup>*</sup><em>B</em></span> iff <span
class="math inline"><em>A</em><sup>*</sup> ≤ <em>B</em><sup>*</sup></span>.
Then transitivity, strict monotonicity and weak translation invariance
hold for ≤<sup>*</sup>. Moreover, we
have <span
class="math inline"><em>L</em><sub><em>m</em></sub> ≤ <em>R</em><sub><em>n</em></sub></span>,
so <span
class="math inline"><em>R</em><sub>−<em>m</em></sub>≤<sup>*</sup><em>L</em><sub>−<em>n</em></sub></span>.
Applying the previous case with <span
class="math inline"> − <em>m</em></span> and <span
class="math inline"> − <em>n</em></span> in place of <span
class="math inline"><em>n</em></span> and <span
class="math inline"><em>m</em></span> respectively we conclude that we
always have <span
class="math inline"><em>L</em><sub><em>j</em></sub>><sup>*</sup><em>R</em><sub><em>k</em></sub></span>
and hence that we always have <span
class="math inline"><em>L</em><sub><em>j</em></sub> < <em>R</em><sub><em>k</em></sub></span>,
i.e., (ii).</p>
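<p>The reflection trick in the proof can be sanity-checked on a finite window of the line. This sketch is my illustration, not the paper's: each action is represented by its set of beneficiaries, and mirroring the line through 0 swaps the left- and right-benefit actions as the starred definitions require.</p>

```python
# Sketch (illustrative, not from the post): on a finite window of
# positions, L(n) benefits everyone left of n and R(n) everyone right
# of n. Mirroring through 0 turns L_k into R_{-k} and R_k into L_{-k},
# which is the identity behind the starred order in the proof.
W = 50  # window half-width (an illustrative truncation)

def L(n):
    return {x for x in range(-W, W + 1) if x < n}

def R(n):
    return {x for x in range(-W, W + 1) if x > n}

def mirror(s):
    return {-x for x in s}

for k in range(-10, 11):
    assert mirror(L(k)) == R(-k)
    assert mirror(R(k)) == L(-k)
print("reflection identity holds on the window")
```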
<p>I suppose the most reasonable conclusion is that there is complete
incomparability between the left- and right-benefit actions. But this
seems implausible, too.</p>
<p>Again, I think the big conclusion is that human ethics has limits of
applicability.</p>
<p>I hasten to add this. One might reasonably think—Ian suggested this
in a recent comment—that decisions about benefiting or harming
infinitely many people (at once) do not come up for humans. Well, that’s
a little quick. To vary the Pascal’s Mugger situation, suppose a strange
guy comes up to you on the street, and tells you that there are
infinitely many people in a line drowning in a parallel universe, and
asks you if you want him to save all the ones to the left of position
123 or all the ones to the right of
position − 11, because he can
magically do either one, and nothing else, and he needs help in his
moral dilemma. You are, of course, very dubious of what he is saying.
Your credence that he is telling the truth is very, very small. But as
any good Bayesian will tell you, it shouldn’t be zero. And now the
decision you need to make is a real one.</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com3tag:blogger.com,1999:blog-3891434218564545511.post-60273336673804922002024-02-27T20:20:00.001-06:002024-02-27T20:20:02.753-06:00Incommensurability in rational choice<p>When I hear that two options are incommensurable, I imagine things
that are very different in value. But incommensurable options could also
be very close in value. Suppose an eccentric tyrant tells you that she
will spare the lives of ten innocents provided that you either have a
slice of delicious cake or listen to a short but beautiful song. You are
thus choosing between two goods:</p>
<ol type="1">
<li><p>The ten lives plus a slice of delicious cake.</p></li>
<li><p>The ten lives plus a short but beautiful song.</p></li>
</ol>
<p>The values of the two options are very close relatively speaking: the
cake and song make hardly any difference compared to the ten lives that
comprise the bulk of the value. Yet, because the cake and the song are
incommensurable, when you add the same ten lives to each, the results
are incommensurable.</p>
<p>We can make the differences between the two incommensurables
arbitrarily small. Imagine that the tyrant offers you the choice
between:</p>
<ol start="3" type="1">
<li><p>The ten lives plus a chance <span
class="math inline"><em>p</em></span> of a slice of delicious
cake.</p></li>
<li><p>The ten lives plus a chance <span
class="math inline"><em>p</em></span> of a short but beautiful
song.</p></li>
</ol>
<p>Making <em>p</em> be as small as we
like, we make the difference between the options as small as possible,
but the options remain incommensurable.</p>
<p>Well, maybe “noncomparable” is a better term than “incommensurable”,
as it is a more neutral term, without that grand sound. Then we can say
that (1) and (2) are “noncomparable by a slight amount” (relative to the
magnitude of the overall goods involved).</p>
<p>There is a common test for incommensurability. Suppose <span
class="math inline"><em>A</em></span> and <span
class="math inline"><em>B</em></span> are options where neither is
better than the other, and we want to know if they are equal in value or
incommensurable. The test is to vary one of the two options by a slight
amount of value, either positive or negative. If after the tweak the two
options are still such that neither is better than the other, they must
be incommensurable. (Proof: If <span
class="math inline"><em>A</em>′</span> is slightly better or worse than
<em>A</em>, and <span
class="math inline"><em>B</em></span> is equal to <span
class="math inline"><em>A</em></span>, then <span
class="math inline"><em>A</em>′</span> will be slightly better or worse
than <em>B</em>. So if <span
class="math inline"><em>A</em>′</span> is neither better nor worse than
<em>B</em>, we couldn’t have had <span
class="math inline"><em>B</em></span> and <span
class="math inline"><em>A</em></span> equal.)</p>
<p>But cases of things that are noncomparable by a slight amount show
that we need to be careful with the test. The test still offers a
sufficient condition for incommensurability: if the fact that neither is
better than the other remains after making an option better or worse, we
must have incommensurability. But if the two options are noncomparable
by a <em>very, very</em> slight amount, a merely <em>very</em> slight
variation in one could destroy the noncomparability, and so the test can
wrongly certify the options as equal in value. For instance, suppose that our
two options are (3) and (4) with <span
class="math inline"><em>p</em> = 10<sup>−100</sup></span>. Now suppose
the slight variation on (3) is that we suppose you are given a mint in
addition to the goods in (3). A mint beats a <span
class="math inline">10<sup>−100</sup></span> chance of a song, even if
it’s incommensurable with a larger chance of a song. So the variation on
(3) beats the original (4). But we still have incommensurability.</p>
<p>(Note: There are two concepts of incommensurability. One is purely
value based, and the other is agent-centric and based on rational
choice. It is the second one that I am using in this post. I am
comparing not pure values, but the reasons for pursuing the values. Even
if the values are strictly incommensurable, as in the case of a
certainty of a mint and a <span
class="math inline">10<sup>−100</sup></span> chance of a song, the
former is rationally preferable at least for humans.)</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com1tag:blogger.com,1999:blog-3891434218564545511.post-73408691091963142142024-02-27T10:28:00.008-06:002024-02-27T15:32:10.338-06:00Saving infinitely many lives<p>Suppose there is an infinitely long line with equally-spaced
positions numbered sequentially with the integers. At each position
there is a person drowning. All the persons are on par in all relevant
respects and equally related to you. Consider first a choice between two
actions:</p>
<ol type="1">
<li><p>Save people at <span class="math inline">0, 2, 4, 6, 8, ...</span> (red circles).</p></li>
<li><p>Save people at <span class="math inline">1, 3, 5, 7, 9, ...</span> (blue circles).</p></li>
</ol>
<p>It seems pretty intuitive that (1) and (2) are morally on par. The
non-negative evens and odds are alike!</p>
<p>But now add a third option:</p>
<ol start="3" type="1">
<li>Save people at 2, 4, 6, 8, ...
(yellow circles).</li>
</ol><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFupHkeWN6I64y_Aq0-YsUVLSXd0gMrdiLXO0zokjc2j9kdWX_xVPrIDsFIus9sqV_XR8KqpfZxzPuMs09e97oL_fISCXC8SgaaRB4s8EUCHDPhXXT5RNpGVS-V7NjxWYg_uuOgxOGvZiTnByUX9ZwlUHz0i4BzN5smJ78S2ZpH7fWBrjqSQXZ6BFulBg/s1327/evenodd.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="296" data-original-width="1327" height="142" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFupHkeWN6I64y_Aq0-YsUVLSXd0gMrdiLXO0zokjc2j9kdWX_xVPrIDsFIus9sqV_XR8KqpfZxzPuMs09e97oL_fISCXC8SgaaRB4s8EUCHDPhXXT5RNpGVS-V7NjxWYg_uuOgxOGvZiTnByUX9ZwlUHz0i4BzN5smJ78S2ZpH7fWBrjqSQXZ6BFulBg/w640-h142/evenodd.png" width="640" /></a></div><br /><div>The relation between (2) and (3) is exactly the same as the relation
between (1) and (2)—after all, there doesn’t seem to be anything special
about the point labeled with the zero. So, if (1) and (2) are on par, so
are (2) and (3).</div>
<p>But by transitivity of being on par, (1) and (3) are on par. But
they’re not! It is better to perform action (1), since that saves all
the people that action (3) saves, plus the person at the zero point.</p>
<p>So maybe (1) is after all better than (2), and (2) is better than (3)?
But this leads to the following strange thing. We know how much better
(1) is than (3): it is better by one person. If (1) is better than (2)
and (2) is better than (3), then since the relationships between (1) and
(2) and between (2) and (3) are the same, it follows that (1) must be
better than (2) by <em>half a person</em> and (2) must be better than
(3) by that same amount.</p>
<p>But when you are choosing which people to save, and they’re all on
par, and the saving is always certain, how can you get two options that
are “half a person” apart?</p>
<p>Very strange.</p>
<p>In fact, it seems we can get options that are apart by even smaller
intervals. Consider:</p>
<ol start="4" type="1">
<li><p>Save people at <span class="math inline">0, 10, 20, 30, 40, ...</span>.</p></li>
<li><p>Save people at <span class="math inline">1, 11, 21, 31, 41, ...</span>.</p></li>
</ol>
<p>and so on up to:</p>
<ol start="14" type="1">
<li>Save people at <span class="math inline">10, 20, 30, 40, ...</span>.</li>
</ol>
<p>Each of options (4)–(14) is related the same way to the next. Option
(4) is better than option (14) by exactly one person. So it seems that
each of options (4)–(13) is better by a <em>tenth</em> of a person than
the next!</p>
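<p>The structure of the (4)–(14) series can be exhibited concretely on a finite stretch of the line. This sketch is mine, not the post's; the cutoff is an illustrative truncation of the infinite case:</p>

```python
# Sketch (illustrative, not from the post): options (4)-(14) on a
# finite stretch of the line. S(j) saves positions j, 10+j, 20+j, ...
CUTOFF = 1000

def S(j):
    return set(range(j, CUTOFF, 10))

# Each option is the previous one shifted right by one
# (checked away from the truncation edge).
for j in range(10):
    assert {x for x in S(j + 1) if x < 900} == \
           {x + 1 for x in S(j) if x + 1 < 900}

# Option (4) saves exactly one more person than option (14):
# the person at position 0.
assert S(10) < S(0) and S(0) - S(10) == {0}
print("options (4)-(14) verified on the truncation")
```

<p>So the eleven options form a chain of identical one-step shifts whose endpoints differ by exactly one person, which is what generates the "tenth of a person" puzzle.</p>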
<p>I think there is only one way out that is at all reasonable, and it is to say that
in both the (1)–(3) series and the (4)–(14) series, each option is
incomparable with the succeeding one, but we have comparability between
the start and end of each series.</p>
<p>Maybe, but is the incomparability claim really correct? It still
feels like (1) and (2) should be exactly on par. If you had a choice
between (1) and (2), and one of the two actions involved a slight
benefit to another person—say, a small probability of saving the life of
the person at − 17—then we should go
for the action with that slight benefit. And this makes it implausible
that the two are incomparable.</p>
<p>My own present preferred solution is that the various things here
seem implausible to us because human morality is not meant for cases
with infinitely many beneficiaries. I think this is another piece of
evidence for the species-relativity of morality: our morality is
grounded in <em>human</em> nature.</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com4tag:blogger.com,1999:blog-3891434218564545511.post-35466867648460685702024-02-26T11:22:00.003-06:002024-02-26T11:22:28.196-06:00Consciousness finitism<p>My 11-year-old has an interesting intuition, that it is impossible to
have an infinite number of conscious beings. She is untroubled by
Hilbert’s Hotel, and insists the intuition is specific to
<em>conscious</em> beings, but is unable to put her finger on what
exactly bothers her about an infinity of conscious beings. It’s not
considerations like “If there are infinitely many people, you probably
have a near-duplicate.” Near-duplicates don’t bother her. It’s
consciousness specifically. She is surprised that a
consciousness-specific finitist intuition isn’t more common.</p>
<p>My best attempt at a defense of consciousness-finitism was that it
seems reasonable to think of yourself as a uniformly randomly chosen
member of the set of all conscious beings. But thinking of yourself as a
uniformly randomly chosen member of a countably infinite set leads to
the well-known paradoxes of countably infinite fair lotteries. So that
may provide some sort of argument for consciousness-finitism. But my
daughter insists that’s not where her intuition comes from.</p>
<p>Another argument for consciousness-finitism would be the challenges
of aggregating utilities across an infinite number of people: If all the
people are positioned at locations numbered 1, 2, 3, …, and you benefit the
people at even-numbered locations, you benefit the same number of people
(the two sets have the same cardinality) as when you benefit only the
people whose locations are divisible by four, but clearly benefiting
everyone at the even-numbered locations is a lot better. I haven’t tried
this family of arguments on my
daughter, but I don’t think her intuitions come from thinking about
well-being.</p>
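<p>The equal-cardinality claim above can be made concrete. Here is a minimal sketch (my own illustration, not anything from the post): the map <em>n</em> → 2<em>n</em> pairs each even-numbered location, one-to-one and onto, with a location divisible by four, so in cardinality terms neither group of beneficiaries outnumbers the other, even though the even-numbered locations include all the multiples of four and more besides.</p>

```c
/* Illustrative sketch: f(n) = 2n maps the even numbers 2, 4, 6, ...
   one-to-one onto the multiples of four 4, 8, 12, ... . Distinct even
   inputs get distinct outputs, and every multiple of four 4k is hit
   by the even number 2k, so the two infinite sets of beneficiaries
   have the same cardinality. */
int pair_with_multiple_of_four(int even_location) {
    return 2 * even_location;  /* 2 -> 4, 4 -> 8, 6 -> 12, ... */
}
```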
<p>Still, I have a hard time believing in the impossibility of an
infinite number of consciousnesses on the strength of such arguments.
The main reason I have such a hard time is that it seems obvious that
you could have a forward infinite regress of conscious beings, each
giving birth to the next.</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-35606838539581391892024-02-23T10:13:00.002-06:002024-02-23T10:13:58.491-06:00Teaching virtue<p>A famous Socratic question is whether virtue can be taught. This
argument may seem to settle the question:</p>
<ol type="1">
<li><p>If vice can be taught, virtue can be taught.</p></li>
<li><p>Vice can be taught. (Clear empirical fact!)</p></li>
<li><p>So, virtue can be taught.</p></li>
</ol>
<p>Well, except that what I labeled as a clear empirical fact is not
something that Socrates would accept. I think Socrates reads “to teach”
as a success verb, with a necessary condition for teaching being the
conveyance of <em>knowledge</em>. In other words, it’s not possible to
<em>teach</em> falsehood, since knowledge is always of the truth, and
presumably in “teaching” vice one is “teaching” falsehoods such as that
greed is good.</p>
<p>That said, if we understand “to teach” in a less Socratic way, as
didactic conveyance of views, skills and behavioral traits, then (2) is
a clear empirical fact, and (1) is plausible, and hence (3) is
plausible.</p>
<p>Still, it would not be surprising if it were <em>harder</em> to
teach virtue even in this non-Socratic sense than it is to teach vice.
After all, it is surely harder to teach someone to swim well than to
swim badly.</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com7tag:blogger.com,1999:blog-3891434218564545511.post-824664788534444442024-02-20T14:13:00.005-06:002024-02-20T14:31:29.030-06:00Relativism and natural law<p>Individual relativism and natural law ethics have something in
common: both agree that the grounds of your ethical obligations are
found in you. The disagreement, of course, is in how they are found. The
relativist says that they are found in your subjectivity, in your
beliefs and values that differ from person to person, while the natural
lawyer thinks they are found in your human form, which is exactly like
the human form of everyone else.</p>
<p>(Whether Kantianism shares this feature depends on how we read the
metaphysics of rationality: namely, whether our rationality is a genuine
part of our selves or merely an abstraction.)</p>
<p>I think this commonality has some importance: it captures the idea
that we are in some sense morally beholden to ourselves rather
than to something alien, something about which we could ask “Why should
I listen to it?”</p>
<p>But I think in the end natural law does a better job being a
non-alienating ethics. For we have good reason to think that my moral
beliefs and values are etiologically largely the product of society
around me and accidental features in my life. If these beliefs and
values are what grounds my moral obligations, then my obligations are by
and large the product of society and accident. (Think of the common
philosophical observation that we do not choose our beliefs, but catch
them like one catches a cold.) If I had lived in a different society
with different accidental influences, I would have had different
obligations on relativism. The obligations are, thus, largely the result
of external and accidental influence on my cognition.</p>
<p>On the other hand, on natural law, my obligations are grounded in my
individual human form which is my central and essential metaphysical
constituent. Granted, I did not <em>create</em> this form for myself.
But neither is it an accidental result of external influence—it defines
<em>me</em>.</p>
<p>I think that as a society we feel that the variability of our
individual beliefs and values makes us more autonomous if relativism is
true. But once we realistically recognize that this variability is largely
due to external influence, our intuitions should shift. Natural law
provides a more real autonomy.</p>
<p>Of course, on a theistic version of natural law, my form comes from
God. Yes, but on orthodox Aristotelianism (which I am not sure I
completely endorse) it is not an alien imposition, since I have no
existence apart from that form.</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-42138153833047738642024-02-18T10:41:00.001-06:002024-02-18T10:41:26.391-06:00Joshua Rasmussen moving to Baylor<p>I am very, very happy that my brilliant friend <a href="https://www.apu.edu/theology/faculty/jrasmussen/">Joshua Rasmussen</a>, of Azusa Pacific University, has accepted a full professor position in Baylor's Philosophy Department starting Fall 2024.</p>Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com5tag:blogger.com,1999:blog-3891434218564545511.post-72765801501409299532024-02-18T10:37:00.001-06:002024-02-18T10:39:07.576-06:00Disable Windows double-finger-clickMy new work laptop did not have dedicated buttons, and by default Windows set it up so that a two-finger tap or click on the touchpad would trigger a right-button click. I turned on the non-default setting that lets me click in the lower-right part of the touchpad to get a right button click, and turned off the two-finger tap options. There is no way to turn off the option for generating a right-button click with a two-finger click. <div><br /></div><div>This might seem quite innocent, but I kept on getting fake right-button clicks instead of left-button clicks when clicking the touchpad. I changed the registry settings to make the right-click area really small. It didn't solve the problem. Finally, I figured out what was going on: I click the touchpad with the side of my right thumb. This seems to result in the touchpad occasionally registering the tip and joint of my right thumb as separate contacts. The bad right-clicks were driving me crazy. I searched through registry and Windows .sys and .dll files for some hidden option to turn off the two-finger click for right-button clicks, finding nothing. Nothing. 
I tried to install some older proprietary touchpad driver, but none of them worked.</div><div><br /></div><div>Finally, it was time to write some code to disable the bad right clicks. After a bunch of hiccups (I almost never write code that interacts with the Windows API), and a Python-based prototype, I wrote a <a href="https://github.com/arpruss/disable-two-finger-click">little C program</a>. Just set disable-two-finger-right-click.exe to run as Administrator in Task Scheduler on login, and it takes care of it. The code uses rawinput to get the touchpad HID report, uses the HidP-* functions to parse it, and registers a low level mouse hook to remap the bad right clicks to left clicks based on some heuristics (mainly based around how long ago there was a two-finger click before the right click, while ignoring the official right-click area of the touchpad). </div><div><br /></div><div>So many hours that would have been saved if Microsoft just added an extra option.</div>Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-64387907933893215882024-02-15T10:19:00.005-06:002024-02-15T10:52:34.829-06:00Technology and dignitary harms<p>In contemporary ethics, paternalism is seen as <em>really bad</em>.
On the other hand, in contemporary technology practice, paternalism is
<em>extremely</em> widely practiced, especially in the name of security:
all sorts of things are made very difficult to unlock, with the main
official justification being that if users unlock the things, they
open themselves to malware. As someone who always wants to tweak
technology to work better for him, I keep on running up against this: I
spend a lot of time fighting against software that wants to protect me
from my own stupidity. (The latest was Microsoft’s lockdown on direct
access to HID data from mice and keyboards when I wanted to remap how my
laptop’s touchpad works. Before this, because Chromecasts do not make
root access available, to get my TV’s remote control fully working with
my Chromecast, I had to make a hardware dongle sitting between the TV
and the Chromecast, instead of simply reading the CEC system device on
the Chromecast and injecting appropriate keystrokes.)</p>
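<p>(A side note on the touchpad remapping mentioned above: stripped of the Windows plumbing, the decision to remap a bad right click is essentially a timing heuristic. The sketch below is my own illustration, with hypothetical names and an assumed threshold; it is not the actual program’s code.)</p>

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of a remapping heuristic (the names and the
   threshold are illustrative assumptions, not taken from the real
   program): a right-button click is treated as a spurious two-finger
   click, and remapped to a left click, if the touchpad reported a
   two-finger contact shortly before it, unless the click landed in
   the touchpad's official right-click area. */

#define TWO_FINGER_WINDOW_MS 200  /* assumed threshold */

bool should_remap_to_left(uint64_t click_time_ms,
                          uint64_t last_two_finger_ms,
                          bool in_right_click_area) {
    if (in_right_click_area)
        return false;  /* deliberate right clicks pass through */
    /* remap only if the two-finger contact was recent */
    return click_time_ms - last_two_finger_ms <= TWO_FINGER_WINDOW_MS;
}
```

<p>In the real program the timestamps would come from the raw HID reports and the low-level mouse hook; here they are plain parameters so the decision logic stands on its own.</p>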
<p>One might draw one of two conclusions:</p>
<ol type="1">
<li><p>Paternalism is not bad.</p></li>
<li><p>Contemporary technology practice is ethically really bad in
respect of locking things down.</p></li>
</ol>
<p>I think both conclusions would be exaggerated. I suspect the truth is
that paternalism is not quite as difficult to justify as contemporary
ethics makes it out to be, and that contemporary technology practice is not
<em>really</em> bad, but just a little bad in the respect in question,
even if that “a little bad” is very annoying to hacker types like
me.</p>
<p>Here is another thought. While the official line on a lot of the
locking down of hardware and software is that it is for the good of the
user, in the name of security, it is likely that often another reason is
that walled gardens are seen as profitable in a variety of ways. We
think of a profit motive as crass. But at least it’s not paternalistic.
Is crass better than paternalistic? On first thought, surely not:
paternalism seeks the good of the customer, while profit-seeking does
not. On second thought, it shows more respect for the customer to have a
wall around the garden in order to be able to charge admission rather
than in order to control the details of the customer’s aesthetic
experience for the customer’s own good (you will have a better
experience if you start by these oak trees, so we put the gate there and
erect a wall preventing you from starting anywhere else). One does have
a right to seek reasonable compensation for one’s labor.</p>
<p>The considerations of the last paragraph suggest that the
<em>special</em> harm of paternalistic behavior is a dignitary harm.
There is no greater non-dignitary harm to me when I am prevented from
rooting my device for paternalistic reasons than when I am prevented
from doing so for profit reasons, but the dignitary harm is greater in
the paternalistic case.</p>
<p>There is, however, an interesting species of dignitary harm that
sometimes occurs in profit-motivated technological lockdowns. Some of
these lockdowns are motivated by protecting content-creator profits from
user piracy. This, too, is annoying. (For instance, when having trouble with one
of our TV’s HDMI ports, I tried to solve the difficulty by using an EDID
buffer device, but then I could no longer use our Blu-Ray player with
that port because of digital-rights management issues.) And here there
is a dignitary harm, too. For while paternalistic lockdowns are based on
the presumption that lots of users are stupid, copyright lockdowns are
based on the presumption that lots of users are immoral.</p>
<p>Objectively, it is worse to be treated as immoral than as stupid: the
objective dignitary harm is greater. (But oddly I tend to find myself
more annoyed when I am thought stupid than when I am thought immoral. I
suppose that is a vice in me.) This suggests that in terms of difficulty
of justification of technological lockdowns with respect to dignitary
harms, the ordering of motives would be:</p>
<ol type="1">
<li><p>Copyright-protection (hardest to justify, with biggest dignitary
harm to the user).</p></li>
<li><p>Paternalism (somewhat smaller dignitary harm to the
user).</p></li>
<li><p>Other profit motives (easiest to justify, with no dignitary harm
to the user).</p></li>
</ol>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-53260407043281852732024-02-14T11:09:00.004-06:002024-02-14T11:09:55.252-06:00Yet another tweak of the knowledge argument against physicalism<p>Here is a variant on the knowledge argument:</p>
<ol type="1">
<li><p>All empirical facts <em>a priori</em> follow from the fundamental
facts.</p></li>
<li><p>The existence of consciousness does not <em>a priori</em> follow
from the fundamental physical facts.</p></li>
<li><p>The existence of consciousness is an empirical fact.</p></li>
<li><p>Thus, there are fundamental facts that are not fundamental
physical facts.</p></li>
</ol>
<p>In support of 2, note that we wouldn’t be able to tell which things
are conscious by knowing their physical constitution without some <em>a
posteriori</em> data like “When I say ‘ouch’, I am conscious.”</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-21195186636275557282024-02-13T15:29:00.002-06:002024-02-13T15:29:52.194-06:00Physicalism and "pain"<p>Assuming physicalism, plausibly there are a number of fairly natural
physical properties that occur when and only when I am having a
phenomenal experience of pain, all of which stand in the same causal
relations to other relevant properties of me. For instance:</p>
<ol type="a">
<li><p>having a brain in neural state <span
class="math inline"><em>N</em></span></p></li>
<li><p>having a human brain in neural state <span
class="math inline"><em>N</em></span></p></li>
<li><p>having a primate brain in neural state <span
class="math inline"><em>N</em></span></p></li>
<li><p>having a mammalian brain in neural state <span
class="math inline"><em>N</em></span></p></li>
<li><p>having a brain in functional state <span
class="math inline"><em>F</em></span></p></li>
<li><p>having a human brain in functional state <span
class="math inline"><em>F</em></span></p></li>
<li><p>having a primate brain in functional state <span
class="math inline"><em>F</em></span></p></li>
<li><p>having a mammalian brain in functional state <span
class="math inline"><em>F</em></span></p></li>
<li><p>having a central control system in functional state <span
class="math inline"><em>F</em></span>.</p></li>
</ol>
<p>Suppose that one of these is in fact identical with the phenomenal
experience of pain. But which one? The question is substantive and
ethically important. If, for instance, the answer is (c), then cats and
computers in principle couldn’t feel pain but chimpanzees could. If the
answer is (i), then cats and computers and chimpanzees could all feel
pain.</p>
<p>It is plausible on physicalism (e.g., Loar’s version) that my concept
of pain refers to a physical property by ostension—I am ostending to the
state that occurs in me in all and only the cases where I am in pain,
and which has the right kind of causal connection to my pain behaviors.
But there are many such states, as we saw above.</p>
<p>We might try to break the tie by saying that by reference magnetism I
am ostending to the <em>simplest</em> physical state that has the above
role, and the simplest one is probably (i). I don’t think this is
plausible. Assuming naturalism, when multiple properties of a comparable
degree of naturalness play a given role, ostension via the role is
likely to be ambiguous, with ambiguity needing to be broken by a speaker
or community decision. At some point in the history of biology, we had
to decide whether to use “fish” at a coarse-grained functional level and
include dolphins and whales as fish, or at a finer-grained level and get
the current biological concept. One option might be a little more
natural than the other, but neither is <em>decisively</em> more natural
(any fish concept that has a close connection to ordinary language is
going to have to be paraphyletic), and so a decision was needed. And
even if (i) is somewhat simpler than (a)–(h), it is not decisively more
natural.</p>
<p>This yields an interesting variant of the knowledge argument against
physicalism.</p>
<ol type="1">
<li><p>If “pain” refers to a physical property, it is a “merely
semantic” question, one settled by linguistic decision, whether “pain”
could apply to an appropriately programmed computer.</p></li>
<li><p>It is not a “merely semantic” question, one settled by linguistic
decision, whether “pain” could apply to an appropriately programmed
computer.</p></li>
<li><p>Thus, “pain” does not refer to a physical property.</p></li>
</ol>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com0tag:blogger.com,1999:blog-3891434218564545511.post-13158668558476163252024-02-13T10:23:00.004-06:002024-02-13T10:23:37.981-06:00Playing to win in order to lose<p>Let’s say I have a friend who needs cheering up as she has had a lot
of things not go her way. I know that she is definitely a better
badminton player than I. So I propose a badminton match. My goal in
doing so is to have her win the game, so as to cheer her up. But when I
play, I will of course be playing to win. She may notice if I am not,
plus in any case her victory will be the more satisfying the better my
performance.</p>
<p>What is going on rationally? I am trying to win in order that she may
win a closely contested game. In other words, I am pursuing two
logically incompatible goals in the same course of action. Yet the story
makes perfect rational sense: I achieve one end by pursuing an
incompatible end.</p>
<p>The case is interesting in multiple ways. It is a direct
counterexample to the plausible thesis that it is not rational to be
simultaneously pursuing each of two logically incompatible goals. It’s
not the only counterexample to that thesis. A perhaps more
straightforward one is where you are pursuing a disjunction between two
incompatible goods, and some actions are rationally justified by being
means to <em>each</em> good. (E.g., imagine a more straightforward case
where you reason: If I win, that’ll cheer me up, and if she wins,
that’ll cheer her up, so either way someone gets cheered up, so let’s
play.)</p>
<p>The case very vividly illustrates the distinction between:</p>
<ol type="a">
<li><p>Instrumentally pursuing a goal, and</p></li>
<li><p>Pursuing an instrumental goal.</p></li>
</ol>
<p>My pursuit of victory is instrumental to cheering up my friend, but
victory is not itself instrumental to my further goals. On the contrary,
victory would be incompatible with my further goal. Again, this is not
the only case like that. A case I’ve discussed multiple times is of
follow-through in racquet sports, where after hitting the ball or
shuttle, you intentionally continue moving the racquet, because the hit
will be smoother if you intend to follow through even though the
continuation of movement has no physical effect on the ball or shuttle.
You are instrumentally pursuing follow-through, but the follow-through
is not instrumental.</p>
<p>Similarly, the case also shows that it is false that every end you
have you either pursue for its own sake or it is your means to something
else. For neither are you pursuing victory for its own sake nor is
victory a means to something else—though your <em>pursuit</em> of
victory is a means to something else.</p>
<p>Given the above remarks, here is an interesting ethics question. Is
it permissible to pursue the death of an innocent person in order to
save that innocent person’s life? The cases are, of course, going to be
weird. For instance, your best friend Alice is a master fencer, and has
been unjustly sentenced to death by a tyrant. The tyrant gives you one
chance to save her life: you can fence Alice for ten minutes, with you
having a sharpened sword and her having a foil with a safety tip, and
you must sincerely try to kill her—the tyrant can tell if you are not
trying to kill. If she survives the ten minutes, she goes free. If you
fence Alice, the structure of your intention is just as in my badminton
case: You are trying to kill Alice in order to save her life. Alice’s
death would be pursued by you, but her death is neither a means nor
something pursued for its own sake.</p>
<p>If the story is set up as above, I think the answer is that, sadly,
it is wrong for you to try to kill Alice, even though that is the only
way to save her life.</p>
<p>All that said, I still wonder a bit. In the badminton case, are you
<em>really</em> striving for victory? Or are you striving to act <em>as
if</em> you were striving for victory? Maybe that is the better way to
describe the case. If so, then this may be a counterexample to my main
thesis <a
href="https://alexanderpruss.blogspot.com/2024/01/the-authority-of-game-rules.html">here</a>.</p>
<p>In any case, if there is a good chance the tyrant can’t tell the
difference between your trying to kill Alice and your intentionally
performing the same motions that you would be performing if you were
trying to kill Alice, it seems to me that it might be permissible to do
the latter. This puts a lot of pressure on some thoughts about the
closeness problem for Double Effect. For it seems pretty plausible to me
that it would be wrong for you to intentionally perform the same motions
that you would be performing if you were trying to kill Alice in order
to save people <em>other</em> than Alice.</p>
Alexander R Prusshttp://www.blogger.com/profile/05989277655934827117noreply@blogger.com2