Showing posts with label estimates. Show all posts

Wednesday, April 7, 2021

Non-propositional representations

I used to think that it’s quite possible that all our mental representations of the world are propositional in nature. To do that, I had to have a broad notion of proposition, much broader than what we normally consider to be linguistically expressible. Thus, I was quite happy with saying that Picasso’s Guernica expresses a proposition about war, a proposition that cannot be stated in words. Similarly, I was quite fine—my Pittsburgh philosophical pedigree comes out here—with the idea that an itch or some other quale might represent the world propositionally.

That broad view of propositions still sounds right. But I am now thinking there is a different problem for propositionalism about our representational states: the problem of estimates. A lot of my representations of the world are estimates. When I estimate my height at six feet, there is a proposition in the vicinity, namely the proposition that my height is exactly six feet. But that proposition is one that I am quite confident is false. There will even be times when I wouldn’t want to say that my best estimate of something is approximately right—but it’s still my best estimate.

The best propositionally-based account of what happens when I estimate my height at six feet seems to me to be that I believe a proposition about myself, namely that my evidence about my height supports a probability density whose mean is at six feet. But there are two problems with this. First, the representational state now becomes a representation of something about me—facts about what evidence I have—rather than about the world. Second, and worse, I don’t know that I would stick my neck out far enough to make even that claim about evidence unequivocally—my insight into the evidence I have is limited. Moreover, even concerning evidence, what I really have is only estimates of the force of my evidence, and the problem comes back for them.

So I think that estimating is a way of representing that is not propositional in nature. Notice, though, that estimates are often well expressible through language. So on my view, linguistic expressibility (in the ordinary sense of “linguistic”—maybe there is such a thing as the “language of painting” that Picasso used) is neither necessary nor sufficient for a representation of the world to be propositional in nature.

I now wonder whether vagueness isn’t something similar. Perhaps vague sentences represent the world but not propositionally. But just as we can often—but not always—reason as if sentences expressing estimates expressed propositions, we can often reason as if vague sentences expressed propositions. The “logic” of the non-propositional representations is close enough to the logic of propositional ones—except when it’s not, but we can usually tell when it’s not (e.g., we know what sorts of gruesome inferences not to draw from the estimate that a typical plumber has 2.2 children).

Monday, April 5, 2021

Best estimates and credences

Some people think that expected utilities determine credences and some think that credences determine expected utilities. I think neither is the case, and want to sketch a bit of a third view.

Let’s say that I observe people playing a slot machine. After each game, I make a tickmark on a piece of paper, and if they win, I add the amount of the win to a subtotal on a calculator. After a couple of hours—oddly not having been tossed out by the casino—I divide the subtotal by the number of tickmarks and get the average payout. If I now get an offer to play the slot machine for a certain price, I will use the average payout as an expected utility and see if that expected utility exceeds the price (in a normal casino, it won’t). So, I have an expected utility or prevision. But I don’t have enough credences to determine that expected utility: for every possible payout, I would need a credence in getting that payout, but I simply haven’t kept track of any data other than the sum total of payouts and the number of games. So, here the expected utility is not determined by the credences.
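The bookkeeping in this story is simple enough to sketch. Here is a minimal illustration of the point: the observer stores only the count of games and the running total of payouts, so the per-payout credences needed to compute an expectation are simply not there. The payout figures and price are made up.

```python
# Sketch of the casino bookkeeping: keep only the number of games
# (tickmarks) and the running subtotal of payouts, then use the
# average payout as the expected utility of one game.
# All numbers are illustrative.
observed_payouts = [0, 0, 5, 0, 0, 0, 20, 0, 0, 0]  # one entry per game

games = len(observed_payouts)      # number of tickmarks
total = sum(observed_payouts)      # subtotal on the calculator
expected_utility = total / games   # average payout per game

price_to_play = 3.0
worth_playing = expected_utility > price_to_play

print(expected_utility)  # 2.5
print(worth_playing)     # False: the price exceeds the expected payout
```

Note that nothing in the two stored numbers (`games` and `total`) recovers a credence for any particular payout, which is the point of the example.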

The opposite is also not true: expected utilities do not determine credences.

Now consider another phenomenon. Suppose I step on an analog scale, and it returns a number w1 for my weight. If that’s all the data I have, then w1 is my best estimate for the weight. What does that mean? It certainly does not mean that I believe that my weight is exactly w1. It also does not mean that I believe that my weight is close to w1—for although I do believe that my weight is close to w1, I also believe it is close to w1 + 0.1 lb. If I were an ideal epistemic agent, then for every one of the infinitely many possible intervals of weight, I would have a credence that my weight lies in that interval, and my best estimate would be an integral of the weight function over the probability space with respect to my credence measure. But I am not an ideal epistemic agent. I don’t actually have much of a credence for the hypothesis that my weight lies between w1 − 0.2 lb and w1 + 0.1 lb, say. But I do have a best estimate.
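For contrast, here is what the ideal agent described above would do, sketched with a discretized (and entirely made-up) credence function over weight intervals: the best estimate falls out as the credence-weighted average.

```python
# An ideal agent's best estimate as an expectation: credences over a
# (here discretized) partition of possible weights, with the estimate
# being the credence-weighted average. All numbers are illustrative.
w1 = 150.0  # the scale's reading, in pounds

# Credence that the true weight lies in each (midpoint-labeled) interval.
credences = {
    w1 - 0.2: 0.1,
    w1 - 0.1: 0.2,
    w1:       0.4,
    w1 + 0.1: 0.2,
    w1 + 0.2: 0.1,
}

best_estimate = sum(w * p for w, p in credences.items())
print(round(best_estimate, 6))  # 150.0: the credences are symmetric about w1
```

The non-ideal agent of the example has the output of this computation, so to speak, without having the inputs.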

This is very much like what happened in the slot machine case. So expected values are not the only probabilistic entity not determined by our credences. Rather, they are a special case of best estimates. The expected utility of the slot machine game is simply my best estimate of the actual utility of the slot machine game.

We form and use lots of such best estimates.

Note that the best estimate need not even be a possible value for the thing we are estimating. My best estimate of the payoff for the slot machine given my data might be $0.94, even though I might know that in fact all actual payouts are multiples of a dollar.

With this in mind, we can take credences to be nothing else than best estimates of the truth value, where we think of truth value as either 0 (false) or 1 (true). (Here, I think of the fact that the standard Polish word for probability is “prawdopodobieństwo”—truthlikeness, verisimilitude.) Just as in the case above, when my best estimate of the truth value is 0.75, I do not think the actual truth value is 0.75: I like classical logic, and think the only two possible values are 0 and 1.

Here, then, is a picture of what one might call our probabilistic representation of the world. We have lots of best estimates. Some of these are best estimates of utilities. Some are best estimates of other quantities, such as weights, lengths, cardinalities, etc. Some are best estimates of truth values. A consistent agent is one such that there exists a probability function such that all of the agent’s best estimates are mathematical expectations of the corresponding values with respect to the probability function. In particular, this probability function would extend the agent’s credences, i.e., the agent’s best estimates for truth values.
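On a finite space of worlds, this consistency condition can be checked directly. A toy sketch follows; the worlds, quantities, candidate probability function, and estimates are all invented for illustration.

```python
# Consistency of an agent, on a toy three-world space: the agent is
# consistent iff some probability function makes every one of the
# agent's best estimates the expectation of the corresponding quantity.
# Here we check one candidate probability function against the estimates.
worlds = ["w1", "w2", "w3"]
prob = {"w1": 0.5, "w2": 0.3, "w3": 0.2}  # candidate probability function

# Quantities as functions from worlds to values. Truth values are 0/1,
# so credences appear as just another row of best estimates.
quantities = {
    "weight":       {"w1": 149.0, "w2": 150.0, "w3": 151.0},
    "it_will_rain": {"w1": 1, "w2": 0, "w3": 1},  # a truth value
}

# The agent's best estimates (made up to match the expectations above).
estimates = {"weight": 149.7, "it_will_rain": 0.7}

def expectation(q):
    return sum(prob[w] * quantities[q][w] for w in worlds)

consistent = all(abs(expectation(q) - e) < 1e-9 for q, e in estimates.items())
print(consistent)  # True
```

Note how the credence 0.7 in rain is handled by exactly the same expectation machinery as the weight estimate, which is the "no privileging" point of the picture.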

On this picture, there is no privileging between expected utilities, credences or other best estimates. It’s just estimates all around.

Friday, December 9, 2011

Estimates, assertions and vagueness

I ask you to give me an estimate of how long a table is. You say "950 mm".

What did you do? You didn't assert that the table was 950 mm. Did you assert that you estimated the table at 950 mm? Maybe, but I think that's not quite right. After all, you might not have yourself estimated the table at 950 mm—you might have gone from your memory of what someone else said about it. So are you asserting that someone has estimated the table at 950 mm? No. For if someone had estimated the table at 700 mm and you could see that it wasn't (relevantly) near that, it wouldn't be very good for you to answer "700 mm", though it would be true that someone has estimated the table at 700 mm. Maybe you're saying that the best estimate you know of is 950 mm. But that's not right either, because the best estimate you know of might be 950.1 mm.

Here is a suggestion. Giving an estimate is a speech act not reducible to the assertion of a proposition. It has its own norms, set by the context. The norm of assertion is truth (dogmatic claim): it is binary. But the norm of an estimate is not a binary yes/no norm as for assertion; it can often be thought of as a continuous quality function. The quality function is defined by what it is that we are estimating and the context of estimation (purposes, etc.). Typically, the quality function is a Gaussian centered on the true value, with the Gaussian being wider when less precision is required. But it's not always a Gaussian. There are times when one has a lot more tolerance on one side of the value to be estimated—where it is important not to underestimate (say, the strain under which a bolt will be) but there is little harm in overestimating by a bit. In such cases, we will have an asymmetrical quality function. (This is also important for answering the puzzles here.) So in giving an estimate one engages in an act governed by a norm to give a higher quality result—but with a defeasibility: brevity can count against quality (so, you can say "950 mm" even if "950.1 mm" has slightly higher quality).
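The symmetric and asymmetric cases can be sketched concretely. In the sketch below, the widths and the particular strain values are invented; the asymmetric variant uses a narrow Gaussian below the true value and a wide one above it, so underestimating the strain on the bolt is punished much more than mildly overestimating it.

```python
import math

# Quality function for an estimate x of a true value: a Gaussian
# centered on the truth, with width sigma set by how much precision
# the context demands. All particular numbers are illustrative.
def gaussian_quality(x, true_value, sigma):
    return math.exp(-((x - true_value) ** 2) / (2 * sigma ** 2))

def asymmetric_quality(x, true_value, sigma_under, sigma_over):
    # Narrow Gaussian below the true value (underestimating is bad),
    # wide Gaussian above it (overestimating a bit is tolerable).
    sigma = sigma_under if x < true_value else sigma_over
    return gaussian_quality(x, true_value, sigma)

true_strain = 100.0
low = asymmetric_quality(95.0, true_strain, 2.0, 10.0)    # underestimate
high = asymmetric_quality(105.0, true_strain, 2.0, 10.0)  # mild overestimate
print(low < high)  # True: the same 5-unit error scores far worse below
```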

Moreover, what exactly is the quality function may depend on all sorts of features other than the exact value of the quantity being estimated. Thus, if you hand me a box with one cylindrical object in it and you ask me to give a good estimate of its diameter, how much precision is called for—i.e., how wide the Gaussian is—will depend on what the object is. If it is a gun cartridge, the Gaussian's width will be proportional to the tolerances on the relevant kind of gun; if it is an irregular hardwood dowel, the Gaussian's width will be significantly wider. So, in general, the quality function for an estimate that some quantity is x depends on:

  • what quantity x is being given
  • what the correct value is
  • other properties of what is being estimated
  • the linguistic context.
The second and third items can be subsumed as "the relevant bits of the extra-linguistic world".

So, here's a very abstract theory of estimates. Estimating is a language game one plays where the quality function keeps score. When one is asked for an estimate (or offers it of one's own accord), the context c sets up a function qc from pairs <x,w> to values, and one's score in the game is qc(x,@) where x is the value one gives and @ is the actual world.

Notice that this is general enough to encompass all sorts of other language games. For instance, the quantities need not be numbers. They might be propositions, names, etc. Assertion is a special case where the quantities are propositions, and qc(x,w) is "acceptable" when x is true at w and is "unacceptable" otherwise. Or consider the game initiated by this request: "Give me any approximate upper bound for the number of people coming to the wedding." The quality function qc(x,w) is non-decreasing in x: Because of the "any", saying "a googol" is just as good as saying "101", as long as both are actually upper bounds. Thus qc(x,w) is "perfect" in any world w where no more than x people come to the wedding. In worlds w where more than x people come to the wedding, qc(x,w) quickly drops off as x goes below the actual number of people coming to the wedding.
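The wedding-guest game can be written out as one such quality function. In this sketch the numerical drop-off rate and the use of [0,1] values (with "perfect" = 1.0) are invented conventions; what matters is that the function is non-decreasing in x and perfect whenever x really is an upper bound.

```python
# Quality function for "give me any approximate upper bound for the
# number of people coming to the wedding": perfect whenever x is a
# true upper bound, dropping off quickly as x falls below the actual
# count. The drop-off rate (0.2 per missing guest) is illustrative.
def upper_bound_quality(x, actual_guests):
    if x >= actual_guests:
        return 1.0  # "perfect": any true upper bound is as good as any other
    # below the true count, quality decays quickly with the shortfall
    return max(0.0, 1.0 - 0.2 * (actual_guests - x))

actual = 120
print(upper_bound_quality(10**100, actual))  # 1.0 — "a googol" is perfect
print(upper_bound_quality(121, actual))      # 1.0 — and so is a tight bound
print(upper_bound_quality(115, actual))      # 0.0 — underestimates score badly
```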

"Quantities" can be anything. They might be abstracta or they might be linguistic tokens. It doesn't matter for my purposes. Likewise, the values given out by the quality function could be numbers, utilities, or just labels like "perfect", "acceptable" and "unacceptable".

Conjecture: Assertoric use of sentences with vague predicates is not the assertion of a proposition but it is the offering of an estimate.

For instance, take as your quantities "yes" and "no", and suppose the context is one where we're asked if Fred is bald. Then the quality function will be something like this: qc("yes",w) is less in those worlds w where Fred has more hair, and qc("no",w) is greater in those worlds where Fred has more hair. Moreover, qc("yes",w) is "perfect" in worlds where Fred has no hair.

What if I am not asked a question, but I just say "Fred is bald"? The same applies. My saying is not an assertion. It is, rather, the offering of an estimate. We can take the quantities to be binary—say, "Fred is bald" and "Fred is non-bald"—but the quality function is non-binary.

What about more logically complex things, like "If Fred is blond, he is bald"? Well, formally treat qualities as truth values in a multivalent logic, but in the semantics, don't think that they are in fact truth values. They are quality values. So, assign qualities to sentences (keeping a context fixed), using some natural rules like:

  • qc("a or b",w) = max(qc("a",w),qc("b",w))
  • qc("a and b",w) = min(qc("a",w),qc("b",w))
  • qc("~a",w) = "perfect" − qc("a",w)
The rules may actually differ from context to context. That's fine, because this is not logic per se: this is the evaluation of quality (and that's how this approach differs from non-classical logic approaches to vagueness—maybe not formally, but in interpretation). Moreover, there may in some contexts be no assigned quality value to a particular sentence. Again, that's fine: there can be games with underdetermined rules.
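With qualities represented as numbers in [0,1] (taking "perfect" = 1.0, an invented convention), the three rules above can be sketched directly; the atomic quality assignments, fixing a context and a world, are made up.

```python
# Evaluating quality values of complex sentences by the rules above,
# with qualities as numbers in [0, 1] and "perfect" = 1.0.
# The atomic assignments (for a fixed context and world) are invented.
atomic_quality = {"Fred is bald": 0.7, "Fred is blond": 0.2}

def q_or(a, b):  return max(a, b)   # q("a or b")  = max(q(a), q(b))
def q_and(a, b): return min(a, b)   # q("a and b") = min(q(a), q(b))
def q_not(a):    return 1.0 - a     # q("~a")      = "perfect" - q(a)

bald = atomic_quality["Fred is bald"]
blond = atomic_quality["Fred is blond"]

# "If Fred is blond, he is bald", read materially as (~blond or bald):
print(q_or(q_not(blond), bald))  # 0.8
```

As the text says, these particular rules needn't hold in every context; this is just one natural way a context might keep score.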

In a nutshell: A vague sentence is an estimate of how the world is. Such sentences are not to be scored on their truth or falsity, but on the quality of the estimate.