Thursday, September 27, 2018

Learning without change in beliefs

There are books of weird mathematical things (e.g., functions with strange properties) to draw on for the sake of generating counterexamples to claims. This post is in the spirit of an entry in one of these books, but it’s philosophy, not mathematics.

Surprising fact: You can learn something without gaining or losing any beliefs.

For suppose proposition q in fact follows from proposition p, and at t1 you have an intellectual experience as of seeing q to follow from p. On the basis of that experience you form the justified and true belief that q follows from p. This belief would be knowledge, but alas the intellectual experience came from a chemical imbalance in the brain rather than from your mastery of logic. So you don’t know that q follows from p.

Years later, you consider q and p again, and you once again have an experience of q following from p. This time, however, the experience does come from your mastery of logic. This time you see, and not just think you see, that q follows from p. Your belief is now overdetermined: there is a Gettiered path to it and a new non-Gettiered path to it. The new path makes the belief be knowledge. But to gain knowledge is to learn.

But this gain of knowledge need not be accompanied by the loss of any beliefs. For instance, the new experience of q following from p doesn’t yield a belief that your previous experience was flawed. Nor need there be any gain of beliefs. For while you might form the second order belief that you see q following from p, you need not. You might just see that q follows from p, and form merely the belief that q follows from p, without forming any belief about your inner state. After all, this is surely more the rule than the exception in the case of sensory perception. When I see my colleague in the hallway, I will often form the belief that she is in the hallway rather than the self-regarding belief that I see her in the hallway. (Indeed, likely, small children and most non-human animals never form the “I see” belief.) And surely this phenomenon is not confined to the case of sensory perception. At least, it is possible to have intellectual perceptions where we form only the first-order belief, and form no self-regarding second-order belief.

So, it is possible to learn something without gaining or losing beliefs.

In fact, plausibly, the original flawed experience could have been so clear that we were fully certain that q follows from p. In that case, the new experience not only need not change any of our beliefs, but need not even change our credences. The credence was 1 before, and it can’t go up from there.

OK, so we have a counterexample. Can we learn anything from it?

Well, here are two things. One might use the story to buttress the idea that even knowledge of important matters—after all, the relation between q and p might be important—is of little value. For it seems of very little value to gain knowledge when it doesn’t change how one thinks about anything. One might also use it to argue either that understanding doesn’t require knowledge or that understanding doesn’t have much value. For if understanding does require knowledge, then one could set up a story where, by learning that q follows from p, one gains understanding—without that learning resulting in any change in how one thinks about things. Such a change seems of little worth, and hence the understanding gained is of little worth.

Tuesday, September 25, 2018

Faith and belief

Christians are called to have faith in Jesus Christ.

The Old Testament, however, is big on not putting our faith in anything other than God.

Thus, someone who has faith in Jesus Christ but does not believe that Jesus Christ is God risks violating a central principle of the Old Testament.

Moreover, faith in Jesus requires submission to Jesus. But Jesus wants his followers to obey the central principles of the Old Testament.

Thus, for someone aware of these observations, it is not possible to have faith in Jesus Christ without believing that he is God. This is a serious problem for accounts of faith that claim that a Christian need not have any doctrinal beliefs.

Friday, September 21, 2018

Lottery cases and Bayesianism

Here’s a curious thing. The ideal Bayesian reasons about all contingent cases just as she reasons about lottery cases. So if the reasoning doesn’t yield knowledge in lottery cases (i.e., if the ideal Bayesian can’t know that she won’t win the lottery), it doesn’t yield knowledge in any contingent cases. But surely the ideal Bayesian can know some contingent things. So she knows in lottery cases, I say.

Wednesday, September 19, 2018

Gettier and lottery cases

I am going to give some cases that support this thesis.

  1. If you can know that you won’t win the lottery, then in typical Gettier cases you are in a position to know, in the sense that there is a line of statistical reasoning such that if you engage in it, then you know.

There are two conclusions you could draw. You might think, as I do, that you know you won’t win the lottery (assuming that in fact you won’t win). In that case, (1) will offer something rather cool: it will tell you that if you reason in a statistical sort of way, you can get knowledge where instead you would have had mere justified true belief. If knowledge is valuable, that will be a good reason to reason in that statistical way rather than the more usual way.

Or you might think the consequent of (1) is absurd, and conclude that you don’t know you won’t win the lottery.

Start with the colorful lottery case. In this case, there are 1000 lottery tickets, each in a sealed envelope. Of these, 990 are printed on white paper, and they are all losing tickets. The remaining 10 tickets are printed on red paper, and 9 of them are winning tickets. You’re handed a sealed envelope. So, you have a 99.1% probability of losing (see the quick check after the argument below). I will assume that that’s good enough for knowledge. (If not, tweak the numbers. More on that issue later.) Suppose you come to believe you will lose on the basis of this argument:

  2. You have a ticket.

  3. If you have a ticket, it’s very likely that it’s a white losing ticket or a red losing ticket.

  4. So, very likely it’s a losing ticket.

  5. So (ampliatively) it’s a losing ticket.
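For concreteness, here is a quick check of the arithmetic (a minimal Python sketch; the conditional-on-red figure goes beyond what the argument above needs, but it is worth seeing):

```python
# Colorful lottery: 990 white losing tickets, plus 10 red tickets
# of which 9 win and 1 loses.
white_losing, red_losing, red_winning = 990, 1, 9
total = white_losing + red_losing + red_winning        # 1000 tickets

p_lose = (white_losing + red_losing) / total
print(f"P(losing) = {p_lose:.3f}")                     # 0.991

# Conditional on the envelope holding a red ticket, the odds flip:
p_lose_given_red = red_losing / (red_losing + red_winning)
print(f"P(losing | red) = {p_lose_given_red:.1f}")     # 0.1
```

The conditional figure brings out what is at stake in Objection 2 below: a red ticket is overwhelmingly likely to be a winner.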

Suppose, further, that in fact you’re right—it is a losing ticket. Then assuming you know in lottery cases, your belief that you will lose is knowledge. And I further submit that it doesn’t matter for knowledge whether your ticket is actually printed on white or red paper. All that matters for knowledge is that it is in fact a losing ticket. The reason it doesn’t matter is that the color of the tickets is just noise in the story. Clearly if your ticket is white, you know you’ll lose. But you also know it if it’s red. For regardless of which losing ticket you have, there is always some property (perhaps a disjunctive one) that your losing ticket and all nine winning tickets share. That that property is redness doesn’t seem to matter at all.

So, I take it that if you can know that you won’t win the lottery, then in the colorful lottery case you know you won’t win when in fact you have a losing ticket—even if that losing ticket is red.

Now let’s move to Gettier cases. Take a standard Gettier case where you see what looks like a sheep in the field, and you come to believe that it is a sheep, and so you conclude that there is a sheep in the field. But in fact what you see isn’t a sheep but a dog standing in front of a sheep. So, you have a justified true belief that there is a sheep, but you don’t know it. But suppose that instead of reasoning through the claim that you see a sheep, you reason as follows:

  6. You seem to see a sheep.

  7. If you seem to see a sheep, it’s very likely that you see a sheep in the field or there is an unseen sheep in the field.

  8. So, very likely there is a sheep in the field.

  9. So (ampliatively) there is a sheep in the field.

This seems to be perfectly good reasoning. It would clearly be good reasoning if the consequent of (7) were simply “you see a sheep in the field”. Adding the “unseen sheep” disjunct only makes the reasoning better. Moreover, this reasoning is exactly parallel to the colorful lottery case. So just as in the colorful lottery case you know that you have a losing ticket regardless of whether you have a white ticket or a red losing ticket, likewise in this case, you know that there is a sheep in the field regardless of whether you see a sheep in the field or there is an unseen sheep in the field.

So, it seems that although reasoning via the claim that what you see is a sheep would not lead to knowledge—Gettier is right about that—you had an alternate statistical form of reasoning (6)–(9) that would have given you knowledge.

If knowledge is valuable—something I doubt—that’s reason to prefer such statistical forms of reasoning.

Objection 1: In lottery cases you only know when the probabilities are overwhelming, while in ordinary perceptual knowledge cases you know with much smaller probabilities. Thus, perhaps, you only know in lottery cases when the chance of winning is something like one in a million, but in perceptual cases you can know even if there is a 1% chance of the observations going wrong. If so, then standard Gettier cases may not be parallel to those colorful lottery cases where there is knowledge, because the “very likely” in (3) will have to be much stronger for knowledge than in (7).

Response: If you can know in perceptual cases at, say, 99% probability but need a much higher probability in lottery cases, you have the absurd consequence that you can say things like: “I think it’s a lot more likely that I lost the lottery than that I see a sheep. But I know that I see a sheep and I don’t know that I lost the lottery.” Now, this sort of reasoning will not appeal to everyone. But I think it is one of the main intuitions behind the thought that you know in lottery cases at all. And my argument was predicated on knowing in lottery cases.

Moreover, even if there is the sort of difference that the objection claims, we still get the very interesting result that in some Gettier cases—say, ones that involve highly reliable perception—we can gain knowledge if we switch to statistical reasoning.

Objection 2: You don’t know you will lose in the colorful lottery case when in fact you have a red losing ticket but you do know when in fact you have a white ticket.

Response: If that case doesn’t convince you, consider this variant. A doctor has a treatment for a condition that occasionally, but rarely, goes away on its own. The treatment is highly effective: most of the time it fixes the condition. The doctor reasons:

  10. You will get the treatment.

  11. If you will get the treatment, very likely either you will recover because of it or you will recover spontaneously.

  12. So, very likely you will recover.

  13. So (ampliatively) you will recover.

And this reasoning seems to yield knowledge—at least if we admit knowledge of chancy future things, as friends of knowledge in lottery cases must—as long as you will in fact recover. And it yields knowledge regardless of whether you recover because of the treatment or spontaneously.

Monday, September 17, 2018

Non-propositional conveyance

One sometimes hears claims like:

  1. There are things that can be conveyed through X (poetry, novels, film, art, music, etc.) that cannot be conveyed propositionally.

But what kind of things are these? Facts? Not quite. For while some of the “things that can be conveyed … that cannot be conveyed propositionally” are in fact real and true, some are not. Leni Riefenstahl’s Triumph of the Will and Fritz Lang’s M are both good candidates for conveying “things … that cannot be conveyed propositionally”. But Triumph in doing so conveys falsehoods about the Nazi Party while M conveys truths about the human condition. But facts just are. So, the “things” are not just facts.

What I said about Triumph and M is very natural. But if we take it literally, the “things” must then be the sorts of things that can be true or false. But the primary bearers of truth are propositions. So when we dig deeper, (1) is undermined. For surely we don’t want to say that Triumph and M convey propositions that cannot be conveyed propositionally.

Perhaps, though, this was too quick. While I did talk of truth and falsehood initially, perhaps I could have talked of obtaining and not obtaining. If I did that, then maybe the “things” would have turned out to be states of affairs (technically, of the abstract Plantinga sort, not of the Armstrong sort). But I think there is good reason to prefer propositions to states of affairs here. First, it is dubious whether there are impossible states of affairs. But not only can X convey things that aren’t so, it can also convey things that couldn’t be so. A novel or film might convey ethical stuff that not only is wrong, but couldn’t be right. Second, what is conveyed is very fine-grained, and it seems unlikely to me that states of affairs are fine-grained enough. The right candidate seems to be not only propositions, but Fregean propositions.

But (1) still seems to be getting at something true. I think (1) is confusing “propositionally” with “by means of literalistic fact-stating affirmative sentences”. Indeed:

  2. There are things that can be conveyed through X (poetry, novels, film, art, music, etc.) that cannot be conveyed by means of literalistic fact-stating affirmative sentences.

(Note the importance of the word “conveyed”. If we had “expressed”, that might be false, because for any of the “things”, we could stipulate a zero-place predicate, say “xyzzies”, and then express it with “It xyzzies.” But while that sentence manages to express the proposition, it doesn’t convey it.)

Friday, September 14, 2018

A puzzle about knowledge in lottery cases

I am one of those philosophers who think that it is correct to say that I know I won’t win the lottery—assuming of course I won’t. Here is a puzzle about the view, though.

For reasons of exposition, I will formulate it in terms of dice and not lotteries.

The following is pretty uncontroversial:

  1. If a single die is rolled, I don’t know that it won’t be a six.

And those of us who think we know we won’t win the lottery will tend to accept:

  2. If ten dice are rolled, I know that they won’t all be sixes.

So, as I add more dice to the setup, somewhere I cross a line from not knowing that they won’t all be sixes to knowing. It won’t matter for my puzzle whether the line is sharp or vague, nor where it lies. (I am inclined to think it may already lie at two dice, but at the latest at three.)
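For concreteness, here is a small sketch (mine, and no part of the puzzle itself) tabulating how fast the probability climbs as dice are added:

```python
from fractions import Fraction

# P(not all n fair dice are sixes) = 1 - (1/6)**n
for n in (1, 2, 3, 10):
    p = 1 - Fraction(1, 6) ** n
    print(f"n = {n:2d}: P(not all sixes) = {float(p):.10f}")

# n =  1: 0.8333333333
# n =  2: 0.9722222222
# n =  3: 0.9953703704
# n = 10: 0.9999999835
```

If the line lies at two or three dice, the threshold for knowledge sits somewhere around 97–99.5%.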

Let N be the proposition that not all the dice are sixes.

Now, suppose that ten fair dice get rolled, and you announce to me the results of the rolls in some fixed order, say left to right: “Six. Six. Six. Six. Six. Six. Six. Six. Six. And five.”

When you have announced the first nine sixes, I don’t know N to be true. For at that point, N is true if and only if the remaining die isn’t a six, and by (1) I don’t know of a single die that it won’t be a six.

Here is what puzzles me. I want to know if in this scenario I knew N in the first place, prior to any announcements or rolls, as (2) says.

Here is a reason to doubt that I knew N in the first place. Vary the case by supposing I wasn’t paying attention, so even after the ninth announcement, I haven’t noticed that you have been saying “Six” over and over. If I don’t know in the original scenario where I was paying attention, I think I don’t know in this case, either. For knowledge shouldn’t be a matter of accident. My being lucky enough not to pay attention, while it better positioned me with regard to the credence in N (which remained very high, instead of creeping down as the announcements were made), shouldn’t have resulted in knowledge.

But if I don’t know after the ninth unheard announcement, surely I also don’t know before any of the unheard announcements. For unheard announcements shouldn’t make any difference. But by the same token, in the original scenario, I don’t know N prior to any of the announcements. For whether I will later be paying attention shouldn’t make any difference to whether I know at t0. When I am not paying attention, I have a justified true belief that N is true, but I am Gettiered. Further, there is no relevant epistemic difference between me before the dice are rolled and me after the rolls but before the announcements start. If I don’t know N at the latter point, I don’t know N at the beginning.
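To make the “creeping down” concrete, here is a small sketch (my own) of the credence trajectory in the scenario where I hear the announcements:

```python
# Credence in N (= not all ten dice are sixes) after k announced sixes:
# N now requires that the remaining 10-k dice not all be sixes.
for k in range(10):
    p = 1 - (1 / 6) ** (10 - k)
    print(f"after {k} sixes announced: P(N) = {p:.8f}")

# The credence barely moves for the first several announcements, then
# drops to 5/6 = 0.833... after nine sixes -- plausibly below any
# threshold at which a single die's not being a six could be known.
```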

So it seems that contrary to (2) I don’t know N in the first place.

Yet I am still strongly pulled to thinking that normally I would know that the dice won’t all be sixes. This suggests that whether I know that the dice won’t all be sixes depends not only on whether it is true, but on what the pattern of the dice will in fact be. If there will be nine sixes and one non-six, then I don’t know N. But if it will be a more “random-looking” pattern, then I do know N. This makes me uncomfortable. It seems wrong to think the actual future pattern matters. Maybe it does. Anyway, all this raises an interesting question: What do Gettier cases look like in lottery situations?

I see four moves possible here:

A. Reject the move from not knowing in the case where you hear the nine announcements to not knowing in the case where you failed to hear the nine announcements.

B. Say you don’t know in lottery cases.

C. Embrace the discomfort and allow that in lottery cases whether I know I won’t win depends on how different the winning number is from mine.

D. Reject the concept of knowledge as having a useful epistemological role.

Of these, move B, unless combined with D, is the least plausible to me.

The value of knowledge

Here’s a curious phenomenon. Suppose I have enough justification for p that if p is in fact true, then I know p, but suppose also that my credence for p is less than 1.

Now consider some proposition q that is statistically independent of p and unlikely to be true. Finally consider the conjunctive proposition r that p is true and q is false.

If I were to learn for sure that r is true, I would gain credence for p, but it wouldn’t change whether I know whether p is true.

If I were to learn for sure that r is false, my credence for p would go down. How much it would go down depends on how unlikely q is. Fact: If P(q) = (1 − P(p))/P(p), where P is the prior probability, then if I learn that r is false, my credence for p goes to 1/2. (Why: the falsity of r is equivalent to the disjunction of not-p and q, so P(p | not-r) = P(p)P(q)/(1 − P(p) + P(p)P(q)), and this equals 1/2 exactly when P(p)P(q) = 1 − P(p).)
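Here is a quick numerical check of this fact (a minimal sketch, assuming as above that p and q are independent and that r is the conjunction of p with the falsity of q):

```python
# With P(q) = (1 - P(p))/P(p) and r = (p and not-q), conditioning on
# not-r sends the credence in p to exactly 1/2.
for P_p in (0.8, 0.9, 0.99):
    P_q = (1 - P_p) / P_p
    P_r = P_p * (1 - P_q)          # P(p and not-q)
    P_p_and_not_r = P_p * P_q      # (p and not-r) is equivalent to (p and q)
    posterior = P_p_and_not_r / (1 - P_r)
    print(f"P(p)={P_p:.2f}  P(q)={P_q:.3f}  P(p | not-r)={posterior:.3f}")

# Each line ends with P(p | not-r) = 0.500
```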

OK, so here’s where we are. For just about any proposition p that I justifiedly take myself to know, but that I assign a credence less than 1 to, I can find a proposition r with the property that learning that r is true increases my credence in p and that learning that r is false lowers my credence in p to 1/2.

So what? Well, suppose that the only thing I value epistemically is knowing whether p is true. Then if I am in the above-described position, and if someone offers to tell me whether r is true, I should refuse to listen. Here is why. Either p is true or it is not true. If p is true, then my belief in p is knowledge. In that case, I gain nothing by learning that r is true. But learning that r is false would cost me my knowledge, by reducing my credence in p to 1/2. Suppose p is false. Then my belief in p isn’t knowledge. In the above setup, if p is false, so is r. Learning that r is false, however, doesn’t give me knowledge whether p is true. It gives me credence 1/2, which is neither good enough to know p to be true nor good enough to know p to be false. So if p is false, I gain nothing knowledge-wise.

So, if all I care about epistemically is knowing the truth about some matter, sometimes I should refuse relevant information on the basis of epistemic goals (Lara Buchak argues in her work on faith that sometimes I should refuse relevant information on the basis of non-epistemic goals; that’s a different matter).

I think this is not a very good conclusion. I shouldn’t refuse relevant information on the basis of epistemic goals. Consequently, by the above argument, knowing the truth about some matter shouldn’t be my sole epistemic goal.

Indeed, it should also be my goal to avoid thinking I know something that is in fact false. If I add that to my goals, the conclusion that I should refuse to listen to whether r is true disappears. For if p is false, although learning that r is false wouldn’t give me knowledge whether p is true, in that case it would take away the illusion of knowledge. And that would be valuable.

Nothing deep in the conclusions here. Just a really roundabout argument for the Socratic thesis that it’s bad to think you know when you don’t.

Thursday, September 13, 2018

What's the good of consciousness?

A question has hit me today that I would really want to have a good answer to: What’s the point of consciousness? I can see the point of reasoning and knowledge. But one can reason and have knowledge without consciousness. What would we lose if we were all like vampire Mary?

One could suppose that the question has a false presupposition, namely that there is a point to consciousness. Perhaps consciousness is just an evolutionary spandrel of something genuinely useful.

Still, it seems plausible that there is an answer. I can think of two.

First, perhaps consciousness is needed for moral responsibility, while moral responsibility is clearly valuable. But this won’t explain the point of brute animals’ being conscious.

Second, maybe contemplation of truth is valuable, where we use “contemplation” broadly to include both sensory and non-sensory versions. And while one can have unconscious knowledge, one cannot have unconscious contemplation. But why is contemplation of truth valuable? Intuitively, it’s a more intimate connection with truth than mere unconscious knowledge. But I fear that I am not making much progress here, because I don’t know in what way it’s more intimate and why this intimacy is valuable.

Perhaps there is a theistic story to be told. All truth is either about God or creation or both. Contemplating truths about God is a form of intimacy with God. But creation also images God. So contemplating truths about creation is also a form of intimacy with God, albeit a less direct one. So, perhaps, the value of consciousness comes from the value of intimacy with God.

Or maybe we can say that intimacy with being is itself valuable, and needs no further explanation.

Wednesday, September 12, 2018

Vampire Mary

In Peter Watts’ superb novel Blindsight, vampires are animals that function intelligently but lack consciousness. The lack of a detour of information processing through consciousness systems allows them to react with superhuman speed to stimuli.

It seems to me to be logically possible to have beings that have no consciousness but have knowledge and intelligence. After all, there are many things I currently know that I am not currently conscious of, and probably a lot of our thinking is unconscious. I don’t see why this couldn’t happen all the time.

If we want to allow this possibility, we have an interesting variant of the Mary thought experiment. Vampire Mary knows all of physics. But she has never experienced anything. Whatever we say about the original Mary and the quale of red, it seems plausible that vampire Mary has no idea what it is like to have an experience of red, or of anything else. And hence experience goes beyond physics.

Plausible, yes, but I am not satisfied with just that...

Tuesday, September 11, 2018

A simple version of the Mary argument

The following cute argument is valid.
  1. If physicalism is true, all reality is effable (because it can all be expressed in the language of completed physics).
  2. Qualia are ineffable.
  3. So, physicalism is not true.
Personally, while I accept the conclusion, I am inclined to deny (2), since it seems to me that it's easy to express a quale: the quale of red is an experience whose intentional object is an instance of redness. (For the same reason, I think the problem of qualia reduces to the problem of intentionality. And that's the real problem.)

Virtue versus painlessness

Suppose we had good empirical data that people who suffer serious physical pain are typically thereby led to significant on-balance gains in virtue (say, compassion or fortitude).

Now, I take it that one of the great discoveries of ethics is the Socratic principle that virtue is a much more significant contributor to our well-being than painlessness. Given this principle and the hypothetical empirical data, it seems that we should not bother giving painkillers to people in pain—and this seems wrong. (One might think a stronger claim is true: We should cause pain to people. But that stronger claim would require consequentialism, and anyway it neglects the very likely negative effects on the virtue of the person causing the pain.)

Given the hypothetical empirical data, what should we do about the above reasoning? Here are three possibilities:

  1. Take the Socratic principle and our intuitions about the value of pain relief to give us good reason to reject the empirical data.

  2. Take the empirical data and the Socratic principle to give us good reason to revise our intuition that we should relieve people’s pain.

  3. Take the empirical data and our intuitions about the value of pain relief to give us good reason to reject the Socratic principle.

Option 1 may seem a bit crazy. Admittedly, a structurally similar move is made when philosophers reject certain theodical claims, such as the Marilyn Adams claim that God ensures that all horrendous suffering is defeated, on the grounds that it leads to moral passivity. But it still seems wrong. If Option 1 were the right move, then we should now take ourselves (who do not have the hypothetical empirical data) to have a priori grounds to hold that serious physical pain does not typically lead to significant on-balance gains in virtue. But even if some armchair psychology is fine, this seems to be an unacceptable piece of it.

Option 2 also seems wrong to me. The intuition that relief of pain is good seems so engrained in our moral life that I expect rejecting it would lead to moral scepticism.

I think some will find Option 3 tempting. But I am quite confident that the Socratic principle is indeed one of the great discoveries of the human race.

So, what are we to do? Well, I think there is one more option:

  4. Reject the claim that the empirical data plus the Socratic principle imply that we shouldn’t relieve pain.

In fact, I think that even in the absence of the hypothetical empirical data we should go for (4). The reason is this. If we reject (4), then the above reasoning shows that we have a priori reasons to reject the empirical data, and I don’t think we do.

So, we should go for (4), not just hypothetically but actually.

How should this rejection of the implication be made palatable? This is a difficult question. I think part of the answer is that the link between good consequences and right action is quite complex. It may, for instance, be the case that there are types of goods that are primarily the agent’s own task to pursue. These goods may be more important than other goods, but nonetheless third parties should pursue the less important goods. I think the actual story is even more complicated: certain ways of pursuing the more important goods are open to third parties but others are not. It may even be that certain ways of pursuing the more important goods are not even open to first parties, but are only open to God.

And I suspect that this complexity is species-relative: agents of a different sort might have rather different moral reasons in the light of similar goods.

Monday, September 10, 2018

Infinity, Causation and Paradox: Kindle Edition

The Kindle edition of my Infinity, Causation and Paradox book is now out. Alas, the price is excessive (a few dollars cheaper than the hardcover), but for those who prefer electronic editions, or don’t want to wait for the hardcover edition, it might be worth it.

Friday, September 7, 2018

Beauty and goodness

While listening to a really interesting talk on beauty in Aquinas, I was struck by the plausibility of the following idea (perhaps not Aquinas’): The good is what one properly desires to be instantiated; the beautiful is what one properly desires to behold. So the distinction between them is in how we answer Diotima’s question about desire (or eros): what do we want to do with the object of desire?

Wednesday, September 5, 2018

Quasi-causation

You pray for me to get a benefit and God grants your prayer. The benefit is in an important sense a result of your prayer. But you didn’t cause the benefit, for if you had, it would have been an instance of causation with God as an intermediate cause, and it seems to violate divine aseity for God ever to be an intermediate cause.

Still, the relation of your prayer to the benefit is relevantly like a causal one. For instance, means-end reasoning applies just as it does to non-deterministic causal chains:

  • You want me to improve morally. I will improve morally if God gives me grace. So you pray that God gives me grace.

And I owe you gratitude, though I owe more to God.

There are even cases of blameworthiness where the action “goes through God”. For instance, it is a standard view (and dogma for Catholics) that God creates each soul directly. But a couple can be blameworthy for having a child in circumstances where the child can be reasonably expected to grow up morally corrupted (e.g., suppose that white supremacists are sure to steal one’s children if one has any). Or consider sacramental actions: a couple can be blameworthy for marrying unwisely, a priest for consecrating the Eucharist in a sacrilegious context, etc.

I call these sorts of relations “quasi-causal”. It would be good to have an account of quasi-causation.

Perhaps Lewis-style counterfactual accounts of causation, while not good accounts of causation, nonetheless provide a good start on an account of quasi-causation?

Are there any cases of quasi-causation that do not involve God? I am not sure. Perhaps constitutive explanations provide cases. Suppose your argument caused the other members of the committee to vote for the motion. Their voting for the motion partially constituted the passing of the motion. But perhaps it is not correct to say that you caused, even partially, the passing of the motion. For what you caused is the vote, and the vote isn’t the passing, but merely partially constitutive of it. But maybe we can say you quasi-caused the passing of the motion.

This post is really an invitation for people to work on this interesting notion. It also comes up briefly towards the end of my new infinity book (which is coming out in about two weeks).

Tuesday, September 4, 2018

Conciliationism with and without peerhood

Conciliationists say that when you meet an epistemic peer who disagrees with you, you should alter your credence towards theirs. While there are counterexamples to conciliationism, here is a simple argument that normally something like conciliationism is correct, without the assumption of epistemic peerhood:

  1. That someone’s credence in a proposition p is significantly below 1/2 is normally evidence against p.

  2. Learning evidence against a proposition typically should lower one’s credence.

  3. So, normally, learning that someone’s credence is significantly below 1/2 should lower one’s credence.

In particular, if your credence is above 1/2, then learning that someone else’s is significantly below 1/2 should normally lower your credence. And there are no assumptions of peerhood here.

The crucial premise is (1). Here is a simple thought: Normally, people’s credences are responsive to evidence. So when their credence is low, that’s likely because they had evidence against a proposition. Now the evidence they had either is or is not evidence you also have. If you know it is not evidence you also have, then learning that they have additional evidence against the proposition should normally provide you with evidence against it, too. If it is evidence you also have, that evidence should normally make no difference. You don’t know which of these is the case, but still the overall force of evidence is against the proposition.
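Here is an illustrative Bayes computation for premise (1). The likelihoods are made up purely for illustration; all they encode is the thought that people’s credences are responsive to evidence:

```python
# If a report of credence well below 1/2 is likelier when p is false
# than when p is true, Bayes' theorem says the report should lower
# your credence in p -- no peerhood assumption needed.
prior_p = 0.7            # your credence in p (illustrative)
lik_low_if_p = 0.2       # P(they report low credence | p)      (assumed)
lik_low_if_not_p = 0.6   # P(they report low credence | not-p)  (assumed)

posterior_p = prior_p * lik_low_if_p / (
    prior_p * lik_low_if_p + (1 - prior_p) * lik_low_if_not_p
)
print(f"posterior P(p) = {posterior_p:.3f}")   # 0.438, down from 0.7
```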

One might, however, have a worry. Perhaps while normally learning that someone’s credence is significantly below 1/2 should lower one’s credence, when that someone is an epistemic peer and hence shares the same evidence, it shouldn’t. But actually the argument of the preceding paragraph shows that as long as you assign a non-zero probability to the person having more evidence, their disagreement should lead you to lower your credence. So the worry only comes up when you are sure that the person is a peer. It would, I think, be counterintuitive to think you should normally conciliate but not when you are sure the other person is a peer.

And I think even in the case where you know for sure that the other person has the same evidence, you should lower your credence. There are two possibilities about the other person. Either they are a good evaluator of evidence or not. If not, then their evaluation of the evidence is normally no evidence either for or against the proposition. But if they are a good evaluator, then their evaluating the evidence as telling against the proposition normally is evidence that the evidence does tell against the proposition, and hence is evidence that you evaluated badly. So unless you are sure that they are a bad evaluator of evidence, you normally should conciliate.

And if you are sure they are a bad evaluator of evidence, well then, since you’re a peer, you are a bad evaluator, too. And the epistemology of what to do when you know you’re bad at evaluating evidence is hairy.

Here's another super-quick argument: Agreement normally confirms one's beliefs; hence, normally, disagreement disconfirms them.

Why do I need the "normally" in all these claims? Well, we can imagine situations where you have evidence that if the other person disbelieves p, then p is true. Moreover, there may be cases where your credence for p is 1.