A perfect Bayesian agent is really quite simple. It has a database of probability assignments, a utility function, an input system and an output system. Inputs change the probability assignments according to simple rules. It computes which output maximizes expected utility (whether causally or evidentially--it won't matter for this post) and produces that output (in case of a tie, it can take the lexicographically first option).
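To make the recipe vivid, here is a minimal toy sketch of such an agent. The states, actions, likelihoods, and utilities are made-up placeholders, not anything essential to the argument: the agent just conditions on its input and emits the expected-utility-maximizing output, breaking ties lexicographically.

```python
# Toy sketch of the perfect Bayesian agent described above; all numbers,
# states, and actions are made-up placeholders.

# Database of probability assignments over possible world-states.
beliefs = {"rain": 0.3, "no_rain": 0.7}

# Likelihoods P(observation | state) used by the input system.
likelihood = {"dark_clouds": {"rain": 0.8, "no_rain": 0.2}}

# Fixed utility function: utility[(action, state)].
utility = {
    ("take_umbrella", "rain"): 5, ("take_umbrella", "no_rain"): 2,
    ("leave_umbrella", "rain"): -10, ("leave_umbrella", "no_rain"): 4,
}

def update(beliefs, observation):
    """Input system: condition on the observation and renormalize (Bayes' rule)."""
    posterior = {s: p * likelihood[observation][s] for s, p in beliefs.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

def choose(beliefs, actions):
    """Output system: maximize expected utility; ties go to the lexicographically first action."""
    def expected_utility(action):
        return sum(p * utility[(action, s)] for s, p in beliefs.items())
    return max(sorted(actions), key=expected_utility)

beliefs = update(beliefs, "dark_clouds")                      # inputs change the assignments
print(choose(beliefs, ["take_umbrella", "leave_umbrella"]))   # -> take_umbrella
```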
In particular, there is no need for consciousness, freedom, reflection, moral constraints, etc. Moreover, apart perhaps from gerrymandered cases (Newcomb?), the perfect Bayesian agent is as good at maximizing the expected value of a fixed utility function as one can hope to get.
So, if we are the product of entirely unguided evolution, why did we get consciousness and these other things that the perfect Bayesian agent doesn't need, rather than just a database of probability assignments, a utility function keyed to reproductive potential, and finely honed input and output systems? Perhaps it is some sort of compensation for our not being perfect Bayesian agents. There is an interesting research program available here: find out how the things we have compensate for the shortfalls, say, by allowing lossy compression of the database of probability assignments or by providing heuristics in lieu of full optimization. I think that some of the things the perfect Bayesian agent doesn't need can fit into these categories (some reflection and some moral constraints). But I doubt consciousness is on that list.
Consciousness, I think, points towards a very different utility function than the one we would expect in a system produced by unguided evolution--say, a utility function on which contemplation is a highest good, and our everyday consciousness (and even that of animals) is a mirror of that contemplation.
I disagree that Newcomb is gerrymandered; such cases exist in real life to the degree that other people can predict your actions, and to some extent they can. But this would just be an argument that you have to use evidential decision theory rather than causal decision theory, and so this is not really relevant to the point of your post.
Atheists and materialists would typically argue that consciousness is an accidental but necessary by-product, and that it would probably be present even in such a perfect Bayesian agent. But I think they don't get the point: even if consciousness is absolutely necessary given how we work externally, that does not explain why consciousness is necessary (saying that something is necessary is not saying why it is necessary) and this suggests that the truth is, at least, something like what you propose.
1. Cases where people can predict actions imperfectly seem different from Newcomb. They are cases where I think it's pretty clear that causal decision theory is the thing to use.
2. It would be really interesting, though, if the stuff about prediction were what made consciousness evolutionarily useful.
3. I think that if a perfect Bayesian agent is conscious, so is a Bayesian spam filter. But the latter is not conscious.
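For comparison, a Bayesian spam filter is just this kind of thing. Here is a toy naive Bayes sketch--the words, weights, and threshold are made up, purely for illustration: it conditions a prior on the observed words and then thresholds the result.

```python
# Toy naive Bayes spam filter; the words, probabilities, and threshold are
# made-up placeholders, purely for illustration.
from math import exp, log

# Per-word likelihoods P(word | spam) and P(word | ham), plus a prior.
word_given_spam = {"free": 0.30, "meeting": 0.02, "lottery": 0.20}
word_given_ham = {"free": 0.05, "meeting": 0.25, "lottery": 0.001}
prior_spam = 0.4

def spam_probability(words):
    """Condition the prior on each observed word (naive independence assumption)."""
    log_spam, log_ham = log(prior_spam), log(1 - prior_spam)
    for w in words:
        if w in word_given_spam:
            log_spam += log(word_given_spam[w])
            log_ham += log(word_given_ham[w])
    odds = exp(log_spam - log_ham)
    return odds / (1 + odds)

def classify(words, threshold=0.5):
    """Flag the message as spam when the updated probability crosses the threshold."""
    return "spam" if spam_probability(words) > threshold else "ham"

print(classify(["free", "lottery"]))  # -> spam
print(classify(["meeting"]))          # -> ham
```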
I have pondered this kind of question for a long time. A couple of refinements:
First, I think one should distinguish phenomenal consciousness from intentional consciousness. I don’t know why we have either one of these things, but they seem quite different.
I am not sure whether it matters whether a being is phenomenally conscious. It has occurred to me (half-seriously) that when Daniel Dennett denies there are such things as qualia, maybe he is right in his own case. Even so, he would still be a rational being, deserve respect, and so on.
I am not sure whether it matters whether a being is intentionally conscious either, except that Alex suggests that this is part of contemplation and that contemplation has a lot to do with the human good. I had never thought of that before. But certainly, for some senses of ‘rational’ (e.g. the Bayesian one), being rational does not require intentional consciousness. Maybe those are non-valuable types of rationality?
You can make a good case that the traditional God is neither phenomenally nor intentionally conscious. Not phenomenally, in that he has no sense organs, and not intentionally, in that his thought is not discursive. If we imitate God in contemplation, I’m not sure how.
There is a very good empirical case to be made that the non-conscious mind uses fast frugal heuristics, and that the conscious rational mind evolved to be a kind of reflective check on these results. Cognitive biases are systematic failures of these heuristics. Kahneman's Thinking, Fast and Slow, Gigerenzer's Simple Heuristics That Make Us Smart, and, at a more general level, Haidt's The Happiness Hypothesis cover this territory. Still, I'm not sure any of it explains (so to speak) the conscious part of consciousness.
Also, David Bentley Hart wrote a book arguing for the existence of God in which consciousness figures as a prominent part of the argument.
Alex,
There isn't enough room in a thread for an in-depth look at all of the issues your post raises, but briefly:
a. Natural selection is not the only evolutionary force.
b. Natural selection tends to optimize locally, not globally.
c. Even the local optimization is constrained by the sort of universe we live in (laws of nature, nomological possibility, or whatever one calls it).
d. A zillion other issues that would take too long to address. But purely for example: it seems extremely improbable that zombies are possible in our universe - i.e., anything with a brain like ours will be conscious as we are - so a zombie was never an option.
Perhaps you will say: how about a simpler zombie brain? Why did that not evolve instead of us?
But then, that seems to commit you to the view that there were in the past nomologically possible evolutionary options (on unguided evolution) that were on average more conducive to reproductive success and that would have resulted in smaller zombie brains. The same would of course apply to other conscious animals, like chimps, or even dogs or cats, etc. (unless you think they're zombies!).
If that were correct, then what you would need is massive genetic engineering throughout the evolutionary process, or culling or some other intervention, preventing evolution from taking those routes; science would have it all wrong, and that could be established empirically by different means, if not now, then in the future. I would say that's extremely improbable, but if that does not convince you, I'd add that it does sound like a magician with a magic wand, in Francis's terms.
On the other hand, if you're saying that just the way the universe is (i.e., with consciousness) is evidence of God or whatever, then that's another matter: the "unguided evolution" argument is orthogonal to that.
Angra:
"it seems extremely improbable that zombies are possible in our universe": It doesn't sound improbable to me. :-)
"If that were correct, then what you would need is massive genetic engineering along the evolutionary process": We don't know that it would have to be massive.
Heath:
"There is a very good empirical case to be made that the non-conscious mind uses fast frugal heuristics, and that the conscious rational mind evolved to be a kind of reflective check on these results." That's the kind of thing I was thinking would be the best move. But then I agree with: "Still, I'm not sure any of it explains (so to speak) the conscious part of consciousness."
Everyone:
Here's another argument in the vicinity. We can imagine a Bayesian computational system that is behaviorally more clever than we are--we would imagine it building spaceships and doing science better than human beings do--but which is in a sense algorithmically very simple, though with very high memory and speed demands. And we have the intuition that this system wouldn't be conscious, since it would be just a spam filter with more memory, more speed, more inputs and more outputs, unless perhaps a supernatural being chose to give it consciousness (just as such a being might be able to give consciousness to a spam filter or a thermometer). Now suppose we dumb down the system, replacing its optimal calculations with heuristics and checks and balances on the heuristics, but with the advantage that we don't have as great memory and speed demands. Why would we think that this dumbing down would bestow consciousness on the system, again absent something supernatural?
"It doesn't sound improbable to me. :-)"
It doesn't sound to me like you're going to find many people who believe in unguided evolution and who will share that view. :-)
But I guess it depends on your target audience.
"We don't know that it would have to be massive."
It would be massive in the sense that it would have to be everywhere, affecting all species, and leading to conditions very different from what otherwise would have resulted.
That said, I'm not the best person for discussing the matter.
For more information, I suggest you put the theory that evolution would otherwise have led to very different animals to a forum, blog, etc., where biologists post, or at least a place where more informed people post (I could suggest a place, but the reply would not be friendly, even if it would be detailed and true).
"Here's another argument in the vicinity. We can imagine a Bayesian computational system that is behaviorally more clever than we are--we would imagine it building spaceships and doing science better than human beings do--but which is in a sense algorithmically very simple, though with very high memory and speed demands. And we have the intuition that this system wouldn't be conscious, since it would be just a spam filter with more memory, more speed, more inputs and more outputs, unless perhaps a supernatural being chose to give it consciousness (just as such a being might be able to give consciousness to a spam filter or a thermometer)."
I can't imagine it, and I'm not sure it's possible. If it's algorithmically very simple, how would it manage to do that?
I can see that it could be simpler than we are (removing all of the machinery for social interaction), but not particularly simple.
I would not be inclined to say it's unconscious. I don't know about that (by the way, panpsychism is definitely an open possibility for me, and no less probable than any alternative).
"Now suppose we dumb down the system, replacing its optimal calculations with heuristics and checks and balances on the heuristics, but with the advantage that we don't have as great memory and speed demands."
Why would that be "dumbed down"?
In which sense is it dumber?
Maybe the complex heuristics (far from being dumber) make the difference. But who knows?
I really don't know whether either of the systems would be conscious. I don't know much about the systems; maybe with more info, I could make an assessment. But I'm not even sure about that, either.
But in any case, that would not be a zombie. A zombie has to have something that looks like our brains. If you want to use "zombie" more broadly, that's your choice, of course. But natural selection only worked on carbon-based living organisms, which are very different from those computers. I have no idea how consciousness would work with different particles.
Alex,
"A perfect Bayesian agent is really quite simple. It has a database of probability assignments, a utility function, an input system and an output system. Inputs change the probability assignments according to simple rules."
How would you expect evolution to build the database without all of the other things?
I'm no expert for sure, but very roughly, evolution goes as follows:
Evolution begins with unicellular organisms. That's not enough to build the agent. You don't even have a brain. We don't know exactly how they became multicellular, but it was probably colonies of unicellular organisms becoming more integrated. And that's still without brains. When you get to brains, you have different parts of the organism already doing different functions and pulling in different directions. Then minor hacks (mutations) may alter such-and-such predispositions to some extent, which may be useful for reproductive success in that particular environment. Then the environment changes (even the other organisms evolve), but the predispositions are already there, and evolution works with minor hacks, etc.
You get the idea. By the time you have considerably more complex brains, there are different subsystems that sometimes cooperate and sometimes compete, and also nothing like a general motivation to "maximize reproductive success". In fact, that would seem to require a lot of computational capability, so bigger brains.
But when you have a significantly bigger brain, you already have a complex mind with different predispositions, etc., not a perfect Bayesian agent (conscious or not, though at this point the agent is surely conscious, if it wasn't always so). Now, some minor modifications can be conducive to reproductive success, and so they spread when the relevant mutations happen, etc.
Now, you say that's improbable. Why? And why have the people working on this (scientists, philosophers of science, etc.) not realized it?
In any case, I would like to ask for your alternative. Let's say you start with unicellular organisms. How do you get from there to your perfect Bayesian agent? I'm not asking for details, but roughly, what's the evolutionary path from unicellular organisms to a perfect Bayesian agent (conscious or not)?
Angra:
I think you're missing a crucial part of the intuition in my comment. I start with the thought that the purely Bayesian system would be a zombie. And I conclude that the other system -- the one that looks just like us -- would probably be a zombie as well, barring a miraculous infusion of a soul or the like. So this version of the argument doesn't depend on any doubts about whether evolution would be likely to produce beings with brains like ours.
Alex,
My latest comment on evolution was a more elaborate reply to your OP. It wasn't only about consciousness, but about whether we should expect a perfect Bayesian agent (conscious or not) from evolution. I'm pretty sure we should not (in fact, I think a perfect Bayesian agent with the goal of reproducing would indicate design!)
My comment before that mentions evolution because my point is that silicon-based computers - which are not alive - and carbon-based brains - which are - are very different animals. I believe brains clearly have some subjective experience. But I'm not sure what sort of silicon-based structure will also have it. For example, it might be that a computer based on the sort of design we're making now has many systems with a lower level of consciousness instead of a single higher level. What do I know?
If the behavior of the computer system actually resembles that of the brain-based organism in some intuitively relevant respects, then I would be inclined to say it's probably conscious too, though I would be pretty uncertain about the kind of subjective experiences it has (i.e., how similar they are to our own). But for now, computers do not seem to resemble us in that way, and I don't know enough about the computers in your example to tell whether they do.
My gut feeling (still a very tentative hypothesis) is that there are glimmers of consciousness pretty much all around, either really everywhere (panpsychism), or at least in systems with a certain level of complexity and certain specific characteristics. But I have no clue what those conditions/characteristics would be, beyond what we know about brains.
For example, I would be willing to say that a cockroach probably feels pain or something like it, but I would be much less inclined to guess what sort of structure a similarly complex non-living silicon-based computer would have to have in order to have some similarly integrated subjective experience. It would be probable in my view if it behaves in certain ways that are similar to the ways a cockroach (or another bug, etc.) behaves. But what if it doesn't?
That aside, I'm not sure why you think that the first system in your scenario is unlikely to be conscious, or that those are our intuitions, but I know I don't intuit anything about it - though that might be because I've not been able to get an idea of what the system is actually doing. The description seems too vague to me. For example, I would like to know things like: How does it make spaceships? Does it design them? Does it adapt to events like comets, asteroids, etc.? Does it communicate with its makers (if any)? Does it respond to verbal commands? And so on.
I also don't have any intuitions about the other system, or why it would be dumb. I would like to know things like: How slow is it? What kind of extra features does it have? Does it talk? Is it as smart as a cockroach? Etc.
By the way, do you believe in Cartesian souls, or something like them (rather than, say, an Aristotelian-Thomistic view)?
Because otherwise, it's difficult for me to see how the zombies might work.
Then again, even with Cartesian souls, perfect zombies would only seem to work if epiphenomenalism is true - which is not the case.
Given that you say the nomological possibility of zombies doesn't look improbable to you, I would like to ask how you think a soulless human would behave (i.e., what sort of work is the soul actually doing?).
I believe in Aristotelian-Thomistic souls. In such a view, zombies might just be heaps of matter without a form, albeit looking and physically behaving as if they had form.
But if the soulless zombie and the person physically behave in the same manner under the laws of our universe, then what does the soul do?
I mean, the soulless being (if it's a zombie) would utter mathematical statements, prove them, utter moral judgments, etc., so how does consciousness (including conscious choices, etc.) play a causal role on particles?
Or does consciousness only affect future consciousness, and perhaps other mental stuff?
There would be no intentionality, no teleology, no norms, and above all, no entity, just a heap. It is the form that makes a plurality of particles into a something.
Fair enough, but what I'm trying to get at is this: if the soulless heap physically behaves in the same manner as a human being under the laws of our universe, how do the soul, consciousness, free choices, etc., play a causal role on cars, computers, chairs, particles, and physical stuff generally?
Or do souls, consciousness, free choices, etc., play no causal role on physical stuff?
On the other hand, if they do play a causal role, how could the zombies physically behave as if they had form?
On my favored version of Aristotelian ontology, the forms of the individual particles disappear when the particles become part of a larger object with form. As a result, all the causal powers of the particles are subsumed into the causal power of the larger object. So just as the form of an unattached electron is responsible for the electron's behavior, the form of a dog is responsible for the dog's behavior, at both the micro and macro levels, including the behavior that one is apt to attribute to the electrons in the dog's atoms.
This lets me maintain that macroscopic objects fundamentally exist, and hence that I fundamentally exist, which is important to me. :-)
I'm not sure I'm getting this right, but do the formless particles continue to behave just as particles with form, even without any choice from the person?
And they interact with each other exactly as other particles with causal powers would interact, by virtue of an unconscious causal power of the person that just matches the behavior of the particles without the person?
Alex:
After re-reading your latest post and trying to find alternatives, I'm more confident that I got the interpretation right (please let me know if that's not the case). But doesn't it look to you like an extremely improbable event that the person (and indeed all people) just keeps making free choices that exactly match what a soulless heap that looks exactly like the person (and whose causally effective particles behave exactly like the causally effete particles in the person) would do?
Another issue: Assuming that subjective experience does not make a difference to physical behavior (i.e., compared with unconscious heaps), could there be non-moral agents who live lives of horrible suffering (even without human intervention)? What would be the purpose of all of that suffering? Zombies could do the same physically, without suffering.
1. I conceded too much to the materialist when I suggested that the behavior of the heap would be human-like. I agree that it is unlikely that free choices would look just like the soulless heap's activity. Still, the soulless heap might look pretty human in its behavior. Or not. I really don't know.
2. "could there be non-moral agents who live lives of horrible suffering (even without human intervention)? What would be the purpose of all of that suffering? Zombies could do the same physically, without suffering." I find some plausibility in the theory that God always prevents pointless suffering in animals by removing the qualia of pain while leaving the pain behaviors unchanged. This would be better than having just zombie heaps.
1. Fair enough.
Just to clarify: I'm not a materialist. I'm actually not sure the term "material" is precise enough to be used in the philosophical contexts in which it's usually used. But in any case, I don't hold that all the theories usually regarded as "materialism" are together more probable than panpsychism, and given that there are other views to which I give a probability greater than zero, I hold that materialism is improbable (i.e., probability less than 0.5).
2. But if the qualia of pain are removed and the mind is causally effective over physical stuff, you would expect them not to behave as if they were in pain. More precisely, they would likely try to behave in a certain way - the way they normally behave when they are not in pain - but somehow their behavior would fail to match their intent.
Re. 2: It might be that a divergence between the behavior of a zombie-heap and that of an animal occurs only when the animal exhibits free will. And maybe only moral agents exhibit free will. Epiphenomenalism may be true of non-human animals.
Re. 2:
I see some potential difficulties, like:
a. If the forms of the particles disappear when the dog is formed, and the causal powers are the causal powers of the dog's form, but the dog's subjective experiences are epiphenomenal, then what is causing the dog's form to have subjective experiences? It's not the particles (they're causally effete), so is the dog's form causing its own epiphenomenal experiences, while also causing the particles to move exactly as the particles of heaps would?
It seems much simpler to me to posit that dogs are heaps. While that alternative is counterintuitive, so is the epiphenomenal view (see b.), and the heap alternative avoids the complications I mentioned above, which I think make the epiphenomenal view even less probable.
b. In any event, the epiphenomenal view would make our usual judgments about non-human animals completely off-track - we would need an error theory about that.
For example, people are normally inclined to say that a dog barked at or bit a person because it was angry, that a dog whimpers because it is in pain, that dogs wag their tails when they see their owners because they are happy, that deer run from wolves because they are afraid of them, that chimpanzees fight in order to gain or defend territory, or that a chimp attacks his human caretaker because he is angry, and so on.
In short, we would seem to live in a pretty deceitful universe.
c. The view that non-human animals are epiphenomenal while humans are not requires a massive ontological leap from non-humans to humans. So it seems there would be such a thing as the first humans, whose parents would be epiphenomenal non-humans. And so, for example, a child might reckon that his mother protects him because she loves him, when in reality she protects him because the particles just happen to be moving in that direction, and her love for him is epiphenomenal.