Monday, July 17, 2017

Computer consciousness and dualism

Would building and running a sufficiently “smart” computer produce consciousness?

Suppose that one is impressed by the arguments for dualism, whether of the hylomorphic or Cartesian variety. Then one will think that a mere computer couldn’t be conscious. But that doesn’t settle the consciousness question. For, perhaps, if one built and ran a sufficiently “smart” computer (i.e., one with sufficient information processing capacity for consciousness), a soul would come into being. It wouldn’t be a mere computer any more.

Basically, the thought here is that something like the following is a law of nature or a non-coincidental regularity in divine soul-creation practice:

  1. When matter comes to be arranged in a way that could engage in the kind of information processing that is involved in consciousness, a soul comes into existence.

Interestingly, though, a contemporary hylomorphist has very good reason to deny (1). The contemporary hylomorphist thinks that the soul of an animal comes into existence at the beginning of the animal’s existence as an animal. Now consider a higher animal, say Rover. When Rover comes into existence as an animal out of a sperm and an egg, its matter is not arranged in a way capable of supporting the kind of information processing involved in consciousness. Yet that is when it acquires its soul. When the embryo finally grows a brain capable of this kind of information processing, no second soul comes into existence, and hence (1) is false. (I am talking here of contemporary hylomorphists; Aristotle and Aquinas both believed in delayed ensoulment, which would complicate the argument, and perhaps even undercut it.) The same argument will apply to those Cartesian dualists who are willing to admit that they were once embryos without brains.

Perhaps one could modify (1) to:

  2. When matter comes to be arranged in a way that could engage in the kind of information processing that is involved in consciousness, and a soul has not already come into existence, then a soul comes into existence.

But notice now three things. First, (2) sounds ad hoc. Second, we lack inductive evidence for (2): we know of no cases where the antecedent of (2) is true. If we were to build a computer with the right kind of information processing capabilities, we would know that the antecedent of (2) is true, but we would have no idea whether the consequent is true. Third, our observations of the world so far all fit with the following generalization:

  3. Among material things, consciousness only occurs in living things.

But a “smart” computer would still not be likely to be a living thing. If it were, we would expect there to be non-“smart” computers that are alive: just as there are both conscious and unconscious living things, there should be both “smart” and non-“smart” living computers. But it is not plausible that there would be computers that are alive but not “smart” enough to be conscious. One might as well think that the laptop I am writing this on is alive.

This isn’t a definitive refutation of (2). God has the power to (speaking loosely) provide an appropriately complex computer with a soul that gives rise to consciousness. But inductive generalization from how the world is so far gives us little reason to think he would.

10 comments:

  1. Hi Alex,

    Regarding the observations of the world, you mention "so far", which suggests (if I'm reading this correctly) that some potential observations would not fit with that generalization. If so, I'd like to ask if you have some potential observations in mind.
    For example, would machines behaving in a way similar to, say, cockroaches, be enough? (Or do you not count cockroaches as conscious? They're not self-aware, but consciousness does not require self-awareness... I think "conscious" could also use clarification.)
    Or would you require something like an intelligent machine generally making true claims also claiming that it's conscious? Or would that not be enough, either?

  2. I've never heard of anybody--except I suppose a panpsychist--proposing that any currently extant machine is conscious. (Of course we could be wrong about that.) But it seems a serious possibility that cockroaches are conscious. And nobody would be surprised if it turned out that some extant machine has the information processing capacity of a cockroach (does any? I don't know). Yet that would not, I think, make us think the machine is conscious, would it?

    This suggests that our intuitions about consciousness have some connection with life. (It's suggestive that in Star Trek, Data is characterized as an "artificial life form", rather than an intelligent machine.)

  3. Regarding my previous questions, I got that you think our intuitions connect consciousness with life, but I don't know what sort of potential observation (if any) you think would not fit with the generalization that "Among material things, consciousness only occurs in living things."

    As to your points, assuming panpsychism is false (it looks to me no less probable than any alternative hypothesis proposed so far, but that does not make it probable; I'm agnostic on this matter), I would say that processing capacity is not an indication of consciousness, but behavior seems to be, and that seems to me to require (nomologically, not metaphysically) some processing capacity, but not life.

    I'm pretty sure a computer - no matter how complex - that is turned off is not conscious (if panpsychism is true, I'd say the individual particles that make up the computer have some mind-like stuff, but not the computer qua computer in that case), but then again, the same goes for a frozen wood frog, or a tardigrade in space (with no protection), or Data or the Terminator when deactivated.

    But if some robot actually exhibited behavior as complex as that of a cockroach, I would be inclined to say there is a good chance it's conscious, even if less than the cockroach, because in that case we also have evidence from similarity to humans (but then again, if the robot has some sort of neural network that behaves in a brain-like fashion, the odds would seem to approach those of the cockroach).

    Back to Data, he may be called an "artificial life form" by other characters, but he does not seem to share the properties usually associated with life. In any case, if a real Data-like entity would likely be alive, wouldn't that conflict with your argument that "a “smart” computer would still not be likely to be a living thing. If it were, we would expect there to be non-“smart” computers that are alive: just as there are both conscious and unconscious living things, there should be both “smart” and non-“smart” living computers"?

    It seems to me that Data is a very smart computer. The way he looks is surely not the relevant factor when it comes to our (or the other characters') evidence of consciousness, and he does not have cells that reproduce, or anything of the sort. Is it maybe the positronic brain, made to be somewhat similar to a human brain? (If so, let's consider the Doctor in "Star Trek: Voyager".) Or do you think people in Star Trek should not believe that Data is conscious?

    Also, there are several other examples in fiction in which the entity in question is a very smart computer, and yet other characters (many of them, at least) seem to accept that they're conscious (e.g., Skynet, the Terminator, D.A.R.Y.L., KITT, Westworld's characters, etc.), and viewers also think that they're conscious in the fictional world. Granted, in some of those cases, viewers get a view from the artificial character's perspective, so that settles it for viewers. But not all cases are like that, and also other characters do not get that perspective and yet accept that they're conscious.

  4. I don't know what observations would be strictly incompatible with the generalization that only living things are conscious. But it is also true that we have made no observations where it is the least bit controversial (apart from panpsychism) whether they fit with the generalization. And we can imagine observations where it would become controversial.

    As for behavior, the points are good. I would also be cautious, because of the sorts of things people say about how we have a hyperactive agency detection system. That system is set off by various things in the world that have no minds, and would be even more set off by computers that behave with apparent intelligence.

  5. Would you consider robots that behave like insects, or computers that argue that they are conscious, controversial?

  6. Controversial that they exist? Or that they are conscious?

  7. I meant if we were to observe robots that behave like insects, or computers that argue that they are conscious, would they count as examples of observations that are not strictly incompatible with the generalization that only living things are conscious, but in which it's controversial whether the generalization holds? (also, I'd like to ask if you have other examples of observations such that, if they happened, it would become controversial whether the generalization holds).

    By the way, even leaving panpsychism aside, there is debate about the consciousness of some non-living things under materialism (e.g., http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm, https://philpapers.org/rec/KAMHAM). Does that make the generalization controversial in the sense you have in mind? Or does it require a certain percentage of philosophers, or something along those lines?

    On a different note, I wonder: if behavior is not enough, how should one go about persuading an intelligent computer that one is in fact conscious?

  8. Good questions!

    1. I expect that if we observed computers that (appear to) argue that they are conscious, there would be a controversy over whether they are conscious. Less so, but probably still so, in the insect case.

    2. I think overt behavior by itself is fairly weak evidence for the presence of consciousness. I imagine a possible world where there is a being that looks and behaves just like me, but its skull is empty and it has no soul. Instead, that world has a big load of laws of nature, one for each counterfactual about my behavior. Thus, it has a law of nature that when the being is asked "Are you conscious?" in such-and-such circumstances it emits a "Yes" sound, etc. I have no temptation to think that the being is conscious, because although its overt behavior is just like mine, the way in which that behavior is produced is completely different.

    3. I am not sure I need to invoke consciousness in the explanation of any of my overt behavior other than my verbal reports of conscious states ("I saw..." etc.). Explanations involving decisions, desires, intentions, thoughts, etc. can be neutral on whether the decisions, desires, intentions, thoughts, etc. were conscious or not, since the consciousness of the decision, etc. does not seem relevant to explaining what I did.

    But behavior which does not need consciousness for explanation tends to be pretty weak evidence of consciousness by itself. So robots that behave like insects wouldn't impress me. Computers that say "I am conscious" would impress me if I had evidence that they *meant* by "conscious" what I mean by it. But why would I think that that's what they meant by the word?

    4. Also related to Schwitzgebel's piece:

    http://alexanderpruss.blogspot.com/2008/08/are-associations-entities-exercising.html
    http://alexanderpruss.blogspot.com/2010/11/naturalist-theories-of-mind-and.html

    But note that both his pieces and mine are conditional. Mine are more clearly meant as a reductio.

  9. With regard to point 2., our theories about the world around us are at least almost always under-determined by observations, and at least almost always one can find possible worlds in which the observations are the same, but the correct theory is not (or one can find metaphysically impossible but logically possible worlds in some cases, but that seems to work just the same).
    For example, there is a possible world in which someone observes what I observe in the actual world, but as it turns out, the Moon Landing was a hoax. But that possible world doesn't (and shouldn't) undermine my confidence in the assessment that the Moon Landing was not a hoax. Also - and to mirror your example more closely - it seems to me there is a possible world in which things that look like insects, reptiles, birds, etc., do not have conscious states, even if they look like those in our world (with a lot of complicated laws, too). But that does not seem to make behavior + similarity in brains, etc., weak evidence of consciousness.
    So, I would ask why you think the evidence from behavior is weak in our world, and from a human perspective (e.g., the fact that there is a possible world with empty skulls, etc., does not seem to make the evidence from behavior weak as far as I can tell).

    Regarding 3., explanations involving states such as pain, hunger, etc., seem to involve conscious states in general. I don't know that that is needed, though - maybe they can be explained in terms of particles, but then that might require more computational power than we have - but I don't see why it would be crucial that invoking consciousness be needed.
    As to why you'd think the computer is using the word to mean the same you mean, I would say one would have to assess meaning by usage, and thus by behavior. But if that option is not available in the first place and behavior is weak evidence, you place the computer in a difficult position! If it's conscious, how would it go about persuading you?
    But then again, if the computer is using a similar rationale, it seems to me we'll have difficulty persuading it that we're conscious as well.

  10. Angra,

    I am thinking that in our world, an implicit part of the inference from behavior to consciousness is that the thing is alive, something that we can normally conclude from the very same behavior, so that when we expressly assume that the thing is not alive, this weakens the inference.

    Regarding 3, normally hunger and pain explain behavior by means of the desire for food and the desire for avoidance of the stimulus, which are constituent parts of hunger and pain respectively. Conscious awareness may be another constituent part of hunger and pain, over and beyond the desire. But in normal cases the behavior is explained not by the conscious awareness, but by the desires. (Desires we are not aware of are as good at explaining behavior as ones we are aware of.) Maybe. I am far from confident of this.
