Consider a computational theory of mind overlaid on a reductive physicalist ontology. Here, I think, is how the story would have to work. We need a mapping between a physical system (PS) and an abstract model of computation (AMC), because on a computational theory of mind, thoughts must be defined in terms of the functioning of an AMC associated with a PS. But there are infinitely many mappings between PSs and AMCs. If thought is defined by computation, and if we are to avoid a hyper-panpsychism on which every physical system thinks infinitely many thoughts, we need to heavily restrict the mappings between PSs and AMCs. I know of only one promising strategy of restriction: require that if we specify the PSs in a truly fundamental language, one whose primitives are “structural” in Sider’s sense, the mapping can be sufficiently briefly described.
If we were dealing with infinite PSs and infinite AMCs, there would be a nice non-arbitrary way to do this: we could require that the mapping description be finite (assuming the language has expressive resources like recursion). But with finite PSs and AMCs, that restriction will still generate hyper-panpsychism, since there will be infinitely many finite AMCs that can be assigned to a given PS.
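The worry that unrestricted mappings come too cheaply can be made vivid with a toy construction in the spirit of the Putnam/Searle triviality arguments. All names and states below are illustrative, not from the post: the point is just that so long as a physical system passes through distinct states over time, an unconstrained mapping lets it “implement” any finite run whatsoever.

```python
# Toy sketch of a gerrymandered PS-to-AMC mapping (illustrative only):
# any physical system that passes through distinct states over time can be
# mapped onto an arbitrary finite run of an abstract machine.

def gerrymandered_mapping(physical_trace, automaton_run):
    """Map each physical state to the automaton state at the same position
    in the run. Requires the physical states to be pairwise distinct."""
    assert len(physical_trace) == len(automaton_run)
    assert len(set(physical_trace)) == len(physical_trace), "states must be distinct"
    return dict(zip(physical_trace, automaton_run))

# A "rock" passing through four distinct microstates over time:
rock_trace = ["p0", "p1", "p2", "p3"]

# A run of some abstract machine, say one tracking parity:
parity_run = ["even", "odd", "even", "odd"]

mapping = gerrymandered_mapping(rock_trace, parity_run)

# Under this mapping, the rock's trajectory realizes the parity run:
assert [mapping[p] for p in rock_trace] == parity_run
```

Note that the mapping’s description is essentially a lookup table whose length grows with the trace, which is exactly why a brevity restriction on the mapping description has bite against such constructions.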
This means that we have to restrict the mapping not merely to a finite description, but to a short finite description. Once we do that, we will specify that a PS x thinks the thoughts associated with an AMC y if and only if the mapping between x and y is short. One obvious problem here is the seeming arbitrariness of whatever threshold of shortness we adopt.
But there is another interesting problem. This approach violates the multiple realizability intuition that leads many people to computational theories of mind. For imagine a reductive physicalist world w* which is just like ours at the macroscopic level, and even at the atomic level, but whose microscopic reduction goes a number of extra levels down, with the reductions being quite complex. Thus, although in our world facts about electrons may be fundamental, in w* these facts are far from fundamental: they reduce, in a complex way, to facts about much more fundamental things. Multiple realizability intuitions lead one to think that macroscopic entities in a world like w* that behave just like humans down to the atomic level could think as we do. But if the reduction from the atomic level to the fundamental level in w* is sufficiently complicated, then the mapping from brains to human-like AMCs in w* will fail to meet the brevity condition, and hence the beings won’t think, or at least not as we do.
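The brevity worry about w* can be sketched as a bit of bookkeeping. Nothing in the argument fixes actual description lengths, so the numbers below are made-up placeholders; the only assumption is that a mapping stated in the fundamental language must pay for the reduction from the atomic level all the way down.

```python
# Hypothetical bookkeeping for the brevity condition (placeholder numbers):
# the total description length of a PS-to-AMC mapping, stated in the
# fundamental language, includes the cost of reducing the non-fundamental
# vocabulary plus the cost of the mapping at the base level.

def total_mapping_cost(reduction_cost, base_level_mapping_cost):
    # First describe the reduction down to the fundamental base,
    # then state the mapping in those fundamental terms.
    return reduction_cost + base_level_mapping_cost

# Our world w: electrons are (near-)fundamental, so the reduction is cheap.
cost_w = total_mapping_cost(reduction_cost=10, base_level_mapping_cost=100)

# World w*: same atomic-level facts, but a long, complex reduction below them.
cost_w_star = total_mapping_cost(reduction_cost=10_000, base_level_mapping_cost=100)

# w*'s mapping can exceed any brevity threshold that w's mapping meets:
assert cost_w_star > cost_w
```

On this accounting, the atomic-level facts and the base-level mapping are identical in the two worlds; only the reduction term differs, and it alone can push w* past the threshold.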
The problem is that it is very hard to avoid hyper-panpsychism while honoring multiple realizability intuitions within the confines of a reductive physicalist computational theory of mind. A dualist, of course, has no such difficulty: a soul can be attached to w*’s human-like organisms with no more difficulty than to our world’s human organisms.
Suppose the computationalist denies that multiple realizability extends to worlds like w*. Then there is a new and interesting feature of fine-tuning in our world that calls out for explanation: our world’s fundamental level is sufficiently easily mapped to a neural level to allow the neural level to count as engaging in thoughtful computation.
2 comments:
I see many problems with this argument, but let's pick one.
You ask us to imagine "world w* which is just like ours at the macroscopic level, and even at the atomic level" and then "in our world facts about electrons may be fundamental, in w* these facts are far from fundamental, being reducible to facts about much more fundamental things and reducible in a complex way."
For your argument this has to imply a complexity cost for the additional levels, so the initially simplest mapping between mind and realization is no longer simplest. But cost relative to what?
- If we are talking about cost relative to other possible reductions within world w*, then either
(a) the reduction still explains electrons, in which case we have found a simpler reduction and can just substitute it and keep everything from the electrons up, or
(b) the reduction works through a new physical theory which doesn't involve electrons, in which case the worlds no longer appear equivalent.
- If we are talking about cost relative to the explanation in our world (w), then we can ignore the complexity of the reduction of electrons in w* and compare just the complexity of the reduction down to the electrons. Since the electrons behave the same in both cases, any lower-level structure can't make a difference.
It's absolute cost that I am worried about. Pretty much every physical system can be mapped to pretty much every finite computational system if one isn't worried about the cost of the mapping. So to avoid hyper-panpsychism, one needs to limit the absolute cost of the mapping.
But the cost of the mapping has to include the cost of the reduction all the way down to the fundamental base, since the choice of any non-fundamental base is ad hoc.