Consider a computational theory of mind overlaid on a reductive physicalist ontology. Here is, I think, how the story would have to work. We need a mapping between a physical system (PS) and an abstract model of computation (AMC), because on a computational theory of mind, thoughts need to be defined in terms of the functioning of an AMC associated with a PS. But there are infinitely many mappings between PSs and AMCs. If thought is defined by computation, and yet we are to avoid a hyper-panpsychism on which every physical system thinks infinitely many thoughts, we need to heavily restrict the mappings between PSs and AMCs. I know of only one promising strategy of mapping restriction, and that is to require that if we specify the PSs using a truly fundamental language—one whose primitives are “structural” in Sider’s sense—the mapping can be sufficiently briefly described.
If we were dealing with infinite PSs and infinite AMCs, there would be a nice non-arbitrary way to do this: we could require that the mapping description be finite (assuming the language has expressive resources like recursion). But with finite PSs and AMCs, that will still generate hyper-panpsychism, since there will be infinitely many finite AMCs that can be assigned to a given PS.
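The proliferation worry can be made vivid with a toy sketch, in the spirit of Putnam-style triviality arguments. Everything here—the state names, the two automaton runs, the `implements` function—is my own illustrative invention, not anything from the text: the point is only that, absent any brevity restriction, one and the same finite physical trajectory can be paired off with runs of arbitrarily many distinct finite automata.

```python
def implements(physical_trajectory, automaton_run):
    """Under a wholly unrestricted notion of mapping, a PS 'implements' an
    AMC run whenever we can pair the i-th physical state with the i-th
    automaton state. The 'mapping' is just the list of pairings."""
    if len(physical_trajectory) != len(automaton_run):
        return None
    return list(zip(physical_trajectory, automaton_run))

# One fixed physical trajectory (say, successive microstates of a rock).
ps = ["p0", "p1", "p2", "p3"]

# Two quite different automaton runs of the same length (hypothetical AMCs):
amc1_run = ["read", "read", "compute", "output"]
amc2_run = ["load", "add", "carry", "halt"]

m1 = implements(ps, amc1_run)
m2 = implements(ps, amc2_run)

# The same PS 'implements' both AMCs, under different mappings; nothing in
# the construction stops us from generating infinitely many more such runs.
assert m1 is not None and m2 is not None and m1 != m2
```

The brevity strategy in the text amounts to ruling out most of these pairings: only mappings with a sufficiently short description in the fundamental language would count as genuine implementations.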
This means that we not only have to restrict the mapping to a finite description, but to a short finite description. Once we do that, we will specify that a PS x thinks the thoughts associated with an AMC y if and only if the mapping between x and y is short. One obvious problem here is the seeming arbitrariness of whatever shortness threshold we adopt.
But there is another interesting problem. This approach will violate the multiple realizability intuition that leads many people to computational theories of mind. For imagine a reductive physicalist world w* which is just like ours at the macroscopic level, and even at the atomic level, but whose microscopic reduction goes a number of extra levels down, with the reductions being quite complex. Thus, although in our world facts about electrons may be fundamental, in w* these facts are far from fundamental, being reducible, in a complex way, to facts about much more fundamental things. Multiple realizability intuitions lead one to think that macroscopic entities in a world like w* that behave just like humans down to the atomic level could think like we do. But if the reduction from the atomic level to the fundamental level in w* is sufficiently complicated, then the mapping from brains to human-like AMCs in w* will fail to meet the brevity condition, and hence the beings won’t think, or at least not like we do.
The problem is that it is really hard to avoid hyper-panpsychism while also honoring multiple realizability intuitions within the confines of a reductive physicalist computational theory of mind. A dualist, of course, has no such difficulty: a soul can be attached to w*’s human-like organisms with no more difficulty than it can to our world’s human organisms.
Suppose the computationalist denies that multiple realizability extends to worlds like w*. Then there is a new and interesting feature of fine-tuning in our world that calls out for explanation: our world’s fundamental level is sufficiently easily mapped to a neural level to allow the neural level to count as engaging in thoughtful computation.