Thursday, May 7, 2020

Swapping ones and zeroes

Decimal addition can be done by a computer using infinitely many algorithms. Here are two:

  1. Convert decimal to binary. Add the binary. Convert binary to decimal.

  2. Convert decimal to inverted binary. Inv-add the binary. Convert inverted binary to decimal.
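
To fix ideas, here is a minimal Python sketch of algorithm (1) in the 8-bit case; the function name and the wraparound past 255 are my own choices, not anything in the original:

```python
def add_via_algorithm_1(a, b):
    """Algorithm (1): convert decimal to binary, add the binary, convert back."""
    x = format(a, '08b')                 # decimal -> 8-bit binary string
    y = format(b, '08b')
    s = (int(x, 2) + int(y, 2)) & 0xFF   # add the binary (wrapping past 255)
    return int(format(s, '08b'), 2)      # binary -> decimal

assert add_via_algorithm_1(17, 25) == 42
```

Algorithm (2) needs the two definitions below, so its sketch comes after them.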

By conversion between decimal and inverted binary, I mean this conversion (in the 8-bit case):

  • 0↔11111111, 1↔11111110, 2↔11111101, …, 255↔00000000.
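
In code, this conversion is just a bitwise inversion of the 8-bit representation; a small sketch (the function names are mine):

```python
def to_inverted_binary(n):
    """Decimal 0..255 -> 8-bit inverted-binary string."""
    return format(n ^ 0xFF, '08b')   # invert all 8 bits, then render

def from_inverted_binary(bits):
    """8-bit inverted-binary string -> decimal 0..255."""
    return int(bits, 2) ^ 0xFF

# The endpoints of the table above:
assert to_inverted_binary(0) == '11111111'
assert to_inverted_binary(2) == '11111101'
assert from_inverted_binary('00000000') == 255
```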

By inv-add, I mean an odd operation that is equivalent to bitwise inverting both operands, adding, and bitwise inverting the result.
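
Spelled out the same way, here is a sketch of inv-add and of algorithm (2) built from it (again with the wraparound past 255 as my own assumption):

```python
def inv_add(x, y):
    """Invert both 8-bit operands, add (wrapping past 255), invert the result."""
    return (((x ^ 0xFF) + (y ^ 0xFF)) & 0xFF) ^ 0xFF

def add_via_algorithm_2(a, b):
    """Algorithm (2): decimal -> inverted binary, inv-add, inverted binary -> decimal."""
    x = a ^ 0xFF          # a as inverted binary
    y = b ^ 0xFF          # b as inverted binary
    s = inv_add(x, y)     # inv-add the inverted representations
    return s ^ 0xFF       # inverted binary -> decimal

# Agrees with ordinary 8-bit decimal addition, i.e. with algorithm (1):
assert add_via_algorithm_2(17, 25) == 42
assert add_via_algorithm_2(200, 100) == (200 + 100) % 256
```

The reason it works is that inverting twice is the identity: algorithm (2) just wraps an ordinary addition in two cancelling pairs of inversions.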

You probably thought (or would have thought had you thought about it) that your computer does decimal addition using algorithm (1).

Now, here’s the fun. We can reinterpret all the physical functioning of a digital computer in a way that reverses the 0s and 1s. Let’s say that normally 0.63V or less counts as zero and 1.17V or higher counts as one. But “zero” and “one” are our interpretations of analog physical states that in themselves do not have such meanings. So, we could deem 0.63V or less to be one and 1.17V or higher to be zero. With such a reinterpretation, logic gates change their semantics: OR and AND swap, NAND and NOR swap, while NOT remains NOT. Arithmetical operations change more weirdly: for instance, the circuit that we thought of as implementing an add should now be thought of as implementing what I earlier called an inv-add. (I am inspired here by Gerry Massey’s variant on Putnam reinterpretation arguments.)
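
The gate swap is De Morgan’s laws in disguise: if we flip the reading of every input and output wire, the circuit whose truth table we called AND now has the truth table of OR, and the ordinary adder becomes the inv-add. A toy check of this (my own illustration, not a model of any actual hardware):

```python
def reinterpret(gate):
    """Read a two-input gate under the flipped convention: flip inputs in, flip the output out."""
    return lambda a, b: 1 - gate(1 - a, 1 - b)

AND = lambda a, b: a & b
OR = lambda a, b: a | b

# Under the flipped reading, the AND circuit computes OR and vice versa:
for a in (0, 1):
    for b in (0, 1):
        assert reinterpret(AND)(a, b) == OR(a, b)
        assert reinterpret(OR)(a, b) == AND(a, b)

# Likewise, the ordinary 8-bit adder, reinterpreted, is exactly the inv-add:
ADD8 = lambda x, y: (x + y) & 0xFF
INV_ADD = lambda x, y: (((x ^ 0xFF) + (y ^ 0xFF)) & 0xFF) ^ 0xFF
reinterpreted_adder = lambda x, y: ADD8(x ^ 0xFF, y ^ 0xFF) ^ 0xFF
assert all(reinterpreted_adder(x, y) == INV_ADD(x, y)
           for x in range(256) for y in range(256))
```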

And if before the reinterpretation your computer counted as doing decimal addition using algorithm (1), after the reinterpretation your computer uses algorithm (2).

So which algorithm is being used by a computer depends on the interpretation of the computer’s functioning. This is a kind of flip side to multiple realizability: multiple realizability talks of how the same algorithm can be implemented in physically very different ways; here, the same physical system implements many algorithms.

There is nothing really new here, though I think that in past discussions of the interpretation problem for a computer’s functioning, people have mostly talked about how the inputs and outputs can be variously interpreted. But the above example shows that we can keep our interpretation of the inputs and outputs fixed and still have a lot of flexibility as to what algorithm is running “under the hood”.

Note that normally in practice we resolve the question of which algorithm is running by adverting to the programmers’ intentions. But we can imagine a case where an eccentric engineer builds a simple calculator without ever settling in her own mind how to interpret the voltages and whether the relevant circuit is an add or an inv-add, and hence without settling in her own mind whether algorithm (1) or (2) is used, knowing well that either one (as well as many others!) is a possible interpretation of the system’s functioning.
