Monday, June 22, 2020

Thomson's core memory paradox

This is a minor twist on the previous post.

Magnetic core memory (long obsolete!) stored bits in the magnetization of tiny rings. It was easy to write data to core memory: there were coils around the ring that let you magnetize it in one of two directions, and one direction corresponded to 0 and the other to 1. But reading was harder. To read a memory bit, you wrote a bit to the location and sensed for an electromagnetic fluctuation. If there was a fluctuation, the bit you wrote changed the data in that location, and hence the data in that location was different from the bit you wrote; if there was no fluctuation, the bit you wrote was the same as the bit that was already there.

The problem is that half the time reading the data destroys the original bit of data. In those cases—or one might just do it all the time—you need to write back the original bit after reading.
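The read cycle just described can be sketched in Python. This is a hypothetical illustration (the helper name `destructive_read` is my own, and a real controller senses the flux change on a sense wire rather than comparing values in software):

```python
def destructive_read(core, addr):
    """Read core[addr] by writing a probe 1, sensing whether the bit
    flipped (the 'fluctuation'), and writing the original value back."""
    old = core[addr]             # hidden from the reader in real hardware
    core[addr] = 1               # write the probe bit
    fluctuation = (old != 1)     # a flux change occurs iff the bit flipped
    datum = 0 if fluctuation else 1
    core[addr] = datum           # restore the original value
    return datum

core = {0: 0, 1: 1}
assert destructive_read(core, 0) == 0 and core[0] == 0  # destructive case, restored
assert destructive_read(core, 1) == 1 and core[1] == 1  # no-fluctuation case
```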

Now, imagine an idealized core not subject to the usual physics limitations of how long it takes to read and write it. My particular system reads data by writing a 1 to the core, checking for a fluctuation to determine what the original datum was, and writing back that original datum.

Let’s also suppose that the first read process has a 30-second delay between the initial write of the 1 to the core and the writing back of the original bit. But the reading system gets better at what it’s doing (maybe the reading and writing are done by a superpigeon that gets faster and faster as it practices), and so each time it runs, it’s four times as fast.

Very well. Now suppose that before 10:00:00, the core has a 0 encoded in it. And read processes are triggered at 10:00:00, 10:00:45, 10:00:56.25, and so on. Thus, the nth read process is triggered 60/4^(n-1) seconds before 10:01:00. This process involves the writing of a 1 to the core at the beginning of the process and a writing back of the original value (which will always be a 0) at the end.
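As a check on the arithmetic, here is a minimal Python sketch (the function name `trigger_time` is my own) computing the trigger times just listed, with `Fraction` used to keep them exact:

```python
from fractions import Fraction

def trigger_time(n):
    """Seconds after 10:00:00 at which the nth read process begins:
    the nth read is triggered 60/4**(n-1) seconds before 10:01:00."""
    return 60 - Fraction(60, 4 ** (n - 1))

# Matches the times in the text: 10:00:00, 10:00:45, 10:00:56.25, ...
print([float(trigger_time(n)) for n in (1, 2, 3)])  # [0.0, 45.0, 56.25]
```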

Intuitively:

  1. As long as the memory is idealized to avoid wear and tear, any possible number—finite or infinite—of read processes leaves the memory unaffected.

By (1), we conclude:

  2. After 10:01:00, the core encodes a 0.

But here’s how this looks from the point of view of the core. Prior to 10:00:00, a 0 is encoded in the core. Then at 10:00:00, a 1 is written to it. Then at 10:00:30, a 0 is written back. Then at 10:00:45, a 1 is written to it. Then at 10:00:52.5, a 0 is written back. And so on. In other words, from the point of view of the core, we have a Thomson’s Lamp.

This is already a problem. For we now have an argument as to what the outcome of a Thomson’s Lamp process is, and we shouldn’t have one, since neither outcome should be any more likely than the other.

But let’s make the problem worse. There is a second piece of core memory. This piece of core has a reading system that involves writing a 0 to the core, checking for a fluctuation, and then writing back the original value. Once again, the reading system gets better with practice. And the second piece of core memory is initialized with a 1. So, it starts with 1, then 0 is written, then 1 is written back, and so on. Again, by premise (1):

  3. After the end of the reading processes, we have a 1 in the core.

But now we can synchronize the reading processes for the second core so that the first reading begins at 9:59:30, and space out and time the readings as follows. Prior to 9:59:30, a 1 is encoded in the core. At 9:59:30, a 0 is written to the core. At 10:00:00, a 1 is written back to the core, thereby completing the first read process. At 10:00:30, a 0 is written to the core. At 10:00:45, a 1 is written back, thereby completing the second read process. And so on.
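The two write schedules can be tabulated in a short Python sketch (the function names are mine; times are in seconds relative to 10:00:00, kept exact with `Fraction`). It confirms that from 10:00:00 onward, every write to the second core coincides, in both time and value, with a write to the first:

```python
from fractions import Fraction

def core1_events(n_max):
    """(time, bit) write events for the first core's reads 1..n_max.
    Read n writes a probe 1 at 60 - 60/4**(n-1) seconds and writes
    the original 0 back 30/4**(n-1) seconds later."""
    ev = []
    for n in range(1, n_max + 1):
        start = 60 - Fraction(60, 4 ** (n - 1))
        ev.append((start, 1))                               # probe 1
        ev.append((start + Fraction(30, 4 ** (n - 1)), 0))  # restore 0
    return ev

def core2_events(n_max):
    """(time, bit) write events for the second core: probe 0, restore 1,
    scheduled so the first probe lands at 9:59:30 (t = -30)."""
    ev = [(Fraction(-30), 0)]                               # first probe
    for n in range(1, n_max + 1):
        t = 60 - Fraction(60, 4 ** (n - 1))
        ev.append((t, 1))                                   # restore 1
        ev.append((t + Fraction(30, 4 ** (n - 1)), 0))      # next probe 0
    return ev

# From 10:00:00 on, the two schedules are identical event by event:
# only core 2's initial probe at t = -30 differs.
assert core2_events(8)[1:] == core1_events(8)
```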

Notice that from around 10:00:01 until, but not including, 10:01:00, the two cores are always in the same state, and the same things are done to them: zeroes and ones are written to the cores at exactly the same time. But why, then, do the two cores end up in different final states? Does the first core somehow know that when, say, at 10:00:30, the zero is written into it, that zero is a restoration of the value that should be there, so that at the end of the whole process the core is supposed to have a zero in it?

6 comments:

Gazza said...

Hi, Alex. How can I respond to this atheist's argument using symbols?

Everything that begins to exist has a cause

The universe began to exist

Therefore, the universe has a cause

This is a tautology because if we substitute "Everything that begins to exist" for "The universe" we get

The universe has a cause

The universe began to exist

Therefore, the universe has a cause

Boom. Kalam destroyed.

How to respond? Thanks.

Alexander R Pruss said...

I don't understand. Sorry.

Michael said...
This comment has been removed by the author.
Michael said...

Alex,

I think the best way to answer this is to say what the state is exactly at 10:01. The answer would depend on the last reset after a read, but you would say there is no 'last reset'. I beg to differ, since we can complete a sequence by appending a point at infinity that represents 10:01 and any time beyond. Your sequence is only defined on [10:00, 10:01), and we can include the \omega point to account for 10:01.

If a sequence is defined so that for every even i, a_i = 1 and a_{i + 1} = 0, then asking what a_{\omega} is just seems like an ill-defined question, at least mathematically, since \omega is neither even nor odd. What you can do is ask what happens in the limit, and of course (a_n) diverges.

But I think if you look at premise 1 mathematically, this would not be like the above, but instead would be saying that we define (b_n) = (a_n) and append a point b_\omega = a_0. I think the problem then goes away.

In your example, the first sequence would terminate with 0 at 10:01 and the second would terminate with 1 at 10:01, since 10:01 would correspond to the \omega'th term that is appended to the definition in both.

Philip Rand said...

Notice that from around 10:00:01 until, but not including, 10:01:00, the two cores are always in the same state, and the same things are done to them: zeroes and ones are written to the cores at exactly the same time. But why, then, do the two cores end up in different final states? Does the first core somehow know that when, say, at 10:00:30, the zero is written into it, that zero is a restoration of the value that should be there, so that at the end of the whole process the core is supposed to have a zero in it?

The solution is staring you in the face...

When a hot liquid is placed in a thermos, the liquid remains hot... and when a cold liquid is placed in a thermos it remains cold... How does the thermos know?

IanS said...

Trivia: I had thought, and the linked article seems to confirm it, that the control wires are not coiled around the core, but pass through the hole. In effect, a control wire and its completed circuit make a single loop around the core. And ‘long obsolete’ makes me feel old. :-)

On topic: A standard response to Thomson’s Lamp is to say that the rules of the setup don’t determine the end state. I suggest a similar approach here.

The rule for the core seems to be that its state at any time except during a write operation is either its original state (if no write has changed it) or the state resulting from the latest earlier write. At 10:01:00, there is no latest earlier write, so the state is not determined.

This approach implies rejection of (1) of the previous post, and of any modification of it in the same spirit.