Wednesday, March 9, 2022

Greedy strategies for Wordle

Today, I wanted to see how well the following greedy strategy works on Wordle. Given a set H of words, a word w partitions H into equivalence classes or “cells”, where two words count as equivalent if they give the same evaluation when w is played. We can measure the fineness of a partition by the size of its largest cell. Then, the greedy strategy is this:

  1. Start with the 12972 five-letter Scrabble words of the original Wordle (things will be a bit different for the bowdlerized New York Times version).

  2. Guess a word that results in a finest partition of the remaining words, with ties broken by favoring guesses that are themselves one of the remaining words, and remaining ties broken alphabetically.

  3. If not guessed correctly, repeat 2–3 with the remaining words replaced by the cell of the partition that fits the guess.
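The core of the strategy can be sketched as follows. This is an illustrative fragment, not the linked implementation: `evaluate` and `best_guess` are names of my own choosing, and a real run would of course use the full word list rather than a toy one.

```python
from collections import Counter

def evaluate(guess, answer):
    """Wordle-style feedback: 2 = green, 1 = yellow, 0 = gray."""
    result = [0] * 5
    remaining = list(answer)
    # First pass: mark greens and consume those answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = 2
            remaining.remove(g)
    # Second pass: mark yellows against the unconsumed letters.
    for i, g in enumerate(guess):
        if result[i] == 0 and g in remaining:
            result[i] = 1
            remaining.remove(g)
    return tuple(result)

def best_guess(words, remaining):
    """Greedy step: minimize the largest cell of the induced partition,
    breaking ties in favor of guesses that are themselves still possible
    answers, and then alphabetically."""
    def key(w):
        cells = Counter(evaluate(w, ans) for ans in remaining)
        return (max(cells.values()), w not in remaining, w)
    return min(words, key=key)
```

The tie-breaking order is encoded directly in the sort key: largest cell size first, then a boolean that is False for guesses still in the remaining set, then the word itself.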

A quick and dirty Python implementation (I recommend running it under PyPy) is here.

Note that this greedy strategy does not cheat by using the secret list of answer words, but only uses the publicly available list of five-letter Scrabble words.

Here are some observations:

  • The greedy strategy succeeds within six guesses for all but one of the 2315 answer words in the original Wordle. The remaining word needs seven guesses.

  • A human might be able to guess the answer word that the algorithm fails for, because a real human would not be partitioning all the five-letter Scrabble words, but only the more common words that are a better bet for being on the answer list.

  • The greedy algorithm’s first guess word is “serai”, and the maximum cell size in the resulting partition is only 697 words.

  • While only one answer list word remained that the greedy strategy fails for, it looks like Wardle may have deliberately removed from the answer list some other common, clean and ordinary English words that the greedy strategy does not guess in six guesses.

  • It would be a cheat, but one can optimize the algorithm by starting with only the 2315 answer words. Then indeed the greedy strategy guesses every answer word in six guesses, and all but two can be done in five.

  • A human can do something in-between, by having some idea of what words an ordinary person is likely to know.

Tuesday, November 16, 2021

Functionalism and multiple realizability

Functionalism holds that two (deterministic) minds think the same thoughts when they engage in the same computation and have the same inputs. What does it mean for them to engage in the same computation?

This is a hard question. Suppose two computers run programs that sort a series of names in alphabetical order, but they use different sorting algorithms. Given the same inputs, are the two computers engaging in the same computation?
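To make the toy example concrete, here is a minimal sketch of two procedurally quite different sorts that are extensionally identical on every input:

```python
def bubble_sort(xs):
    """Repeatedly swap adjacent out-of-order elements."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    """Partition around a pivot and recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

names = ["carol", "alice", "bob"]
assert bubble_sort(names) == quicksort(names) == sorted(names)
```

The two functions agree on all inputs, yet no step-by-step mapping makes one computation isomorphic to the other.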

If we say “no”, then functionalism doesn’t have the degree of multiple realizability that we thought it did. We have no guarantee that aliens who behave very much like us think very much like us, or even think at all, since the alien brains may have evolved to compute using different algorithms from us.

If we say “yes”, then it seems we are much better off with respect to multiple realizability. However, there is a tricky issue here: What counts as the inputs and outputs? We just said that the computers using different sorting algorithms engage in the same computation. But the computer using a quicksort typically returns an answer sooner than a computer using a bubble sort, and heats up less. In some cases, the time at which an output is produced itself counts as an output (think of a game where timing is everything). And heat is a kind of output, too.

In my toy sorting algorithm example, presumably we didn’t count the timing and the heat as features of the outputs because we assumed that to the human designers and/or users of the computers the timing and heat have no semantic value, but are merely matters of convenience (sooner and cooler are better). But when we don’t have a designer or user to define the outputs, as in the case where functionalism is applied to randomly evolved brains, things are much more difficult.

So, in practice, even if we answered “yes” in the toy sorting algorithm case, in a real-life case where we have evolved brains, it is far from clear what counts as an output, and hence far from clear what counts as “engaging in the same computation”. As a result, the degree to which functionalism yields multiple realizability is much less clear.

Thursday, May 7, 2020

Swapping ones and zeroes

Decimal addition can be done by a computer using infinitely many algorithms. Here are two:

  1. Convert decimal to binary. Add the binary. Convert binary to decimal.

  2. Convert decimal to inverted binary. Inv-add the binary. Convert inverted binary to decimal.

By conversion between decimal and inverted binary, I mean this conversion (in the 8-bit case):

  • 0↔11111111, 1↔11111110, 2↔11111101, …, 255↔00000000.

By inv-add, I mean an odd operation that is equivalent to bitwise inverting, adding, and bitwise inverting again.
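Here is a minimal 8-bit sketch of the inverted encoding and inv-add (the function names are my own):

```python
MASK = 0xFF  # 8-bit words

def to_inverted(n):
    """Decimal to inverted binary: 0 -> 11111111, 255 -> 00000000."""
    return ~n & MASK  # same as 255 - n

def from_inverted(bits):
    """Inverted binary back to decimal."""
    return ~bits & MASK

def inv_add(u, v):
    """Bitwise invert both operands, add, bitwise invert the sum."""
    return ~((~u & MASK) + (~v & MASK)) & MASK

# Algorithm (2): decimal -> inverted binary -> inv-add -> decimal
assert from_inverted(inv_add(to_inverted(100), to_inverted(55))) == 155
```

Algorithm (2) agrees with ordinary 8-bit addition on every pair of inputs, wrap-around included, because the two inversions cancel out.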

You probably thought (or would have thought had you thought about it) that your computer does decimal addition using algorithm (1).

Now, here’s the fun. We can reinterpret all the physical functioning of a digital computer in a way that reverses the 0s and 1s. Let’s say that normally 0.63V or less counts as zero and 1.17V or higher counts as one. But “zero” or “one” are our interpretation of analog physical states that in themselves do not have such meanings. So, we could deem 0.63V or less to be one and 1.17V or higher to be zero. With such a reinterpretation, logic gates change their semantics: OR and AND swap, NAND and NOR swap, while NOT remains NOT. Arithmetical operations change more weirdly: for instance, the circuit that we thought of as implementing an add should now be thought of as implementing what I earlier called an inv-add. (I am inspired here by Gerry Massey’s variant on Putnam reinterpretation arguments.)
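The OR/AND swap is just De Morgan’s laws in disguise, which a few lines of Python can check exhaustively for one bit:

```python
def reinterpret(bit):
    """Swap which voltage level counts as 0 and which as 1."""
    return 1 - bit

for a in (0, 1):
    for b in (0, 1):
        # The physical AND gate, reinterpreted, behaves as OR on the
        # reinterpreted inputs, and vice versa.
        assert reinterpret(a & b) == reinterpret(a) | reinterpret(b)
        assert reinterpret(a | b) == reinterpret(a) & reinterpret(b)
```

NOT is self-dual under the swap, which is why it alone keeps its semantics.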

And if before the reinterpretation your computer counted as doing decimal addition using algorithm (1), after the reinterpretation your computer uses algorithm (2).

So which algorithm is being used by a computer depends on the interpretation of the computer’s functioning. This is a kind of flip side to multiple realizability: multiple realizability talks of how the same algorithm can be implemented in physically very different ways; here, the same physical system implements many algorithms.

There is nothing really new here, though I think much of the time in the past when people have discussed the interpretation problem for a computer’s functioning, they talked of how the inputs and outputs can be variously interpreted. But the above example shows that we can keep fixed our interpretation of the inputs and outputs, and still have a lot of flexibility as to what algorithm is running “below the hood”.

Note that normally in practice we resolve the question of which algorithm is running by adverting to the programmers’ intentions. But we can imagine a case where an eccentric engineer builds a simple calculator without ever settling in her own mind how to interpret the voltages and whether the relevant circuit is an add or an inv-add, and hence without settling in her own mind whether algorithm (1) or (2) is used, knowing well that either one (as well as many others!) is a possible interpretation of the system’s functioning.

Sunday, March 5, 2017

Super-simple fractal terrain generator

Here's a very simple algorithm for generating random terrains: Start with an equilateral triangle in the x-y plane. Add a uniformly and symmetrically distributed random z-offset to each vertex. Bisect each resulting edge and add a random z-offset at the newly added point, with half the magnitude of the previous offsets. Repeat several times.
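The steps above can be sketched in Python (this is not the linked code, and the names are illustrative; a shared midpoint cache makes neighboring triangles agree on their common edge heights):

```python
import random

def subdivide(triangles, scale, rng):
    """One refinement step: bisect every edge and jitter each new
    midpoint's z by a uniform offset in [-scale, scale]."""
    cache = {}  # shared midpoints, so adjacent triangles match up

    def midpoint(p, q):
        key = frozenset([p, q])
        if key not in cache:
            mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
            mz = (p[2] + q[2]) / 2 + rng.uniform(-scale, scale)
            cache[key] = (mx, my, mz)
        return cache[key]

    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Each triangle splits into four.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

def terrain(triangle, depth, scale=1.0, seed=0):
    """Jitter the starting vertices, then refine `depth` times,
    halving the offset magnitude at each step."""
    rng = random.Random(seed)
    tris = [tuple((x, y, z + rng.uniform(-scale, scale))
                  for x, y, z in triangle)]
    for _ in range(depth):
        scale /= 2
        tris = subdivide(tris, scale, rng)
    return tris
```

Each refinement quadruples the triangle count, so the result is directly a triangular mesh of 4^depth faces per starting triangle.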

The algorithm is no doubt known, but some quick searches for terrain generation algorithms didn't turn it up, so I am posting for people's convenience.

There are some artifacts at internal triangle boundaries that better algorithms presumably don't have, but the algorithm is super simple to implement, and because it is based on triangles it directly makes a triangular mesh. Here is some code which does this, starting with a hexagon divided into six equilateral triangles, and putting out an STL file.


Friday, March 6, 2009

Two problems of multiple realizability for functionalists

Problem 1

Functionalism can best be seen as a response to the multiple realizability argument against physicalism: the same kinds of mental events can happen in carbon-based brains, silicon-based chips, plasma-based alien minds, etc., but if a belief that 2+2=4 is a particular configuration of neurons in a brain, then no critter without neurons could believe that 2+2=4. So the functionalist says that there is a functional isomorphism between states that all of these could have, and that's all that's needed for mental sameness. At the same time, the functionalist, unlike the behaviorist, is interested in lower-level functional states than just inputs and dispositions to outputs.

I am now thinking that functionalism is subject to a higher level multiple-realizability worry. Start with the intuition that the same computational results can be obtained through different, non-isomorphic algorithms (think of insertion sort and quicksort). Very plausibly, the same behavioristic states—relations between inputs and dispositions to outputs—can be obtained through different, non-isomorphic functional arrangements. Imagine, then, an alien being, a product of natural selection in a different environment from ours, that has a mind that is not functionally isomorphic to ours, but where the alien is basically behavioristically isomorphic to us. Why wouldn't this be possible? (That question is not much of an argument, I know.) We would, I think, rightly assume that this being is in fact a person, and has beliefs, feelings, etc., despite the lack of functional isomorphism. But the functionalist must deny that such an alien would have beliefs, feelings, etc. since those kinds of states are defined by their functional connections, and the alien doesn't have those functional connections.

Epistemically, we seem to be behaviorists. Maybe this is only pragmatic—we don't want to mistreat someone who might turn out to be a person (this suggestion is due to Todd Buras). But suppose it's more than that. Then functionalism is in trouble, unless it can supply an argument that only systems that are (approximately?) functionally isomorphic to us could be (approximately?) behavioristically isomorphic to us.

The theistic dualist can do well here. Because of the great value in rational beings, there is good reason that God bestows souls on any natural kind of being whose behavior is sufficiently sophisticated to be compatible with a mental life.

Problem 2

Two states are functionally isomorphic provided that they give the same map between inputs and outputs. In particular, the states must be connected up with isomorphically corresponding modules. But it is very unlikely that a pain in a mouse and a pain in a human are functionally isomorphic. The pain in the human is probably connected into modules (such as higher level judgment modules) that have no corresponding modules in the mouse. Consequently, if functionalism holds, it is improbable that mice feel pain. The non-naturalist theist can get a nice disjunction out of this: Either functionalism is false (in which case, probably, some form of dualism is true, since I think that's the best alternative to functionalism), or else the problem of animal pain is not a problem at least in regard to less smart mammals.