All posts from 2022.

2022 April 12

Scala and Kolacny Brothers - Evigheden

Unicorn Heads - A Mystical Experience

A mathematical April Fools’ joke: an interactive model of a sphere tiled by hexagons. I haven’t figured out how it works….

It’s not possible to represent real numbers exactly in a computer; the most common representation, IEEE754, has a variety of oddities, notably both positive and negative zero. For most purposes these behave identically, but one surprising difference is that compilers can optimize `x + (-0.0)` into `x` but can’t optimize `x + 0.0` into `x`! Why? Because `-0.0 + 0.0 == +0.0`, so the additive identity in IEEE754 is negative zero! Therefore in some circumstances it might be *slightly* more efficient to use negative zero instead of positive zero.
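Python floats are IEEE754 doubles, so this is easy to check directly (a small illustrative snippet; `math.copysign` is used to inspect the sign of a zero, since `-0.0 == 0.0` compares equal):

```python
import math

# x + 0.0 is NOT always x: it maps -0.0 to +0.0,
# so a compiler cannot rewrite (x + 0.0) as x.
x = -0.0
print(math.copysign(1.0, x + 0.0))    # 1.0: the sign was lost

# x + (-0.0) IS always x (under round-to-nearest),
# so (x + -0.0) can be optimized away.
print(math.copysign(1.0, x + -0.0))   # -1.0: the sign survives
print(math.copysign(1.0, 0.0 + -0.0)) # 1.0: +0.0 stays +0.0
```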

A Japanese researcher conducted an experiment for 24 days in which they slept at whatever location their cat chose to sleep. They experienced no reduction in average quality of sleep.

“In reality, there are no such things [as dates in Excel spreadsheets]. What you have are floating point numbers and pious hope.”

Italy has started a superbonus scheme for investments in energy-efficient building upgrades. It is called a *superbonus* because the government subsidizes homeowners for 110% of the capital cost of the upgrade. An estimated 33 billion euros will be invested in the scheme, which has already this year increased GDP by 0.7%. The advantage of such a scheme is that it delegates decision making to the smallest scale, where people can fine-tune for their particular circumstances, and the 110% reimbursement helps cover the many non-financial burdens that come with upgrading a home. The limited duration of the subsidy may encourage more uptake through fear of missing out. The downside of delegating decision making is that the system is vulnerable to fraud, with 1 billion euros of fraud already identified. (Using a fixed menu of subsidy amounts for each type of upgrade would be the reverse trade-off: less room for fraud, but less flexible decision-making for the people most attuned to the circumstances.) Unfortunately fraud is inevitable in any country-sized project.

2022 April 06

I have been obsessed for far too many years with incorrect statements of the Monty Hall problem, once going so far as to pick up a Korean pop-science book just to see if its explanation was wrong (conclusion: I can’t read Korean), and I fear the only way to sate this obsession is to write my own. Er, my own *correct* explanation, that is.

I will spoil the ending by giving the take-away lesson up front. The Monty Hall problem involves probability, which is to say it has an element of randomness. **Randomness is a property of a process that yields some result, not a property of the result itself.**

Therefore any presentation of the Monty Hall problem that simply describes the sequence of events is incomplete, and has no single right answer: a complete statement must also give the underlying random process that yields the observed events.

Treating randomness as a process, not a result, is often the domain of computer science, but this is a broadly general concept that is vital to successfully navigating any confusing problem involving probability. Losing track of your sources of randomness and how they determine the observed outcomes is a common cause of errors. Perhaps related is one of the benefits of pre-committing to the statistical analysis performed in a scientific study – by designing your analysis before seeing the data, you are forced to grapple with the *data generation process*, rather than the particular observed data values.

The common presentation of the Monty Hall problem, as seen in any number of popular media, is akin to the following:

You are on a gameshow hosted by Monty Hall featuring 3 doors, one of which conceals a fancy car you want, and the other two hiding worthless gag “prizes”. The host asks you to choose which door’s prize you want: you choose door 1. He then opens door 3, revealing a gag prize, and gives you a chance to change your mind about which door to take. What is the probability that door 2 conceals the car?

The classic answers are either 1/2 or 2/3, with the latter being “correct”. However if you read the spoiler above, you already see a major issue: how can the answer be a probability between 0 and 1 if no source of randomness is stated in the problem?

It is commonly understood from the context of a gameshow that the three prizes are initially distributed behind the three doors uniformly at random, which serves as a source of randomness. With some re-wording we can suppose the player’s initial door choice was also at random, although this turns out to be redundant with the prizes being distributed at random. (Just as in rock-paper-scissors it is redundant to choose at random if you know your opponent is choosing at random.)

So let us consider the three possibilities:

- Case 1. Car is behind door 1, and events transpire as described.
- Case 2. Car is behind door 2, and events transpire as described.
- Case 3. Car is behind door 3, which is inconsistent with the specified sequence of events, so something else happens.

Again we are stuck: the problem as stated doesn’t tell us what happens in the 1/3 of cases that the car was behind door 3. Clearly *something* happens, but what?

What we are missing is a complete description of the host’s behavior. It is not enough to know what the host did in actuality, but also what the host would do in every counterfactual. We aren’t even told if the host’s behavior is random or deterministic! Indeed we could justify any answer from 0 to 1 by carefully choosing what procedure for the host to follow.

Suppose for example that the host’s procedure is to always open door 3. Then the cases are:

- Car is behind door 1 and host reveals door 3 has a gag prize.
- Car is behind door 2 and host reveals door 3 has a gag prize.
- Car is behind door 3 and host reveals door 3 has the car.

Then, the conditional probability that the car is behind door 2 given that the host reveals a gag prize is equal to 1/2. (Of course, the probability given that the host reveals a car is zero, since we can see the car is behind door 3.)
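This conditioning can be checked mechanically; a small Python sketch (my own illustration) enumerating the “host always opens door 3” procedure with exact rational arithmetic:

```python
from fractions import Fraction

# Host procedure: always open door 3, whatever it hides.
# The player has selected door 1; the car is equally likely behind any door.
cases = [(Fraction(1, 3), car) for car in (1, 2, 3)]

# Condition on the observed event: the opened door 3 showed a gag prize,
# i.e. the car is not behind door 3.
observed = [(p, car) for p, car in cases if car != 3]
total = sum(p for p, _ in observed)
print(sum(p for p, car in observed if car == 2) / total)  # 1/2
```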

Note that, in this case, we don’t actually need to know how the player chooses which door for their initial selection, since it had no influence on the sequence of events. (Assuming that the player has no prior information about the location of the car and therefore that conditioning on their selecting door 1 does not change the probability of where the car is.)

Now suppose that the player’s procedure is to always select door 1, and the host’s procedure is to always open the lowest numbered door that is not the selection and is a gag prize. Then the cases are:

- Car is behind door 1 and host reveals door 2 has a gag prize.
- Car is behind door 2 and host reveals door 3 has a gag prize.
- Car is behind door 3 and host reveals door 2 has a gag prize.

Now, the conditional probability that the car is behind door 2 given that the host revealed a gag prize behind door 3 is 100%!

I was pleasantly surprised to see that the importance of specifying the host’s procedure was clearly explained by none other than Monty Hall himself, in a 1991 interview with the New York Times, a very rare example of math or science being accurately reported on in mainstream media:

[Monty Hall] picked up a copy of Ms. vos Savant’s original column, read it carefully, saw a loophole and then suggested more trials.

On the first, the contestant picked Door 1.

“That’s too bad,” Mr. Hall said, opening Door 1. “You’ve won a goat.”

“But you didn’t open another door yet or give me a chance to switch.”

“Where does it say I have to let you switch every time? I’m the master of the show. Here, try it again.”

[…] Whenever the contestant began with the wrong door, Mr. Hall promptly opened it and awarded the goat; whenever the contestant started out with the right door, Mr. Hall allowed him to switch doors and get another goat. The only way to win a car would have been to disregard Ms. vos Savant’s advice and stick with the original door.

Furthermore:

Dr. Diaconis and Mr. Gardner both noticed the same loophole when they compared Ms. vos Savant’s wording of the problem with the versions they had analyzed in their articles.

“The problem is not well-formed,” Mr. Gardner said, “unless it makes clear that the host must always open an empty door and offer the switch.”

This is not an idle objection. Any problem statement necessarily depends on the reader sharing some context to understand the meaning of the words being used, just as we inferred that the prizes were distributed randomly (and that the player’s initial selection is independent of where the prize is) from the context of it being a gameshow. However, it is a much bigger step to infer the host’s behavior from the incomplete information as given above. The original gameshow *Let’s Make a Deal* that Monty Hall hosted featured a wide assortment of variety games that did not follow any fixed format or rigid rules; it would have been entirely in keeping with that show for Hall to adapt the format dynamically with circumstances, or even entice players away from the main prize with cash incentives. So it is not reasonable to leave the reader to guess at the procedure Hall is following from contextual clues alone: the procedure should be fully specified.

Here is a (more) complete statement of the Monty Hall problem:

A gameshow has three doors, one of which is chosen in advance uniformly at random to conceal a car; the others conceal gag prizes. A player is asked to select a door, and does so at random. Regardless of the choices so far, the host then opens a door, chosen at random from among those doors which the player did not select and does not have the car. Given that the player selected door 1 and the host opened door 3, what is the probability that door 2 has the car?

There are three sources of randomness (which are implied to be *independent*), and we need 12 cases to fully work out every possibility. However it is fairly clear that the problem is symmetric with respect to which door the player selected, so we can fix that as nonrandom without changing the problem. This leaves us with only four cases:

- Case 1a, (1/6 chance). Car is behind door 1, player selects door 1, host opens door 2.
- Case 1b, (1/6 chance). Car is behind door 1, player selects door 1, host opens door 3.
- Case 2, (1/3 chance). Car is behind door 2, player selects door 1, host opens door 3.
- Case 3, (1/3 chance). Car is behind door 3, player selects door 1, host opens door 2.

(It is simple enough to see how to write out the other 8 cases to allow for if the player selects door 2 or 3, remembering to divide the probabilities by 3 so they still add up to 1.)

In cases 2 and 3, the host is “randomly” choosing which door to open from only one possibility. The host’s randomness is only relevant in cases 1a and 1b.

The conditional probability is

$$P(\text{car behind 2} \mid \text{host opens 3}) = \frac{P(\text{case 2})}{P(\text{case 1b}) + P(\text{case 2})} = \frac{1/3}{1/6 + 1/3} = \frac{2}{3}.$$

We’ve used the counterfactual outcomes to assess the conditional probability of the outcome that occurred. If we eliminate case 1a, for example, case 1b rises in probability to 1/3 and the answer we get at the end is 1/2. Alternatively, if we add cases 2a, 2b, 3a, 3b in which the host reveals the car, then the probability of each gag-prize branch in cases 2 and 3 lowers from 1/3 to 1/6 and we again get 1/2 as the answer.
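The same conditioning can be done mechanically over the four cases; a Python sketch using exact fractions:

```python
from fractions import Fraction

# The four cases of the complete statement (player fixed to door 1):
# (probability, car_door, door_the_host_opens)
cases = [
    (Fraction(1, 6), 1, 2),  # case 1a
    (Fraction(1, 6), 1, 3),  # case 1b
    (Fraction(1, 3), 2, 3),  # case 2
    (Fraction(1, 3), 3, 2),  # case 3
]

# Condition on the host having opened door 3.
consistent = [(p, car) for p, car, opened in cases if opened == 3]
total = sum(p for p, _ in consistent)
answer = sum(p for p, car in consistent if car == 2) / total
print(answer)  # 2/3
```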

If the incomplete Monty Hall problem, as typically stated, lacks sufficient information to uniquely determine the answer, what makes this version of the problem *the* correct way of completely specifying it? While correct statements of the Monty Hall problem are rare in pop science discussions of it, most agree on a more-or-less acceptable explanation of how to get to the answer 2/3. Working backwards from this solution one finds the problem being solved. Indeed, this was the original justification vos Savant gave when critics of her column which popularized the problem pointed out the ambiguities: she said that from her explanation of the answer it was clear what was intended in the problem. (And, in vos Savant’s defense, the original incomplete statement of the problem was not hers but quoting from an inquiring reader.)

While most popular discussions of the Monty Hall problem fail to properly state it, some do. Wikipedia, of course, gives the correct explanation, saying “The given probabilities depend on specific assumptions about how the host and contestant choose their doors.”, and has the most thorough discussion of the problem and its variants of any source I’ve seen. The New York Times interview with Hall, discussed above, doesn’t give an explicit mathematical treatment but lucidly captures the key point that the host’s behavior must be specified.

Three decades before vos Savant’s column popularized the Monty Hall problem, an exactly equivalent problem was given by Martin Gardner (the same who was quoted above):

Three prisoners, A, B, and C, are in separate cells and sentenced to death. The governor has selected one of them at random to be pardoned. The warden knows which one is pardoned, but is not allowed to tell. Prisoner A begs the warden to let him know the identity of one of the two who are going to be executed. “If B is to be pardoned, give me C’s name. If C is to be pardoned, give me B’s name. And if I’m to be pardoned, secretly flip a coin to decide whether to name B or C.”

The warden tells A that B is to be executed. Prisoner A is pleased because he believes that his probability of surviving has gone up from 1/3 to 1/2, as it is now between him and C. Prisoner A secretly tells C the news, who reasons that A’s chance of being pardoned is unchanged at 1/3, but he is pleased because his own chance has gone up to 2/3. Which prisoner is correct?

Since it is both unambiguous and has priority in time, I would prefer that this formulation supplant the classic presentation of the Monty Hall problem, but of course it was the classic version’s very ambiguity that led to its popularity.

A popular variant is to suppose there are 100 doors, of which the host opens 98. (vos Savant proposed 1000000 doors, but I guess people got tired of writing that many zeros.) This improves the intuition behind the correct solution to the problem but doesn’t by itself improve the clarity of the problem statement. Similarly, consider another formulation:

Three tennis players. Two are equally-matched amateurs; the third is a pro who will beat either of the amateurs, always.

You blindly guess that Player A is the pro; the other two then play. Player B beats Player C. Do you want to stick with Player A in a Player A vs. Player B match-up, or do you want to switch? And what’s the probability that Player A will beat Player B in this match-up?

After writing this article I found an interesting twist in which the player bribes the host in advance to choose their behavior, subject to the limitation that the host must open exactly one door that is not the player’s selection nor the car. The goal then is to choose a strategy (for both player and host) to maximize the chance of getting the car, and calculate that probability.

(Note that some of the behaviors I’ve mentioned above result in getting the car with 0% or 100% probability, but they do not satisfy the specified constraint on the host’s behavior, so we can’t use them here.)

Suppose, say, the bribed host opens door 3. This outcome occurs in every case that the car is behind door 2 (i.e. “case 2” above), and between 0% and 100% of the cases that the car is behind door 1 (i.e. “case 1b” above), with the probability depending on what the player bribed the host to do. Thus here the player does at least as well by switching as by staying, and likewise if the bribed host opened door 2 (cases 3 and 1a above). Therefore always switching is an optimal strategy for the player, with which the player wins in cases 2 and 3, with a probability of 2/3 – regardless of the behavior of the host. (Though for certain host behaviors this is not the *only* optimal strategy.)

The only thing the player can change by bribing the host is the distribution of wins and losses between doors 2 and 3: e.g., so that whenever the host opens door 3 it is a guaranteed win, but at the cost that they only win half the time that door 2 is opened. The *overall* win rate will still be the same.
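This invariance is easy to verify directly; a sketch (my notation) that parameterizes the bribed host by a single probability $q$ of opening door 3 when the car is behind the player’s door 1:

```python
from fractions import Fraction

def switch_win_rate(q):
    """Win rate of always-switching, where q is the bribed host's
    probability of opening door 3 when the car is behind the
    player's chosen door 1 (q is a hypothetical free parameter)."""
    # (probability, car_door, door_the_host_opens)
    cases = [
        (Fraction(1, 3) * (1 - q), 1, 2),  # car at 1, host opens 2
        (Fraction(1, 3) * q,       1, 3),  # car at 1, host opens 3
        (Fraction(1, 3),           2, 3),  # car at 2: must open 3
        (Fraction(1, 3),           3, 2),  # car at 3: must open 2
    ]
    # Always-switching wins exactly when the car is NOT behind door 1.
    return sum(p for p, car, _ in cases if car != 1)

for q in (Fraction(0), Fraction(1, 2), Fraction(1)):
    print(q, switch_win_rate(q))  # 2/3 every time
```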

Most popular presentations of the Monty Hall problem ask only whether you should switch, but as we just saw, switching dominates staying in most interpretations of the problem, side-stepping the central issue; this is why I phrased the problem as asking more specifically for the probability of finding the car.

2022 April 05

- Part 1: What is negative temperature?
- Part 2: What is temperature? (coming)
- Part 3: Why does entropy increase? Resolving Loschmidt’s paradox. (coming)
- Part 4: Temperature of black holes (coming?)

This is a four-part exploration of topics related to temperature. In part 1 I start by asking about the units in which we measure temperature and end up investigating a model that permits negative temperature. We are left with some foundational questions unresolved, and in part 2 we must retreat to the very basics of statistical mechanics to justify the concept of temperature. Part 3 edges into philosophical grounds, attempting to justify the central assumption of statistical mechanics that entropy increases with time, with implications on the nature of time itself. Finally in part 4 I jump topics to black holes to look at their unusual thermodynamic properties.

Our bodies can directly sense variations in the temperature of objects we touch, and this sensation motivates our understanding of temperature. The first highly successful quantified measurements of this property came with Fahrenheit’s invention of the mercury thermometer, along with the temperature scale that bears his name. He chose to use higher numbers to represent the sensation we call “hotter”.

With the discovery of the ideal gas laws came the realization that temperature is not “affine”, by which I mean that the laws of physics are *not* the same if you translate all temperatures up or down a fixed amount. In particular, there exists a minimum temperature. The Fahrenheit and Celsius scales were designed in ignorance of this fact, but they were already well-established by that point and continued to be used for historical reasons. Kelvin decided to address this problem by inventing a new scale, in which he chose to use zero for the minimum temperature.

Now with our modern theory of statistical mechanics, temperature is no longer just a thing that thermometers measure but is defined in terms of other thermodynamic properties:

$$\frac{1}{T} = \frac{\partial S}{\partial E}$$

where $T$ is the temperature of a system (in Kelvin), $E$ is its energy, and $S$ is its entropy. We will skip past the question of “what is energy” (not because it is easy, but because there is no good answer) to quickly define entropy. While in classical thermodynamics entropy was originally defined in terms of temperature (via the above equation), in statistical mechanics entropy is defined as a constant times the amount of information needed to specify the system’s microstate:

$$S = k_B \ln W$$

where $W$ is the number of possible microstates (assuming a uniform distribution), and we will return to $k_B$ in just a moment.

Unfortunately there remain two more inelegancies with this modern definition of temperature. The first is that the proportionality constant $k_B$ is a historical accident of Celsius’s choice to use properties of water to define temperature. Indeed, temperature is in the “wrong” units altogether. We like to think of temperature as related to energy somehow. It turns out, the Boltzmann constant accomplishes exactly that:

$$k_B \approx 1.380649 \times 10^{-23} \text{ J/K}$$

The $k_B$ that appeared in the definition of entropy was just there to get the units to agree with historical usage – if we dropped it, we could be measuring temperature in Joules instead! Well, more likely in zeptoJoules, since room temperature would be a bit above $4 \times 10^{-21}$ J. In summer we’d talk about it being a balmy 4.2 zeptoJoules out, but 4.3 would be a real scorcher.
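A quick check of those summer figures, using the exact SI value of the Boltzmann constant:

```python
# Exact SI value of the Boltzmann constant (fixed by definition since 2019).
k_B = 1.380649e-23  # J/K

def kelvin_to_zeptojoules(t):
    """Temperature t in Kelvin expressed in zeptoJoules (1 zJ = 1e-21 J)."""
    return k_B * t / 1e-21

print(round(kelvin_to_zeptojoules(304), 1))  # 4.2 zJ: a balmy summer day (~31 °C)
print(round(kelvin_to_zeptojoules(311), 1))  # 4.3 zJ: a real scorcher (~38 °C)
```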

(When measuring temperature in Joules it should not be confused with thermal energy, which is also in Joules! The former is an intensive quantity, while the latter is an extensive quantity. Maybe it is for the best that we have a special unit for temperature, just to avoid this confusion.)

To introduce what I perceive to be the second inelegancy of the modern definition of temperature, I will walk through a worked example of calculating the temperature of a simple system.

Suppose we have $N$ magnetic particles in an external magnetic field. Each particle can either be aligned with the field, which I will call “spin down”, or against it, i.e. “spin up”; a spin up particle contains more energy, say $\epsilon$ more, than one that is spin down. This is the Ising model with no energy contained in particle interactions.

A *microstate* is a particular configuration of spins. The *ground state* or lowest-energy state of the model is the configuration that has all spins down. If some fraction $p$ of the spins are up, then the energy is

$$E = pN\epsilon$$

We need to calculate the distribution of possible microstates corresponding to a given energy level. Certainly we can count the *number* of microstates with $pN$ up spins, but are they all equally likely to occur? It may seem a strange worry, but perhaps, depending on the geometry of the system, certain particles have correlated spins, making some microstates more probable than others.

In fact, with certain minimal assumptions, the distribution over all possible microstates is necessarily uniform. We suppose that nearby particles in the system can interact by randomly exchanging their spins. Furthermore distant particles can exchange spins via some chain of intermediate particles: if not, the system would not have a single well-defined temperature, but each component of the system would have its own temperature.

Thus the system wanders over all configurations with $pN$ up spins; it remains to see that the distribution is uniform. The particle interaction is adiabatic and time-reversible, so the probability that two particles exchange spins is independent of what their spins are, and therefore it can be shown that the Markov chain of state transitions converges to a uniform distribution over all accessible configurations. That is, the distribution of microstates is uniform over a sufficiently long timescale; on shorter timescales the system does not have a well-defined temperature.

The number of configurations with $N$ particles and $pN$ up spins is $\binom{N}{pN}$, and $\ln \binom{N}{pN}$ is proportional to the information needed to specify which microstate we are in. Thus the entropy is

$$S = k_B \ln \binom{N}{pN} \approx k_B N \left( -p \ln p - (1-p) \ln (1-p) \right)$$

where we have used Stirling’s approximation. (Note that when using Stirling’s approximation on binomial coefficients, the second term in the approximation always cancels out.)
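A quick numerical check of how good Stirling’s approximation is here, using the log-gamma function for the exact value:

```python
from math import lgamma, log

def ln_binomial(n, k):
    """Exact ln C(n, k) via the log-gamma function."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def stirling(n, p):
    """Stirling approximation to ln C(n, pn): n(-p ln p - (1-p) ln(1-p))."""
    return n * (-p * log(p) - (1 - p) * log(1 - p))

n, p = 10**6, 0.25
print(ln_binomial(n, int(p * n)))  # ~562328
print(stirling(n, p))              # ~562335: agreement to ~5 significant figures
```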

What is the point of all this faffing about with microstates and our information about them? After all, with temperature defined in this way, temperature is not a strictly physical property but rather depends on our information about the world. However at some point we would like to get back to physical phenomena, like the sensation of hot or cold that we can feel directly: this connection will be justified in part 2.

Now

$$\frac{\partial S}{\partial p} = k_B N \ln \frac{1-p}{p}$$

and $\frac{\partial E}{\partial p} = N \epsilon$, so

$$\frac{1}{T} = \frac{\partial S}{\partial E} = \frac{\partial S / \partial p}{\partial E / \partial p} = \frac{k_B}{\epsilon} \ln \frac{1-p}{p}$$

and

$$T = \frac{\epsilon}{k_B \ln \frac{1-p}{p}}.$$

Alright, so what have we learned? Let’s plug in some values for $p$ and see. First, for the ground state $p = 0$, we have $\ln \frac{1-p}{p} = +\infty$, so we get $T = +0$ (in Kelvin). That’s not a big surprise: the lowest energy state is also absolute zero.

What about the highest energy state $p = 1$? Now we have $\ln \frac{1-p}{p} = -\infty$, so again $T = 0$ (or rather, $T = -0$). Wait, what?

Maybe this model just thinks everything is absolute zero? How about the midpoint $p = 1/2$: then $\ln \frac{1-p}{p} = 0$, so $T = \pm\infty$. Great.

We can just graph this so we can see what is really going on:

(Note the temperature is only defined up to a scalar constant that depends on the choice of $\epsilon$.) As the system gets hotter, the temperature starts from +0 Kelvin, increases through the positive temperatures, hits $\pm\infty$ K, increases through the negative temperatures, and then reaches -0 Kelvin, which is the hottest possible temperature.
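In lieu of the graph, a few computed values show the same progression (a sketch in natural units, taking the spin energy gap and the Boltzmann constant to be 1):

```python
from math import log, inf

def temperature(p, eps=1.0, k_B=1.0):
    """T = eps / (k_B ln((1-p)/p)), where p is the fraction of up spins."""
    if p == 0.0 or p == 1.0:
        return 0.0  # the limit: +0 at the ground state, -0 at the top state
    r = log((1 - p) / p)
    return inf if r == 0 else eps / (k_B * r)

for p in (0.0, 0.1, 0.4, 0.5, 0.6, 0.9, 1.0):
    print(p, temperature(p))
# T climbs from +0 through positive values, diverges at p = 1/2,
# then returns through negative values toward -0.
```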

So zero Kelvin is two different temperatures, both the hottest and coldest possible temperatures. Positive and negative infinity are the same temperature. (Some sources in the literature I read incorrectly state that positive and negative infinity are different temperatures.) Negative temperatures are hotter than positive temperatures.

This is quite a mess, but one simple change in definition fixes everything, which is to use *inverse temperature*, also called thermodynamic beta or coldness:

$$\beta = \frac{1}{k_B T}$$

$\beta$ has units of inverse Joules, but thinking of temperature from an information theory perspective and using $1 \text{ bit} = \ln 2$ nats, we can equate room temperature to about 45 gigabytes per nanoJoule. This is a measure of how much information is lost (i.e. converted into entropy) by adding that much energy to the system.
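A back-of-envelope check of that conversion (assuming a nominal 300 K room temperature):

```python
from math import log

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_room = 300.0       # K, a nominal room temperature (an assumption)

beta = 1 / (k_B * T_room)              # inverse temperature, in nats per Joule
bytes_per_joule = beta / log(2) / 8    # convert nats -> bits -> bytes
gb_per_nanojoule = bytes_per_joule / 1e9 * 1e-9
print(gb_per_nanojoule)  # ~43.5, i.e. roughly 45 GB per nanoJoule
```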

With $\beta$, the coldest temperature is $\beta = +\infty$ and the hottest temperature is $\beta = -\infty$. Ordinary objects (including all unconfined systems) have positive temperatures while exotic systems like the magnetic spin system we considered above can have negative temperatures. If you prefer hotter objects to have a higher number on the scale, you can use $-\beta$, but then ordinary objects have negative temperature.

The magnetic spin system of the previous section is a classic introduction to statistical mechanics, but as we’ve seen it admits the surprising condition of negative temperature. What does it mean for an object to have negative temperature, and can this happen in reality?

The first demonstration of a negative temperature was in 1951 by Purcell and Pound, with a system very similar to the magnetic spin system we described. They exposed a LiF crystal to a strong 6376 Gauss magnetic field, causing the magnetization of the crystal to align with the imposed field. The magnetic component of the crystal had a temperature near room temperature at this point. Then, when the external field was reversed, the crystal maintained its original alignment, so its temperature was now near *negative* room temperature; over the course of several minutes it then cooled back to (positive) room temperature, passing through $\pm\infty$ K. As it cooled, it dissipated heat to its surroundings.

Why did it cool off? On a time scale longer than it takes for the LiF’s magnet to reach internal thermal equilibrium, but shorter than it takes to cool off, the LiF’s magnet and the other forms of energy (e.g. molecular vibrations) within the LiF crystal are two separate thermal systems, each with their own temperature. However these two forms of energy can be exchanged, and on a time scale longer than it takes for them to reach thermal equilibrium with each other they can be treated as a single system with some intermediate temperature. As the molecular vibrations had a positive temperature, and the magnetic component had a negative temperature, thermal energy flowed from the latter to the former.

Consider more generally any tangible object made of ordinary atoms and molecules in space. The object will contain multiple energy states corresponding to various excitations of these atoms, such as vibrations and rotations of molecules, electronic excitations, and other quantum phenomena. For these excitations, the entropy increases rapidly with energy, so the corresponding temperatures will always be positive. Whatever other forms of energy the object exhibits, they will eventually reach equilibrium with the various atomic excitations, and thus converge on a positive temperature.

While such an object will exhibit increasing entropy with energy on the large scale, maybe it is possible for it to have a small-scale perturbation in entropy and thus a narrow window with negative temperatures. Perhaps, but some of the atomic excitations are very low energy, and thus they have a gradually increasing entropy that is smooth even to quite a small scale. Only under some very unusual circumstance could that object have negative temperatures.

Nonetheless, in 2012 researchers were able to bring a small cloud of atoms to a negative temperature in their motional modes of energy. I read the paper but didn’t really understand it.

As an aside – what if some exotic object had a highest energy state but no *lowest* energy state? This object could only have negative temperatures, would always be hotter than any ordinary objects it was in contact with, so would perpetually have thermal energy flowing from it.

(An object with no highest or lowest energy state could never reach internal thermal equilibrium, wouldn’t have a well-defined temperature, and its interactions with other objects couldn’t be represented as an average of small-scale phenomena, making statistical mechanics inapplicable.)

However, negative temperatures are closely related to a phenomenon essential to the functioning of an important modern technology: lasers! Lasers produce light when excited electrons in the lasing medium drop to a lower energy level, emitting a photon. While an excited electron can spontaneously emit a photon, lasers produce light through stimulated emission: a photon of the matching wavelength triggers emission from an excited electron, and the stimulated photon is synchronized exactly with the stimulating photon, which is why lasers can produce narrow, coherent beams of light. However, the probability that a photon stimulates emission from an excited electron is equal to the probability that the photon would be absorbed by an *unexcited* electron, so for the light to be amplified rather than absorbed, the majority of the electrons must be excited.

This is an example of population inversion, where a higher-energy state has greater occupancy than a lower-energy state. Population inversion is a characteristic phenomenon of negative temperatures: an object at thermal equilibrium with a positive temperature will always have occupancy decrease with energy, while at negative temperature it will always have occupancy increase with energy. (At infinite temperature, occupancy of all states is equal.)

But is the lasing medium at thermal equilibrium? Certainly we have to consider a short enough time scale for the electronic excitations to be treated separately from the motional energy modes. However even considering only electronic modes, modern lasers do not operate at thermal equilibrium. Originally lasers used three energy levels (a ground state and two excited states), for which it would have been debatable whether calling the lasing medium “negative temperature” is appropriate. Now, most lasers use four (or more) energy levels due to their far greater efficiency.

Such a laser has two low-energy levels, and two high-energy levels. The narrow transitions (between the bottom two or the top two levels) are strongly coupled with vibrational or other modes, so electrons rapidly decay to the lower of the two. As a result, there is a very low occupancy in the second-lowest energy level, making it easy to generate a strong population inversion between the two *middle* energy levels, which is the lasing transition. Thus the laser still operates at positive (but non-equilibrium) temperature, as most electrons remain in the ground state even when a population inversion between two higher levels is created.

We claimed that, under reasonable physical assumptions, the spin system with $pN$ up spins will eventually be equally likely to be in any of the $\binom{N}{pN}$ such configurations. (A particular configuration of up / down spins is called a *microstate*, and the collection of all microstates with the same energy is called a *microcanonical ensemble* (MCE).) This is intuitively sensible since any particular spin goes on a random walk over all particles, and thus any particular particle is eventually equally likely to have any of the starting spins. However, rigorously proving this was more involved than I expected, and I haven’t seen similar argumentation elsewhere, so let’s work out all the details.

Any two “nearby” particles can exchange spins with some probability, not necessarily equal for all such pairs. Let us discretize time so that in each time step either 0 or 1 such exchanges happen. The graph of particles, with edges between nearby particles, is connected: otherwise the system would actually be multiple unrelated systems. There are finitely many microstates in the MCE, and we can get from any one to any other in at most $O(N^2)$ time steps with positive probability, for example using bubble sort.

Let $P$ be the state transition matrix. In the language of Markov chains, we have a time-homogeneous Markov chain (because the probability of a swap does not depend on the spins involved) which is irreducible (i.e. any state can get to any other) and is aperiodic (a consequence of the identity transition having positive probability) and thus ergodic (irreducible and aperiodic). First we would like to establish that the Markov chain converges to a stationary distribution: this is effectively exploring the behavior of $P^t$ for large $t$.

The behavior of $P^t$ is dominated by the largest eigenvalue of $P$. Since all elements of $P^N$ are positive real numbers (where $N$ is the number of steps within which any state can reach any other), by the Perron–Frobenius theorem $P^N$ has a positive real eigenvalue $\lambda$ which has strictly larger magnitude than any other eigenvalue, and which has an eigenspace of dimension 1, and furthermore an eigenvector $w$ with all positive coefficients.

Now the eigenvalues of $P^N$ are simply the eigenvalues of $P$ raised to the power $N$, so likewise $P$ has a real eigenvalue $\lambda$ with strictly larger magnitude than any other eigenvalue, and an eigenspace of dimension 1, etc. (Use both $P^N$ and $P^{N+1}$ to show that $\lambda$ cannot be negative; and to show $\lambda$ is not complex other powers can be used.) Then for any vector $v$ with positive components we get

$$\frac{P^t v}{\lambda^t} \to c\,w \quad \text{as } t \to \infty$$

(where $w$ is the all-positive eigenvector and $c > 0$) by writing $v$ in the eigenspace decomposition for $P$, using generalized eigenvectors if $P$ is defective. (In fact $\lambda = 1$.)
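This convergence can be verified numerically on a toy system. The sizes, the path-graph topology, and the lazy-step probability of 1/2 below are my own choices for illustration:

```python
import numpy as np
from itertools import permutations

# Toy system: 4 spins on a path graph, 2 of them up (6 microstates).
states = sorted(set(permutations((1, 1, 0, 0))))
index = {s: i for i, s in enumerate(states)}
edges = [(0, 1), (1, 2), (2, 3)]

# Lazy chain: with probability 1/2 nothing happens, otherwise a uniformly
# chosen edge swaps the spins at its endpoints.
n = len(states)
P = np.zeros((n, n))
for s in states:
    P[index[s], index[s]] += 0.5
    for i, j in edges:
        t = list(s)
        t[i], t[j] = t[j], t[i]
        P[index[s], index[tuple(t)]] += 0.5 / len(edges)

# P^t converges: every row approaches the uniform distribution on the MCE.
Pt = np.linalg.matrix_power(P, 500)
print(np.allclose(Pt, 1.0 / n))  # True

# The largest eigenvalue is 1 (P is symmetric, so eigvalsh applies).
eigvals = np.linalg.eigvalsh(P)
print(np.isclose(eigvals[-1], 1.0))  # True
```

Each row of `Pt` is a starting distribution concentrated on one microstate, so the computation confirms that every starting state converges to the same uniform limit.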

Alternatively, observe that for any two different states $s$ and $s'$ such that the transition $s \to s'$ has positive probability, then $s$ and $s'$ are related by a single swap of two spins. The transition $s' \to s$ is the same swap, and therefore has the same probability, as the probability of a swap does not depend on the values of the spins. Therefore $P$ is real symmetric, and thus is diagonalizable with real eigenvalues. This saves us some casework above for dealing with complex eigenvalues or generalized eigenvectors.

We have seen that for any starting state, the system eventually converges to the same stationary distribution. (This is a basic result of Markov chains / ergodic theory, that any ergodic chain converges to a stationary distribution, but I didn’t see a proof spelled out anywhere.) What is that distribution? Since $P$ is a transition matrix, the total probability going out of each state equals 1. Then from $P = P^\top$ we see that the total probability going *in* to each state is also 1, so the uniform distribution is stationary.
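The "columns also sum to 1" argument applies to any symmetric stochastic matrix, not just the spin chain. A minimal check, using an illustrative 3-state matrix of my own:

```python
import numpy as np

# A symmetric stochastic matrix is doubly stochastic, so the uniform
# distribution u satisfies u P = u. The matrix below is illustrative.
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
u = np.full(3, 1 / 3)
print(np.allclose(u @ P, u))  # True: the uniform distribution is stationary
```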

More generally, for any system in which the MCE respects some symmetry such that each state has the same total probability going in, the uniform distribution will be a stationary distribution; and if the system is ergodic, then it always converges to that unique stationary distribution.

As a side note, using $P^N$ instead of $P$ and taking care that the entries are strictly positive instead of nonnegative, permitting a time step to have 0 exchanges, and so forth is only necessary to deal with the case that the connectivity graph is bipartite: if the graph is bipartite, the system can end up oscillating between two states with each swap. This is purely an artifact of the discretization; in the real system, as time becomes large, the system “loses track” of whether there were an even or odd number of swaps. Thus a lot of the complication in the proof comes from dealing with an unimportant case.
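The oscillation, and how laziness cures it, can be seen with the smallest possible example (these toy matrices are mine, not from the text):

```python
import numpy as np

# A periodic 2-state chain: exactly one swap happens every step.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.matrix_power(P, 100))  # identity: even number of swaps
print(np.linalg.matrix_power(P, 101))  # the swap: odd number of swaps

# Allowing a step with zero exchanges (a lazy self-loop) breaks the period,
# and the chain converges to the uniform distribution.
Q = 0.75 * np.eye(2) + 0.25 * P
print(np.linalg.matrix_power(Q, 100))  # ~[[0.5, 0.5], [0.5, 0.5]]
```

`P` never forgets the parity of the number of swaps, which is exactly the bipartite failure mode described above; `Q` does forget it.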

2022 February 02

With MIT Mystery Hunt 2022 (wrap up video, stats summary) having come to a close, the second year that Mystery Hunt was completely virtual, I am also winding down the second year that I helped out with a portion of the behind-the-scenes tech stack for team **I’m not a planet either**… and sometimes less “behind the scenes” when things go awry! It is a unique position to be in, with goals driven by a mixture of what automated tools I would like to make use of, what support the team needs so that people can puzzle together uninterrupted, and what surprises the hunt organizers throw our way.

Being in this position also lets me piece together some statistics on the team’s progress on the hunt, which I also did last year. First, how many puzzles were available to our team, and how many of them had been solved at any time:

Like all images here, click to zoom. Note that this year events were not considered “puzzles” and are not included in the statistics. We see that for a period of about 4 hours on Saturday afternoon we were reduced to just three open puzzles, and briefly only two. The story behind this becomes apparent when we track puzzles separately by round:

**The Investigation** was the first round of hunt, and solving its meta unlocked the second round. Two puzzles from this round went untouched for a very long time until they were backsolved near the end of the hunt, using the solution of the meta to work backwards to determine their answers. They were also the first and third most popular puzzles for other teams to request the solution to.

**The Ministry** was the second round of hunt, containing 25 puzzles, 5 metapuzzles, 1 metametapuzzle, and a run-around. Completing The Ministry was intended by the hunt organizers to serve as an intermediate goal that would be feasible for non-competitive teams to accomplish, so it was structured like the ending of the hunt. 60 teams successfully finished The Ministry.

As a result, progress was bottle-necked on completing The Ministry: unlocking the third round required completing the run-around, which was unlocked 15 minutes after completing the metametapuzzle, which itself was unlocked by completing all 5 metas. The only other puzzles available at the time were the two from The Investigation.

The third and by far the largest round was **Bookspace**, containing many “subrounds”. We were steadily working our way through it when time was called. One puzzle from this round was recorded as being unlocked long before it actually was, because we were given information about it at the beginning of The Ministry.

Below is the list of every puzzle solved by the team. Some of the solve times for early puzzles are inaccurate, as we had to record puzzle solutions manually at the beginning of the hunt. In total we gained access to 84 puzzles, of which we solved 65.

Puzzle | Round | Solve time |
---|---|---|
Kid Start-up | The Ministry | 22m 54s |
The Boy with Two Heads | The Ministry | 37m 15s |
Something Command | The Quest Coast | 44m 51s |
Does Any Kid Still Do This Anymore | New You City | 51m 7s |
Crewel | The Ministry | 52m 21s |
Ada Twist Scientist | The Investigation | 58m 24s |
My First ABC | The Investigation | 1h 0m |
The Day You Begin | The Ministry | 1h 1m |
The Wonderful Wizard of Oz | The Investigation | 1h 4m |
Fruit Around | The Ministry | 1h 4m |
The Hobbit | The Ministry | 1h 9m |
Make Way for Ducklings | The Ministry | 1h 13m |
Dinotopia | The Ministry | 1h 18m |
Harold and the Purple Crayon | The Ministry | 1h 23m |
The Talking Tree | The Ministry | 1h 43m |
Tikki Tikki Tembo | The Ministry | 1h 47m |
Cemetery Boys | The Ministry | 1h 50m |
Pippi Långstrump/Pippi Longstocking | The Ministry | 1h 53m |
The Missing Piece | The Investigation | 1h 56m |
Your Name is a Song | The Ministry | 2h 9m |
Watership Down | The Ministry | 2h 13m |
Kiki’s Delivery Service | The Ministry | 2h 15m |
Teach Us Amelia Bedelia | The Investigation | 2h 17m |
Just a Dream | The Ministry | 2h 18m |
Potions | The Quest Coast | 2h 21m |
The Ministry | The Ministry | 2h 26m |
Where The Wild Things Are | The Investigation | 2h 29m |
Peter Pan | The Investigation | 2h 32m |
The Last Olympian | The Ministry | 2h 58m |
A Wizard of Earthsea | The Ministry | 2h 59m |
Magically Delicious | The Quest Coast | 3h 16m |
The Investigation | The Investigation | 4h 56m |
A Wrinkle in Time | The Ministry | 5h 9m |
Too Many Toys | The Investigation | 5h 20m |
Go The F*** To Sleep | The Ministry | 5h 30m |
Oxford Children’s Dictionary | The Ministry | 5h 34m |
Sometime After Midnight | The Ministry | 5h 43m |
Frankenstein’s Music | Lake Eerie | 5h 53m |
I Don’t Have a Clue! | Noirleans | 5h 54m |
The Colour Out of Space | Lake Eerie | 6h 23m |
Mysterious Mechanics | Noirleans | 6h 27m |
The Adventures of Pinocchio | The Ministry | 7h 33m |
The Mad Scientist’s Assistant | Lake Eerie | 8h 1m |
The Hound of the Vast-Cur Villes | Noirleans | 8h 10m |
Sorcery for Dummies | The Quest Coast | 8h 19m |
Alice’s Adventures in Wonderland | The Ministry | 8h 57m |
Dancing Triangles | Noirleans | 9h 31m |
The Thin Pan | Noirleans | 9h 50m |
Albumistanumerophobia | Lake Eerie | 10h 10m |
Charlotte’s Web | The Ministry | 10h 51m |
A Number of Games | The Quest Coast | 14h 2m |
Curious Customs | The Quest Coast | 14h 24m |
The Enchanted Garden | The Quest Coast | 15h 8m |
Curious and Determined | Noirleans | 16h 20m |
Billie Barker | The Ministry | 18h 18m |
Randy and Riley Rotch | The Ministry | 19h 2m |
Everybody Must Get Rosetta Stoned | New You City | 19h 55m |
Danni Dewey | The Ministry | 20h 11m |
Trickster Tales | Noirleans | 20h 34m |
Herschel Hayden | The Ministry | 20h 35m |
Alexei Lewis | The Ministry | 20h 45m |
Book Reports | New You City | 30h 37m |
My Dinner With Big Boi | New You City | 30h 38m |
The Messy Room | The Investigation | 51h 13m |
The Neverending Story | The Investigation | 51h 35m |

The longest two puzzle solves were backsolves; we received the solution to My Dinner With Big Boi as a bonus; and Book Reports was recorded in our system as available long before it actually was unlocked. The five longest puzzles from The Ministry were all metas, so the longest standard puzzle solution was Trickster Tales.
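The duration format in the table is easy to work with mechanically; here is a sketch that parses solve times like those above and picks out the longest, using a few rows copied from the table:

```python
import re

def to_seconds(s):
    """Parse durations like '22m 54s', '1h 4m', '51h 35m' into seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(n) * units[u] for n, u in re.findall(r"(\d+)([hms])", s))

rows = [
    ("Trickster Tales", "20h 34m"),
    ("Kid Start-up", "22m 54s"),
    ("The Neverending Story", "51h 35m"),
]
longest = max(rows, key=lambda r: to_seconds(r[1]))
print(longest[0])  # The Neverending Story
```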

Compared to last year, we solved a much higher fraction of the puzzles accessible to us, and the slowest ordinary solutions were much faster; although conversely there were no 3 minute puzzle solutions this year. Many fewer puzzles were open at any time, possibly due to some combination of each individual puzzle being “bigger” and the hunt being much more linear.

2022 February 01

Short version: Read the disclaimer, how to make the testing solution, how to perform the test, and my results.

US healthcare workers who rely on respirators, such as N95s, for their safety undergo annual *fit testing* to make sure that they are protected. Of course professional fit testing equipment is priced like all aspects of American healthcare… beyond the reach of the typical American. Here I detail the steps I took to imitate this procedure and how you can perform them as well.

Professional PPE usage is intended to provide a level of protection suitable for someone in sustained direct contact with highly-infectious covid patients. If you were to have a passing encounter with a contagious person in public, your exposure risk would be much lower than in a hospital setting. My goal is to gain most of the protection of professional PPE with a minimum fraction of the effort, in the hopes that this compromised level of protection is more than sufficient for my likely exposure level.

If you live in my area and would like masks or to borrow my testing solution, let me know!

If you need professional-level respiratory protection, disregard this article and follow the relevant regulations. When I write things like (e.g.) disposing of a mask after a single usage is wasteful, I am writing for those, like me, who are looking for modest protection from incidental covid exposure in a public setting. I am not an expert.

Masks are not perfect at blocking small particles. There are two ways they can fail: either through leakage around the sides, or through inadequate filtration of air that passes through the filter. Masks must compromise between these two failure modes, as additional layers of filtration material improve filtration but increase breathing resistance, and therefore increase the tendency for air to pass through any small gaps around the sides.

The goal of *fit testing* is to identify a model of mask that gives a high quality fit for a specific person. The person is exposed to a strong-smelling/tasting chemical while wearing a mask; the odiferous chemical is aerosolized into tiny droplets for which the mask’s material has a high filtration efficiency, so detection of the chemical indicates that air is leaking around the mask. Note that a fit test does not test whether the filtration material is adequate: it only checks for leaks. A wide variety of particulate sizes and types would need to be used to validate the quality of the filtration media, but are irrelevant for detecting leakage.

Those in a professional setting will typically undergo fit testing annually through a procedure regulated by OSHA; see below. A variety of masks are tested until one that passes the test is found. Like other garb, a proper fit depends on the shape of the mask matching the person wearing it, so there is no one “best” mask for everyone.

Once an appropriate model of mask has been found, *seal checking* (or a “fit check”) is performed every time that mask is donned or adjusted before entering the hazardous area. It only takes a few seconds and I do it every time I go out: I find leaks the majority of the time.

To perform a seal check, cover the filtration surface of the respirator with your hands (while wearing it), and either breathe in (negative pressure) or breathe out (positive pressure). You should feel increased breathing resistance, and you should not feel any air passing the side of the mask. The mask should also visibly deflate / inflate slightly. Adjust the mask and repeat if leaks are found.

See a video of a seal check.

The increased breathing resistance is more apparent in the negative pressure test, as breathing in is harder than breathing out. However breathing in tends to tighten the seal, concealing leaks, so leakage around the sides is easier to detect when breathing out. If you wear glasses, fogging of the glasses can be the most obvious sign of leakage around the nose on breathing out. I suggest using a mask with a foam nose insert and wearing it high up on the nose to reduce leakage there.

Don’t forget to shave: studies find that facial hair at the mask’s seal increases leakage by a factor of 20 to 1000. Even slight stubble compromises the seal.

Neither fit testing nor seal checking serves much function for cloth or surgical masks, or KN95s. (Though if you try, let me know!)

Double-masking to improve filtration is generally unhelpful as it can increase leakage instead. However, a close-fitting cloth mask worn over a surgical mask can be beneficial if it improves the fit of the latter. Never wear an N95 over another mask: this combines the fit of a surgical mask with the breathing resistance of a respirator.

A better alternative to double-masking is a mask fitter / mask brace. A medical face mask rated at ASTM-2 or ASTM-3 with a correctly sized mask fitter might be compared to the protection of an N95.

Cloth masks are often found to be substantially inferior to surgical masks.

I have purchased 3M Aura N95s online from Northern Safety Industrial and Home Depot. In both cases they cost on the order of $2 per mask. Another source is Industrial Safety Products. I believe that mask manufacturers, particularly 3M whose masks are very popular in healthcare, prioritize selling to medical distributors, so any masks sold to the general public are after medical demands have been satisfied. (Note that 3M masks have become scarce again since covid omicron hit the news.)

Only purchase N95s and KN95s from an established seller… and certainly not off of Amazon.

The majority of KN95s in the US are counterfeit!

For a mask to qualify as an N95 it must be verified by NIOSH, a part of the CDC, to meet specific regulations of performance. (“KN95” is a standard regulated by China, and inferior to N95 due to the use of ear loops; “FFP2” is the European equivalent, with “FFP1” inferior and “FFP3” superior.) Refer to this CDC page for guidance on recognizing counterfeit masks and how to look up your mask’s NIOSH approval.

I believe the majority of “counterfeit” N95 masks in the US simply lie about being NIOSH approved; this can easily be detected by the above. Detecting forgeries, which imitate genuinely approved N95 masks, is harder.

Above I linked a video of a seal check, which included demonstration of donning and doffing. CDC procedures are to only handle masks by the straps when donning and doffing. I doubt this matters much; of course you should wash your hands after handling the mask regardless.

A little etymological diversion… “don” and “doff” are contractions of “do on” and “do off”. They had become obsolete by the 17th century, until Sir Walter Scott repopularized them in the 19th; the words “dup/dub” (open) and “dout” (put out) went extinct.

Disposable masks are meant to be used once. However this is excessive and wasteful of limited resources.

Unless you are visibly soiling the respirator surface, the limiting factor on the re-use of your mask is the strength of the elastic band. As it weakens with use, particularly during donning and doffing, the mask is held less tightly to the face and leaks develop. This is also why masks with ear loops can’t seal as effectively as those whose straps go behind the head.

Certain masks allow for the tightness of the elastic head bands to be adjusted by the wearer: this greatly extends the life and performance of the mask by allowing the fit to be tightened as the elasticity weakens with re-use.

Quality of N95 fit declines measurably by around 5 to 30 uses, though a re-used N95 is still far superior to cloth or surgical masks.

Masks can be sterilized between uses with UV light; however leaving them in a paper bag at room temperature for several days is a simpler and more reliable technique. I rotate through several masks and don’t reuse them more than once every 3 days. I write the date of its first use on each mask’s bag and then infrequently dispose of the oldest mask.

Water and other cleaning fluids should only be applied to cloth masks. Brief exposure to water (e.g. in light rain) is not a concern. Alcohol-based cleaners will destroy the filtration material!

A study based on data from a Chinese hospital from January to March 2020 found that wearing glasses reduced the risk of being hospitalized with covid by a factor of about 10. This study has sometimes been cited to suggest that glasses provide some protection against infection. However, as one might guess from the ridiculous conclusion it reached, the study is riddled with fundamental methodological flaws. Besides, anyone who has worn glasses in rain or wind is well aware they do not provide any protection against air-borne droplets reaching the eye.

I am not aware of any firm evidence that covid can, or cannot, be transmitted via the eyes. If you wish to protect your eyes you should wear goggles or a full-face respirator. I don’t think this is necessary but I did drop $5 on getting cheap goggles in case they might be useful in the future.

OSHA publishes official regulations on how fit testing should be performed which employers are required to follow for employees who will be exposed to respiratory hazards. All fit testing procedures that I have reviewed are derived from these procedures. If reading “regulationese” is not your desired pastime, 3M has a one page quick reference to the qualitative test.

*Quantitative* fit testing involves puncturing the mask being tested with a device for measuring particles while the user is wearing it; it does not rely on the wearer’s senses to test the fit of the mask. This test yields a filtration efficiency, allowing one to quantify how much better one mask is than another. We will not be considering quantitative fit tests further.
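As an aside, the score such a test reports is conventionally expressed as a *fit factor*: the ratio of the particle concentration in the ambient air to the concentration measured inside the mask. The concentrations below are illustrative numbers of my own, not measurements:

```python
def fit_factor(ambient, in_mask):
    """Fit factor: ambient particle concentration divided by the
    concentration measured inside the mask."""
    return ambient / in_mask

# Illustrative concentrations (particles per cc). As I understand the OSHA
# regulations, half-facepiece respirators such as N95s must score at least 100.
print(fit_factor(5000, 25))   # 200.0, passes
print(fit_factor(5000, 100))  # 50.0, fails
```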

*Qualitative* fit testing yields only a “pass” or “fail” depending on whether the wearer detected any of the test substance while wearing the mask.

I summarize the official OSHA procedure as follows:

- A *dilute* solution of 0.83 grams sodium saccharin in 100 mL distilled water is prepared.
- A *concentrated* solution of 83 grams sodium saccharin in 100 mL distilled water is prepared.
- During the whole test the subject wears an enclosed hood into which the test solutions will be released.
- With no mask, the dilute solution is introduced into the hood to verify the subject can taste it, and at what concentration.
- With the mask being tested, the concentrated solution is introduced into the hood. The wearer performs several acts, including moving the head, grimacing, exercising, and talking, and should include the sort of motions that will be done in the hazardous environment.
- The mask’s fit passes if the concentrated solution is not detected at all.

Bitrex, a nasty bitter substance which OSHA describes as a “taste aversion agent”, can be substituted for the sodium saccharin solution. This increases the sensitivity of the test with the downside that you have to taste Bitrex.

The OSHA procedure recommends sticking the tip of your tongue out which I found to be unhelpful.

Why use the oddly specific concentration of 83 g / 100 mL? When I attempted to make that concentration, I found that it was just a little bit higher than the saturation concentration of sodium saccharin. Note that saturation is strongly dependent on temperature, and OSHA specifies *warm* water. Therefore my guess is that they simply chose the saturation point, to make the strongest-tasting solution possible. However, when I tried to look up the saturation point of sodium saccharin, I got wildly conflicting information: most sources simply said “greater than 10 g / 100 mL”, and others gave values which were well below what I observed.

The instructions I give in the next section use half this concentration to reduce annoyances like precipitation on temperature change.

The process I describe is based on the official OSHA procedure described above; I removed the portions that are annoying (e.g., using distilled water), difficult (making a hood), or expensive (buying medical-grade equipment). This simplified procedure retains only the core concept of applying strong-tasting particulates to the outside of a mask and testing if they can be detected.

You require:

- Sodium saccharin. This is the active ingredient in the artificial sweetener Sweet’n Low (except in Canada), though if you use Sweet’n Low you will have less control over the concentration due to the presence of other ingredients. Do not use other artificial sweeteners!
- A device to aerosolize particles. I purchased this ultrasonic “disinfecting sprayer” for $10 and found it to be excellent. I did find that solid precipitate would regularly clog the device; I’m not sure whether other devices would avoid this issue. Note that the tiny aerosols from ultrasonic humidifiers are hazardous to your health, so I would not recommend buying this product for its intended use. For a step up in quality you might search for a “medical nebulizer”; cheap nebulizers are around $50. If you are using a different device you will have to adapt the instructions below.

To prepare the testing solution:

1. Weigh out about 8 grams of sodium saccharin.
2. Mix into 20 mL of room-temperature tap water; it will take some effort to dissolve fully.

Alternatively, if you do not have any measuring equipment:

1. Collect enough water to fill about one third of the aerosolizer’s reservoir (10 mL out of 30 mL).
2. Slowly add in sodium saccharin, mixing it as you go, until there are solid particles that do not dissolve even with continuous mixing for more than a minute. This will be about 8 grams – nearly as much sodium saccharin as water.
3. Dilute by mixing 1 to 1 with tap water. Mix thoroughly and pour the liquid into the reservoir, disposing of any excess or remaining precipitate.

The purpose of diluting sodium saccharin is to discourage precipitation out of solution. The exact concentration is not important.
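The arithmetic behind “half the OSHA concentration” is simple enough to sanity-check; a sketch:

```python
def g_per_100ml(grams, ml):
    """Concentration in grams per 100 mL of water."""
    return 100.0 * grams / ml

osha = g_per_100ml(83, 100)  # OSHA's concentrated solution: 83 g / 100 mL
mine = g_per_100ml(8, 20)    # this article's recipe: 8 g in 20 mL

print(mine)         # 40.0, i.e. 40 g / 100 mL
print(mine / osha)  # ~0.48, about half the OSHA strength
```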

Once you have filled and re-attached the reservoir to the aerosolizer, and charged it, you are ready to perform a test. The button on the aerosolizer toggles whether it is on or off; check that it works. Some aerosolizers operate by dispensing once each time it is triggered, which may allow for greater control over the amount dispensed.

If its performance declines, check it is charged and check for precipitate accumulating on the outlet. When left unattended for a long time I find a significant build up of sodium saccharin clogging the device. It is easy to clean with a damp cloth or by flushing with tap water.

Test indoors in a location you can ventilate. See a video demonstration, though I recommend breathing naturally, unlike the person in the video.

Wearing your mask, point the aerosolizer at your face from a few inches away and turn it on for a few seconds. I suggest starting a little farther and then moving close and going around the whole perimeter of the mask, especially around the nose and under the chin. You don’t have to worry about getting the mist in your eyes.

While spraying the aerosol, breathe shallowly in and out through your mouth. I and others have found the solution produces no odor, so do not bother trying to smell it. Rather, it yields a very mild sweet flavor in the back of the tongue or throat, which can take a few seconds to be detectable. The solution is very strong and you will detect even small amounts getting past the mask. Take note of whether you can detect it, and under what circumstances.

If 10 or so seconds of spraying the solution directly at the seals of the masks produces no sensation, then the result is a complete success.

In any case, after removing the mask there should be a very stark difference as even lingering aerosol in the air will be easily detectable.

While the official OSHA procedure is a binary success or failure, it is easy enough to find gradations in how strong the taste is or how much spray is needed to be detectable. **Assessing the quality of the mask fit based on what you detect is left to your judgement**; I describe my own experiences below to provide context.

**Do not breathe in the aerosol directly without a close-fitting respirator**! In initial testing I tried to get a tiny whiff from a foot away without my mask but inhaled too strongly, and ended up coughing vigorously for the next two hours, with a sickly sweet flavor to every cough. If you avoid direct exposure it’s not too bad: others found the lingering aerosol to be unpleasant but tolerable without a mask.

If you are paranoid you can use a more diluted solution but just don’t spray it in your face and you’ll be fine.

I have used this process with about 15 masks across 10 people; the masks included a variety of N95 models, and one P100 mask.

Note that the concentration of solution described above is *very strong*. This is valuable as it increases the discrimination of the test – but do not be disappointed if your mask is not perfect. The results of this procedure cannot be directly compared to that of the official OSHA procedure, as the method of application of the aerosol is different, and subject to how the user operates the aerosolizer.

Every N95 I helped test experienced some degree of leakage. Wearing a carefully adjusted 3M Aura mask, I could only detect a faint hint of sweetness in my throat after several seconds of spraying the aerosol directly at the mask’s seals; compare this to my experience above of exposure without a mask. On this basis I feel very confident in the protection that this mask provides me.

Some of the masks tested had different outcomes on different people, and vice versa: fortunately everyone who tried this procedure with me appeared to be satisfied with at least one mask they tried, up to their standard of precaution.

Some people, like me, felt reassured in the benefit of their mask after experiencing the sharp contrast between comfortably breathing with aerosol spraying in one’s face and the unpleasantness of passing through that room without a mask well after testing had stopped.

The P100-rated mask proved to be the only mask totally impervious to the testing solution – the person wearing it could not detect the solution at all.

If you decide you are only safe with a respirator rated above an N95, then you should absolutely perform a fit test: a P100 that does not fit you is worse than an N95 that does. There is no reason to invest in an expensive, uncomfortable mask but not invest in checking whether it fits you.

The best video-format information on masks and mask testing is a series of videos produced by Aaron Collins, a non-expert. Includes discussion of mask re-use, glasses fogging, comparison of N95 / KN95 / etc., double masking, mask fitters, and many, many hours of footage of him performing quantitative tests of masks. In particular he demonstrates the differing quality of nose wires; the 3M Aura’s easily moldable nose piece which retains its shape is why it gives a consistently good fit and is such a highly regarded mask.

3M has created an introduction to understanding respirators, such as N95 masks, for the general public.

3M video demonstrating the full OSHA fit testing procedure.

An amateur video demonstration of the fit testing procedure performed with a homemade hood and improvised equipment. Includes discussion of merits of KN95, surgical masks, and mask fitters.

The US Army has instructions for making your own testing hood out of a garbage bag and clothing hangers.

Scientific study (pdf) comparing official testing hood and aerosolizer to homemade substitutes. Note the small sample size, and the use of testing solution at 1 / 100 that specified by OSHA (or 1 / 50 the concentration that I used). I did not find their results informative.

A mask FAQ written by a non-expert, covering a variety of topics including why you can still detect odors through a mask.

An independent group made a comparison of N95 and other masks (see their comparison image):

Wearing a mask or getting vaccinated has often been compared to wearing a seatbelt. Let us extend this analogy a little further, and talk about the multiple layers of protection in the Swiss cheese model of hazards.

I will somewhat arbitrarily group protection measures into three layers:

1. Avoiding hazardous conditions.
2. Avoiding exposure to the hazard when in hazardous conditions.
3. Mitigating the severity of the hazard when it happens.

For avoiding car crashes, this means:

1. Not driving on a road with unsafe drivers.
2. Driving safely and “defensive driving”.
3. Wearing a seatbelt, as well as airbags and other safety equipment.

For mitigating covid, this means:

1. Physical distancing from people with covid.
2. Wearing a mask and having substantial ventilation or air filtration when around people with covid.
3. Getting vaccinated, bolstering your immune system with sufficient sleep, and other health measures like losing weight.

You only *need* to actually take precautions from contagious people, just as you only need to avoid drivers who get into car accidents… but in practice that means taking precautions from everyone. I would like to emphasize that getting covid or spreading covid is not something that only happens to “other” people or to “bad” people. Wearing a mask around someone is not a moral condemnation of them any more than wearing a seatbelt when they drive. “We’ve been friends for 10 years, you don’t need to wear a mask around me” is asking for you to “trust” that they can’t catch covid… might as well “trust” that they won’t develop diabetes or cancer. Being a loyal friend does not make you immune to medical ailments!

So let us return to risk assessment, free of any distractions about morals. To catch covid, all three layers of protection must fail you. (Similarly for other endpoints of interest, such as being hospitalized due to or dying of covid.)
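The multiplicative structure of layered protection is worth making explicit. The failure probabilities below are entirely made up to illustrate the arithmetic; they are not real covid statistics:

```python
# Hypothetical, made-up failure probabilities for each layer, purely to
# illustrate how independent layers multiply; not real covid statistics.
p_encounter = 0.10  # layer 1 fails: you share air with a contagious person
p_leak = 0.05       # layer 2 fails: mask / ventilation doesn't stop exposure
p_illness = 0.20    # layer 3 fails: vaccination / immunity doesn't prevent illness

# Assuming independence, all three layers must fail together.
risk = p_encounter * p_leak * p_illness
print(risk)  # ~0.001, i.e. about 0.1% overall
```

The point of the Swiss cheese model is visible in the numbers: each individually leaky layer reduces the overall risk by its own factor.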

Early in the pandemic, without vaccinations or general access to quality masks, the only effective layer of protection was physical distancing (confusingly called “social distancing” in the media). This would be akin to having a 1950s car with no seatbelt or airbag in a snow storm: best just stay home.

Now this has changed. Vaccines, of course, are amazing, with perhaps a 10x or 20x reduction in death risk, and lesser protection against hospitalization and symptomatic illness. But vaccination’s real value is in how easy and consistent it is: you can never forget to “put on” your vaccination, or have to decide around whom to go unvaccinated. If you are willing to put in just a little more effort for a higher level of protection, then you should invest in the other layers.

My approach is to treat the second layer, masking, as my main line of defense against covid. It is not an impervious layer: vaccination is so easy that it’d be foolish to go without it as a backup, especially for situations where wearing a mask is infeasible. But I find investing in masks much preferable to investing in the first layer, which would entail avoiding people on the presumption that they may be contagious. Wearing a high quality mask whose fit I have verified gives me the confidence to not worry about who I interact with. (Don’t forget seal checks each time you wear your mask! I wouldn’t be surprised if doing seal checks reduces the leakage of my mask by a factor of 10.)

Of course, you can never be too careful about what might be in your mask.

2022 January 28

Talos Principle OST - Virgo Serena

Maxence Cyrin - Where is my mind (piano cover)

Zbigniew Preisner - Requiem for my friend - Lacrimosa

Mathematical overkill: using Gödel’s compactness theorem to solve a geometry problem, by constructing a formal system whose consistency is equivalent to the existence of a solution to the problem.

Letterlocking is a security technique of intricately folding and sealing a letter so that it cannot be inspected or tampered with in transit without visible damage. Letterlocking has been used for more than 700 years, and folding methods were often personalized by individual letter writers. Recent work has allowed researchers to defeat historical locked letters, using X-rays and computer modeling to read them without damaging them.

In Penney’s game, for a fixed n ≥ 3, two players sequentially name different sequences of coin flips of length n (e.g., the first player might say TTH, and then the second player might respond HHH). Remarkably, the second player always has a winning strategy, making this a non-transitive game.
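For length-3 sequences the second player’s winning response has a neat closed form (often attributed to Conway): flip the opponent’s middle symbol and prepend it to their first two symbols. A small simulation sketch, with hypothetical function names, illustrating that the response really does win more than half the time:

```python
import random

def penney_win_fraction(p1, p2, trials=20000, seed=0):
    """Estimate how often sequence p2 appears before p1 in a fair coin stream."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        flips = ""
        while True:
            flips += rng.choice("HT")
            if flips.endswith(p2):  # second player's sequence showed up first
                wins += 1
                break
            if flips.endswith(p1):  # first player's sequence showed up first
                break
    return wins / trials

def best_response(p1):
    """Conway's rule for length-3 sequences: (flip of p1's middle symbol)
    followed by p1's first two symbols."""
    flip = {"H": "T", "T": "H"}
    return flip[p1[1]] + p1[:2]

# Against HHH the second player should pick THH, which wins 7 games in 8.
print(best_response("HHH"), penney_win_fraction("HHH", best_response("HHH")))
```

Non-transitivity follows because every choice the first player makes has a beating response, so there is no single “best” sequence.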

Herbert Dingle (1890 - 1978) was a respected physicist who came to be regarded as a crank in his later years. However, it seems he had no idea what he was doing all along:

At this point his colleagues were convinced that Dingle was insane through age or loneliness […] But there is a small point. Dingle was not crazy. For 35 years from 1920 to 1935 (that is before starting his campaign against Relativity) Dingle had written, held conferences about and lectured on a theory

of which he’d never understood any part.[…] Nor is [it] a novel event for a famous scientist to start supporting an absurd or ‘heretical’ theory, completely losing any credibility, maybe for ideological or political reasons, or out of academic rivalry. Here, though, we face a different matter, and an even more chilling one: someone who is a supposed expert in a sector in which ‘peer reviews’ exist with all the accolades and the respectability that entails, who shows that he hasn’t understood a word of things that he’s been left to discuss for years. […] if one looks at his books on the history of scientific philosophy, they are full of blunders. In practice, it’s not that Dingle forgot some things, or was acting under false pretences. He really didn’t understand some things.

That blog also has a fascinating mini-biography of Alexander Grothendieck, one of the greatest and most independent mathematicians of the last century. I hadn’t really internalized just how unusual it is to go to North Vietnam during the US-Vietnam War to teach mathematics in a war zone, just one of many oddities of his life.

Grothendieck seemed indifferent to the danger. When the bombings got too violent, his hosts moved their classes to the jungle. It wasn’t a problem for Grothendieck. He dressed as a Vietnamese peasant, wore sandals made from old car tires, and slept on the ground. The math lessons were very advanced, and Alexander came to the attention of Western intelligence services, which continued to track him for years. But his Vietnamese visit had an important outcome in that Grothendieck became the rapporteur of the dissertation of Hoàng Xuân Sính, the first important female Vietnamese mathematician and founder of Thăng Long University [and first female professor in Vietnam of a technical field], who gained her doctorate under Alexander’s supervision in 1975.

2022 January 24

Anybody who’s seen their field of expertise appear in the mainstream media knows that for quality reporting you should look to investigative reporters who specialize in the field. (Incidentally, for math news that means almost exclusively Quanta Magazine.) With the increased attention on the investigation into the Jan 6 insurrection, here are some of the sources I’ve been following that are at least a step up from the mainstream:

Empty Wheel is an independent news site/blog authored principally by Marcy Wheeler. For the last year it has focused primarily on the investigation into the insurrection, with (e.g.) both high-level discussion of why the DOJ did not appoint a special counsel and low-level analysis of the reasoning behind which crimes to charge different participants with.

Opening Arguments is a biweekly podcast featuring a practicing attorney paired with an interviewer who serves as an avatar for the audience, covering both the latest legal news and interesting non-ephemera. This long-form format does an excellent job of breaking down each piece to get at all the details overlooked by short-form reporting, while still remaining accessible to a non-lawyer.

Lawfare is a larger independent news site covering American legal news in national security, international relations, warfare, and cybersecurity. They notably featured high-quality reporting on the Trump–Russia collusion investigation.

*Follow RSS/Atom feed or twitter for updates.*