0.1% of the US has died of covid-19

2020 December 27

As of today, 0.1% of the US has officially died of covid-19; the true death toll is likely much higher. (population 332.0m, covid deaths 333055)
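Spelled out, the arithmetic is simply

\frac{333055}{332.0 \times 10^6} \approx 0.001003 = 0.1003\%.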

The states with at least 0.1% deaths are NJ (0.2%), NY, MA, ND, SD, CT, RI, LA, MS, IL, MI, IA, PA, IN, AZ, AR, NM, and DC.

The countries with at least 0.1% deaths are Belgium (0.16%), San Marino (0.16%), Slovenia (0.12%), Bosnia and Herzegovina, Italy, North Macedonia, Peru, Andorra, Spain, Montenegro, the UK, Bulgaria, Czechia, and the US. Note that Belgium is unusual in classifying many suspected cases as official deaths.

Why did this happen? Overwhelmed each day by the extraordinary evil and ineptitude of the Republican organization, one finds it easy (and right) to blame the spread of the disease in part on them and their supporters. However, the US did not uniquely fail among all countries.

Neither was the disease so terrible that failure was inevitable. China, Singapore, Australia, New Zealand, Taiwan, and possibly a few other countries (Saudi Arabia?) all succeeded at fully containing the disease after widespread community transmission had begun.

Rather, western governments, in every country and at every level, regardless of political affiliation, give the appearance of being impotent to respond to change.

Dealing with the covid pandemic required a large and rapid response. Within 45 days of the first unambiguous case of human-to-human transmission (the infection of a healthcare worker on 2020 January 10), China had already employed more than 9000 contact tracers for the city of Wuhan alone, tracing tens of thousands of possible contacts per day; about 90% of contacts underwent medical observation. At the same time, more than 40000 healthcare workers were brought into Wuhan from other regions to deal with the outbreak. For comparison, by the end of the outbreak, there had been fewer than 83000 confirmed cases in all of China.

Proportionate to the number of confirmed cases, which are grossly undercounted in the US, this would be as if the US had hired 2 million contact tracers and almost 9 million additional healthcare workers. The US and other western countries failed to devote anywhere near sufficient resources to the problem, even with the benefit of months of advance notice and very specific advice given by the WHO.
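For concreteness, here is the scaling behind those figures, assuming roughly 18.5 million confirmed US cases as of late December 2020 (my approximation; the exact count depends on the day and the data source):

\frac{18.5 \times 10^6}{83000} \approx 223, \qquad 223 \times 9000 \approx 2.0 \times 10^6, \qquad 223 \times 40000 \approx 8.9 \times 10^6.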

Of course, had the US dedicated such resources early, not nearly so much would have been required. In Hubei (note that parts of these claims on Wikipedia are unsourced):

Since Wuhan’s healthcare system was overrun, patients could have been tremendously underdiagnosed, so new laboratories had to be built at a rapid rate. On February 5, a 2000-square-meter emergency detection laboratory named “Huo-Yan” was opened by BGI; it can process over 10,000 samples a day. Construction, overseen by BGI founder Wang Jian, took 5 days. Modelling has shown that cases in Hubei would have been 47% higher, and the corresponding cost of tackling the quarantine would have doubled, had this testing capacity not come online.

Testing in China was very thorough, and a key component of their success. In May, after the epidemic had passed, 93% of Wuhan was tested and no infectious people were found.

I have not gone into detail for New Zealand or Australia as I do not have such information, but their success demonstrates that a totalitarian government is not necessary to defeat covid.

Besides testing, tracing, and healthcare, China imposed mandatory restrictions on the movement and gathering of people, beginning with the lockdown of 11 million people in Wuhan on January 23, 13 days after the first unambiguous human-to-human transmission. I am unaware of a single comparable lockdown taking place in the “West”, other than some partial restrictions imposed in northern Italy. Months later, some western governments did eventually make proclamations concerning mask wearing, curfews, or restrictions on travel and businesses. I’ve observed and read about many flagrant violations of these “mandates” (including by those proclaiming or “enforcing” them), but never heard of any instance of a violation being penalized.

Defeating the disease pays for itself. Among the 48 countries listed here, China is the only one whose economy grew from Q2 2019 to Q2 2020; only Ireland, Turkey, China, Luxembourg, and New Zealand grew year-on-year in Q3 2020, with South Korea performing the best among OECD countries. Note that ordinarily about 20% of New Zealand’s economy is based on tourism. Taiwan grew year-on-year ending both Q2 and Q3 2020. Most other economies that grew in the last year were in Africa or south-east Asia, plus Guyana (due to recent oil discoveries). Unsurprisingly, the OECD’s 2020 December macroeconomic report mentions covid on almost every page.

Returning to the US, it is facile to exclusively blame Republicans when failure to meaningfully act in the face of the pandemic seems to be ubiquitous at all levels of government (though degree of failure is correlated with Republican-ism). What accounts for this inaction? (While actually defeating the pandemic is impossible without help from the federal government, lesser interventions would still be helpful.) Did areas not fully captured by the Republican organization passively sit by because they are insufficiently politically progressive? Even liberal “stronghold” Massachusetts has a Republican governor and voted 32% for Trump in 2020. Or is there another factor besides politics, some underlying tendency towards bureaucracy and inaction pervasive in western institutions that New Zealand somehow escaped?

Either way bodes poorly for addressing climate change. I’d trade ten more of these pandemics, equally ineptly handled, to fix the climate. …Although I wonder if pandemics might be a regular feature of our future regardless.

Covid-19 vaccine mechanism and safety

2020 December 21

What’s in the covid-19 vaccine and is it safe? I’ve assembled here the basic information about the proposed vaccine and its safety. There are multiple covid vaccines, which operate in different ways, and so they need to be analyzed individually. These different mechanisms are referred to as vaccine platforms. I will be focusing on questions of whether these vaccines can cause the disease or latent long-term negative effects, as these are the easiest to answer with confidence. Indeed most of the vaccines are totally incapable of causing the disease, as no sars-cov-2 is used in any step of their manufacture! A more detailed discussion on vaccine safety is at the end.

While the various mechanisms of action are straightforward to understand, I am not an expert in biology or medicine. Therefore my endorsement of their safety is based not just on my own limited understanding, but also on the enthusiastic endorsements given by leading infectious disease experts, whose assessment is made on the basis of a deep theoretical understanding together with extensive testing. You should check with experts and authorities before acting on any information here.

While all of the vaccine platforms discussed below are acceptably safe, they may vary considerably in other factors such as effectiveness (either short-term or long-term), cost of manufacture, speed of manufacture, short-term side effects like sore arm or fever, difficulty of storage and administration, and dependency on having a massive stockpile of 100s of millions of chicken eggs; these are some of the factors that motivate new platform development. I won’t examine these issues, particularly short-term side effects.

Structure of sars-cov-2 and other coronaviruses. The outside of the virus is a fatty envelope, which can be destroyed by soap, and contains four different types of proteins embedded in it. The spike protein is used to bind to cells, and is also the main target of the immune system because it physically protrudes the most. Inside of the envelope is a helical capsid that holds the virus genome as positive-sense single-stranded RNA, functionally similar to mRNA.

(If you are using dark mode, note that the labels are in black text, so open the image in a separate tab to read them.)

Lists of candidate vaccines

Wikipedia has the easiest to understand list of covid-19 vaccine candidates. The New York Times vaccine tracker contains slightly more in-depth information about some vaccines, as well as an accessible explanation of how mRNA vaccines work. The WHO vaccine candidate spreadsheet appears to be more comprehensive, especially for vaccines in early phases, as well as containing links to studies on the candidates, but is harder to read and doesn’t include explanatory exposition. A spreadsheet by the Milken Institute seems to be similarly comprehensive. The New York Times tracker lists several candidates as being in a later phase than the other sources.

Persistent adaptive immunity

The adaptive immune system allows the body to learn to recognize specific molecules as foreign and to respond by aggressively attacking those molecules in the future. A molecule that triggers such a response is called an antigen. It takes about 5 to 7 days after exposure to large amounts of a foreign molecule for the system to learn to recognize it.

During an infection, the immune system adapts to some molecule found in the pathogen. Vaccines work by exposing the body to that molecule, thus triggering the adaptive immune system without an infection. The immune system’s response to the vaccine can result in mild fever or other symptoms despite the absence of infection.

Vaccine platforms

Dead, attenuated, and heterologous vaccines

Synthesizing molecules from viruses and bacteria can be extremely difficult, so often the easiest way to make them is to simply grow the pathogen in some host. After harvesting from the host, the pathogen is killed off or weakened (“attenuated”) before being put in the vaccine so that the vaccine doesn’t cause an infection.

A well-known vaccine of this type is the flu vaccine. Until 2019 the flu vaccine was grown in chicken eggs; for this reason the US keeps a strategic stockpile of secret chickens.

Attenuation is performed by growing the pathogen in an environment unlike the natural host; for the flu, this involves successively colder temperatures until the virus can only survive in the upper respiratory system.

Vaccines of this type always require a sample of the pathogen itself (or a close relative, for heterologous vaccines, of which smallpox and tuberculosis are the only prominent examples). In contrast, most of the vaccine platforms below are manufactured using synthesized genetic material that can be made as soon as the pathogen is sequenced, and no live pathogen is required at any stage in the manufacture process.

(Dead virus is generally referred to as “inactivated virus” to side-step any questions of whether viruses are “really” alive.)

Dead sars-cov-2 vaccines in phase III trials are BBIBP-CorV, developed by Sinopharm, CoronaVac, developed by Sinovac, and BBV152 (Covaxin), developed by Bharat Biotech. The first of these had already been administered to one million people in China by mid-November. (The WHO and others list two different Sinopharm vaccine candidates in phase III, but I can’t find any information that distinguishes them.) A number of other such vaccines have not yet reached phase III trials. None of the attenuated covid-19 vaccine candidates have entered phase I trials yet.

mRNA vaccines

The most important discovery of 20th century biology is where proteins come from. Everywhere we look in the body can be found highly intricate, specialized protein machines. One could even say you “are” your proteins – take out the proteins and you’d be left with bones in greasy, salty sugar water.

So when the body grows or reproduces, its mass increases and new proteins need to be made. However, like almost all chemicals, proteins can’t reproduce, even with the help of the body’s cellular machinery (although see prions). The essential idea behind protein synthesis is that the body makes proteins out of RNA, and makes RNA out of DNA:

\text{DNA} \xrightarrow{\text{enzymes}} \text{RNA} \xrightarrow{\text{enzymes}} \text{proteins}

(The above diagram is often, though improperly, referred to as the “Central Dogma”.) As for DNA, it does reproduce in the body. There are a few minor variants on the above scheme, and notably RNA serves many different functions in the body; only a minority of RNA is messenger RNA (mRNA), whose purpose is to be a stepping stone to make proteins as depicted above.

While most living organisms follow the above scheme, the main exception is RNA viruses. Viruses face a challenge that larger organisms like bacteria do not: fitting their genome inside their capsid shell. Notably, this is why viruses have highly symmetric capsids: the size of a protein is proportional to the size of the gene that encodes it, so viruses make their capsids out of many identical copies of small proteins that can be encoded by short genes.

A glimpse of some of the mathematical concerns that go into understanding the symmetries of virus capsids. For an overview, see the wikipedia article on capsids, and for details see Caspar and Klug 1962. Notice the similarity to soccer ball designs; the mathematics of “tiling” a sphere shows up in many contexts.

RNA viruses take the minimalistic approach of using RNA as a genome, which can be directly translated into proteins. (Although note that the very smallest viruses are ssDNA.) However, the host does not contain enzymes that would reproduce RNA, so RNA viruses must come with the code for making such enzymes. Alternatively, retroviruses such as HIV contain code for enzymes that copy RNA into DNA, which is usually injected into the host’s genome. In theory, if a retrovirus were to infect a germ cell it could be passed on genetically to all of one’s descendants; while this is exceptionally rare, about 5% of the human genome is believed to have arisen from viruses.

Sars-cov-2, the virus that causes covid-19, is an RNA virus. Its genome encodes proteins for reproducing its genome, for its virus capsid, and for other purposes including, famously, the giant “spike protein” that protrudes from the virus envelope (the fatty layer outside the virus capsid). The spike is used by the virus to bind to human cells, allowing it to enter and reproduce; it is also responsible for the outward appearance of the virus, giving “coronavirus” (crown virus) its name. As the spike is the physically outermost part, it is also the easiest target for the immune system: immune cells need to actually touch the molecules they are reacting to.

The active ingredient of the covid mRNA vaccines is modified mRNA that encodes the sars-cov-2 spike protein. Ordinarily injecting RNA into the body would have little effect, as foreign RNA is typically a sign of viral infection, and the body produces copious RNAses to destroy any ambient RNA. The main challenge of designing an mRNA vaccine is presumably the modifications necessary to evade the body’s defenses against RNA – ironically also the same gauntlet that RNA viruses face. The two leading mRNA vaccine candidates both embed the mRNA in lipid nanoparticles for delivery. They also replace some of the RNA nucleosides with non-standard variants that make it harder for the body to recognize them as being RNA.

If the mRNA successfully gets into the cell and tricks the body into accepting it, then it is translated to make spike proteins. These spike proteins then trigger the immune response in the usual way.

mRNA degrades over time in the body, typically within a few days, after which the production of spike proteins will stop. Neither the body nor the vaccine contains any of the machinery necessary to reproduce RNA or reverse-transcribe it into DNA, so once the mRNA is degraded the vaccine should have no further direct effects; the spike proteins will eventually degrade or be cleaned up by the immune system, leaving only a better adapted immune system behind.

An mRNA vaccine would be equally useful for DNA viruses; it doesn’t matter whether the genes in the vaccine match the genes in the virus, so long as the proteins they create do match.

There is some possibility that the solid lipid nanoparticles (SLNs) used to carry the mRNA may cause harm. SLNs are a very new technology; they were first approved by the FDA in 2018. The only source I have seen indicating a danger of SLNs is an unsourced 2017 news article claiming that Moderna had problems with them during animal testing of their mRNA treatment of Crigler-Najjar syndrome: apparently repeated doses were harmful, so Moderna pivoted to vaccines where fewer and smaller doses are effective. (Treatment of Crigler-Najjar would require regular doses instead of a one-time cure.) While SLNs are very new, liposomes are a similar drug delivery mechanism that have been in use for many decades; liposomes have a lipid bilayer membrane with the drug contents in suspension inside, instead of solid fat. Liposomes have been used in chemotherapy treatments to carry the toxic chemotherapy drugs into cancer cells, and most papers I saw on liposome toxicity referred to premature release of these chemotherapy drugs. Only one paper I read suggested that certain types of liposomes themselves could be harmful. It is unclear to me if this could apply to SLNs.

The mRNA vaccines in phase III trials are Tozinameran (BNT162b2), developed by BioNTech / Pfizer, and mRNA-1273, developed by Moderna. A number of other mRNA vaccine candidates are in phase II or earlier.

DNA vaccines

DNA vaccines work by exactly the same mechanism as mRNA vaccines, except that the injected DNA is transcribed to make the mRNA, which is then translated to make the spike proteins.

The argument for DNA vaccine safety is less clear-cut than that for mRNA vaccine safety, as the body does contain the cellular machinery to reproduce DNA. Personally I don’t like the idea of foreign DNA being put into my cytoplasm. However, it would be quite hard to get such DNA to replicate deliberately, much less accidentally: without special enzymes like those found in retroviruses, the DNA plasmids can’t be incorporated into the cell’s copy of the genome, and therefore will not be copied by the cell.

The DNA is delivered either as an injection of raw DNA plasmids, possibly with the help of electroporation, or with a gene gun, a helium-propelled device that shoots gold particles coated in DNA plasmids into the tissue. As the latter inserts the DNA directly into the cytoplasm of the target cells, it can use very low doses.

I speculate that DNA vaccines might have fewer side effects than mRNA vaccines, as lower dosages are possible: DNA is more robust than mRNA, and each DNA plasmid could make many mRNA strands before degrading.

Five of the candidate DNA vaccines have entered phase I or II trials. I believe they use injected DNA plasmids, and at least one uses electroporation.

Non-replicating viral vector vaccines

These vaccines contain a virus which has been modified in some way to make it incapable of reproducing in humans. The virus either has its genome removed or altered in some way; I have had difficulty finding precise details. Additionally, the gene for the sars-cov-2 spike protein is added to the virus, either in the form of DNA or mRNA (some sources have mentioned the use of viral vectors with single-stranded DNA; the AstraZeneca vaccine uses double-stranded DNA). The virus thus delivers this gene into cells; as its own genome is not fully present, it is incapable of causing illness.

In principle this is exactly how the mRNA/DNA vaccines discussed above work, but rather than using lipid nanoparticles to deliver the mRNA, a virus that has evolved specifically for this purpose is used. In practice this may represent a significant difference in safety or effectiveness, as the body might react to the virus quite differently than it reacts to lipid nanoparticles, but I have no specific reason to believe this matters. Phase III trials should reveal if one of these mechanisms is actually better than the other in practice.

The AstraZeneca vaccine uses a chimpanzee adenovirus as its carrier. The reason to use a chimp virus over a human virus is so that humans do not have any existing immunity to the injected virus, which might impair the effectiveness of the vaccine. I wonder if it is possible to develop such an immunity if sufficiently many vaccines were administered that all use this same delivery platform.

One paper I read stated (without sources) that the chimp adenovirus in the AstraZeneca vaccine has the spike protein in its surface, as opposed to merely carrying the gene for making the spike protein in its genome. None of the other sources I read were clear on this point.

Viral vectors have been used since the 1970s, originally for research purposes and later for gene therapy. I am not aware of any use of them for vaccination before covid-19.

There are four non-replicating viral vector vaccines in phase III trials. AZD1222, developed by University of Oxford and AstraZeneca, is the most well known vaccine candidate of this type, and is the one that uses a chimp adenovirus for delivery. Gam-COVID-Vac (Sputnik V), developed by the Gamaleya Research Institute of Epidemiology and Microbiology in Russia, uses human adenoviruses. I believe that Ad5-nCov, developed by CanSino Biologics in China, and Ad26.COV2.S, developed by Janssen Pharmaceutica, use human adenoviruses. Other vaccines of this type are in phase I and earlier.

Replicating viral vector vaccines

These vaccines are akin to those using non-replicating viral vectors, except (as one might guess) the modifications to the virus vector do not eliminate its ability to reproduce in the body.

Only one of the replicating viral vector vaccine candidates for covid has entered clinical testing.

Subunit vaccines (raw antigen)

These vaccines involve directly injecting the antigenic molecule. These are called subunit vaccines because the antigen is one piece of the whole pathogen. For sars-cov-2, the only reasonable choice for antigenic molecule is the spike protein. By itself, the spike protein cannot cause any infection, as it is missing the rest of the virus, in particular the virus’s genome.

Such vaccines are generally manufactured by creating recombinant cells (usually bacteria, but sometimes yeast, insect, or mammal) which contain the gene for the spike protein. These cells are grown and harvested, and then the spike protein is purified from the result.

(In theory these vaccines could be manufactured by growing the pathogen directly; in this case, conceivably improper purification and sterilization could result in a vaccine accidentally causing the disease. However I’m not aware of any vaccines made this way, and sterilization can be done very reliably.)

NVX-CoV2373, by Novavax, is a protein subunit vaccine in phase III trials. Various sources describe a protein subunit vaccine by Anhui Zhifei Longcom Biopharmaceutical as in phase II or III. There are an additional 9 such candidates in phase I or II, and 65 vaccine candidates not yet in clinical trials.

Live recombinant bacterial vector vaccines

These vaccines involve the creation of recombinant bacteria, as described in the previous section, which produce the spike protein or other antigen. Instead of purifying the spike protein, the bacteria are directly injected into the body. Depending on the choice of bacterial vector it may be possible for the vector itself to cause infection; but in this case antibiotics can be administered to treat it.

I am not aware of any working vaccines of this type, whether for covid or other diseases. However there have been proposals to use recombinant BCG as a vector. Live non-recombinant BCG is used as a vaccine for tuberculosis, as the two bacteria are closely related, but the former only rarely causes disease in healthy individuals.

There are two candidate covid vaccines of this type; neither have reached clinical testing.

Virus-like-particle vaccines

Vaccines containing VLPs (virus-like particles) are like subunit vaccines, except that the subunit is most of the virus; for example, it could be the whole virus capsid. As virus capsids readily self-assemble, these are generally manufactured by “just” making each of the subunits that are in the capsid and mixing them. VLPs do not include the genome and possibly other key components of the virus (such as RNA polymerase for negative-sense RNA viruses) so that no infection occurs.

Manufacture of VLPs for enveloped viruses like sars-cov-2 is somewhat harder as the envelope generally needs to be made by budding from a host cell, as in a live infection. (No capsid would be included in enveloped VLPs.) Eukaryotic hosts are generally required to make enveloped VLPs, which are harder to work with than bacterial hosts.

CoVLP by Medicago Inc. is a VLP vaccine that involves recombinant bacteria living in plants; I believe the bacteria produce the envelope proteins while the plant cells perform the budding to create the VLPs. Sources variously list it in phase III or I trials. A vaccine by SpyBiotech and the Serum Institute of India appears to be in phase I-II trials. Other VLP vaccine candidates are pre-clinical.

Toxoid vaccines

For pathogens that create harm through the release of a toxin, toxoid vaccines train the immune system on that toxin rather than on the pathogen. Such vaccines contain “toxoids” that are chemically similar to the toxin without being toxic themselves. Tetanus, diphtheria, and pertussis are vaccinated this way; this platform cannot be used to protect from covid-19.

A note on “recombinant vaccines”

A number of vaccine candidates I examined were described as “recombinant vaccines”; I had some difficulty understanding exactly what was meant by this label. I believe it can be used to describe any vaccine for which the creation of recombinant organisms (cells or viruses that contain genes naturally found in different source organisms) is one of the steps of the manufacture, not just for vaccines where such recombinant organisms are in the vaccine itself. Because recombinant bacteria (or other “expression hosts”) are broadly used for cheap protein production, they form one of the early steps in the manufacture of most vaccines described above that use the viral vector, subunit, or VLP platforms. Recombinant bacteria have many applications in pharmaceuticals, notably for large-scale insulin production.

Discussion

Particular dangers of the covid vaccine

First, we can dispense with the question of whether the vaccines can cause covid-19. Only the vaccines of dead or attenuated virus involve any actual sars-cov-2 virus at any stage in their manufacture; the other vaccines are missing essential components of the virus, such as RNA polymerase, and are absolutely incapable of causing covid-19.

The safety of vaccination with dead or attenuated virus depends on trusting the reliability of the processes that kill or weaken them. Illness caused by such a vaccine is exceptionally rare: reversion of attenuated poliovirus causes polio in about 1 in 5 million children receiving the attenuated vaccine, but I am otherwise unaware of any cases of reversion in dead or attenuated vaccines of any disease. “The cold-adapted influenza vaccine […] has never reverted to virulence in a vaccinee.” (source) The MMR / MMRV vaccine is an attenuated vaccine given to 85% of children worldwide before the age of one. Some attenuated vaccines may be unsuitable for severely immuno-compromised individuals.

Due to the theoretical potential of a dead or attenuated vaccine to cause disease, I would personally reject an attenuated covid vaccine if one existed for me to reject, and would prefer another covid vaccine platform over a dead sars-cov-2 vaccine. This only applies to novel dead and attenuated vaccines, and the attenuated polio vaccine, as other established dead and attenuated vaccines have thoroughly demonstrated their safety.

The immune system’s reaction to the antigen, which is the purpose of the vaccine, may itself be dangerous to individuals with severe immune system problems; this is not specific to covid vaccination, or the platform by which the antigen is delivered. For a healthy person, the immune system’s reaction will be to develop some degree (possibly none) of immunity to sars-cov-2. Generally speaking one would expect there to be no downside to this; however, dengue virus is believed to exhibit antibody-dependent enhancement, wherein exposure to dengue virus increases the severity of disease when exposed to a different strain of dengue in the future. I am not aware of any credible reason to think this could happen with sars-cov-2, and if it did, it would have been likely to be noticed in phase III trials.

General dangers of the covid vaccine

Having considered the dangers of the active component of the covid vaccines, this leaves us with the more ordinary components. There are many variations from one vaccine to another, and even if I knew all of their ingredients I lack the highly specialized knowledge to evaluate them in detail, so I am forced to speak more broadly. Generally speaking there are three such categories: adjuvants, which amplify the immune response and allow lower dosages to be effective; stabilizers, which slow the decay of the active ingredients and protect against biological contamination; and the delivery platform itself, as described above. For each such chemical that appears in a covid vaccine that is accepted by western regulatory bodies,

The exception is the novel vaccine platforms, which may violate point 3 above, so let us discuss this a little further by considering each relevant platform.

It seems to me that, having examined the safety of each aspect of the vaccines, any plausible means by which they could cause grave harm to healthy recipients has been eliminated. That leaves us with the unknown unknowns, the risk factors that we can not reasonably anticipate. We can conjure an endless list of these possibilities: perhaps a large number of people were involved in cheating on the clinical trials; perhaps there is some additional biological function to mRNA yet undiscovered; perhaps moving to industrial-scale vaccine production will result in the introduction of some toxic contaminant. If some unknown unknown turns out to be dangerous, it is (by definition) going to be one we couldn’t predict, so we need to assess these risks collectively.

A helpful point of comparison for these unknown unknowns is the food we eat. There are all sorts of chemicals we are ignorant of in our food, whether they are added in industrial processing or synthesized in the natural growth of the food or absorbed incidentally from the soil it grew in. Yet, we don’t worry about whether micrograms of some mysterious substance in our food will conceal itself in our body to come out and harm us years later.

I believe that any means by which the covid vaccine could unexpectedly cause danger to a recipient, in spite of the thorough testing and regulatory oversight, would apply even more so to the food industry.

Final thoughts

For an American who is not in a strictly isolating quarantine circle and does not have any particular health conditions that preclude them from receiving vaccinations, I judge the insignificant and unknowable dangers of the covid vaccine to be lower than the alternative of no vaccination. For those in strict isolation, or who live in places (such as parts of east Asia) where there is no community spread, if there is a vaccine shortage or rationing you may do more good by letting others receive theirs earlier.

xkcd.com/2397

We are very lucky that a safe and effective vaccine is or will soon be available. The first vaccine was risky and deeply unpleasant. And yet vaccination was so much better than having no immunity at all that people went to enormous lengths to perform it:

In 1803, the king, convinced of the benefits of the vaccine, ordered his personal physician, Francis Xavier de Balmis, to deliver it to the Spanish dominions in North and South America. To maintain the vaccine in an available state during the voyage, the physician recruited 22 young boys who had never had cowpox or smallpox before, aged three to nine years, from the orphanages of Spain. During the trip across the Atlantic, de Balmis vaccinated the orphans in a living chain. Two children were vaccinated immediately before departure, and when cowpox pustules had appeared on their arms, material from these lesions was used to vaccinate two more children.

FAQ

If the vaccine can’t give you covid, why have I heard about people becoming ill after receiving the vaccine? Mild fever and other symptoms are part of the immune system’s reaction to the vaccine. The mRNA covid vaccine in use in the US is absolutely, totally incapable of causing covid. Of course, it is possible to be exposed to covid or other illnesses around the same time as receiving the vaccine, so do not ignore any serious health concerns.

Does the vaccine prevent you from carrying sars-cov-2 and infecting others? If you are immune to sars-cov-2, you cannot develop or transmit the infection. Whether the vaccine grants you this immunity may vary with the vaccine and from person to person; it is certainly possible that the vaccine could reduce the severity of illness without producing full immunity. The extent to which this actually happens should be measured in phase III trials.

Is it possible to measure whether the vaccine gave me immunity? Blood tests can measure the presence of antibodies, which are generally indicative of a previous immune reaction, whether caused by a covid infection or the vaccine. These antibodies give an incomplete picture of the body’s immunity, and measurements have a high error rate. I am not aware of any better way of testing for immunity.

Does the vaccine continue to protect even if sars-cov-2 mutates? Antigenic drift refers to mutations in viruses that affect the part of a virus you develop immunity to. Strong antigenic drift in flu viruses causes new vaccines to be produced and administered each year. However, “diversity among influenza A surface glycoproteins is 437-fold greater than that measured in SARS-CoV-2”. Hopefully the lower diversity in sars-cov-2 mutations means that one vaccine provides good immunity to all existing strains.

How long does immunity from the vaccine last? We don’t know.

I have medical condition X, can I get vaccinated? Ask a doctor.

Can the same technology be used to create a vaccine to some/all strains of the common cold? Actually I haven’t heard others asking this – this is my question and I want to know!

2020 December 02

It’s now been one year since I set up this blog, and I still haven’t gotten around to writing what I thought was going to be my first post!

Music

Ricky Montgomery - This December

Mother Mother - Ghosting

Bach - Passacaglia and Fugue in C Minor BWV 582 It took me some searching to find a performance that I liked that also had a “score”. There was quite a bit of variation depending on the organ and the performer. I was surprised to find that I liked the visualization here quite a bit more than a traditional score, especially the inclusion of the theme (as grey bars) during the passacaglia which helps provide context for the variations.

Perturbator - Birth of the New Model

Previously I mentioned Skolem’s paradox as a bizarre quirk of mathematical logic, but since reading Scott Aaronson’s excellent post on the continuum hypothesis I have learned that the Löwenheim-Skolem construction plays a key role in proving that the continuum hypothesis is independent of ZFC!

Observing Ramadan requires fasting from sunrise to sunset. The Burj Khalifa, the tallest building in the world, experiences significantly longer daylight at the top, so observers living above the 80th floor must break fast two minutes later, and observers living above the 150th floor must break fast three minutes later; it is unclear if the fast also begins earlier. Observers living in permanent day or night either follow the times of the nearest city with daily cycles or of Mecca; those in orbit follow the times of where they launched from.

Recent work has resulted in the discovery of a “carbonaceous sulfur hydride” system that is capable of superconductivity at 15 °C at the soul-crushing pressure of 267 gigapascals. If this pressure is too impractical to apply, let us refer to the announcement earlier this year of “room temperature” superconductivity through the innovative technique of lowering the temperature of the room.

Marcus Garvey supposedly died of a stroke after reading his own unflattering obituary that was erroneously published – possibly the only case of an obituary causing the subject’s death.

Optimizing compilers for C use all kinds of smart tricks to speed up the code they produce – sometimes too “smart”. A clever C program was used to trick such compilers into claiming that they could find a counterexample to Fermat’s Last Theorem. The author writes:

Faced with this incredible mathematical discovery, I held my breath and added a line of code at the end of the function to print the counterexample: the values of a, b, and c. Unfortunately, with their bluffs called in this fashion, all of the compilers emitted code that actually performed the requested computation, which of course does not terminate. I got the feeling that these tools – like Fermat himself – had not enough room in the margin to explain their reasoning.

Videos

Why organs sound scary, as well as an overview of the history of organs, how they operate, and the cultural context of organs in early movies.

Line rider animation of Beethoven’s 5th symphony; animation by DoodleChaos.

A problem arises during a piano performance and is handled.

Antiques Roadshow: Chekhov’s Gun

Images

Crystalline bismuth. The appearance is in part due to surface oxidation.

The Rosette Nebula, image found here

One minute after Deep Impact struck comet Tempel 1.

Sand dunes in Rub’ al Khali, the erg covering the lower third of the Arabian peninsula. Random fact: “erg” is sometimes pluralized as “areg”, although “ergs” is much more common.

Partial fractions vs contour integration

2020 November 19

Unlike differentiation, there is no systematic method for integrating any function given in closed form, but rather a library of calculation techniques (i.e. a calculus) that can be applied ad hoc to specific functions. (Although see also the Risch algorithm for a close miss.)

As a simple example, consider

\int \frac 1{x^2 + 1}\ dx = \arctan(x)

which can be solved through various classical techniques, or verified by computing the derivative of \arctan(x). In fact we’ll find a use for this integral in an upcoming entry. However the integral

\int_{-\infty}^\infty \frac 1{x^4 + 1}\ dx

is less obvious, as the corresponding indefinite integral is not easily solvable.

I saw this latter integral presented as an example that is amenable to the use of contour integration methods. However, my lack of familiarity with such methods leads me to favor the use of partial fractions for this problem. But when I worked through the problem with partial fractions, it became clear that here the two techniques are really the same in disguise.

Let’s walk through the steps of computing

I = \int_{-\infty}^\infty \frac 1{P(x)}\ dx

using both partial fractions and contour integration. Here we take P(x) to be a monic polynomial of degree at least 2, with no repeated roots and no real roots, although in general the following steps equally apply to any quotient of polynomials. Let

P(x) = (x - r_1) \cdots (x - r_n).

Partial fractions

We need to find \alpha_k such that

\frac 1{P(x)} = \frac {\alpha_1}{x - r_1} + \cdots + \frac {\alpha_n}{x - r_n};

this is called the partial fraction decomposition of \frac 1P. Multiplying both sides by P(x) gives an equality of polynomials in x that needs to hold for all x; we can regard this as a system of n linear equations with n unknowns. However, the easier way to solve this is to evaluate both sides at x = r_k, as that makes all but one term on the right go to zero. We get

\alpha_k = \frac 1{P'(r_k)}

(which you can verify by expanding P'(x)). These \alpha are the residues of \frac 1P at each of its poles.
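For example, take P(x) = x^2 + 1 = (x - i)(x + i). Then P'(x) = 2x, so the residues are \alpha_1 = \frac 1{2i} at i and \alpha_2 = -\frac 1{2i} at -i, giving

\frac 1{x^2 + 1} = \frac 1{2i} \left( \frac 1{x - i} - \frac 1{x + i} \right),

which is easy to verify by combining the fractions on the right.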

Now

\begin{aligned}
I &= \int_{-\infty}^\infty \frac 1{P(x)}\ dx \\
&= \lim_{R \to \infty} \sum \alpha_k \log (x - r_k) \biggr\rvert_{-R}^R \\
&= \sum \alpha_k \lim_{R \to \infty} \log \frac {r_k - R}{r_k + R},
\end{aligned}

so it remains to calculate the limit of this log expression. As we are taking log of complex numbers, we need to be careful to choose a branch cut of log that is not crossed by the line from r_k - R to r_k + R. The standard choice for the branch cut, the negative real axis, works for this purpose as we have specified that none of the r_k are real. (If any of the r_k are real we would need to deal with integrating through a singularity in \frac 1P.)

We know that

\log\left(m e^{i\theta}\right) = \log(m) + i \theta

so we need to know the magnitude m of \frac {r_k - R}{r_k + R} for large R, and its angle \theta. In the limit of large R, its magnitude goes to 1. The angle depends on whether the imaginary part of r_k is positive. If \Im r_k > 0, then the angle of \frac {r_k - R}{r_k + R} approaches \pi; otherwise -\pi. Thus we have

I = \pi i \left( \sum_{\Im r_k > 0} \alpha_k - \sum_{\Im r_k < 0} \alpha_k \right).
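As a sanity check, continue the P(x) = x^2 + 1 example: \alpha_1 = \frac 1{2i} with \Im r_1 > 0 and \alpha_2 = -\frac 1{2i} with \Im r_2 < 0, so

I = \pi i \left( \frac 1{2i} + \frac 1{2i} \right) = \pi,

in agreement with the arctangent integral at the start of this post.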

This can be simplified slightly: let us show that the sum of the residues \alpha_k is zero. In the next section we will see an immediate way to prove this. For now, note that in our equation

1 = \alpha_1 \frac {P(x)}{x - r_1} + \cdots + \alpha_n \frac {P(x)}{x - r_n}

that 0 is the coefficient of the x^{n - 1} term on the left, and \sum \alpha_k is the coefficient of the x^{n - 1} term on the right. Then it follows that

I = 2 \pi i \sum_{\Im r_k > 0} \alpha_k.

Contour integration

We can calculate I using the residue theorem, which states that

\oint f(z)\ dz = 2\pi i \sum \text{Res}(f, z_k)

where f is a meromorphic function and the sum on the right is of the residues of f at each of the poles inside of the contour. The definition of residue is the unique value such that the difference

f(z) - \frac {\text{Res}(f, a)}{z - a}

has an antiderivative in a small punctured disc around z = a. (It is fine if f has a pole of order 2 or higher, as integrating z^n only creates a branch cut when n = -1 exactly. Thus the residue is the coefficient of the z^{-1} term.)

The residue theorem is a direct consequence of Cauchy’s theorem, which states that the contour integral of a holomorphic function is zero. Suppose we want to use this to compute a contour integral that goes around some poles. Then by Cauchy’s theorem we can write \oint f(z)\ dz as a sum of contour integrals, one for each pole, each of them going in a circle of arbitrarily small radius around that pole. Then, by definition of residue, we can replace these integrals with ones of the form \frac 1{z - z_k} that can be computed easily to give the desired result.
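For completeness, the computation elided as “easy” is the following: parameterize a circle of radius \varepsilon around z_k by z = z_k + \varepsilon e^{i\theta}, so that dz = i \varepsilon e^{i\theta}\ d\theta, and then

\oint \frac 1{z - z_k}\ dz = \int_0^{2\pi} \frac {i \varepsilon e^{i\theta}}{\varepsilon e^{i\theta}}\ d\theta = 2\pi i.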

So what are these residues for the function \frac 1P? Unsurprisingly, the residue of \frac 1P at r_k is \alpha_k, as can be seen from the equation

\frac 1{P(x)} = \frac {\alpha_1}{x - r_1} + \cdots + \frac {\alpha_n}{x - r_n}

together with the definition of residue. Then if we choose a contour to integrate around, the residue theorem tells us that

\oint \frac 1{P(x)}\ dx = 2 \pi i \sum \alpha_k

where the sum is taken over \alpha_k such that r_k is inside of the contour.

It remains to choose a suitable contour. First, imagine taking a large circle of radius R around the origin, with R > \max |r_k|, and let J_R = \oint \frac 1{P(x)}\ dx be the value of the integral.

In the limit R \to \infty, the length of the path being integrated along grows like R, but the integrand \frac 1P shrinks like R^{-\deg P} \leq R^{-2}, so J_R \to 0. But by the residue theorem, J_R only depends on which poles are inside the contour, which is independent of R, so J_R = 0. Therefore

0 = 2 \pi i \sum \alpha_k

where the sum on the right is over all \alpha_k. This gives us again our result that the residues have a sum of zero, which we needed in the previous section.
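Incidentally, to spell out the limit step above: bounding the integral by the length of the contour times the maximum magnitude of the integrand gives

|J_R| \le 2\pi R \cdot \max_{|x| = R} \frac 1{|P(x)|} = O\left( R^{1 - \deg P} \right) \le O\left( R^{-1} \right) \to 0.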

Now we return to computing I and choose a semicircular contour, running along the real axis from -R to R, and then following a semicircular arc in the upper-half plane; let I_R be the value of this integral.

I_R is a sum of two components, for the two parts of the path being integrated along. As with J_R, the semicircular arc component goes to 0 in the limit of large R. And again as before, the value of I_R is independent of R for large R, so

I = \lim_{R \to \infty} \int_{-R}^R \frac 1{P(x)}\ dx = \lim_{R \to \infty} I_R = I_R.

Then we compute I_R with the residue theorem, giving

I = 2 \pi i \sum_{\Im r_k > 0} \alpha_k,

where the sum is over roots r_k in the upper-half plane, in agreement with the result of the computation with partial fractions.

Discussion

Step-by-step, the two methods involve nearly the same operations. With contour integration, we took advantage of Cauchy’s theorem and that \frac 1P is holomorphic to choose contours that are convenient instead of being committed to integrating along the real axis. This made it trivial to find the sum of the residues, and also simplified the task of integrating the functions \frac 1{x - r_k}. When integrating along the real axis, we had to do some geometric reasoning about whether the imaginary part of r_k is positive or negative, but using Cauchy’s theorem we can instead integrate in a circle around r_k, which was elided as “easy” in our sketch of the proof of the residue theorem.

Otherwise, the two methods are identical. I was surprised that calculating the residues by directly applying the definition of “residue” as given on Wikipedia requires first finding the partial fraction decomposition. Of course, while there are various theorems that hasten the calculation of the residues in practice, these are equally applicable to hastening the partial fraction decomposition.

Let us work through the specific example P(x) = x^{2n} + 1. If \zeta is the primitive 4n-th root of unity e^{2\pi i / 4n}, then the roots of P are

r_k = \zeta^{2k - 1}

for k= 1, \ldots, 2n, of which the first n have positive imaginary part. As P'(x) = 2n x^{2n - 1}, we get

\alpha_k = \frac 1{P'(r_k)} = \frac 1{2n} \zeta^{-(2k - 1)(2n - 1)} = \frac 1{2n} \zeta^{2n - 1} \zeta^{2k},

(where we used \zeta^{4n} = 1 to reduce the exponent, since -(2k - 1)(2n - 1) \equiv 2n - 1 + 2k \pmod{4n}), so

\begin{aligned}
\sum_{k = 1}^n \alpha_k &= \frac 1{2n} \zeta^{2n - 1} \zeta^2 \frac{\zeta^{2n} - 1}{\zeta^2 - 1} \\
&= -\frac 1n \zeta^{2n + 1} \frac 1{\zeta^2 - 1} = \frac \zeta {n(\zeta^2 - 1)} = \frac 1{n \left( \zeta - \zeta^{-1} \right)} = \frac 1{2ni \sin(\pi / 2n)},
\end{aligned}

using \zeta^{2n} = e^{i\pi} = -1 and \zeta - \zeta^{-1} = 2i \sin(\pi / 2n).

Finally we get

\begin{aligned}
\int_{-\infty}^\infty \frac 1{x^{2n} + 1}\ dx &= \frac \pi {n \sin(\pi / 2n)} \\
\int_{-\infty}^\infty \frac 1{x^2 + 1}\ dx &= \pi \\
\int_{-\infty}^\infty \frac 1{x^4 + 1}\ dx &= \frac \pi {\sqrt 2}.
\end{aligned}
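These values are easy to check numerically; for instance, a quick scipy sketch:

import numpy as np
from scipy.integrate import quad

def closed_form(n):
    # The value pi / (n sin(pi / 2n)) derived above.
    return np.pi / (n * np.sin(np.pi / (2 * n)))

for n in (1, 2, 3, 5):
    # Numerically integrate 1 / (x^(2n) + 1) over the whole real line.
    numeric, _ = quad(lambda x, n=n: 1.0 / (x ** (2 * n) + 1.0), -np.inf, np.inf)
    print(n, numeric, closed_form(n))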

[Update] Representation of voters in the Supreme Court confirmation process

2020 October 26

This is an update to the previous post on the degree of popular support for the politicians who nominate and confirm US Supreme Court justices, following the confirmation of Amy Barrett earlier today.

With now five justices on the court having received below 50% “effective popular vote”, Samuel Alito at 49.6% is the median such justice. By this metric, the majority of current justices were opposed by the majority of votes. For historical comparison, of the 106 former justices, only 3 had confirmations that were even slightly close, all near the civil war: Lucius Lamar (32 - 28 in 1887), Stanley Matthews (24 - 23 in 1881), and Nathan Clifford (26 - 23 in 1857). (I could not find information on which senators voted for or against these justices, much less how many votes these senators received, so the only direct comparison I can make is the number of votes received in the senate.) Recall that most confirmations before 1967 were done by voice vote, so no official vote tally was recorded, but presumably this indicated support was overwhelming.

Here are the updated figures and tables:

Justice Year Nominator Nominator % Senate vote ‘Yea’ votes ‘Nay’ votes ‘Yea’ %
Marshall 1967 Johnson 61.34% 69 - 11 - - -
Burger 1969 Nixon 50.41% 74 - 3 - - -
Blackmun 1970 Nixon 50.41% 94 - 0 - - -
Powell 1971 Nixon 50.41% 89 - 1 - - -
Rehnquist 1971 Nixon 50.41% 68 - 26 - - -
Stevens 1975 Ford 61.79% 98 - 0 - - -
O’Connor 1981 Reagan 55.31% 99 - 0 81938226 0 100.00%
Rehnquist 1986 Reagan 59.17% 65 - 33 52484264 34602506 60.27%
Scalia 1986 Reagan 59.17% 98 - 0 87086770 0 100.00%
Kennedy 1988 Reagan 59.17% 97 - 0 79603513 0 100.00%
Souter 1990 Bush 53.90% 90 - 9 76500215 11844310 86.59%
Thomas 1991 Bush 53.90% 52 - 48 35475831 44253820 44.50%
Ginsburg 1993 Clinton 53.45% 96 - 3 88261651 2034999 97.75%
Breyer 1994 Clinton 53.45% 87 - 9 81479894 6195598 92.93%
Roberts 2005 Bush 51.24% 78 - 22 76870777 43929082 63.63%
Alito 2006 Bush 51.24% 58 - 42 59162228 60126394 49.60%
Sotomayor 2009 Obama 53.69% 68 - 31 86633780 30182701 74.16%
Kagan 2010 Obama 53.69% 63 - 37 75861452 37123012 67.14%
Gorsuch 2017 Trump 48.89% 54 - 45 54760599 76494514 41.72%
Kavanaugh 2018 Trump 48.89% 50 - 48 53364281 76883828 40.97%
Barrett 2020 Trump 48.89% 52 - 48 55669312 68437726 44.86%

The current justices are in bold. The Nominator % column is the nominator’s two-party popular vote; the ‘Yea’ votes and ‘Nay’ votes columns give the total number of votes received by the corresponding senators. William Rehnquist appears twice as he was appointed as associate justice in 1971 and then chief justice in 1986.

Some senators have been appointed to their position, and therefore received zero votes. Here is a list of every such senator that influenced my result:

Justice Year Senator State Vote
Barrett 2020 Kelly Loeffler Georgia Yea
Barrett 2020 Martha McSally Arizona Yea
Kavanaugh 2018 Cindy Hyde-Smith Mississippi Yea
Kavanaugh 2018 Jon Kyl Arizona Yea
Kavanaugh 2018 Tina Smith Minnesota Nay
Gorsuch 2017 Luther Strange Alabama Yea
Kagan 2010 Michael Bennet Colorado Yea
Kagan 2010 Roland Burris Illinois Yea
Kagan 2010 Kirsten Gillibrand New York Yea
Kagan 2010 Carte Goodwin West Virginia Yea
Kagan 2010 Ted Kaufman Delaware Yea
Kagan 2010 George LeMieux Florida Nay
Sotomayor 2009 Michael Bennet Colorado Yea
Sotomayor 2009 Roland Burris Illinois Yea
Sotomayor 2009 Kirsten Gillibrand New York Yea
Sotomayor 2009 Ted Kaufman Delaware Yea
Alito 2006 Bob Menendez New Jersey Nay
Breyer 1994 Harlan Mathews Tennessee Yea
Ginsburg 1993 Harlan Mathews Tennessee Yea
Thomas 1991 John Seymour California Yea
Souter 1990 Daniel K. Akaka Hawaii Nay
Souter 1990 Dan Coats Indiana Yea
Kennedy 1988 David Karnes Nebraska Yea
Scalia 1986 Jim Broyhill North Carolina Yea
Rehnquist 1986 Jim Broyhill North Carolina Yea
O’Connor 1981 George J. Mitchell Maine Yea

As we noted before, California uses a jungle primary system. Senators Kamala Harris and Dianne Feinstein were each elected against opponents of the same party. In 2016 Harris received 7542753 votes while her opponent received 4701417, and in 2018 Feinstein received 6019422 votes while her opponent received 5093942; in each case, their opponent received a much larger percentage than all Republican candidates in the primary had received combined. This suggests that Harris and Feinstein received fewer votes than would have been expected had California not used a jungle primary system. Harris voted ‘Nay’ for the confirmations of Neil Gorsuch and Brett Kavanaugh, and both voted ‘Nay’ for the confirmation of Barrett. (While Feinstein was in the senate during the confirmations of Gorsuch and Kavanaugh, at the time her most recent election was against a Republican opponent; she had received almost 2 million more votes in that election.)

2020 October 18

Music

Josh Rouse - Quiet Town (youtube, spotify)

Kishi Bashi - A Song for You (youtube, spotify)

Clint Mansell - Moon OST - Welcome to Lunar Industries (youtube, spotify)

Ayreon - Beneath the waves (youtube, spotify)

Borneo is the only island in the world to be divided between three different countries. The three-country cairn, the tripoint boundary of Norway, Sweden, and Finland, is a near example: the marker, located 10 meters from the shore of Lake Goldajärvi, is large enough for several people to stand on. Another debatable example is the island of Cyprus: while internationally recognized as solely the territory of the Republic of Cyprus, in practice control is divided between Cyprus, the Turkish-recognized state Northern Cyprus, the UN, and two British military bases.

Collective Motion of Humans in Mosh and Circle Pits at Heavy Metal Concerts: a paper modeling motions in a mosh pit as undergoing a phase transition between gas-like and vortex-like phases. “Qualitatively, this phenomenon resembles the kinetics of gaseous particles, even though moshers are self-propelled agents that experience dissipative collisions and exist at a much higher density than most gaseous systems. To explore this analogy quantitatively, we watched over 10^2 videos containing footage of moshpits on YouTube.com.”

The Great Panjandrum was a World War II weapon consisting of one ton of explosives mounted on rocket wheels, intended for breaching the fortifications of the Atlantic Wall. While the weapon never entered combat, it did manage to repeatedly endanger the lives of spectators to its many iterations of tests:

Then a clamp gave: first one, then two more rockets broke free: Panjandrum began to lurch ominously. […] Hearing the approaching roar he looked up from his viewfinder to see Panjandrum, shedding live rockets in all directions, heading straight for him. As he ran for his life, he glimpsed the assembled admirals and generals diving for cover behind the pebble ridge into barbed-wire entanglements. Panjandrum was now heading back to the sea but crashed on to the sand where it disintegrated in violent explosions, rockets tearing across the beach at great speed.

The Tunguska event was an explosion in remote Siberia in 1908 commonly attributed to a meteor. While it is the largest impact in recorded history, it left no impact crater, and the cause of the explosion has not been definitively established. This 1973 paper explores the possibility that it could have been caused by a tiny black hole passing through the Earth. The exit would have taken place around the North Atlantic, but there is no known evidence of an exit explosion similar to the entry.

Not only is it easy to compare apples and oranges, but “it is apparent … that apples and oranges are very similar”, according to a 1995 paper by Scott Sandford in the Annals of Improbable Research. A larger, readable version of figure 2 can be found here.

Skolem’s paradox: it is possible to prove within first-order ZFC that there exists an uncountable set; but the Löwenheim-Skolem theorem shows there exists a model of ZFC within which all sets are countable.

While the British government never granted a “licence to kill”, they did issue licenses to crenellate.

Videos

Feeling galvanized to vote in the upcoming election? You should know: the word galvanize comes from the Italian scientist Luigi Galvani due to his experiments in animating dead animals by electrifying them. Just say no to animating the dead to vote! This and other interesting facts I learned from this video on the origin of the word “battery”; lots of other excellent science history can be found on that channel.

Two Mitchell and Webb skits which are about themselves: Behind the scenes: the script and The man with the wig skit.

The evolution of bacteria when exposed to antibiotics.

Images

Rib vortices behind a breaking ocean wave. I wasn’t able to find a satisfactory explanation for what causes them, but here are some more pictures, and below is an excellent video of one:

Boiling Lake, a constantly boiling (or near-boiling) lake in Dominica. The gasses emerging in the middle of the lake come from a fumarole somewhere underwater. The area is hazardous due to noxious, volcanic gas and small jets of invisible steam. Source of the above image and others.

hmmmmmmmmm…!

(Most likely the movie description was edited later. All of the reviews on imdb are bots except one human giving it one star. The only evidence of it on youtube is a trailer that appears to be made for a different movie by a different indie studio.)

Support big name candidates by donating to down-ballot candidates

2020 October 08

originally posted on facebook

Something I wish I had been aware of already, which I learned from this astute article (from Chris K.):

Big name candidates in the US have long been saturated with political donations, and are not legally allowed to redirect funds to other campaigns. As a consequence, by far the most effective way to support big name candidates is to donate to down-ballot candidates with overlapping constituencies. Many of these state-level candidates face a severe funding deficit against their Republican opponents!

The article walks through specific examples of this and has links to donate to recommended down-ballot candidates.

While Biden must win to prevent further sliding into an autocracy, 538 gives Democrats only a 68% chance of winning the Senate, which we need to do if there is to be any forward progress in the next 2 years. (And I imagine such forward progress is necessary to persuade any of the milquetoast centrists to actually stick with the Democrats 4 years from now.) At this point, investing in the Senate (via donating to House and state candidates) seems a much more critical target for funds.

https://idlewords.com/2020/09/effective_political_giving.htm

Representation of voters in the Supreme Court confirmation process

2020 October 06

There is now an updated post following the confirmation of Barrett.

Justices of the US Supreme Court are not selected directly by the people, but nominated by the president and then confirmed by a majority of senators. The president is themself elected by the members of the electoral college, who are elected by the people, as are senators. These layers of indirect selection create the opportunity to distort the representativeness of the end result, as we have become acutely aware in recent decades.

To measure this distortion, I’ve computed the “implied” popular vote for recent justices by propagating forwards the actual votes cast by Americans through the layers of indirection. This gives two results, corresponding to the nomination and the confirmation. For the nominator, this is simply the popular vote: percentages reported below are the two-party percentage, i.e., votes to third parties are ignored and only the winner and runner-up are compared. For the senate, as confirmation requires more ‘Yea’ votes than ‘Nay’ votes, I chose to compare the number of votes for senators who voted ‘Yea’ to the number of votes for senators who voted ‘Nay’; votes for senators who did neither were ignored. (Many, but not all, senators who did not vote have their intended votes recorded in the congressional record.) Votes against senators were also ignored; there are several different ways these could be included but I felt it made more sense not to use them. Another alternative would be to consider the number of people each senator represents, rather than the votes they received.
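For concreteness, here is a minimal sketch of the confirmation-side computation; the data structures are hypothetical placeholders rather than those of my actual analysis code:

    def implied_vote(votes_received, yea, nay):
        # Implied popular vote for a confirmation: votes received by the
        # senators voting 'Yea', as a fraction of the votes received by all
        # senators who voted. votes_received maps each senator to the number
        # of votes they got in their most recent election; yea and nay are
        # sets of senators. Senators who did not vote are absent from both.
        yea_votes = sum(votes_received[s] for s in yea)
        nay_votes = sum(votes_received[s] for s in nay)
        return yea_votes / (yea_votes + nay_votes)

For Kagan in 2010, for example, this gives 75861452 / (75861452 + 37123012), or about 67.14%, matching the table below.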

I’ve graphed these percentages for confirmations since 1976, the earliest I have data on the number of votes each senator received. Not included is Robert Bork, whose nomination was rejected in 1987 by 42 - 58. The solid line shows the two-party popular vote for each presidential election; along this line are squares when justices were confirmed. Circular data points are the corresponding implied popular vote for the senators that confirmed them, with the number of ’Yea’s given below.

We see in the figure that nominations became significantly more contentious by 2005; while there have always been contentious confirmation fights, prior to this time most nominees were confirmed without significant dispute, and indeed before 1967 most confirmations were made by simple voice vote. Ignoring Rehnquist’s 1986 nomination for chief justice as he was already an associate justice at the time, we also notice that Republicans appoint justices 50% more frequently than Democrats in the time interval shown; had Obama’s third nomination not been nullified, this would be near parity instead.

Also apparent is that senators voting to confirm Democratic-appointed justices are supported by many more people than those confirming Republican-appointed justices: compare, for example, Elena Kagan, who was confirmed in 2010 by 63 - 37 with 67% implied vote, to John Roberts, who was confirmed in 2005 by 78 - 22 with 64%. In fact, every Democratic-appointed justice currently on the court had a higher implied vote than every Republican-appointed justice, with four of those five Republican appointees not even reaching 50%. Clarence Thomas is the only current justice whose Republican nominator received a larger share of the popular vote than a Democrat who appointed a current justice: Bush’s 53.90% exceeds Clinton’s 53.45% by about 0.5%.

I’ve included the full table of each confirmation:

Justice Year Nominator Nominator% Senate vote ‘Yea’ votes ‘Nay’ votes ‘Yea’%
Marshall 1967 Johnson 61.34% 69 - 11 - - -
Burger 1969 Nixon 50.41% 74 - 3 - - -
Blackmun 1970 Nixon 50.41% 94 - 0 - - -
Powell 1971 Nixon 50.41% 89 - 1 - - -
Rehnquist 1971 Nixon 50.41% 68 - 26 - - -
Stevens 1975 Ford 61.79% 98 - 0 - - -
O’Connor 1981 Reagan 55.31% 99 - 0 81938226 0 100.00%
Rehnquist 1986 Reagan 59.17% 65 - 33 52484264 34602506 60.27%
Scalia 1986 Reagan 59.17% 98 - 0 87086770 0 100.00%
Kennedy 1988 Reagan 59.17% 97 - 0 79603513 0 100.00%
Souter 1990 Bush 53.90% 90 - 9 76500215 11844310 86.59%
Thomas 1991 Bush 53.90% 52 - 48 35475831 44253820 44.50%
Ginsburg 1993 Clinton 53.45% 96 - 3 88261651 2034999 97.75%
Breyer 1994 Clinton 53.45% 87 - 9 81479894 6195598 92.93%
Roberts 2005 Bush 51.24% 78 - 22 76870777 43929082 63.63%
Alito 2006 Bush 51.24% 58 - 42 59162228 60126394 49.60%
Sotomayor 2009 Obama 53.69% 68 - 31 86633780 30182701 74.16%
Kagan 2010 Obama 53.69% 63 - 37 75861452 37123012 67.14%
Gorsuch 2017 Trump 48.89% 54 - 45 54760599 76494514 41.72%
Kavanaugh 2018 Trump 48.89% 50 - 48 53364281 76883828 40.97%

The current justices are in bold. The ‘Yea’ and ‘Nay’ columns indicate the total number of votes received by the corresponding senators. William Rehnquist appears twice as he was appointed as associate justice in 1971 and then chief justice in 1986.

Note that some senators have been appointed to their position, and therefore received zero votes. Here is a list of every such senator that influenced my result:

Justice Year Senator State Vote
Kavanaugh 2018 Cindy Hyde-Smith Mississippi Yea
Kavanaugh 2018 Jon Kyl Arizona Yea
Kavanaugh 2018 Tina Smith Minnesota Nay
Gorsuch 2017 Luther Strange Alabama Yea
Kagan 2010 Michael Bennet Colorado Yea
Kagan 2010 Roland Burris Illinois Yea
Kagan 2010 Kirsten Gillibrand New York Yea
Kagan 2010 Carte Goodwin West Virginia Yea
Kagan 2010 Ted Kaufman Delaware Yea
Kagan 2010 George LeMieux Florida Nay
Sotomayor 2009 Michael Bennet Colorado Yea
Sotomayor 2009 Roland Burris Illinois Yea
Sotomayor 2009 Kirsten Gillibrand New York Yea
Sotomayor 2009 Ted Kaufman Delaware Yea
Alito 2006 Bob Menendez New Jersey Nay
Breyer 1994 Harlan Mathews Tennessee Yea
Ginsburg 1993 Harlan Mathews Tennessee Yea
Thomas 1991 John Seymour California Yea
Souter 1990 Daniel K. Akaka Hawaii Nay
Souter 1990 Dan Coats Indiana Yea
Kennedy 1988 David Karnes Nebraska Yea
Scalia 1986 Jim Broyhill North Carolina Yea
Rehnquist 1986 Jim Broyhill North Carolina Yea
O’Connor 1981 George J. Mitchell Maine Yea

I chose not to do any kind of ad hoc adjustment to remove this noise. Looking at the table, it seems likely that the main effect of these appointments is to understate the degree of popular support received by Elena Kagan in 2010 and by Sonia Sotomayor in 2009, and to a lesser extent the degree of support for Clarence Thomas in 1991.

Also note that, as California uses a jungle primary system, senator Kamala Harris ran in 2016 against an opponent from her own party. She received 7542753 votes while her opponent received 4701417 votes; together this was a much larger share than all the Republicans in the primary had received combined. The use of the jungle primary system thus likely understated the number of votes Harris would have otherwise received; she went on to vote ‘Nay’ on the confirmations of Neil Gorsuch and Brett Kavanaugh. California senator Dianne Feinstein likewise won against a Democratic opponent in 2018, but has not voted on any confirmations since then.

Washington’s jungle primary system has not yet resulted in any general elections between opponents in the same party. Louisiana has long had a complicated jungle primary system that makes it difficult to assess the effect it has on the votes (in some years, there was no general election at all), but the number of votes at stake is much smaller than in California or other states.

When I undertook this analysis, I expected the requisite data to be easily found, so that the project would take only a few hours. While the analysis was simple enough, I spent many, many hours dealing with the data. My source (from here) of election data had numerous omissions and errors, did not include any appointed senators, and did not record the information needed to determine which senator was replaced in each election. It seems likely there were other errors in the data I did not find. I was able to use the official congressional record for nominations of justices, but much of it was not in machine-readable format and had to be manually processed (and there were a few trivial errors). Ultimately it would have been far easier to have just written a scraper to get the data from Wikipedia (which was done to generate some datasets I saw), but I would be uncomfortable using that as a source.

My code is quite messy due to the repeated changes I had to make as I discovered more problems with my data. Input files are here and here, the former being downloaded directly from the MIT election data linked above and the latter having been manually processed.

2020 August 12

Music

Spring Awakening - Don’t do sadness / Blue wind (youtube, spotify)

Sigur Rós - Glósóli (youtube, spotify)

Arcade Fire - Reflektor (youtube, spotify)

Ayreon - E = mc^2 (youtube, spotify)

Voting by mail? This spreadsheet contains links, for each state, to information about registering to vote by mail and where to drop off your ballot in person.

Leonidas of Rhodes held the record for most Olympic gold medals for 2159 years, during which approximately 161 Olympic games were held. He had four medals in each of the stadion (200 meter race), diaulos (400 meter race), and hoplitodromos (400 meter race with helmet and shield). The only person to exceed Leonidas’s record is Michael Phelps, who beat it in 2008 and now has 23 Olympic gold medals.

The Catholic Diocese of Orlando is not the largest Catholic diocese in the world, but may be the largest in the universe. “Bishop Borders explained that according to the 1917 Code of Canon Law (in effect at that time), any newly discovered territory was placed under the jurisdiction of the diocese from which the expedition that discovered that territory originated. […] Since Cape Canaveral, launching site for the Apollo moon missions, was in Brevard County and part of the Diocese of Orlando, then in addition to being bishop of 13 counties, he was also bishop of the moon.”

The Iverson bracket is very handy notation that allows for formal algebraic manipulation of propositions within an equation. It is defined by:

 [P] = \begin{cases} 1 & \text{if } P \\ 0 & \text{if not } P \end{cases}

This notation is cleaner than and generalizes the indicator function 1_A(x) = [x \in A] and the Kronecker delta \delta_{ij} = [i = j].
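As a quick illustration, since Python’s bool is a subclass of int, a proposition already evaluates to 0 or 1 and can be used directly as an Iverson bracket (a minimal sketch):

    # The Kronecker delta as an Iverson bracket: [i = j].
    def kronecker_delta(i, j):
        return int(i == j)

    # Summing Iverson brackets counts how often a proposition holds:
    count_even = sum(x % 2 == 0 for x in range(10))  # sum of [x is even] over 0..9, i.e. 5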

“33% certainty? What tipped you off? Was it the 16 black kings, or the 12 knights on the board?”

As a rare example of proper planning and foresight, the Canadian town of Lemieux was abandoned in 1991 after soil testing revealed it was built atop unstable Leda clay, which can liquefy when saturated with water. Indeed, two years later, heavy rains led to a large landslide just outside the town. The soil testing was prompted by a deadly Leda clay landslide that consumed the Canadian town of Saint-Jean-Vianney in 1971. As a less impressive example of foresight, the Saint-Jean-Vianney landslide was presaged for several weeks by cracks in the streets, houses sinking as much as six inches into the soil, and inexplicable thumping and the sound of running water coming from underground.

Weston, Illinois was a small village embroiled in a legal fight with the county over its incorporation, which failed in 1964. The village tried again to incorporate by seeking support from the US Atomic Energy Commission, which led to its choosing in 1966 to be removed to make space for the Fermilab facilities.

The Veneto regional council, located in Venice, flooded for the first time in its history 2 minutes after voting to reject measures to counter climate change.

Images

Convection cells of the sun compared to North America. Brighter regions are hotter.

Remote north-western Australia. Source

Problems not solved by taxing the rich

2020 August 08

Rather than lengthily discussing all the different worthwhile goals that could be accomplished or helped by taxing the rich, I thought it’d be easier to enumerate the social problems that could not be substantially ameliorated by taxing the rich. A complete list of such problems has been compiled below.

Social problems that could not be substantially ameliorated by taxing the rich

Using the rolling shutter effect to time lightning leaders

2020 July 07

Here is an image I captured during a lightning storm just after midnight in June:

The horizontal banding is due to the rolling shutter effect. The camera’s sensor recorded the top portion of the image during the ambient night-time conditions, while the bottom was recorded during a lightning strike (the bolt is not within field of view). The blue and green pixels were added as part of a processing step to identify lightning flashes.

While I have many similar images exhibiting horizontal banding, this is the only one featuring three levels of illumination. I suspect, though I don’t know, that the section of intermediate illumination is due to light from a lightning leader, which is the slower, downward, branching, initial phase of a lightning bolt. When one of the branches of the stepped leader touches the ground, the much faster and brighter return bolt happens, which can be seen at the bottom of that frame and for the top half of the next frame of the video:

Therefore, if I know the speed of the data readout of the camera’s sensor, I can calculate how long the stepped leader lasted. As I could not find technical data on the CMOS sensor of the camera (or of any consumer-grade camera), I decided to measure the readout speed experimentally. By having the camera record video while it is placed on its side – not the bottom – and a rectangular object is dropped through its field of view, it is possible to induce the rolling shutter effect and have the rectangle appear skewed in the recording.

Comparing consecutive frames of the video gives the velocity of the object in units of pixels per frame. One side of the rectangle appears closer to the ground because it is recorded after the other side of the rectangle; the height difference gives the number of pixels the object traveled in the time it took for the sensor to record the width of the object. Scaling up according to the fraction of the width of the screen filled by the object, and dividing by the object’s velocity and by the fps of the recording, gives the time it takes to read out a single frame, which I estimated to be somewhat more than 10 ms. This is consistent with the upper limit of 17 ms readout, as the camera is capable of recording at 60 fps, and readout time does not depend on frame rate (though for some cameras it may depend on the resolution of the video).
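In code, the estimate is a short calculation; the measured values below are hypothetical stand-ins, chosen to land near the figure I use next, not my actual measurements:

    frames_per_second = 60     # frame rate of the test recording
    velocity_px = 36.0         # object speed, pixels per frame (hypothetical)
    skew_px = 7.8              # height difference between the object's two sides (hypothetical)
    object_width_frac = 0.30   # fraction of the screen width the object spans (hypothetical)

    # skew / velocity = frames elapsed while the sensor read across the object's
    # width; scale up to the full screen width, then convert frames to seconds.
    readout_time = (skew_px / velocity_px) / object_width_frac / frames_per_second
    print(f"full-frame readout: {readout_time * 1e3:.1f} ms")  # -> 12.0 ms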

Using a value of about 12 ms for the readout time of the sensor, it appears that the stepped leader lasted for about 4 ms: that is, it took 4 ms for one of the branches to navigate from the cloud to the ground. (By comparing with the next frame it seems the return bolt persisted for 31 ms. I’ve found that cloud-to-cloud lightning often persists for multiple frames, so I believe this bolt was cloud-to-ground.) Text sources online say that a typical time is 10 - 20 ms, and the two timed videos I was able to find show a stepped leader taking 14 ms and 0.6 ms, so my timing is at least loosely consistent.

Based on this timing, it seems unlikely that this could be two unrelated bolts: over 20 minutes of video, to see two bolts within 4 ms of each other would take on the order of \sqrt{20 \text{ minutes} / 4 \text{ ms}} \approx 550 lightning strikes (a birthday-problem estimate: k strikes form about k^2/2 pairs, so a near-coincidence becomes likely once k^2 is on the order of 20 minutes / 4 ms), which is far more than I recorded in this time. I also find it unlikely that it represents two return bolts of the same strike. I’ve recorded secondary return bolts to be dimmer and 50 - 150 ms after the initial return bolt, consistent with what I’ve read online; however this doesn’t rule out the possibility of a very fast return bolt or some other scenario entirely.

Because the vertical span illuminated by the lightning leader is roughly the same height as, or smaller than, the typical cloud-to-ground distance seen by my camera from this vantage point, if the timing of this lightning leader is typical then it would be impossible for my camera to capture a downward-moving lightning leader in motion. The lightning leader moves from the cloud to the ground faster than the camera’s sensor can read out the same span. If I wanted to capture a leader in motion I would need to place the camera upside-down, so that as the leader is going cloud-to-ground the sensor is going ground-to-cloud, and they could meet halfway. Unfortunately the top of my camera is not flat so this is impractical.

Congressional Apportionment I: Observations

2020 June 26

  1. Part I: Observations
  2. Part II: Theory (unfinished)

With the US Senate and electoral college heavily biased against states with larger populations, there sometimes arises the misconception that the apportionment of seats in the US House of Representatives is similarly biased. A brief glance at the number of people in each district for every state shows that there is no such significant bias for or against larger states. (Later, we will consider the matter in more detail and explore the tiny biases that do exist.)

We leave aside questions such as US citizens who have no voting representation in Congress, how districts are drawn within each state, and how the population of the states is determined, and only examine how many seats in the House each state is allocated.

US Congressional apportionment algorithm

A brief overview of US Congressional apportionment and its history can be found in this CRS report. Since 1941, the fixed number of seats in the House has been apportioned amongst the states after each decadal census according to the Hill method of apportionment. Let p_i be the population of the ith state, and for positive integers j define

 \alpha_{i, j} = \frac {p_i^2}{j(j - 1)}.

Then the Hill method apportions n seats by taking the n largest of the \alpha_{i, j}, with state i gaining one seat for each \alpha_{i, j} so taken. As law requires that each state is allocated at least one seat, we require that the \alpha_{i, 1} are all taken before any \alpha_{i, j} with j > 1 (consistent with reading \alpha_{i, 1} as infinite, since j(j - 1) = 0 when j = 1).
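Operationally this is a standard priority-list computation; here is a minimal sketch, using the equivalent priorities p_i / \sqrt{j(j - 1)}, which have the same ordering as the \alpha_{i, j}:

    import heapq
    from math import sqrt

    def hill_apportion(populations, n_seats):
        # Hill method: every state starts with one seat; each remaining seat
        # goes to the state with the largest priority p / sqrt(j (j - 1)),
        # where j is the seat that state would gain next.
        seats = {state: 1 for state in populations}
        heap = [(-pop / sqrt(2 * 1), state) for state, pop in populations.items()]
        heapq.heapify(heap)
        for _ in range(n_seats - len(populations)):
            _, state = heapq.heappop(heap)
            seats[state] += 1
            j = seats[state] + 1  # the next seat this state could gain
            heapq.heappush(heap, (-populations[state] / sqrt(j * (j - 1)), state))
        return seats

Fed the 2010 census populations and 435 seats, this should reproduce the actual 2010 apportionment.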

Equivalently, if n_i is the number of seats allocated to state i, then the Hill method minimizes

 \sum_i \frac {p_i^2}{n_i}.

Let p be the total population. Then this is equivalent to minimizing

 \sum_i n_i \left( \frac {p_i}{n_i} - \frac pn \right)^2.
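To see the equivalence, expand the square and use \sum_i p_i = p and \sum_i n_i = n:

 \sum_i n_i \left( \frac {p_i}{n_i} - \frac pn \right)^2 = \sum_i \frac {p_i^2}{n_i} - \frac {2p}n \sum_i p_i + \frac {p^2}{n^2} \sum_i n_i = \sum_i \frac {p_i^2}{n_i} - \frac {p^2}n.

The final term \frac {p^2}n does not depend on the apportionment, so minimizing either expression selects the same allocation.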

The quantity \frac pn is sometimes called the ideal district size; it equals the population-weighted harmonic mean of the district sizes \frac {p_i}{n_i}, which in general is not equal to the arithmetic mean of the district sizes.

Is the 2010 apportionment biased?

Define the voting power of each person to be the number of House seats their state has, divided by the population of the state; each person in state i has the same voting power v_i = \frac {n_i}{p_i}, which is the reciprocal of the district size within that state. The average person’s voting power is v = \frac np, and so does not depend on the choice of apportionment. In the US after 2010, v = 1.407 \cdot 10^{-6}, corresponding to an ideal district size of \frac 1v = 710767 people.

We consider the 2010 House apportionment, and are interested in whether a person’s voting power v_i depends in some way on the size p_i of their state. A simple test is to do a linear regression of v_i against the independent variable p_i; while there are p = 309183463 data points, each person in the same state has the same data, so we can simply do a weighted linear regression on 50 data points.

However, almost all of the variation in voting power occurs in the smallest states, which are most subject to the restriction that each state has an integer number of seats. The state with the lowest voting power is Montana, having 1 seat for 994416 people and thus 1.006 \cdot 10^{-6} voting power. The next larger state is Rhode Island, having 2 seats for 1055247 people and thus 1.895 \cdot 10^{-6} voting power, the largest of any state and 88% more than Montana.

A linear regression might pick up on these significant variations amongst the small states, but when we speak of bias in House apportionment we are often most interested in the small states collectively compared to the large states collectively. To measure this, we sort the people by the size of the state they are in, and group together those in the first half as the “small state” sample and those in the second half as the “large state” sample. (The median person is in Georgia, which will thus lie partially in the small states and partially in the large states.) Then we find the difference in the average voting power of the people in these two samples: this equals the difference in number of seats those states received, divided by \frac p2.
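Both measures can be computed in a few lines. The following sketch (not the code used for the figures) takes parallel arrays of state populations and seat counts:

    import numpy as np

    def voting_power_bias(pops, seats):
        # Returns (voting power gap, regression slope) for an apportionment.
        pops = np.asarray(pops, dtype=float)
        seats = np.asarray(seats, dtype=float)

        # Weighted linear regression of voting power against population, with
        # one data point per person; np.polyfit's w multiplies residuals
        # before squaring, hence sqrt(population).
        slope, _ = np.polyfit(pops, seats / pops, 1, w=np.sqrt(pops))

        # Voting power gap: sort people by the size of their state, split them
        # into halves, apportioning the median state between both samples.
        order = np.argsort(pops)
        pops, seats = pops[order], seats[order]
        half = pops.sum() / 2
        small_seats, cum = 0.0, 0.0
        for p_i, n_i in zip(pops, seats):
            frac = min(max((half - cum) / p_i, 0.0), 1.0)  # fraction of the state in the small half
            small_seats += n_i * frac
            cum += p_i
        gap = (seats.sum() - 2 * small_seats) / half  # large-half average minus small-half average
        return gap, slope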

The results can be seen in the figure:

2010 US House apportionment. For each state, we plot its population p_i and voting power \frac {n_i}{p_i}. The black line shows the average voting power \frac np. In green is the average voting power within the small states and within the large states. In red is a linear regression of voting power against population.

The large states have an average voting power 1.255 \cdot 10^{-8} larger than the small states, an improvement of about 0.9%; we call this difference the voting power gap. The slope 4.885 \cdot 10^{-16} of the regression line is in units of voting power per person (that is, per person squared), so we must multiply by a population to make it the same units as the voting power gap. We will find that multiplying by the population of New York, 19421055, makes a good comparison with the voting power gap, so the scaled voting power slope is 9.486 \cdot 10^{-9}.

Thus we see that, by either measure of bias that we investigated, the 2010 apportionment is biased a very small amount in favor of large states.

Different House sizes

Of course, if the House had only 50 seats, then each state would have one seat, and the apportionment would be massively biased in favor of small states. Clearly a bias exists for small numbers of seats, while at 435 seats it appears to be insignificant. We investigate how the voting power bias changes as the number of seats in the House changes.

Voting power gap (green) and scaled voting power slope (red, scaled by the population of New York) with the Hill method of apportionment for each number of seats from 50 to 600.

We see that the two measurements of voting power bias substantially agree with each other, up to a constant scaling factor. While there is quite significant bias for small states when the number of seats in the House is small, this rapidly decreases with increasing numbers of seats. While it happens that the 2010 census data apportioned with 435 seats favors large states very slightly over small states, this is a bit of an anomaly, as the Hill method tends to favor small states slightly. Looking at seat numbers from 400 to 500, we see that at these sizes with the 2010 census data there is typically a voting power bias of about 10^{-8} in favor of small states, which is a bit less than 1% of average voting power.

This is an advantage of one seat for every 100 million people, which put into absolute terms is about 1.5 seats. As the number of seats is increased further into the thousands and beyond, there continues to be a slight but diminishing bias in favor of small states. As the number of seats increases, the total voting power of all people increases, while the voting power gap decreases, and thus the proportional bias relative to total voting power decreases quite rapidly. That is, even as the number of seats increases, the difference in how many of those seats go to small states or large states decreases.

In part 2, we will consider other apportionment methods and their theoretical properties.

Why poll accuracy does not depend on population size

2020 May 28

A common source of confusion about polls is that poll accuracy depends on the number of people who answered the poll (larger polls are more accurate) but does not depend on population size: a poll of 1000 people in Nebraska has the same error about the typical Nebraskan as a poll of 1000 Americans has for the typical American.

Let’s carefully set up the scenario we are considering. Suppose some unknown proportion p of a population answers “yes” to a yes/no question of interest, and we randomly sample N people from this population and determine their responses. We calculate what fraction \overline p of the responses were yes, and use \overline p as an estimate for p: hopefully \overline p is near the correct value p. While we don’t know the true error \overline p - p, as we don’t know p, a common way of describing the typical error is with the standard deviation \sigma of \overline p, which in our scenario equals

 \sigma = \sqrt{ \frac {p(1 - p)}{N} } \leq \frac 1{2 \sqrt{N}}.

Note that \sigma is highest when p = 0.5, so we can take that as the worst case for an upper bound on \sigma. (The standard deviation describes the typical absolute error \overline p - p; the relative error is highest when p is near 0 or 1. For example if p = 0.0001, it would be easy to over-estimate p by a factor of 10 – but that is a tiny absolute error.) Polls typically report a “margin of error” which is equal to 1.96 \sigma and corresponds to the 95% confidence interval. The 1.96 roughly cancels the 2, so one can quickly estimate the margin of error of a poll as the reciprocal of the square root of the poll size: a poll of 1000 people should have about a 3% margin of error.
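As a quick sketch of this rule of thumb:

    from math import sqrt

    def margin_of_error(n, p=0.5):
        # 95% margin of error (1.96 sigma) for a simple random poll of n responses.
        return 1.96 * sqrt(p * (1 - p) / n)

    margin_of_error(1000)         # 0.031: about the 3% quoted above
    margin_of_error(1000, p=0.3)  # 0.028: slightly smaller away from p = 0.5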

While we see that the worst-case margin of error depends only on the number of people being polled, many intuitively expect the population size to matter. We give a few such intuitive arguments here, although of course focusing on each one makes its flaws apparent. Then we try to build an intuition for the correct statement, which hopefully yields a better understanding of the difficulties of polling and under what circumstances it can be incorrect.

  1. “It should be more difficult to get information about a larger population, so more poll responses are needed.” It is true that it is more work to get information about a larger population, but rather because each individual response is more work, not because more responses are needed.

  2. “An individual has less chance of being polled, and thus influencing the result, if the poll is of a larger population.” Likewise, an individual has less influence on the larger population’s average opinion.

  3. “All Nebraskans are Americans: so if I need 1000 Nebraskans to learn about Nebraska, and 1000 Americans to learn about the US, why can’t I re-use my Nebraska poll results as a result for the whole US?” Nebraskans are Americans, but they are not randomly sampled Americans. The poll size is fine, but the random selection is not.

  4. “What if I conduct a poll of Americans, and it happens that all my random selections are from Nebraska: surely my results are more informative about Nebraskans than Americans?” It is exceptionally rare for such an event to occur, and the margin of error of a poll only describes its typical error.

  5. “What if I conduct a poll of Americans, and I re-use the results as a ‘poll’ of the poll-respondents: surely my results are more informative about them than about all Americans?” To randomly sample from a population, the population must be a defined group before the sampling process occurs: so “group of people who responded to my poll” is not a population in the sense of statistics.

  6. “How can a poll say something about my opinion if I wasn’t asked?” How indeed? I find this the most compelling incorrect argument. Of course, in a very literal sense this is no objection: the poll results do not claim to say anything about your opinion, but of the average opinion of the whole population. Perhaps we can rephrase this objection as “How can a poll say anything about the average opinion of the group of people who were not asked?”. Our facile response no longer applies, as the-group-of-people-not-polled is so close to the whole population as to have nearly the same average. We could give the same response as in point 5: the-group-of-people-not-polled is not a “population” in the statistical sense. However a better answer, I feel, is that in a certain sense a proper poll does in fact “reach” everyone in the population, whether they know it or not. Hopefully the next section makes this perspective clear.

The common link between the errors with each of these intuitive objections (except point 2) is a misunderstanding of the process of random sampling. We elaborate in the next section.

Random sampling

We are so inundated with poll results that we don’t consider that conducting a poll correctly is very difficult work, as in practice it is impossible to randomly choose someone to poll. Without the ability to truly randomly poll people, pollsters must use poor approximations of randomness to publish any result at all, and thus we have widely varying quality of pollsters according to how many shortcuts they take, what sort of shortcuts these are, and how good pollsters are at adjusting their results to fix the errors these shortcuts introduce.

In practice, the best way to truly randomly select a person from a population would usually be to first make a list of all people in the population, and then choose from this list. However, even the US government in its official decadal census cannot make a list of all people in the US: there were approximately 6.0 million imputations added to the 2010 census, representing people who were not on the census but whose existence was inferred in other ways. In fact, the US uses “randomized” surveys to improve the accuracy of its census, and based on these surveys estimates that 16.0 million people were omitted by the 2010 census: some number of these omissions “may be attributed” to the 6.0 million imputations, but how many is unknown.

This is the key point: a randomly selected person from a group of people must have the potential to have been any member of that group. So to select a random American, the pollster must engage in some process that could, in theory, have resulted in the selection of any American. This is an enormous and insurmountable challenge for a commercial pollster aiming to conduct multiple polls every week, as even the US government’s once-a-decade attempt to make a list of all Americans still fails to reach at least 16 million people.

Thus a poll of the US is much harder than a poll of Nebraska, because the former needs the potential to reach any American, rather than any Nebraskan.

Maybe, if polling a random person within a state isn’t as much work, we can make polling a random American easier by first choosing a random state, and then polling someone in that state. Of course, the states vary in population, so the randomly chosen state should be weighted based on their populations. But how do we know how many people are in each state? Ultimately, knowing the number of people in a state relies (directly or indirectly) on some kind of census or poll or analogous process previously conducted in that state. (This hypothetical illustrates some of the ways pollsters are able to partially re-use previous work to improve accuracy of future polls.)
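A sketch of the weighted first stage of that hypothetical, with rough placeholder populations:

    import random

    # Choose a state with probability proportional to its population; choosing
    # a person within the state is then delegated to some state-level process.
    state_pops = {"Nebraska": 1_900_000, "California": 39_500_000, "Wyoming": 580_000}

    def random_state():
        states = list(state_pops)
        weights = [state_pops[s] for s in states]
        return random.choices(states, weights=weights)[0]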

Polling accuracy

Returning to the original scenario, the chance p that a selected pollee’s opinion is “yes” does not depend on the number of people who have opinions – that is, the population size. Each additional response has a probability p of being “yes”, and thus gives the same amount of information about the value of p. And regardless of the size of the population, finding the value of p tells us the same amount; it is just that, for a larger population, knowing p tells us less about each individual in the population.

For the purpose of completeness, we give a brief outline of a proof that the typical error \sigma_N of a poll of size N scales like \frac 1{\sqrt N}. Suppose \overline {p_1}, \overline{p_2} are the averages of two polls each of N people, and \overline {p} = \frac {\overline{p_1} + \overline{p_2}} 2 is the average of the two taken in aggregate as a single poll of 2N people. If p is the true probability of a person responding “yes”, then we have

 4 (\overline p - p)^2 = (\overline p_1 - p)^2 + (\overline p_2 - p)^2 + 2 (\overline p_1 - p) (\overline p_2 - p).

The term on the left is always nonnegative, with a typical value of about 4 \sigma_{2N}^2. Similarly, the first two terms on the right are always nonnegative, with typical values of about \sigma_N^2 each. As \overline p_1 - p is symmetrically distributed around 0, and the two polls are independent of each other, the last term is equally likely to contribute a positive or negative value to the equation. Thus we have that 4 \sigma_{2N}^2 = 2 \sigma_N^2, i.e. 2 \sigma_{2N}^2 = \sigma_N^2, so that \sigma_N must scale like \frac 1{\sqrt N}.

(Another way to show that last term does not contribute is to define \overline p_1' = 2p - \overline p_1 and \overline p' = \frac {\overline p_1' + \overline p_2} 2; by symmetry the first is distributed like \overline p_1, and then as the two polls are independent the second is distributed like \overline p. Now repeat the calculation with these definitions: you get the same equation, but the sign of the last term is negative. Adding the two equations gives the desired result. However if one is willing to do all that, one might as well just use the definition of variance and give a formal proof.)
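For reference, that formal proof is a one-line variance computation. Writing the poll average as \overline p = \frac 1N \sum_{k=1}^N X_k, where the X_k \in \{0, 1\} are independent responses with \operatorname{Var}(X_k) = p(1 - p), we have

 \sigma_N^2 = \operatorname{Var}(\overline p) = \frac 1{N^2} \sum_{k=1}^N \operatorname{Var}(X_k) = \frac {p(1 - p)}N,

in agreement with the formula given at the start.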

Note on very small populations

What if the population is so small that the “poll” covers the whole population – surely the error is zero then?

In our above discussion, we’ve implicitly assumed that each poll respondent is found independently of the others, so there is a small chance that two of the responses on a poll were given by the same person. Under this assumption, accuracy truly does not depend on population size, and at small populations it simply becomes very likely that some people are polled multiple times. For large populations, the chance of polling the same person becomes tiny.

Real-world polls, whenever feasible, will attempt to make sure that the same people are not polled multiple times, and thus will have slightly higher accuracy at very small populations. This is only relevant when the population size is very close to the poll size, at which point it might be more apt to label the process an incomplete census instead of a random poll.

The unknown story of early German space exploration

2020 May 23

While Wernher von Braun is best known for his development of the V-2 rocket, his interest in rockets since the age of 13 had been to make space travel a reality, not to develop tools of war. After hearing that the V-2 rockets had struck London, he said “the rocket worked perfectly, except for landing on the wrong planet”. But while von Braun was content to work in earnest for the German military, not everyone working at the German rocket lab in Peenemünde was so eager, and the now-forgotten Geschenk Gaulmann continued work on the development of rockets for space exploration. Even as the military demands of World War II grew ever more pressing, Gaulmann maintained with undiminished optimism the imminent possibility of manned space flight to the moon and even the other planets, almost two decades before the US launched the Mercury program.

Layout of the V-2 rocket.

Only one rocket based on Gaulmann’s ideas, the C-4, was ever built; Gaulmann was just 19 at the time he joined the Peenemünders. The C-4 (nicknamed “das Maul”, whether for the gaping maw of its engine bells, or its ravenous appetite for fuel and money, I can only guess) was intended to be able to reach the moon, although it is unclear at this point if there was any intention or thought regarding landing on it rather than merely crashing into it. A number of innovations – boldly many for such an inexperienced rocket engineer – were made over its predecessors, most notably that it was the first modern-style multistage rocket to be built. This idea for a multistage rocket came from the “founding father of modern rocketry” (and mentor of von Braun and Gaulmann) Hermann Oberth, who had come up with it in his youth but had at the time lacked the resources to realize it. On account of his mentor, his venturesome exploits in rocketry, and his untimely end at a young age, Gaulmann is sometimes referred to as the “kid of modern rocketry”.

The first stage of the C-4 was based on the design of the then-proven V-2, though much larger and with four of the V-2’s engines instead of one. The four engines were fed by a single turbopump, so it seems unlikely they could ever have reached the fuel flow rates necessary for proper performance. Gaulmann made some concession to the far greater difficulty of reaching the moon, but rather than the modern solution of more intricate staging and more efficient engines and materials, he opted for the approach of ever more exotic fuels. The V-2 was fueled by “B-Stoff” (75% ethanol and 25% water), with liquid oxygen (“A-Stoff”) as its oxidizer. Instead, the fuel for the C-4 was C-Stoff, a highly toxic and hypergolic mixture of methanol, hydrazine hydrate, water, and potassium tetracyanocuprate(I). C-Stoff had been engineered to work with the oxidizer T-Stoff, which is 80% hydrogen peroxide in water. While T-Stoff is itself highly dangerous (“special rubberized suits were required when working with it, as it would react with most cloth, leather, or other combustible material and cause it to spontaneously combust”), this was apparently insufficient for Gaulmann, who blithely substituted N-Stoff, pure chlorine trifluoride. The chemical is colorfully described in Derek Lowe’s Things I Won’t Work With, where he writes “during World War II, the Germans were very interested in using it in self-igniting flamethrowers, but found it too nasty to work with” and then quotes John Clark’s “Ignition!” for more details:

It is, of course, extremely toxic, but that’s the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water – with which it reacts explosively. It can be kept in some of the ordinary structural metals – steel, copper, aluminium, etc. – because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminium keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes.

So far, so good. The payload of the first stage was then, instead of a warhead, the second stage, with the warhead’s detonation mechanism repurposed to time the ignition of the second stage. (Later the US would do similarly in the V-2 sounding rocket program, which replaced the warheads of captured V-2s with scientific instruments.) However, the weight limitations on the second stage were so severe that, after trimming it down practically to the bare metal, Gaulmann even removed the oxidizer, relying on an air-breathing engine which we would now recognize as a ramjet. (In retrospect, an aluminum can filled with C-Stoff is hardly so different from the warhead it replaced.)

Given the small size of the target and the difficulties of aiming the comparatively short-range V-2, Gaulmann endowed das Maul with three rows of six aerodynamic fins on the first stage and a further two rows of four fins on the second stage. Thus, compared to the V-2 and its four fins, the C-4 was a towering bladed monstrosity. Gaulmann had the nosecone painted bright red.

It may seem bold to try to fly a rocket to the moon with an air-breathing engine, but at the time of Gaulmann’s fateful experiments no scientific observations of the Earth’s upper atmosphere had been conducted, and the Germans believed it to extend much higher than we now know it does. The C-4 was to expend its fuel while within the atmosphere, reaching sufficient velocity to fly ballistically to the halfway point between the Earth and the moon, from which it was expected to fall (due to the moon’s gravity) the rest of the way. Whether or how it was thought to match the moon’s orbital velocity is not known.

Construction of the C-4 was rushed and safe practices were curtailed as Gaulmann’s superiors in the military became impatient and unimpressed with the project. The propellant, which ordinarily would have been siloed off-site, was instead housed in a small shed near the launch platform which had previously been used by the locals as a lepidopterarium. Das Gifthaus (the poison house), as the shed came to be bluntly referred to, was littered with scores of dead moths from the noxious fumes which leaked terribly. The leaks were much aggravated by a hole Gaulmann had driven through the Gifthaus’s side for easy transfer of propellant, right through the giant mural of a moth that had adorned the lepidopterarium.

The cherry on top of the inauspicious C-4 was its choice of payload, an unlucky horse given to Gaulmann by his grandparents for this purpose. Allegedly the horse earned this treatment by its aggressive tendency to bite anyone who would continue looking at it after it bared its teeth in warning – an ill omen if ever there was one. Officially the choice was made because the horse massed exactly the calculated maximum payload of the rocket, but I find it telling that after investing so much into the construction of the rocket, the German military was unwilling to fully commit to the folly by paying for a literal dead weight to go on top.

When it came time to test the rocket, most observers situated themselves in the regular observation tower, but Gaulmann decided to get a closer vantage point in the Gifthaus, taking advantage of the hole punched in the side. Thus in January of 1944, Gaulmann’s rocket – the rocket with the ambition to reach the moon, although in truth it lacked the delta-v to even enter low Earth orbit – was test launched. The C-4, in a fit of nominative determinism, exploded instantly. (A merciful ending for the horse, at least.) The fireball ignited the nearby Gifthaus, whose secondary explosion killed Gaulmann. Upon hearing about the death and destruction of Geschenk Gaulmann and his Maul, von Braun remarked (see e.g. from source, and also):

Lochen Sie nicht einem Gifthaus an der Motte. (Literally, roughly: “Don’t punch a poison house at the moth.”)

from which of course we get the popular saying

Don’t look a gift horse in the mouth.

The WHO-China joint covid-19 report

2020 March 03

originally posted on facebook

The best news I’ve seen recently is the report of the WHO mission to China to study covid-19. My two key takeaways from this and other expert opinions:

  1. The decline in cases in China is genuine and the outbreak is under control there. This is in part because China has employed approximately 10000 epidemiologists to trace contacts, and more than 40000 additional health care workers in the Wuhan area. Compare these figures to a cumulative total of 80000 people who have become ill with covid-19 in China! Note that of the 11 new cases confirmed in China outside of Hubei yesterday, 7 were imported from Italy.

  2. It is absolutely still possible for covid-19 to be contained without becoming a pandemic.

In light of this, I am all the more disappointed in the governments of Iran and the US which have failed to monitor the outbreak, failed to contain the outbreak, and covered up the extent of the outbreak. If covid-19 becomes a pandemic, it will be because of a failure to respond in a timely manner, and not because it was inevitable.

“Much of the global community is not yet ready, in mindset and materially, to implement the measures that have been employed to contain COVID-19 in China. These are the only measures that are currently proven to interrupt or minimize transmission chains in humans. Fundamental to these measures is extremely proactive surveillance to immediately detect cases, very rapid diagnosis and immediate case isolation, rigorous tracking and quarantine of close contacts, and an exceptionally high degree of population understanding and acceptance of these measures.”

https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.pdf

Nominate Marie Yovanovitch for the Profiles in Courage award

2020 February 08

Something you can do: nominate Marie Yovanovitch for the JFK Profiles in Courage award. Ordinarily the award goes to elected officials, but they do make exceptions. Nominations are accepted until February 15. You are welcome to mimic what I wrote, below, so long as you do not copy it verbatim.

I nominate Yovanovitch for her anti-corruption activities and her testimony before Congress.

Yovanovitch fought corruption during her service as ambassador to Ukraine, making her a target of harassment from US and Ukrainian officials who were benefiting from this corruption. While at a ceremony honoring a murdered Ukrainian anti-corruption activist she received information about tangible threats to her safety and was evacuated to the US. Despite these ongoing threats and receiving specific orders to the contrary, she testified before Congress at the impeachment inquiry even when many other US officials refused to do so. Her testimony was vital to the public interest but caused her to face further intimidation, including from Trump, and led to her being forced out of the foreign service in January 2020 after an exemplary, award-winning, 30-year career.

A book review review review

2020 January 23

Recently I read with interest Scott Alexander’s book review review of Dormin111’s book review of Lenora Chu’s book Little Soldiers, and I wanted to share my impressions of Alexander’s book review review for the benefit of prospective readers.

Alexander’s book review review opens with the highlights of the “plot” of the book review, which summarizes the “plot” of the non-fiction book, including such tantalizing details as:

When Lenora sat in on a kindergarten class, she witnessed an art lesson where the students were taught how to draw rain. The nice teacher drew raindrops on a whiteboard, showing precisely where to start and end each stroke to form a tear-drop shape. When it was the students’ turns, they had to perfectly replicate her raindrop. Over and over again. Same start and end points. Same curves. For an hour.

before Alexander segues into a comparison of the experiences described in depth by Chu with analogous systems in other countries. While the normative purpose of a book review review would be for the benefit of prospective readers of the book review, the book review review places its greater emphasis on this latter comparison and the author’s eventually unsuccessful attempts to elucidate the costs and benefits of running an economy by a hypothetical mash-up of Otto von Bismarck and Voldemort: in fact, he even explicitly says “But I want to use these excerpts as a jumping-off point”. The putative book review review serves more as a framing device for his own interest in continuing and contributing to the discussion begun in the book and book review, much as this book review review review is largely a framing device for exploring recursive sentence structures.

For me, the most salient experience in reading this book review review was not any sparked interest in the specific book review being reviewed, but rather the discovery of the genre of book review reviews, which I had not encountered before. My first point of comparison upon reading this book review review was book reviews, such as Alexander’s excellent book review of David Fischer’s Albion’s Seed, which I excerpt here:

INTERESTING QUAKER FACTS: […]

  1. They were among the first to replace the set of bows, grovels, nods, meaningful looks, and other British customs of acknowledging rank upon greeting with a single rank-neutral equivalent – the handshake.

  2. Pennsylvania was one of the first polities in the western world to abolish the death penalty.

  3. The Quakers were lukewarm on education, believing that too much schooling obscured the natural Inner Light. Fischer declares it “typical of William Penn” that he wrote a book arguing against reading too much.

If you have not read this book review of Albion’s Seed, I strongly suggest you put down the book review review and this book review review review to read it and learn about the cultural history of the early immigrants to the US and the residues of their influence in modern society.

However, on reflection, a much more natural comparison for book review reviews as a literary art form would be the genre of fictitious book reviews, such as Jorge Luis Borges’s Pierre Menard, Author of Don Quixote:

Up to this point […] we have the visible part of Menard’s works in chronological order. Now I will pass over to that other part, which is subterranean, interminably heroic, and unequalled, and which is also – oh, the possibilities inherent in the man! – inconclusive. This work, possibly the most significant of our time, consists of the ninth and thirty-eighth chapters of Part One of Don Quixote and a fragment of the twenty-second chapter. I realize that such an affirmation seems absurd; but the justification of this “absurdity” is the primary object of this note. (I also had another, secondary intent – that of sketching a portrait of Pierre Menard.)

Or perhaps, for an example that has been formative for me, Borges’s An Examination of the Work of Herbert Quain:

An indecipherable assassination takes place in the initial pages; a leisurely discussion takes place toward the middle; a solution appears in the end. Once the enigma is cleared up, there is a long and retrospective paragraph which contains the following phrase: “Everyone thought that the encounter of the two chess players was accidental.” This phrase allows one to understand the solution is erroneous. The unquiet reader rereads the pertinent chapters and discovers another solution, the true one. The reader of this singular book is thus forcibly more discerning than the detective.

Borges uses reviews of fictitious books as a framing device to present his clever ideas for structures of books without the herculean work of writing such a book or compelling the reader to suffer through the repetition and filler necessary to realize such a structure as actual text. I regret to find that the non-fiction^3 book review review discussed here does not compare favorably with these examples from Borges of fictional reviews of fictitious books, although that is likely a consequence of my choosing for comparison the highlights of the genre as written by an author I greatly enjoy.

The book review review’s review of the book review, as well as its discussion of the topics in the book and book review, may have been impaired by the author’s not having read the book that was reviewed by the book review that was reviewed by the book review review. How can he assess whether child torture has an effect on adult creativity when his understanding of Chu’s unique perspective may have been tinted by a game of Telephone? And, central to the purpose of a book review review, how can he assess whether the book review may have itself been impaired in some way in its assessment of the book? However, it is difficult for me to assess this potential impairment as I have read neither the book review nor the book.

8 / 10 would read again

The trial in and of the Senate

2020 January 21

Following the impeachment vote last month in the House, today the trial will proceed in the Senate.

With the Senate standing at bat, the House has lobbed the slowest, easiest pitch, the most blatantly guilty criminal, directly over the center of home plate and said: I dare you to miss.

For it is not really Trump who is on trial in the coming weeks, but the Senate itself as a democratic institution. With any child able to see Trump’s guilt, what remains to be decided is whether the Senate will be complicit. Will the senators perform their sworn duty to the constitution? Or will the Senate vote for its own irrelevancy?

To acquit would be an indelible stain on the Senate, the blackest mark on its legacy, proof that the Senate can be corrupted to its core in service of dictatorial ambitions.

America watches to see if the Senate will hit the pitch, or throw away its bat.
