Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature

Also posted here on LessWrong.

Tl;dr: Far-ultraviolet light has potential as a human-safe germicide, but its safety is not established. In particular, evidence that it is not carcinogenic exists for only one of the two mechanisms of ultraviolet carcinogenicity. In addition, using far-ultraviolet light in public spaces to prevent the spread of SARS-CoV-2 or other pathogens raises a host of other concerns that need to be addressed.

Introduction

In 2018, Scientific Reports (a Nature journal) published a paper that investigated the possibility of using far-UVC light to combat a future influenza pandemic. The paper went mostly unnoticed by non-academics, as is the norm for technical journals, but now, with the novel coronavirus causing the first pandemic of its kind in 100 years, the public at large is paying attention to ideas from the frontiers and fringes of biology and medicine. Last month, far-UVC’s safe germicidal potential was the subject of a post by Roko Mijic and Alexey Turchin on LessWrong. They call the use of far-UVC in public spaces “one of the most promising and neglected ideas for combating the spread of covid-19,” and lament: “Why hasn’t this already been considered by relevant authorities? Far-UVC appears in a literature review by WHO, but it is not currently being acted upon as the amount of evidence in favor of safety and efficacy is small.”

I’ve spent the last few weeks educating myself on the literature surrounding far-UVC’s safety, and I’ve come to a clear conclusion. Is the use of far-UVC to combat pandemics in general a good idea? Yes. Should research on it be expanded? Yes. But using far-UVC in public spaces to combat COVID-19 would be way way way premature.

First, a couple of disclaimers:

Disclaimer 1: I am not a biologist or a doctor. I don’t have anything near a professional’s expertise on human biological questions. There may be inaccuracies or misunderstandings throughout this post, although of course I’ve done my best.

Disclaimer 2: My method for research tends to be browsing Wikipedia to find general information, and using Wikipedia’s citations and external links to find more rigorous discussion of specific information. I try to be wary of Wikipedia’s shortcomings—I read talk pages and check citations—but even so, bias and inaccuracy on Wikipedia will inevitably seep into my perspective. I’ll leave it up to the reader to decide exactly how this research method affects my credibility.

Basic Biophysical Argument for Safety and Efficacy

Ultraviolet (UV) light is electromagnetic radiation with shorter wavelength (higher energy) than visible light and longer wavelength than X-rays. The UV light that can be found on Earth is broken into three subcategories: UVA, closest to visible (400–315 nm); UVB (315–280 nm); and UVC (280–200 nm). Although the sun emits light in all three UV categories, as well as visible and infrared (IR), not all of that light reaches us on the surface. UVC is absorbed by the ozone layer, so the only UVC light we experience comes from artificial sources.
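To keep the boundaries straight, the subcategories above can be captured in a few lines (a sketch; sources differ slightly on the exact cutoffs, and this uses the ranges given in this post):

```python
def uv_band(wavelength_nm):
    """Classify a wavelength (in nm) into the UV subcategories used here.

    Boundary conventions vary slightly between sources; these follow
    the ranges given in the text above.
    """
    if 315 <= wavelength_nm < 400:
        return "UVA"
    if 280 <= wavelength_nm < 315:
        return "UVB"
    if 200 <= wavelength_nm < 280:
        return "UVC"
    return "outside the terrestrial UV range"

# 254 nm: standard germicidal lamps; 222 and 207 nm: far-UVC candidates
for wl in (254, 222, 207, 310, 365):
    print(wl, "nm ->", uv_band(wl))
```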

UVC light has been used as a germicide since the mid-20th century, but not in public spaces. Nucleic acids (DNA and RNA) strongly absorb UVC light, which means that when UVC light enters a cell, photons will hit genetic material, damaging or destroying it. This makes UVC light a strong germicide, but it also makes it highly carcinogenic, cataractogenic, and toxic to human cells. Accordingly, its use as a germicide is restricted to environments, like water sanitation systems, where humans won’t be exposed.

However, there is still hope that UVC could be used safely in human environments in the future. There is some evidence that a certain band of UVC light, “far-UVC” (200–220 nm), is safe for humans while remaining toxic to pathogens.

The basic biophysical argument for why far-UVC might be safe hinges on the fact that mammals are much bigger than bacteria and viruses. This band of UVC light is absorbed by proteins, as shown in the following figure from one of the first papers to formulate the idea.

Figure: “Mean wavelength-dependent UV absorbance coefficients, averaged over published measurements for eight common proteins”

In essence, protein can block far-UVC light so that it does not reach DNA. Mammalian cells tend to be 10–25 μm in diameter, while bacteria tend to be around 1 μm and viruses even smaller. Because of this size difference, far-UVC light has to pass through more protein before it gets to a mammalian nucleus, and accordingly it should be much weaker by the time it hits mammalian DNA. In addition, on most parts of the body, we are protected by an outer keratin-rich (keratin is a protein) layer 10–40 μm thick called the stratum corneum. Cells in the stratum corneum are somewhere philosophically between dead and alive—they maintain homeostasis and complex intercellular environments, but they lack DNA, so they are safe from cancer. Because it passes first through the protein-rich stratum corneum, far-UVC light should be greatly attenuated before it even reaches the cell membranes of vulnerable cells.
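The attenuation argument can be made quantitative with the Beer–Lambert law. The sketch below uses an assumed absorption coefficient purely for illustration (the real value depends on wavelength and tissue composition), but it shows why exponential attenuation makes a 10–40 μm protein layer qualitatively different from a 1 μm bacterium:

```python
import math

def transmitted_fraction(mu_per_um, depth_um):
    """Beer-Lambert attenuation: fraction of photons reaching a given depth.

    mu_per_um is an absorption coefficient in 1/micrometre. The value used
    below is assumed for illustration, not a measured tissue property.
    """
    return math.exp(-mu_per_um * depth_um)

mu = 0.3  # 1/μm, illustrative only

print(f"after  1 μm (bacterium scale):       {transmitted_fraction(mu, 1):.1%}")
print(f"after 10 μm (thin stratum corneum):  {transmitted_fraction(mu, 10):.2%}")
print(f"after 40 μm (thick stratum corneum): {transmitted_fraction(mu, 40):.6%}")
```

With this assumed coefficient, most photons survive the first micrometre, only a few percent survive 10 μm, and essentially none survive 40 μm. That is the shape of the argument for why far-UVC can kill a bacterium while sparing the nuclei beneath the stratum corneum.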

Of course, if we’re going to be using far-UVC light around humans, we need more than just a biophysical argument. We also need empirical evidence. So what does the empirical evidence suggest about safety? The Nature article cites three studies that experimented with far-UVC light on human cells, on lab-grown human skin, and on live mice. These studies show promise for far-UVC as a safe germicide, but they’re far from fully establishing safety.

Safety Concerns for Individuals

What happens to a person who has been repeatedly irradiated by far-UVC light? What are the health risks? What do we know? What don’t we know?

Cancer

UV light is famously carcinogenic, so cancer is a central concern when it comes to assessing far-UVC’s safety. Accordingly, the three safety papers mainly seek to assess cancer risk. Of course, cancer can be slow to develop, so it isn’t possible to irradiate test subjects and count cancer cases within a reasonable timeframe. Instead, cancer must be indirectly assessed.

Cancer is caused by genetic mutations—we’re as certain about that as we are about anything in human biology—so by measuring DNA damage you can get some sort of measure of carcinogenicity.

So that’s what the authors did. They measured DNA damage under 207 nm light in lab-grown human skin in vitro and in live mice in vivo, then extended their results to 222 nm light on both human skin and mice. Their results were certainly promising, but the work is far from sufficient to fully demonstrate cancer safety.

There are two separate mechanisms for DNA damage from UV light, and the safety studies really only address one of the two mechanisms. We’ll discuss them separately:

Direct DNA damage

Direct DNA damage occurs when photons are absorbed by DNA itself. The absorbed energy can break the bonds between paired nucleotide bases, and new bonds can form with adjacent bases on the same strand instead of the opposite bases, disrupting the double-helix structure in a type of lesion called a pyrimidine dimer.

UVB and UVC light can both interact directly with DNA in this way. This is the mechanism for UVC’s germicidal action, but at lower intensities, instead of lethal DNA destruction, lesions can turn into mutations. The body reacts to this kind of damage by killing and shedding damaged skin cells in the form of sunburn.

I feel pretty confident that this kind of damage does not happen in mammalian skin from far-UVC light. First, the basic biophysical argument is strong. Few photons should reach the nucleus (and number of photons should basically determine number of lesions). The light needs to pass through the keratin-rich (and therefore far-UVC absorbing) stratum corneum before even reaching the relevant parts of the epidermis.

The empirical evidence is also compelling. In vitro, as expected, irradiating lab-grown human skin with standard germicidal UVC light caused a huge number of pyrimidine dimers—standard germicidal UVC light is highly carcinogenic. Irradiating the same skin model with far-UVC, however, caused no statistically significant increase in these types of DNA lesions:

Figure: Induced yield of two types of pyrimidine dimers, from fluences of standard germicidal UVC and from far-UVC light.

Additionally, live mice exposed to far-UVC showed no evidence of sunburn—suggesting that direct DNA damage must be minimal. The skin of unirradiated mice and the skin of mice irradiated with far-UVC looked the same, while in mice irradiated with standard germicidal UVC, the skin was visibly altered and had a thickened epidermis.

Figure: A.) Representative cross-sectional images of mouse skin. B.) Average epidermal thickness for non-irradiated mice, mice under standard germicidal UVC, and mice under 207 nm UVC.

Indirect DNA damage

Indirect DNA damage occurs when photons are absorbed by other molecules in the cell, and these molecules react to form free radicals and other reactive species, which in turn react with (other molecules which react with other molecules … which react with) DNA, causing mutations via an oxidative stress mechanism. UVA, UVB, and UVC light can all cause indirect DNA damage. This kind of damage causes skin cancer, but importantly it does not activate the same defenses as direct DNA damage—no sunburn.

The biophysical reasoning that suggests far-UVC doesn’t cause direct DNA damage doesn’t apply neatly to indirect damage: far-UVC is quickly attenuated in the outer layer of the skin, but how far can reactive chemical species formed near the surface propagate? Could they make their way down to vulnerable cells in the epidermis? As far as I can tell, the answer to this question is unknown. Indirect DNA damage has only relatively recently been understood—it was completely unrecognized in 1980 and remained somewhat controversial through the early 2000s; it wasn’t until 2009 that the WHO recognized tanning beds as a definite cancer risk—so there’s still a lot of uncertainty. What is known is that, in general, chemicals can diffuse through the skin, and some of the chemical species we’re worried about are stable in the body. More research is needed to rule out this possibility.
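One way to see why the stability of the reactive species matters is a back-of-envelope diffusion estimate. The numbers below are order-of-magnitude textbook figures that I am using purely for illustration, not measurements from the safety studies:

```python
import math

def diffusion_length_um(D_m2_per_s, lifetime_s):
    """Root-mean-square 3D diffusion distance, sqrt(6*D*t), in micrometres.

    A crude estimate of how far a reactive species can wander before it
    decays or reacts with something.
    """
    return math.sqrt(6 * D_m2_per_s * lifetime_s) * 1e6

# Illustrative order-of-magnitude diffusivities (m^2/s) and lifetimes (s);
# real values in tissue depend heavily on the local chemistry.
species = [
    ("hydroxyl radical (~ns lifetime)", 2.3e-9, 1e-9),
    ("singlet oxygen (~μs lifetime)", 2.0e-9, 3e-6),
    ("hydrogen peroxide (long-lived)", 1.4e-9, 1e-3),
]
for name, D, tau in species:
    print(f"{name}: ~{diffusion_length_um(D, tau):.3g} μm")
```

Under these assumed numbers, the shortest-lived radicals travel nanometres, but a long-lived species like hydrogen peroxide can cover micrometres before decaying, within an order of magnitude of the stratum corneum’s thickness, and its effective lifetime in tissue could be longer still. That is exactly why the stable species are the worrying ones.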

In addition, the empirical evidence for far-UVC’s safety from direct DNA damage does not carry over to indirect DNA damage. The lack of sunburn in the in vivo mouse study is uninformative here, since indirect DNA damage does not cause sunburn. And the lesions the authors looked for, pyrimidine dimers, are specific to direct interactions between photons and DNA; indirect DNA damage causes different lesions.

Cancer Safety Conclusion

Although the results are promising, cancer safety has not been fully established. Direct DNA damage appears to be minimal, but indirect DNA damage remains a huge open question.

Non-cancer cell damage

Another potential point of concern is non-cancer cell death. The in vitro study found that significant fluences of 207 nm light kill 80% of exposed human fibroblasts (dermis cells).

Is this concerning? Maybe not. Except on mucous membranes and open wounds, exposed cells will be dead-ish (part of the stratum corneum) to begin with. Still, more investigation is needed: Is it okay to repeatedly destroy the surface layer of cells on our eyes? It may not be a problem, but I’d want at least a couple expert opinions if not a safety study before exposing my eyes to something like that.

And the fibroblast cell death also raises the question: what is killing the cells? The authors kind of gloss over this point—they cite another paper and say it’s probably mostly cell membrane damage. Before far-UVC is widely implemented, we need to be more confident that it is in fact cell membrane damage and not something more nefarious.

Limited Scope of Safety Studies

It’s also important to note the reason why the safety studies were conducted: The authors envisioned using far-UVC to fight antibiotic-resistant bacterial infections during surgery. They thus assumed a surgical environment, which means that applicability to public spaces is limited:

The mouth is covered by a face mask in surgery. Safety has not been established for the parts of the inside of the mouth that don’t have the stratum corneum. If we’re all wearing face masks, then this isn’t a problem, but if we’re imagining far-UVC light can let things go back to “normal,” then we need to think about the safety of our mouths. The insides of our mouths of course won’t be as exposed as our skin (the exact level of exposure depends on the positioning of the germicidal lamp, the reflectivity of surfaces, and the tics and facial posture of the person in question), but they will be exposed enough that we should know more about far-UVC’s cancer risk on mucous membranes.

And what about exposed wounds? Once again, safety has not been established for cells not covered by the stratum corneum. In the surgical environment, the nurses and doctors will not have exposed wounds. The patient’s decreased risk of surgical site infection is likely worth the unknown risks of far-UVC light on exposed flesh. But what if I’m walking around in public spaces with a skinned knee?

Finally, the safety studies focus entirely on mammals. In the surgical environment, humans are the only relevant entities that need to be protected. Many public spaces are open to pets, livestock, and urban wildlife. Even if you only recognize the value of animals’ lives with respect to what they can do for humans, many people keep reptiles or birds that they love, many people eat birds and fish and insects, and we rely on various organisms from across the animal kingdom for ecosystem stability. We should probably try to avoid causing a skin cancer epidemic in non-mammalian clades.

Community-level Concerns

Even if far-UVC is completely safe for humans and other macroscopic organisms, the potential for widespread use of far-UVC leads to a number of other concerns that need to be addressed before such a solution is implemented.

Acquired Resistance

In general, germicides should be used conservatively because of the potential for acquired resistance. Medicine is an evolutionary race to nowhere, with pathogens evolving to survive whatever we use to fight them. UVC light is no exception. As discussed in a literature review, one research group managed to teach E. coli to better survive UVC irradiation.

In the case of the lab-created E. coli acquired resistance, the degree of resistance was fairly weak. Lethal doses of light were still very possible. It’s not currently known whether or not full resistance by microorganisms is possible or likely. More experimentation will offer future scientists a clearer picture.

In the meantime, we should reserve UVC light for cases with high potential benefit and lower potential for acquired resistance. Ubiquitous use of far-UVC light in public spaces has the potential to teach resistance to all future pathogens, so that when the next epidemic or pandemic comes along, we’ll be completely neutered.

Is a More Sterile World a Healthier World?

UVC light kills more than just pathogens. It kills all microorganisms indiscriminately. We don’t know what would happen if we killed all bacteria in our public spaces. It could lead to problems. Bacteria play an important role in a lot of ecological processes like the decomposition of organic waste. And if it turns out that it is a bad idea to destroy all microorganisms in our public spaces, it’s not necessarily something we can come back from. An established colony of beneficial or harmless bacteria can protect against the growth of harmful bacteria. If you kill your gut bacteria with antibiotics you risk getting a harmful new microbiome. Could the same be true at a grocery store?

The Law of Unintended Consequences

Even if we can establish safety for the concerns I’ve raised above, we can never be sure that we’ve thought of everything that can go wrong. In environmentalism we recognize the “Law of Unintended Consequences:” We are very very far from understanding the world perfectly, so big technological shifts will always have unforeseen effects.

Of course, the law of unintended consequences is not a reason to hold back on change entirely. We can never know the consequences of our actions fully, so if we always avoided acting on uncertainty, we would never do anything at all. But it is possible to mitigate the potential adverse effects. In general, it is better to roll out something like this in a smaller environment where it has high potential to help (like in surgical rooms). Safety results can be assessed in those small environments before we expose the public at large.

The Stickiness of Social Change

Everything I’ve said so far is a concern, but desperate times call for desperate measures. Could the use of far-UVC be worth it, if limited to the worst of the COVID pandemic? This is the wrong question. We need to ask: If we start using far-UVC in public spaces during the pandemic, realistically, will we stop using it when the pandemic is over? Things like this tend to have staying power.

This potential for staying power is especially dangerous when paired with the preceding three concerns. The risk of acquired resistance increases with use, and we don’t have laws that prevent misuse—when far-UVC as a safe germicide becomes more accepted, it may, like antibiotics, be adopted by factory farms, increasing even more the probability of new resistant pathogens.

Killing all microbes in public spaces for a longer period of time may have worse consequences than limited use during a pandemic. Scientific understanding of the microbiome is still pretty young, but we do know that these beneficial microbes are exchanged between individuals. Ubiquitous far-UVC light would end microbiome exchange in public settings. As far as I can tell, we have no idea what the consequences of this might be.

We don’t know what the unintended consequences of introducing far-UVC light to public spaces might be, but we do know that the discovery of negative consequences often doesn’t end the use of a new technology. For example, in the 1970s and ’80s, it was assumed that UVA light was safe. UVA does not cause direct DNA damage, and indirect damage had not yet been discovered. Accordingly, starting in the late ’70s, tanning salons that irradiated users with UVA light (producing a “safe” tan without a sunburn) opened up all across Europe and the US. We have now known for more than 10 years that UVA light causes cancer. And tanning beds are still around! In most of the US, tanning beds are not only completely legal, but also accessible to minors. One study found that hundreds of thousands of skin cancer cases per year are associated with the use of tanning beds (it did not estimate how many of those cases end up being fatal). Incorrectly declaring a technology safe can lead to huge numbers of preventable deaths, even after the mistake has been corrected.

Ethics

I don’t want to have a lengthy discussion of ethics here. This post is intended to be more an analysis of safety and potential negative consequences. Still, I need to at least bring it up:

What about human rights? Is it okay to irradiate people without their consent? How do you obtain meaningful consent for something like this?

Of course, we irradiate people without their consent all the time with radio and wifi and cell phones, but those are lower-energy waves, far less likely to be dangerous. Far-UVC, even if it is non-carcinogenic and doesn’t penetrate the stratum corneum, definitely affects our bodies: it kills our skin microbiome. Just going off of my gut instinct, I’m ethically fine with wifi, while far-UVC, in a hypothetical future where its safety is more established, seems much more ethically questionable.

In the US, we put fluoride in our water. If you don’t want to drink fluoride you have to significantly inconvenience yourself to avoid it. Ethically, does our approach to fluoride work for far-UVC? In the US, we don’t vaccinate people without consent, even though the unvaccinated damage herd immunity. We don’t put vaccines in the water. Is far-UVC in public spaces more like fluoride or more like vaccines?

Conclusion

I know I’ve seemed very negative about far-UVC for the past three thousand words, but I actually am very excited about its potential. It is because it is exciting that I think this kind of safety investigation is necessary.

We should not be using far-UVC as a germicide in public spaces any time soon. A better goal might be smaller-scale implementation within medical wards to prevent hospital spread of coronavirus or other contagious illnesses, but we’re still a long way off from even that. There are a lot of hurdles far-UVC still needs to clear before we decide it is safe. It may well clear those hurdles. I hope it does.

Should investigations into far-UVC be continued? Absolutely. Should the investigations be expanded? As someone who is not broadly knowledgeable about the frontiers of medicine, I have no idea whether far-UVC is being neglected relative to other promising technologies at similar stages of development. Still, the coronavirus pandemic has demonstrated the life-saving value of this kind of research. Public funding into this kind of research should be expanded in general, so that by the time the next pandemic hits we can know better what technologies are safe and effective.

Main Sources

Inspirations

Welch, D., Buonanno, M., Grilj, V. et al. Far-UVC light: A new tool to control the spread of airborne-mediated microbial diseases. Sci Rep 8, 2752 (2018). https://doi.org/10.1038/s41598-018-21058-w

Roko Mijic, Alexey Turchin. Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics. LessWrong, March 2020.

Safety

Buonanno, M. et al. 207-nm UV light – a promising tool for safe low-cost reduction of surgical site infections. I: In vitro studies. PLoS One 8, e76968 (2013).

Buonanno, M. et al. 207-nm UV Light – A Promising Tool for Safe Low-Cost Reduction of Surgical Site Infections. II: In-Vivo Safety Studies. PLoS One 11, e0138418 (2016).

Buonanno, M. et al. Germicidal Efficacy and Mammalian Skin Safety of 222-nm UV Light. Radiat. Res. 187, 483–491 (2017).

Germicidal UVC Literature Review

Dai, T.H., Vrahas, M.S., Murray, C.K., Hamblin, M.R. Ultraviolet C irradiation: an alternative antimicrobial approach to localized infections? Expert Rev. Anti Infect. Ther. 2012;10:185–195.

Rabbit

Okay, so here’s the thing about rabbits: They’re not much good at dancing. How do I know this? I dated one once. I know what you’re thinking: “Not all rabbits are like your ex. It’s not okay to generalize about a group based on one bad experience, you speciesist bigot.” Well, alright then. You date a rabbit then.

No? You see, exactly. You don’t want to date a rabbit. Who’s speciesist now, asshole?

Oh. You would date a rabbit, but none live in your urban neighborhood? Okay then. Let me tell you how it would go:

At first it’s great. The rabbit is cute with his long ears and pink nose. And the way he looks at you just makes you feel so special.

You catch sight of him for the first time while taking an afternoon stroll through the park near your high school. It’s October of your senior year, and the ground is littered with orange and red maple leaves, so that the whole world looks warm like the embers in a fireplace late at night after everyone has gone to sleep. Jason Greene has asked you to Homecoming, and you’re pretty pleased with yourself.

The white rabbit with the brown mark jumps out of the yellow brush and fixes you with that look you’ll come to know and love. It says curiosity and fear both at once. And he stands there for just a second or two, before hopping back into the brush.

And you’re intrigued. Of course you’re intrigued. How could you not be? With those soft ears and that pink nose, and that special way he looked at you?

So you go straight to the grocery store, and you buy a bushel of carrots, and a head of lettuce, and the next day you’re back at the park. And the day after that. And the day after that.

And eventually he reappears. You’re sitting on a bench near where you first caught sight of each other, and out pops his head from the tall grass just ten or so yards away, and a short while later, out pops the rest of him. You can tell it’s the same rabbit from the distinctive brown marking on his left flank.

So you hold out the carrots in one hand, and the lettuce in the other, but he doesn’t approach, just sits there on his haunches, giving you that look – you know, the one he gave you before, that special look that says curiosity and fear both at once.

So you drop the lettuce and the carrots both on the ground, and you take five steps back, and he scampers forward and pokes at them with his little pink nose, and starts munching (turns out he prefers the lettuce to the carrots).

And the next day, the same thing happens, and the same thing the day after that, and the day after that, and the day after that, but each time you’re backing away a little less, and he seems a little less afraid, until finally you’re not backing away at all: You’re just sitting there side by side as he eats his lettuce. And not long after that, he’s eating the lettuce right out of your hand, and letting you pet him, and stroke him, and even hold him, until one day you put your lips against his furry back, and you whisper “I love you.”

So now I guess you’re going steady. You see him every day. You touch each other. You kiss. He eats his lettuce. You share an intimate silence.

And it’s not always easy. Communication, especially, is difficult. But you struggle through, because it’s worth it, because you’re in love.

Outside the park, life proceeds pretty much as normal. It’s April, and a couple of guys have tried to ask you to prom, but of course you want to go with your rabbitfriend.

But he can’t ask you to go. He probably doesn’t even know what prom is. So you just have to take him, and trust that he’d want to do it for you, if only you could explain to him what exactly prom was, and why exactly it matters.

So when the day of the prom arrives, you show up at the park bench as usual, only this time you’re wearing your prom dress. It’s light blue and just a bit frilly, and you feel like a princess.

You give your prince his lettuce, and sit side by side as he eats, and when he’s done, you gently lift him up and start carrying him to your parents’ car.

You can feel his little heart fluttering against your fingers, but he doesn’t fight you, and you’re in the car on your way to the Hilltop ballroom, him in the passenger’s seat, and he’s giving you that same old look: that mixture of curiosity and fear, only this time you also sense something else. Trust.

So you arrive, and you scoop him up out of the passenger’s seat, and you grab the prom tickets from the glove compartment, and you start walking – trying not to run – towards the ballroom. Your heart’s beating fast, his is beating faster. You want to turn around and run away. You do turn around. You walk back to the car. Did you remember to lock it? Yes you did. Take two. You hold your rabbit close to your chest. You wish you’d managed to find a rabbit-sized tux. You hope it’s okay that he’s nude. They probably won’t mind. He is a rabbit after all. You walk not-too-fast not-too-slow up the stairs and through the double doors. You hand over your tickets to an old woman at the folding table that’s serving as a makeshift kiosk. You’re pretty sure she gave you and your rabbit an odd look, but she doesn’t say anything – just waves you in.

Prom doesn’t go well. Of course it doesn’t go well. What did you expect? They say high school’s rough for gay kids. Well. Try being a rabbit lover.

But the worst part isn’t the ridicule of your bigoted and drunk peers. And it’s not the mixture of pity and disgust that you see coming from the chaperones. No, the worst part is the behavior from your little prince himself. He doesn’t like the dark, and he doesn’t like the loud noise, or the people pressing from all sides. You try moving gently to the beat, imploring silently, that he just try his best to feel the music, but it’s just not working, and he’s scratching at you, and twisting in your arms, and you try to cling onto his little body, but Jason Greene bumps against your elbow, and your rabbit drops to the floor, and he’s off, darting between dancing feet, and you’re going after him, pushing and shoving through the crowd. You slip in a pool of rabbit urine, and you grab at someone’s prom dress, almost ripping it, barely keeping upright.

And you lose sight of him.

It takes you three hours to find him. Prom is over. The chaperones have kicked everyone out. They’re all either at home, or at their afterparties, or else in collectively rented hotel rooms, losing their virginities. The chaperones saw the tears in your eyes and let you stay though. They took pity on you.

It’s 1 am when you find him, sleeping behind a drinking fountain. When you lift him up, he only fights you a little. You drive back to the park in silence. You only have to pull over once to wipe away your tears. The moment you open the passenger door, he’s out, and running away as fast as his little legs can carry him. It’s only once you make it home, parked in the driveway in front of your house, that you finally let yourself break down.

He’s there the next day. Same time, same place. But he doesn’t approach you. Doesn’t even approach when you drop the lettuce and take a step back. Doesn’t approach after five steps back.

“I’m sorry,” you say.

Your rabbit just looks at you blankly.

“Please,” you start to say, but your throat constricts and you can’t get the words out.

You take a deep breath.

“Please,” you try again. “I love you.”

But your rabbit says nothing. Just fixes you with a look, different from the look from before.

Accusing. Betrayed.

You stand there in silence, just looking at each other for what feels like minutes. Then you try again. “Please. I’m sorry I – ”

Your little prince turns around and hops into the tall grass.

The next day the lettuce is still lying limp where you left it. Your rabbit is nowhere to be seen.

Fuck bunnies.

Follow-up on the Bolivian Coup

In fall 2019, after accusations of election fraud, the Bolivian police withdrew support for President Evo Morales, and interim president Jeanine Áñez was installed in his place.

I wrote about the event at the time, focusing less on the election and resignation itself, and more on the question of epistemology in a hostile environment. An uncritical read of the news at the time would suggest that the forced resignation was just—Morales manipulated election results and was being appropriately deposed. However, a rational actor should be critical of the news: American news is systematically biased in favor of United States special interests, and it seemingly bases its assessment of an election’s legitimacy not on democratic principles but on whether or not the elected leader supports US influence in the region. Because of this news bias, I was agnostic about the exact situation in Bolivia, but was willing to call what happened a “coup” because I knew that a similar event in a US client state would be considered a coup. Consistency is important, I concluded. Inconsistent standards do not lead to democracy; inconsistent standards serve the interests of whoever gets to set the standards—in this case, US elites.

I wasn’t really comfortable with this take at the time.  It’s important to apply consistent standards across elections, but of course “consistent standards” does not mean blindly supporting any state that the US opposes—I don’t want to make the same mistake as the American communists who assumed that the Cambodian genocide was a fabrication.  I was anxious that in calling Morales’ resignation a “coup,” I was ignoring anti-Morales evidence, and denouncing real grass-roots opposition to an illegitimate leader who was set on becoming president-for-life.

The information that has come out since has assuaged my anxiety and confirmed my initial instincts; if anything, at the time of the original post, I was insufficiently pessimistic about the accuracy of the US mainstream news media.  The interim president, Áñez, has not behaved like a temporary president whose job is to oversee new fair and free elections.  Instead she has pushed a right-wing Christian agenda.  Upon declaring herself interim president, she held a giant Bible above her head, shouting “The Bible has returned to the presidential palace.”  She immediately replaced the entire cabinet and the top military leaders with white Christian conservatives. And she even preemptively granted amnesty to military members who use force to quell protests.

Since then, the new Bolivian government has charged 40 former government officials with sedition and subversion, and government prosecutors have moved against the most popular (socialist) candidate for the coming election.

Yesterday, the Washington Post published an article by researchers at MIT calling into question the accusations of election fraud.  The main proponent of the fraud accusation was the US-backed Organization of American States (a group that has historically opposed leftism in the Americas):  The OAS audited the election and found what it called “clear manipulation,” based in part on the statistically unlikely jump in Morales’ support between the preliminary result tally and the official vote count.  (The preliminary results showed Morales with a plurality of votes, but without a sufficient lead to avoid a run-off election; when the official results came out, they had him leading by the 10 percentage points necessary to win outright.  The OAS recognized the plurality as legitimate, but not the 10-point margin.)  As the researchers at MIT point out, the OAS neglected to take into account the fact that votes can vary by time of day.  Using a more sophisticated statistical analysis, the researchers found no evidence of election tampering.
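The researchers’ time-of-day point is easy to illustrate with a toy simulation (my own sketch with made-up numbers, not the MIT researchers’ actual model): if the precincts that favor a candidate systematically report later, that candidate’s vote share will “jump” between the preliminary and final counts even with zero fraud.

```python
# Toy illustration: a candidate's vote share rises between the preliminary
# and final tallies purely because his strongholds are counted later.
# All numbers here are hypothetical.

def tally(precincts):
    """Overall vote share for the candidate across (voters, support) pairs."""
    total_votes = sum(n for n, _ in precincts)
    candidate_votes = sum(n * p for n, p in precincts)
    return candidate_votes / total_votes

# 300 early-reporting precincts where the candidate polls 42%,
# and 200 late-reporting strongholds where he polls 58%.
early = [(1000, 0.42)] * 300
late = [(1000, 0.58)] * 200

preliminary = tally(early)        # only early precincts counted
final = tally(early + late)       # all precincts counted

print(f"preliminary share: {preliminary:.1%}")  # 42.0%
print(f"final share: {final:.1%}")              # 48.4%
```

With these invented numbers, the candidate’s share climbs more than six points between the two tallies with no tampering at all, which is exactly the kind of jump a naive statistical test would flag as suspicious.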

In short, there was probably no election fraud in Bolivia.

The researchers cogently conclude by pointing out that the standards the OAS used to judge the Bolivian election would also suggest that United States elections are illegitimate:

Previous research published here in the Monkey Cage finds that economic and racial differences make it difficult to verify voter registration in the United States, resulting in higher use of provisional ballots among Democrats — and greater support for Democratic candidates among votes counted after Election Day. Under the OAS criteria for fraud, it’s possible that U.S. elections in which votes that are counted later tend to lean Democratic might also be classified as fraudulent. Of course, electoral fraud is a serious problem, but relying on unverified tests as proof of fraud is a serious threat to any democracy.

Of course, this hypocrisy extends beyond the OAS to United States news corporations in general. In addition to the jump in Morales’ vote tally, news at the time focused on the interrupted results transmission, and the delay in the release of the official results, even though the interruption and the delay were consistent with Bolivian election protocol.  Contrast this with the mainstream media’s response to the recent delay in the Iowa caucus results.  In both cases, the delay caused doubt in the electoral process.  Both delays led to equally (in)valid conspiracy theories of election fraud (for example, “Anti-Sanders billionaires, behind the app that delayed Iowa’s voting results”).  In the case of the Iowa caucus, although I’ve seen conspiracy theories all over leftist YouTube and Reddit, mainstream news sources have ignored the conspiracy theories and reported only the official reasons for the delay. In the case of the Bolivian election, US media failed to disclose the official reason for the delay, and uncritically reported a conspiracy theory as putative fact.  And, like the OAS’s abuse of statistics, this kind of misreporting is a serious threat to democracy.

What puts the icing on the cake for this Washington Post article is the way the article itself reflects the hypocrisy of the mainstream media.  The text of the article, written by the MIT researchers, is fine.  It’s narrow in scope, criticizing only the OAS’s statistical analysis, and not going into the organization’s backing or history.  It doesn’t comment on whether the OAS’s bad analysis is a dangerous but honest mistake or willful negligence or intentional disinformation.  It doesn’t comment on the mainstream media’s reaction to the audit.  It only critiques the audit itself.  The title of the article, presumably written by a copy-editor, is another story:  “Bolivia dismissed its October elections as fraudulent. Our research found no reason to suspect fraud.”

Titles are important.  People browsing the news might not even read the article in question, and will take away whatever information the title contains.  Of course, we can’t expect a title to be very nuanced, or even to be a good synopsis of an article, but it should at least be accurate. “Bolivia dismissed its October elections as fraudulent.”  The article is about the opinion of OAS, not the opinion of Bolivia.

This title, although it recognizes that the evidence for election fraud was faulty, still frames the resignation of Morales as the legitimate will of the Bolivian people.  “Bolivia dismissed….”  Bolivia is not a unified entity with a clear single opinion, and the diagnosis of election fraud was controversial in Bolivia.  And even if we accept the synecdoche as valid, the protests against the election did not happen in a vacuum.  Opponents of Morales cited the election audit conducted by a US-backed organization.  The eventual undemocratic interim presidency of Jeanine Áñez happened with foreign support.  Bolivia dismissed the election results?  No.  The international community dismissed the election results.

Chances are, this title was not written with malice.  Whoever wrote it was trying to give a quick, clear, and intriguing summary of the article.  “The OAS dismissed…” wouldn’t be a viable title.  Most people don’t know what the OAS is.  Disinformation does not require ill intent to spread—it can happen through the accumulation of seemingly small, random acts of negligence.  The copy-editor is exposed to the same media bias as everyone else, and that bias makes its way subtly into the titles of articles.  In an honest media landscape, information like the recent critique of the OAS audit would cause self-reflection about failed journalistic responsibility, but the stochastic propaganda machine that is the Washington Post ignores the role that the US media played in the Bolivian coup.


The Beatles, Bungalow Bill, and Ridicule of Real People in Art

“The Continuing Story of Bungalow Bill” is a song by the Beatles released in 1968 as a track on the White Album.  Its deliberately sloppy recording and somewhat juvenile lyrics make it feel a bit like a children’s campfire song, which is maybe why it was one of the early Beatles songs I got into, back in middle school, when I was first starting to explore music on my own.

Now, as an adult, I have a newfound appreciation for the song, as I have come to understand the darker subtext.  The song, written at a time of increasing public outrage at the genocidal acts committed by the United States military in Vietnam, ridicules a certain type of American Orientalism: Bill, a white “all-American bullet-headed Saxon mother’s son” who adopts superficial aspects of (British-)Indian culture, goes out tiger hunting.  When he and his entourage are startled by a tiger, Bill channels Captain Marvel, and shoots and kills it.  The song uses derision to point out the hypocrisy of heroic narratives surrounding imperialism—the hero adopts a shallow and condescending form of cultural appreciation, and saves the locals from some perceived threat, be it a tiger or the scourge of international communism.  But when the children challenge Bill’s act of tiger murder, he can’t stand up for himself, and instead he hides behind his mother, making it clear that his seeming heroism is a thin facade.  At heart, he is insecure, vulnerable, and childish.

So does this character deserve ridicule?  Yes.  When people act self-important in harmful ways, ridicule is an effective way to break that self-importance, which can be helpful for the self-important person, and, more importantly, communicates the harmfulness of the behavior to others who might be influenced by it, as well as communicating recognition of harm to the victims of the behavior.  And the type of attitude adopted by Bill is legitimately very harmful.  The song pokes fun at a relatively low-harm story involving the death of a tiger, but the whole story should be understood in the broader context of Americans in Asia during the 1960s: Americans in Vietnam were there under a veneer of friendship and appreciation of the South Vietnamese people, but a lack of cultural understanding, a false narrative of heroism, and a fear of the Viet Cong led to the murders of millions of civilians, whom the US was ostensibly there to protect.

So yes, this type of attitude is worthy of ridicule.

But here’s where it gets more complicated:  Bungalow Bill isn’t just a character.  He’s a real person.  And, although the Beatles changed his name to make the song less personal, of course the tiger hunter in question still recognized the song as being about him.  And does the real life person deserve ridicule?  I don’t know.  I don’t know the whole story.  He certainly didn’t deserve to be ridiculed in a song by the most famous band in the world.

Is this song a satirical social commentary, or is it a cruel personal attack?  Can it be both?

This was my real experience today, listening to, and appreciating “The Continuing Story of Bungalow Bill” as social commentary, then looking it up and learning it was about a real person.  But it touches on a broader question that I have about media:  When is publicly oriented ridicule of a real person—and I mean a private person; famous, powerful people are a completely different case—okay?

I don’t want to definitively answer this question—I don’t think it’s possible to reach a simple overarching answer.  I want to treat “The Continuing Story of Bungalow Bill” as a case study.

And I think the answer is that “The Continuing Story of Bungalow Bill” is not about the true story.  The Beatles used a real experience as a jumping off point, but the song is intended as a social commentary that goes beyond the one experience.  The ridicule is not directed at the real individual; instead it’s directed at a real behavioral pattern.

This answer makes me uncomfortable, because in some sense, I’m saying that any hurt feelings from the personal nature of the song are merely collateral.  If the Beatles wanted to tell a general story, maybe they shouldn’t have told a personal story about a real person.  This is an appealing line of thought, but it’s ultimately wrong.  If we can’t base fiction on truth from our personal lives, then what are we left with?

I think this is part of why the death of the author is such an appealing framework.  Art seeks to say something about life, the human experience, and the world we live in.  Seeking to understand how the author relates to the work can bring our attention to real people who don’t deserve infamy, and it can make the artistic messages seem petty and small.

Top Books of 2018 (retrospective)

I took notes on all the books I read in 2018; these are the books that (with the benefit of hindsight) I liked the most.

In no particular order:


A Primate’s Memoir by Robert Sapolsky

I know Professor Robert Sapolsky from his human behavioral biology lecture series on YouTube.  Coming to understand ourselves as beings crafted by biological processes is a very important project, and one so riddled with potential pitfalls that I end up feeling disdain for basically everyone who attempts it, including myself (I’m sorry Tim Urban).  Sapolsky’s work is terrific both for its attempts to build up some degree of human biological understanding, and for the analytical tools he uses to critique that understanding.

Sapolsky’s memoir is, unsurprisingly, much less educational and less philosophical than his lectures, but his insight is still present in the way that he understands his own life and the world around him.  And Sapolsky has led a very interesting life.  His memoir details his time doing field work in Kenya, alternating between chapters about baboons, and about his interactions with the people of Kenya, as he increasingly understands the culture and geopolitics of the region.

A Primate’s Memoir (by far the least heavy book on this list) is engaging, interesting and insightful throughout, and at times hilarious and surprisingly moving.


The Sympathizer and Nothing Ever Dies by Viet Thanh Nguyen

I read the novel The Sympathizer in preparation for traveling to Vietnam, and liked it enough that upon finishing I immediately started reading Nguyen’s follow-up nonfiction book Nothing Ever Dies.  The Sympathizer tells the story of a communist mole in the US-allied South Vietnamese army who emigrates to the United States and continues to spy long after the American War in Vietnam is over.  The novel criticizes the depiction of the war in the international media, and explores the impact of the side-lining of Vietnamese narratives in favor of stories that focus on American heroes and anti-heroes.

Nothing Ever Dies is a more analytical exploration of the same ideas presented in The Sympathizer.   Nguyen traveled back and forth between Vietnam and the United States, visiting museums, monuments, graveyards, and so on, in an attempt to understand the cultural memory of the war.  Throughout the book, he grapples with important questions pertaining to healing and justice and forgiveness for atrocious crimes like those committed during the American War.

These two books were important to me, both for their discussion of how to deal with the fact that humans sometimes do really horrible things, and for the way that Nguyen interacts with information.  These books lastingly changed the way that I understand media and the world around me.  Everything that humans create—whether it is a novel, a monument, a museum, a restaurant, or a scientific body of research—can and should be understood through a literary lens.  Stories are fundamental to the way that we see and express the world around us, and understanding how stories work can lead to surprising insights anywhere.


An Indigenous People’s History of the United States by Roxanne Dunbar-Ortiz

The perspectives of American Indians are largely neglected in American history—including in more left-leaning spaces.  In school, I mostly learned about American Indians pre-contact, under the unspoken implication that the cultural heritage of the United States is America’s indigenous peoples.  Discussion of contact between white settlers and American Indians was mixed with some narratives of peace, some narratives of conflict and aggression on both sides, and some narratives of genocide; but all of these narratives of contact treated American Indians as a relic of the past, from when America was wild.  We’re civilized now, so of course indigenous people no longer exist.

In reality, America’s indigenous peoples still exist, and the conflict between them and the United States is ongoing, with continued poverty and cultural trauma in Native American communities, continued abridging of the rights of sovereign indigenous nations by the US federal and state governments, continued treaty violations by American companies, and even continued race-based genocide.

This book is a really good overview of US-indigenous conflict from contact to today.  Dunbar-Ortiz challenges common cultural narratives and historical assumptions, including many, like the notion of American multiculturalism, that are celebrated in left-leaning communities.


Eichmann in Jerusalem by Hannah Arendt

Hannah Arendt, a German-born Jewish American philosopher, reported on the 1961 trial in Jerusalem of Adolf Eichmann for The New Yorker, and three years later published expanded versions of her articles in a book.  The book (subtitled A Report on the Banality of Evil) largely consists of a psychological profile of Eichmann: Eichmann, who was one of the major organizers of the Holocaust and who was found to be essentially psychologically normal by multiple psychologists, argued throughout his interrogations and trial that although he was involved in the organization of death camps, he was not guilty of any crimes because he was just following orders, or doing his job, or acting in accordance with the moral system dictated by Nazi Germany.

Arendt uses Eichmann as a case study to criticize the tendency to look for depth in evil actions.  Evil does not arise from inner twistedness.  Instead, evil happens when natural human moral instincts become secondary to ideology.  Nazi Germany celebrated the ability to place Nazi party goals above the natural aversion to murder.  Eichmann didn’t want to kill people.  He enabled the murder of millions of people because it was his patriotic duty.

In this way, Arendt argues, evil is extremely banal.  It doesn’t come from some inner depth, but instead from a lack of depth.  Eichmann didn’t take pleasure in the suffering and pain of others.  Instead he just neglected to ask whether or not what he was doing was good.

This perspective on evil is, I think, very important.  I’m sure that some of the evil in the world is committed by people like Darth Sidious who find glee in the suffering of others.  And of course people who are angry, or afraid, or jealous (or any other negative emotion) sometimes do terrible things. But  a huge amount of the evil in the world comes from people who are seeking to better themselves within a system that rewards immoral action, or even from people with genuine humanitarian goals who are uncritical of the institutions that claim to further those goals.

But this perspective on evil is also not a novel perspective, at least not for me.  I grew up in a world that had had access to Arendt’s work for 40 years.  I learned about the Milgram experiment in multiple high school and college classes.  I knew that normal people can do horrible things in the right setting.  What elevates Eichmann in Jerusalem for me is Arendt’s discussion of the obvious follow-up questions.

Eichmann in Jerusalem is hugely controversial, in part because many read it as an acquittal of Eichmann’s personality.  If Eichmann is normal—if his actions didn’t stem from some moral flaw in his inner being—then how can he be guilty?  Arendt addresses this question.  She asks, what if our actions stem more from setting and circumstance than from some fact about our inner selves?  What if Eichmann is correct in his assessment that most people in his position would have done the same thing?  How can justice operate in such a world?  How can we condemn a normal person?  But then, how can we not condemn someone who was instrumental to the murder of millions of innocent people?

Most discussions of the banality of evil are detached and academic.  What do we know about human psychology?  When do people do bad things?  And these conversations are important.  But they don’t make me experience the horror that comes with understanding that Eichmann was a real human person.  Arendt (herself a Jew who fled Nazi Germany) forces us to simultaneously consider both the humanity of Eichmann and the humanity of the millions of people whose murders Eichmann orchestrated.  The effect, as she leads us to the conclusion that he needs to be condemned in spite of his normalcy, is deeply, deeply, upsetting.

This book hurt me in a way that no other piece of media ever has.  But the wound it left is a good wound.  Eichmann in Jerusalem forces us to confront the fact that we are responsible if moral negligence allows us to lead comfortable lives enabled by the suffering of others.  And hopefully by understanding our moral responsibility, we can become better people.

Everybody Hates Los Angeles

Every December, I head up to Northern California to see family and friends, and mostly I love catching up with everybody, and feasting, and singing, and other holiday things, but I also have to brace myself for a particular type of interaction: “How’s Los Angeles?”  “You haven’t become an LA native, have you?”  “They haven’t converted you, have they?”  And then when they notice some small aspect of Southern Californian culture in my mannerisms, there’s uproar.  One time, I made the mistake of calling Interstate 880 “the 880,” and I didn’t hear the end of it for days.  Don’t get me wrong, I don’t mind being teased.  Teasing is an integral part of how I relate to other people.  But in this case, there’s something much more sinister under the surface.

If you were listening in on my holiday conversations, you might think that Northern California and Southern California have a sort of friendly rivalry—we poke fun at aspects of each other’s cultures; we argue about which part of California is better; Northern Californians call Southern Californians mainstream, and Southern Californians call Northern Californians dirty hippies.  This rivalry, however, is one directional.  Southern Californians are in general completely oblivious to its existence.  And what’s more, it’s not actually friendly.  I grew up in Northern California; adults know how to hide their bigotry under a facade of humor, but kids are more sincere. Northern Californians actually just hate Los Angeles.

People in Los Angeles are shallow.  They’re vain.  They’re materialistic.  And it’s not just Northern Californians who feel this way.  A friend in New York told me that his peers sometimes talk about how fake Angelenos are.  And I’ve seen the same sentiment echoed in online spaces.  Some of my favorite internet personalities have mentioned their disdain for Los Angeles culture, in an offhand sort of way, as if LA’s shallowness were an obvious fact that doesn’t merit discussion.

So, here’s a question: None of these people have spent much time in Southern California.  Why do they think they know what Angeleno culture is?

I. Why do people think they know what they think they know about Los Angeles?

Look.  I’m not innocent here.  As I said, I grew up in Northern California.  I grew up hating on LA.  I moved to Southern California for Caltech, not for Los Angeles, and I remained skeptical of Los Angeles as a city for an embarrassingly long time.

So why did I think I knew what I thought I knew about Los Angeles?

Hollywood.

A huge number of movies and TV shows take place in or near Los Angeles, and these movies tend to paint the city in certain inaccurate ways.

II. How do movies depict Los Angeles?

II.1 Los Angeles equals Hollywood

First, and least nefarious, a lot of the media that takes place in Los Angeles is about people who work in (or who want to work in) entertainment.  Of course, this tendency is only natural— movies and TV are, at least in part, a tool for self expression by the creators, so the characters will inevitably reflect the real people who writers and directors interact with on a day to day basis.  There’s nothing inherently wrong with depicting actors and their struggles; I like well-crafted films about the interpersonal drama that arises when strong personalities get together to put on a show (Birdman is amazing) (I also like novels with lonely nerdy protagonists who find solace in books); but it’s not an accurate depiction of Los Angeles.

In reality, only about 5% of private sector workers in Los Angeles work in the entertainment industry (and most of those jobs are less glamorous roles less likely to be depicted in the movies), compared to, for example, the roughly 10% of the New York City workforce that works in finance.  The entertainment industry is economically important in Los Angeles, and of course it’s culturally relevant, just like finance is culturally relevant to New York, but it’s far from culturally central.

Thinking back, before moving here I must have known (or at least I would have realized if I thought about it) that entertainment couldn’t be all that central to Los Angeles culture: I had seen movies that showed footage of the Los Angeles urban sprawl.  I knew that Los Angeles was a big metropolis, and that the entertainment industry must be small by comparison.

Los Angeles urban sprawl viewed from behind the Hollywood sign.

But that kind of rational assessment of the relative size of city and entertainment industry isn’t how people interact with stories.  I saw Los Angeles depicted in media, and the Los Angeles I saw was full of movie-stars and aspiring actors and failed actors and washed up entertainers.  And it was only natural that this bled into my conception of what Los Angeles was.  Los Angeles, I learned, was full of people who are constantly competing for clout and fame.

II.2 Beauty and related concepts

People like to look at pretty things, so it only makes sense that actors are significantly more good-looking than the average person.  And in addition to actors’ generally symmetrical faces and their facial features that correspond to current beauty standards, studios also employ professional stylists and make up artists, so that the people we see on our screens are always clean, well-groomed, and a picture of perfect health.

And this prettiness and cleanliness extends beyond the characters.  Interior spaces are often very attractive, and usually, if not tidy, at least physically very clean.  And again, this makes sense.  People prefer looking at attractive and clean spaces.  I don’t watch TV for a perfect recreation of reality.  I don’t need to see the grime that accumulates in actual living spaces.  I’m happy watching characters interact in an immaculate kitchen.

A stillframe from Buffy the Vampire Slayer (S2E11, “Ted”).  Look at that stove top.  It’s perfectly white.  Spotless.  And they’ve just finished cooking on it.

And this beauty and cleanliness can also extend into the personal lives of the characters.  Of course many films and TV shows tackle very serious themes, but the most widely marketable entertainment is less challenging.  People like to watch TV to relax.  We don’t always want our shows to address financial problems, or trauma, or mundane health issues.  Sometimes we want intrigue and petty interpersonal drama.  We want a beautified, more exciting, and easier version of life.

There’s nothing inherently wrong with this kind of beautification.  I don’t always want to interact with art that challenges and deeply moves me; that would be exhausting.  But when a huge amount of beautified media depicts a particular city, it can rub off on our impressions of that city.  For a more naive (less cynical) person, this kind of media might create the sense that Los Angeles is a place where no one has real problems, and where everything is easy.  I was a Smart Critical Thinker, so I knew that such perfect places couldn’t exist—instead of leaving me with a positive impression, this beautification created a sense of artifice (of course, it’s fiction; it’s literally artificial), which in turn imbued my impression of Angeleno culture with that same sense of artifice: People in Los Angeles are fake, I learned.  They care deeply about surface level appearances and avoid real emotional expression.

II.3 Actual bigotry

Los Angeles is an ethnically diverse city, but the people depicted in movies and on TV tend to be white.  Los Angeles has a large number of Latinx communities, some of which have been in the area since before California was part of the United States, but you would never get a sense of that history from TV.  Los Angeles also has a huge Armenian American community (as of the 1990 census, Los Angeles was home to the largest population of Armenians anywhere in the world outside of Armenia), and this Armenian American population is very visible to anyone who lives in Los Angeles, but again you wouldn’t know about it if you only learned about Los Angeles from the movies.

(Film critic Thom Andersen in his 2003 video essay Los Angeles Plays Itself points out that many movies ostensibly take place in specific Los Angeles majority-minority neighborhoods, but these movies tend to be filmed in upper-middle class majority white neighborhoods, and tend to have a mostly white cast.  I can’t personally speak to how widespread this phenomenon is, and I don’t know to what extent it has continued into 21st century media, but I can say that I know what it’s like to find a piece of media that depicts a boring white-bread community with your diverse hometown’s name slapped onto it, and it’s infuriating.)

One thing that the entertainment industry gets right about Los Angeles (and maybe it’s a self-fulfilling prophecy) is that the city is a destination for disaffected youth who for whatever reason feel the need to leave home and seek acceptance or fame or artistic fulfillment in the City of Angels.  What the movies generally don’t show you is that these youths are very often either gay or transgender (or both).  Los Angeles is not as gay as San Francisco, but it has large gay and transgender populations, and these populations are visible to anyone actually interacting with the city.  The queerness of Los Angeles should not be surprising if we stop to think about it—of course gay and transgender teenagers are much more likely to have serious problems in their home-lives that push them to seek community elsewhere—but for me, entering Los Angeles with preconceptions shaped by the media, it was unexpected.

This white-washing and straight-washing of Los Angeles is what gets me actually upset about the media’s depiction of the city.  I can roll my eyes at people who say that Angelenos are fake or clout-obsessed, but the expectation that Angelenos are homogeneous and boring—the erasure of diverse and vibrant communities—is actually harmful.  I don’t really want to get into a discussion of exactly why representation matters; let’s just accept that representation matters.  Representation is a pathway to political empowerment and change, and the real people of Los Angeles need that kind of empowerment just as much as ethnic, gender, and sexual minorities need it in any other city.

III. What is Los Angeles actually like?

Look, I can’t tell you what Los Angeles is like.  A full description of a multifaceted city is beyond the scope of a single blog post.  And I’m also not really an expert on Los Angeles.  I live here, but I still feel culturally like an outsider.  I don’t have deep roots in any Los Angeles communities. But here’s what I can say:

Even more so than in most cities, a single unified view of Los Angeles is necessarily short sighted.  There is no single city center that forms the cultural or commercial heart of the city.  But don’t mistake this lack of center for a lack of community. The sprawled nature of the city means that it’s more a collection of overgrown overlapping towns than a single unified entity, and accordingly Los Angeles is a network of smaller cultural hubs.

So with this limitation in mind, it’s not going to be possible for any one piece of media to definitively capture Los Angeles as a whole.  I’m sure that there are lots of films that attempt to depict some of the many unseen faces of Los Angeles.  I’m not a film buff, so I can’t give much of a list, but I will say that Tangerine, which tells the stories of two transgender prostitutes and an Armenian taxi driver, feels like a much more accurate portrait of the city that I personally interact with than anything else I’ve seen.  

Top Books of 2019

The best books I read this year, in no particular order:

The Amazing Adventures of Kavalier & Clay by Michael Chabon

This novel tells the story of two Jewish cousins who become successful comic book authors in the 1940s and ’50s.  It explores the antifascist origins of the superhero genre, as well as questions surrounding the role of art in society.  I loved this book—it left me with a new appreciation for superheroes and comics in general—and it does more than just explore comics and comic history.  It also moved me on a personal level over and over, reducing me to tears at several points in the story.

Hell and Good Company: The Spanish Civil War and the World it Made by Richard Rhodes

In April this year, after watching Pan’s Labyrinth, I started to feel embarrassed about how little I knew about the Spanish Civil War.  Reading this book was an attempt to rectify my ignorance.  On some level this book was unsatisfying: it left me with only slightly more knowledge about what exactly happened in Spain from 1936 to 1939, because it’s not a military history book.  Instead, it paints a human history of the war, detailing the personal experiences and motivations of the soldiers, journalists, nurses, and others who were there.  I have always had strong pacifist instincts, and while I understood intellectually that war might sometimes be justified, on an emotional level I felt that war was always wrong.  This book made me feel that sometimes violent action is morally good.

A Disability History of the United States by Kim E. Nielsen

Before reading the title of this book, I didn’t even realize that disability was a historical lens.  This book, in addition to tracking the rights of disabled people through US history, helped me understand how cultural values dictate how we understand our strengths and weaknesses.  I heard somewhere that “History is the most revolutionary science because it forces us to understand that things could be different.”  For me, this book was an illustration of the truth of this quote.  It taught me that there are many other ways that people can interact with their bodies, and it helped me with the struggle of learning to accept the inevitable decline that comes with getting older.

The Red and the Black by Stendhal

It’s hard to pin down what this book is about.  The story follows a young peasant named Julien Sorel living in 1820s France as he pursues love and wealth and honor.  It’s an exploration of the human psyche, and the gulf that exists between what we expect to make us happy and what actually makes us happy.  Or it’s a critique of 1820s French politics, and the ways that the new social order corrupts personal endeavor into serving the ends of powerful people.  Or it’s an examination of vanity, and the ways that concern about how we are perceived can consume and destroy us.  Or it’s an explication of the narrowness of the human mind, and the way that our personal sociological theories inform interpersonal behavior and dictate relationships.

The Red and the Black is one of the most intellectually stimulating novels I’ve read in a long time.  It gave me new tools to think about love, politics, friendship, self-worth, happiness, economics, careers, and many other facets of the human experience.

My pastiche of The Red and the Black.

Manufacturing Consent: The Political Economy of the Mass Media by Edward S. Herman and Noam Chomsky

I’ve been a fan of Noam Chomsky for a long time, but I didn’t get around to reading what is arguably his most important work until fairly recently.  This book proposes a propaganda model for the US mass media, outlining the ways that ostensibly independent news sources are beholden to powerful entities like corporations and the US government.  Although the United States government rules with the consent of the voters, the US is not a true democracy, Chomsky argues, because that consent can be manufactured by media control.  Elections are only free insofar as the press is free, and in the United States the press is not free.

Part of why I didn’t read this book until recently was that I was already familiar with Chomsky’s ideas.  I assumed that Manufacturing Consent would be redundant with what I already knew.  I was pleasantly surprised.  This book is really good.  Chomsky is trained in the sciences (he’s one of the founders of cognitive science), and he and Herman attempt to explore their propaganda model with scientific methods and rigor.  The result is that the book provides not just a lens through which to understand the media (which is what I generally expect from books of this nature), but a concrete sense of where and how much the lens matters.

My post on Bolivia, which was informed by Manufacturing Consent.

Read Manufacturing Consent for free here.

Honorable Mention: The Basque History of the World: The Story of a Nation by Mark Kurlansky

The Basques are one of the oldest peoples of Europe, living with a continuous identity, culture, and language—in the Pyrenees on the border of modern-day France and Spain—since well before the Roman Empire.  This book explores the question of how a group can manage to survive for so long.  It offers insight on how to preserve tradition and heritage while also being forward-thinking and progressive.  And it challenged my understanding of nationalism and ethnic identity.

Read The Basque History of the World for free here.

Posting “every day” Conclusions

In November, I decided to experiment with posting more often.  I said “every day,” but the spirit of the experiment was more just to write more often, and to post things when they weren’t as close to what I consider done: Prioritize quantity over quality.

What did I learn?

First of all, of course, posting more often meant lower average post quality.  The quality of the prose and writing structure suffered some, but less than I expected.  The topics of the posts were more eclectic, as posting more often meant forcing myself to push through ideas I wasn’t really sure were worth writing about.  And the ideas were less thought through.  If I’m not pressed for time, I’ll write something up, read it a few days later, and sometimes essentially rewrite it to incorporate the insight I gained from writing the first draft.  Instead, I was posting these first drafts without the extra thought.  This meant that my posts were less likely to be insightful, and more likely to be analytically flawed or just factually wrong.

But these downsides had dual upsides.  One important lesson I learned from posting “every day” was that I’m not actually very good at knowing which posts will be the most worthwhile before I write them.  One of the posts I’m most proud of, about colonialism and fire policy, was written only because I wanted to force myself to write.  I was casting around for a topic, figured fire policy was easy because I already knew quite a bit, and only realized halfway through writing it that what I was saying would probably be completely new and insightful for a lot of people.

So ultimately, I guess the actionable lesson here is that I should be taking more risks and expending less effort per topic.  If I have an idea, try writing it up, and see where it takes me.  If it ends up being a one-off post that goes nowhere, that’s okay.  If I end up building on it, that’s awesome.  And I need to be less afraid of being wrong.  Of course accuracy is important, but at some point, the marginal accuracy increase isn’t worth the marginal cost of deliberating longer.

Of course, this kind of a change will mean a lower average post quality, but hopefully it will also mean a greater number of posts that surpass the threshold that I would consider “good.”

 

Marketing Shapes How We Interact With Our Bodies

Health is one of those nebulous concepts that seems straightforward and obvious, but then on closer investigation is very difficult to pin down.  Of course, some health judgements are easy, but many aren’t possible to make without aesthetic judgements that are person- or culture-specific.  How important is physical capability?  How important is longevity?  Are athletes (people who are at higher risk for heart disease) healthier or less healthy than non-athletes?  Are people who use wheelchairs inherently less healthy than people who can walk?  How important is beauty?  Is horrible acne a health problem?  Is bad body odor (in the absence of other symptoms) a health problem?  What about happiness?  Is happiness a part of health?  If so, how do we define emotional well-being?  Is someone who experiences a lot of joy, often in inappropriate situations, healthier than someone who experiences much less joy?

I’m not interested in answering any of these questions right now.  I’m just trying to demonstrate that health is subjective: Health depends on cultural values and aesthetics—we can’t have a discussion of health without also having a discussion of what aspects of the human experience are valuable.  So, where do these values and aesthetics come from?  Well, um. A lot of them come from powerful people who want to make money in the health industry.

To some extent, advertising is about sharing information, so that people who would want a product—if only they knew about it—will know that it exists, will want it, and will buy it.  But people don’t have immutable desires, so advertising is also about shaping what people want.  If you can shape the public’s utility functions, then you can make people want to buy whatever you have to sell. In the health industry, shaping utility functions means manipulating the public conception of what it means to be well.

A couple of examples:

I. Herpes

Herpes is extremely common.  Up to 90% of the adult population has some form of herpes.  It’s also extremely stigmatized.  One 2007 poll placed it second only to HIV as the most stigmatized STI.  Herpes is also mostly harmless, in most people causing only mild itchiness at times when the immune system is down, like when we have the common cold (hence the term “cold sores”).  In rare cases, the virus has been inconclusively linked to much more serious illnesses.

Where does the herpes stigma come from?  Herpes stigma arose in association with disease awareness campaigns conducted by Burroughs Wellcome, a pharmaceutical company that had developed an anti-viral herpes treatment.  Within a decade, herpes went from itchiness to disease.

This shift in public perception is based neither on facts nor on misinformation, but instead on aesthetic preferences.  Is communicable itchiness a health problem, or an aspect of the human condition?  I don’t think there is a correct answer to that question.  It’s not a question of essential health, but instead a cultural question about how we interact with our bodies.

II. Depression

Prozac was the first SSRI to hit the market, and it made a huge difference for a lot of people who were suffering from depression.  For Prozac’s manufacturer, Eli Lilly, the mission wasn’t as straightforward as connecting patients who already had a depression diagnosis with a new medication.  They also wanted new diagnoses.  They wanted to create a depression drug market.

Prozac was released with unprecedented “revolutionary” levels of dedicated marketing.  Eli Lilly needed to communicate that depression was something that normal people could experience—decreasing the stigma of psychiatric treatment and therefore expanding their market.  And they needed to spread the idea that chronic sadness was something that could and should be treated as a biochemical problem.  By adjusting the levels of neurotransmitters in our brains, we can become healthier and happier—and investors in Eli Lilly can become wealthier.

Arguably, the narrative that the Prozac marketing team pushed was a very healthy narrative for society to receive.  Here’s a New York Times opinion piece that argues exactly that.

Not only was it suddenly O.K. to be taking an antidepressant, for many it became a badge of honor. Its marketing let everyone know, “hey, depression isn’t a personal failing or due to poor morals or bad parenting. It’s a biochemical thing that a medication can help with.”

More recently, I’ve seen a lot of pushback against the depression-as-illness narrative.  Mood issues definitely can be caused by biochemical imbalances, but when we treat chronic sadness as necessarily medical, we sublimate community issues like oppression or loneliness or economic strife into a collection of maladies that afflict separate individuals.  And we treat these community issues with individual drug prescriptions, instead of with social change.

As with herpes, I don’t think there’s a fundamentally correct answer to the question “Is chronic sadness a health problem?”  Some people really benefit from thinking about depression under the health-problem umbrella.  Others view chronic sadness as a healthy response to a bad situation.  Of course the answer will be largely dependent on the specifics of the sufferer and the sufferer’s situation.  But it’s also necessarily a question of a culturally determined ideal.  How much sadness does a healthy person experience?  What is health?  These are exactly the types of nebulous questions that marketing is good at targeting.  Prozac shifted the public conception of a healthy person toward a more joyful person.

III. Conclusion?

Marketing shapes how we conceive of our own bodies.  I don’t really have a coherent argument that this is a bad thing, but it makes me uncomfortable.  A conception of health that is shaped around enriching pharmaceutical companies probably isn’t a good conception of health.  Right?  I don’t know.

 

American Meritocracy is a Sham (Higher Education)

I.

Intergenerational socioeconomic mobility in the US is very low relative to our peer countries.  Why is that?  Is it because America is a real meritocracy where the poor stay poor due to their inferior moral character?  Or is it because the American economic system, while it pretends to be meritocratic, in fact systematically favors the children of rich parents over the children of poor parents? (Hint: it’s the latter.)

There’s a lot to be said about how poverty causes malnutrition and stress in children, which make learning more difficult.  And there’s a lot to be said about how public schools in poorer neighborhoods often receive less funding, again making learning more difficult.  The system makes it harder for children of poor parents to achieve merit.  But that’s not what I want to talk about.  I want to talk about the fact that even when children of poor parents manage to demonstrate merit in spite of the difficulties, the deck is still stacked against them.

Higher education is a major gatekeeper for higher-paying professions.  The Pew Research Center found that a college degree is “one of the most effective assets available for experiencing upward economic mobility” (they also found that higher education protects against downward mobility).  Therefore, meritocratic entrance to higher education is important for socioeconomic meritocracy in America as a whole.

So, is entrance to higher education meritocratic?

No.  The very rich, the rich, and the upper middle class all have a significant advantage over the rest of America in college admissions.

II.

Suppose you’re a parent and you want to ensure that your child gets into the university of your choice. What are your options?

Option 1: Legal bribery

If you’re very, very rich, you can bribe your way in.  For many billionaires, this strategy consists of a one-time donation to a single school, but it can also come in the form of repeated million-dollar donations to a wide array of elite schools.  This type of bribery is completely legal, and of course, tax deductible.

Option 2: Illegal bribery

If you’re not rich enough to bribe your school of choice with a “charitable donation,” maybe you can bribe an individual.  This year saw the largest case of college admissions fraud ever uncovered, with more than 50 indicted co-conspirators, and allegedly 750 total families.  Of course college admissions fraud was not unheard of before; what made the 2019 case special was its broad scope.  There’s a constant trickle of legal cases involving bribes paid by parents to coaches or admissions officers without a massive conspiracy or middleman.  Here’s one from 2018, and one from 2017.

Option 3: Legal bribery again

Maybe you’re not wealthy enough to buy a building for a few million dollars, and you’re not wealthy enough (or desperate enough) to pay a hundred thousand dollars to convince a coach or admissions officer to get your child into college.  You can still bribe your way in.  In fact, if you’re upper middle class, you’re bribing schools whether you want to or not.

Over the years, college admissions officers have repeatedly come forward to blow the whistle on the classist nature of college admissions.  Colleges, although they often purport to be need-blind, routinely accept lower achieving students from wealthy backgrounds because these students will pay full tuition.  One admissions officer in a recent New York Times article on the subject says “I call them the C.F.O. Specials, because they appeal to the college’s chief financial officer. They are challenging for the faculty, but they bring in a lot of revenue.”

In essence, rather than uphold the meritocratic values they supposedly stand for, colleges accept bribes in the form of tuition.  Parents communicate their ability to pay via markers like geographic location, choice of high school (public or private), their child’s SAT score (which can be drastically improved with an expensive tutor), and their child’s participation in an elite sport.

III.

Of course, this kind of bribery isn’t good for our schools.  For example, the aforementioned New York Times article brings up the fact that unprepared affluent students have a negative effect on faculty morale.  But ironically, these kinds of admission practices actually make the school look better to prospective students.

The US News College Rankings make a huge difference to prospective students—for prestigious universities, moving up the ranking brings in more and better applicants—and the US News Rankings favor schools with wealthier students.  School wealth directly affects the US News Ranking (and of course, schools are wealthier if they accept bribes).  Faculty salary (presumably largely a function of the school’s financial situation), also plays a role in the ranking.  And, finally, student standardized test scores (again, wealthy students are far more likely to have high SAT scores due to access to expensive tutoring) account for more than three quarters of the “student excellence” part of the US News rankings formula.

IV.

So, given the role that higher education plays in eventual socioeconomic status, I think we can pretty definitively say that America is not a meritocracy.  Meritocracy is a lie that keeps the upper class and the upper middle class in power.

So, my basic call to action is.  Like.  Don’t pretend that the system is meritocratic.  In our personal lives, we shouldn’t assume that people who went to prestigious universities are better or more intelligent than people who didn’t.  Because college admissions are demonstrably un-meritocratic.

Beyond the personal, is there hope?  Can we reform the system into something more meritocratic?  Probably yes.  As I discussed in a previous post, most of America’s peer nations have higher economic mobility, which would seem to indicate that a different but similar system can work.  More on this later, probably.

Is a meritocratic system desirable?  More on this later, probably, too.