Top Books of 2019

The best books I read this year, in no particular order:


The Amazing Adventures of Kavalier & Clay by Michael Chabon

This novel tells the story of two Jewish cousins who become successful comic book creators in the 1940s and ’50s.  It explores the antifascist origins of the superhero genre, as well as questions about the role of art in society.  I loved this book—it left me with a new appreciation for superheroes and comics in general—but it does more than explore comics and comic history.  It also moved me on a personal level over and over, reducing me to tears at several points in the story.


Hell and Good Company: The Spanish Civil War and the World It Made by Richard Rhodes

In April this year, after watching Pan’s Labyrinth, I started to feel embarrassed about how little I knew about the Spanish Civil War.  Reading this book was an attempt to rectify my ignorance.  On some level the book was unsatisfying: it left me with only slightly more knowledge about what exactly happened in Spain from 1936 to 1939, because it isn’t a military history.  Instead, it paints a human history of the war, detailing the personal experiences and motivations of the soldiers, journalists, nurses, and others who were there.  I have always had strong pacifist instincts, and while I understood intellectually that war might sometimes be justified, on an emotional level I felt that war was always wrong.  This book made me feel that sometimes violent action is morally good.


A Disability History of the United States by Kim E. Nielsen

Until I read this book’s title, I hadn’t even realized that disability could be a historical lens.  In addition to tracking the rights of disabled people through US history, the book helped me understand how cultural values dictate how we understand our strengths and weaknesses.  I heard somewhere that “History is the most revolutionary science because it forces us to understand that things could be different.”  For me, this book was an illustration of the truth of that quote.  It taught me that there are many other ways people can interact with their bodies, and it helped me in the struggle to accept the inevitable decline that comes with getting older.


The Red and the Black by Stendhal

It’s hard to pin down what this book is about.  The story follows a young peasant named Julien Sorel living in 1820s France as he pursues love and wealth and honor.  It’s an exploration of the human psyche, and the gulf that exists between what we expect to make us happy and what actually makes us happy.  Or it’s a critique of 1820s French politics, and the ways that the new social order corrupts personal endeavor into serving the ends of powerful people.  Or it’s an examination of vanity, and the ways that concern about how we are perceived can consume and destroy us.  Or it’s an explication of the narrowness of the human mind, and the way that our personal sociological theories inform interpersonal behavior and dictate relationships.

The Red and the Black is one of the most intellectually stimulating novels I’ve read in a long time.  It gave me new tools to think about love, politics, friendship, self-worth, happiness, economics, careers, and many other facets of the human experience.

My pastiche of The Red and the Black.


Manufacturing Consent: The Political Economy of the Mass Media by Edward S. Herman and Noam Chomsky

I’ve been a fan of Noam Chomsky for a long time, but I didn’t get around to reading what is arguably his most important work until fairly recently.  This book proposes a propaganda model for the US mass media, outlining the ways that ostensibly independent news sources are beholden to powerful entities like corporations and the US government.  Although the United States government rules with the consent of the voters, the US is not a true democracy, Chomsky argues, because that consent can be manufactured by media control.  Elections are only free insofar as the press is free, and in the United States the press is not free.

Part of why I didn’t read this book until recently was that I was already familiar with Chomsky’s ideas: I assumed that Manufacturing Consent would be redundant with what I already knew.  I was pleasantly surprised.  This book is really good.  Chomsky is trained in the sciences (he’s one of the founders of cognitive science), and he and Herman attempt to explore their propaganda model with scientific methods and rigor.  The result is that the book provides not just a lens through which to understand the media (which is what I generally expect from books of this nature), but a concrete sense of where and how much the lens matters.

My post on Bolivia, which was informed by Manufacturing Consent.

Read Manufacturing Consent for free here.


Honorable Mention: The Basque History of the World: The Story of a Nation by Mark Kurlansky

The Basques are one of the oldest peoples of Europe, living with a continuous identity, culture, and language—in the Pyrenees on the border of modern-day France and Spain—since well before the Roman Empire.  This book explores the question of how a group can manage to survive for so long.  It offers insight on how to preserve tradition and heritage while also being forward-thinking and progressive.  And it challenged my understanding of nationalism and ethnic identity.

Read The Basque History of the World for free here.

Posting “every day” Conclusions

In November, I decided to experiment with posting more often.  I said “every day,” but the spirit of the experiment was simply to write more often, and to post things even when they weren’t as close to what I consider done: to prioritize quantity over quality.

What did I learn?

First of all, of course, posting more often meant lower average post quality.  The quality of the prose and structure suffered some, but less than I expected.  The topics of the posts were more eclectic, since posting more often meant forcing myself to push through ideas I wasn’t sure were worth writing about.  And the ideas were less thought through.  When I’m not pressed for time, I’ll write something up, read it a few days later, and often essentially rewrite it to incorporate the insight I gained from writing the first draft.  Instead, I was posting these first drafts without the extra thought.  That meant my posts were less likely to be insightful, and more likely to be analytically flawed or just factually wrong.

But these downsides came with corresponding upsides.  One important lesson I learned from posting “every day” was that I’m not actually very good at knowing which posts will be the most worthwhile before I write them.  One of the posts I’m most proud of, about colonialism and fire policy, was written only because I wanted to force myself to write.  I was casting around for a topic, figured fire policy was easy because I already knew quite a bit about it, and only realized halfway through writing that what I was saying would probably be completely new and insightful for a lot of people.

So ultimately, I guess the actionable lesson is that I should be taking more risks and expending less effort per topic.  If I have an idea, I should try writing it up and see where it takes me.  If it ends up being a one-off post that goes nowhere, that’s okay.  If I end up building on it, that’s awesome.  And I need to be less afraid of being wrong.  Of course accuracy is important, but at some point the marginal accuracy increase isn’t worth the marginal cost of deliberating longer.

Of course, this kind of a change will mean a lower average post quality, but hopefully it will also mean a greater number of posts that surpass the threshold that I would consider “good.”

 

Marketing Shapes How We Interact With Our Bodies

Health is one of those nebulous concepts that seems straightforward and obvious, but on closer investigation is very difficult to pin down.  Of course, some health judgements are easy, but many aren’t possible to make without aesthetic judgements that are person- or culture-specific.  How important is physical capability?  How important is longevity?  Are athletes (who are at higher risk for heart disease) healthier or less healthy than non-athletes?  Are people who use wheelchairs inherently less healthy than people who can walk?  How important is beauty?  Is horrible acne a health problem?  Is bad body odor (in the absence of other symptoms) a health problem?  What about happiness?  Is happiness a part of health?  If so, how do we define emotional well-being?  Is someone who experiences a lot of joy, often in inappropriate situations, healthier than someone who experiences much less joy?

I’m not interested in answering any of these questions right now.  I’m just trying to demonstrate that health is subjective: Health depends on cultural values and aesthetics—we can’t have a discussion of health without also having a discussion of what aspects of the human experience are valuable.  So, where do these values and aesthetics come from?  Well, um. A lot of them come from powerful people who want to make money in the health industry.

To some extent, advertising is about sharing information, so that people who would want a product—if only they knew about it—will know that it exists, will want it, and will buy it.  But people don’t have immutable desires, so advertising is also about shaping what people want.  If you can shape the public’s utility functions, then you can make people want to buy whatever you have to sell. In the health industry, shaping utility functions means manipulating the public conception of what it means to be well.

A couple of examples:

I. Herpes

Herpes is extremely common: up to 90% of the adult population has some form of herpes.  It’s also extremely stigmatized; one 2007 poll placed it second only to HIV as the most stigmatized STI.  And herpes is mostly harmless, in most people causing nothing more than mild itchiness when the immune system is run down, such as during a common cold (hence the term “cold sores”).  In rare cases, the virus has been inconclusively linked to more serious illnesses.

Where does the herpes stigma come from?  Herpes stigma arose in association with disease awareness campaigns conducted by Burroughs Wellcome, a pharmaceutical company that had developed an anti-viral herpes treatment.  Within a decade, herpes went from itchiness to disease.

This shift in public perception is based neither on facts nor on misinformation, but instead on aesthetic preferences.  Is communicable itchiness a health problem, or an aspect of the human condition?  I don’t think there is a correct answer to that question.  It’s not a question of essential health, but instead a cultural question about how we interact with our bodies.

II. Depression

Prozac was the first SSRI to hit the market, and it made a huge difference for a lot of people who were suffering from depression.  But for Prozac’s manufacturer, Eli Lilly, the mission wasn’t as straightforward as connecting patients who already had a depression diagnosis with a new medication.  They also wanted new diagnoses.  They wanted to create a depression drug market.

Prozac was released with unprecedented “revolutionary” levels of dedicated marketing.  Eli Lilly needed to communicate that depression was something that normal people could experience—decreasing the stigma of psychiatric treatment and therefore expanding their market.  And they needed to spread the idea that chronic sadness was something that could and should be treated as a biochemical problem.  By adjusting the levels of neurotransmitters in our brains, we can become healthier and happier—and investors in Eli Lilly can become wealthier.

Arguably, the narrative that the Prozac marketing team pushed was a very healthy narrative for society to receive.  Here’s a New York Times opinion piece that argues exactly that.

Not only was it suddenly O.K. to be taking an antidepressant, for many it became a badge of honor. Its marketing let everyone know, “hey, depression isn’t a personal failing or due to poor morals or bad parenting. It’s a biochemical thing that a medication can help with.”

More recently, I’ve seen a lot of push back against the depression-as-illness narrative.  Mood issues definitely can be caused by biochemical imbalances, but when we treat chronic sadness as necessarily medical, we sublimate community issues like oppression or loneliness or economic strife into a collection of maladies that afflict separate individuals.  And we treat these community issues with individual drug prescriptions, instead of with social change.

As with herpes, I don’t think there’s a fundamentally correct answer to the question “Is chronic sadness a health problem?”  Some people really benefit from thinking about depression under the health-problem umbrella.  Others view chronic sadness as a healthy response to a bad situation.  Of course the answer will be largely dependent on the specifics of the sufferer and the sufferer’s situation.  But it’s also necessarily a question of a culturally determined ideal.  How much sadness does a healthy person experience?  What is health?  These are exactly the types of nebulous questions that marketing is good at targeting.  Prozac shifted the public conception of a healthy person toward a more joyful person.

III. Conclusion?

Marketing shapes how we conceive of our own bodies.  I don’t really have a coherent argument that this is a bad thing, but it makes me uncomfortable.  A conception of health that is shaped around enriching pharmaceutical companies probably isn’t a good conception of health.  Right?  I don’t know.

 

American Meritocracy is a Sham (Higher Education)

I.

Intergenerational socioeconomic mobility in the US is very low relative to our peer countries.  Why is that?  Is it because America is a real meritocracy where the poor stay poor due to their inferior moral character?  Or is it because the American economic system, while it pretends to be meritocratic, in fact systematically favors the children of rich parents over the children of poor parents? (Hint: it’s the latter.)

There’s a lot to be said about how poverty causes malnutrition and stress in children, which make learning more difficult.  And there’s a lot to be said about how public schools in poorer neighborhoods often receive less funding, again making learning more difficult.  The system makes it harder for children of poor parents to achieve merit.  But that’s not what I want to talk about.  I want to talk about the fact that even when children of poor parents manage to demonstrate merit in spite of the difficulties, the deck is still stacked against them.

Higher education is a major gatekeeper for higher-paying professions.  The Pew Research Center found that a college degree is “one of the most effective assets available for experiencing upward economic mobility” (they also found that higher education protects against downward mobility).  Therefore, meritocratic entrance to higher education is important for socioeconomic meritocracy in America as a whole.

So, is entrance to higher education meritocratic?

No.  The very rich, the rich, and the upper middle class all have a significant advantage over the rest of America in college admissions.

II.

Suppose you’re a parent and you want to ensure that your child gets into the university of your choice. What are your options?

Option 1: Legal bribery

If you’re very, very rich, you can bribe your way in.  For many billionaires, this strategy consists of a one-time donation to a single school, but it can also come in the form of repeated million-dollar donations to a wide array of elite schools.  This type of bribery is completely legal and, of course, tax-deductible.

Option 2: Illegal bribery

If you’re not rich enough to bribe your school of choice with a “charitable donation,” maybe you can bribe an individual.  This year saw the largest case of college admissions fraud ever uncovered, with more than 50 indicted co-conspirators and allegedly 750 families involved in total.  Of course, college admissions fraud was not unheard of before; what made the 2019 case special was its broad scope.  There’s a constant trickle of legal cases involving bribes paid by parents to coaches or admissions officers without a massive conspiracy or middleman.  Here’s one from 2018, and one from 2017.

Option 3: Legal bribery again

Maybe you’re not wealthy enough to buy a building for a few million dollars, and you’re not wealthy enough (or desperate enough) to pay a hundred thousand dollars to convince a coach or admissions officer to get your child into college.  You can still bribe your way in.  In fact, if you’re upper middle class, you’re bribing schools whether you want to or not.

Over the years, college admissions officers have repeatedly come forward to blow the whistle on the classist nature of college admissions.  Colleges, although they often purport to be need-blind, routinely accept lower achieving students from wealthy backgrounds because these students will pay full tuition.  One admissions officer in a recent New York Times article on the subject says “I call them the C.F.O. Specials, because they appeal to the college’s chief financial officer. They are challenging for the faculty, but they bring in a lot of revenue.”

In essence, rather than uphold the meritocratic values they supposedly stand for, colleges accept bribes in the form of tuition.  Parents communicate their ability to pay via markers like geographic location, choice of high school (public or private), their child’s SAT score (which can be drastically improved with an expensive tutor), and their child’s participation in an elite sport.

III.

Of course, this kind of bribery isn’t good for our schools.  For example, the aforementioned New York Times article brings up the fact that unprepared affluent students have a negative effect on faculty morale.  But ironically, these kinds of admission practices actually make the school look better to prospective students.

The US News College Rankings make a huge difference to prospective students—for prestigious universities, moving up the ranking brings in more and better applicants—and the US News Rankings favor schools with wealthier students.  School wealth directly affects the US News Ranking (and of course, schools are wealthier if they accept bribes).  Faculty salary (presumably largely a function of the school’s financial situation), also plays a role in the ranking.  And, finally, student standardized test scores (again, wealthy students are far more likely to have high SAT scores due to access to expensive tutoring) account for more than three quarters of the “student excellence” part of the US News rankings formula.

IV.

So, given the role that higher education plays in eventual socioeconomic status, I think we can pretty definitively say that America is not a meritocracy.  Meritocracy is a lie that keeps the upper class and the upper middle class in power.

So, my basic call to action is.  Like.  Don’t pretend that the system is meritocratic.  In our personal lives, we shouldn’t assume that people who went to prestigious universities are better or more intelligent than people who didn’t.  Because college admissions are demonstrably un-meritocratic.

Beyond the personal, is there hope?  Can we reform the system into something more meritocratic?  Probably yes.  As I discussed in a previous post, most of America’s peer nations have higher economic mobility, which would seem to indicate that a different but similar system can work.  More on this later, probably.

Is a meritocratic system desirable?  More on this later, probably, too.

The Military Coup in Bolivia (How the Media Lies to Us About South America)

In general, I want to avoid current events on this blog, but in this case, I feel I need to speak up.

(Here’s a YouTube video that I watched before doing my own research.  I can’t speak to BadEmpanada’s general credibility, but in this specific video he’s factually accurate, and I mostly agree with his analysis.)

I. What Happened

Bolivian presidential elections use a runoff system.  The president is elected directly by the people in up to two rounds.  In the first round, several candidates face off against each other.  If one candidate gets more than 50% of the vote, or gets at least 40% and beats the runner-up by more than 10 percentage points, then that candidate is the winner.  Otherwise, the top two candidates face off in a second round of voting.

The first round took place on October 20th.  With 83.8% of votes counted, Evo Morales (incumbent, socialist, opposed to US influence) led against the conservative former president Carlos Mesa, 45.3% to 38.2%.  At this point, the transmission from the vote counting was interrupted for 24 hours.  When the results were eventually released, Morales had 47.07% of the vote, compared to Mesa’s 36.51%.

This late rally in Morales’s vote count does not by itself rule out a fair election—Morales is particularly popular among rural Bolivians, whose votes are generally counted last.  However, the interruption in the vote transmission was highly irregular, and that interruption, along with the narrow margin of victory, led many (the Bolivian opposition, the OAS, the EU) to call for the second round to take place anyway.
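To make the significance of that margin concrete, here’s a minimal sketch (in Python, purely illustrative; the function name and structure are mine) of the first-round rule described above, applied to the two reported tallies.

    def first_round_outcome(leader_pct, runner_up_pct):
        # Outright first-round win: an absolute majority, or at least 40%
        # of the vote with a lead of more than 10 percentage points.
        if leader_pct > 50 or (leader_pct >= 40 and leader_pct - runner_up_pct > 10):
            return "outright win"
        return "second round required"

    # At 83.8% counted: 45.3 - 38.2 = 7.1-point lead.
    print(first_round_outcome(45.3, 38.2))    # second round required

    # Final count: 47.07 - 36.51 = 10.56-point lead.
    print(first_round_outcome(47.07, 36.51))  # outright win

Under the rule, the partial count pointed to a runoff, while the final count just cleared the 10-point bar, which is exactly why the 24-hour interruption and the late shift drew so much scrutiny.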

Protests against the election result (backed by the Bolivian police) started on October 21st and became increasingly violent over time.  An article in the Washington Post describes the chaos: “Protesters ransacked and burned the homes of senior members of Morales’s Movement for Socialism party and, in at least one instance, kidnapped a relative.”

Meanwhile, the Organization of American States (OAS) (edit: the OAS has historically opposed leftism in the Americas, and it receives much of its funding from the USA) audited the election and found “clear manipulations,” suggesting that the win by a more-than-10-point margin was illegitimate.  Early on Sunday (yesterday), Morales agreed to a new election that would comply with OAS guidelines, including an overhaul of the Supreme Electoral Tribunal, whose members would be chosen by parliament.  From the same Washington Post article:

Morales said early Sunday that he would accept the recommendation of the OAS to replace the electoral commission and hold new elections. He also suggested he might not stand for reelection in that vote.

“For the moment, candidacies should be secondary,” he said. “The priority is to pacify Bolivia, to go to a dialogue, and to agree on how to change the Supreme Electoral Tribunal working with the Legislative Assembly.”

Later that same day, the military withdrew its support for Morales, and Morales resigned, condemning his forced resignation as a coup and stating:

“We resign because I don’t want to see any more families attacked by instruction of Mesa and [opposition leader Luis Fernando] Camacho. This is not a betrayal to social movements. The fight continues. We are the people, and thanks to this political union, we have freed Bolivia. We leave this homeland freed.”

II. Analysis

(First, some concessions: The details of the October election are suspicious.  A fair re-election is desirable.  And whoever wins that election should be president.)

Mainstream English language news media has mostly refrained from calling this event a “coup,” opting for descriptions like “Bolivia’s Morales resigns amid scathing election report, rising protests” (the title of the Washington Post article where I got much of my information).  So why do I feel comfortable calling it a coup?

When I read through the media depictions of Morales’s resignation, I feel uncertain about who is in the right.  The details are complicated: Morales likely rigged the election, but the opposition has resorted to violence… if I knew nothing about the American media, I would probably be agnostic in my judgement of these events.

But I don’t know nothing about the American media.

II.i Convincing arguments aren’t always true

Imagine an AI that has been trained on human rhetoric.  The AI starts with some predetermined conclusion (chosen by an external agent), and then collects evidence and forms an argument in favor of that conclusion.  This AI is very good at what it does.  An overwhelming majority of people (say 99%) who hear the AI’s argument are convinced of the pre-chosen conclusion.

Now, imagine you’re being exposed to the AI’s argument.  What is the correct course of action?  Because the AI can easily convince a listener of any conclusion, you know that the persuasiveness of the AI’s argument has no bearing on whether the conclusion is actually correct.  The best course of action is to ignore the AI’s argument and stick with your prior.

(I think I got this thought experiment from Slate Star Codex, but I can’t find it.  Here’s a different SSC post discussing essentially the same thing.)

A more real-world corollary of this thought experiment: if a person or institution is known to argue convincingly but incorrectly in some particular direction (call that direction “right”), then a rational listener who has drawn an initial conclusion without accounting for the source’s bias should assume that the truth lies somewhere in the opposite direction (to the left) of that initial conclusion.
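One way to make this corollary concrete is a small Bayesian sketch (Python, with made-up numbers purely for illustration): once you know a source would produce the same convincing framing whether or not the underlying claim is true, its coverage should barely move your beliefs.

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        # Bayes' rule for a binary hypothesis.
        numerator = prior * p_evidence_if_true
        return numerator / (numerator + (1 - prior) * p_evidence_if_false)

    prior = 0.5  # belief before reading any coverage

    # Naive reading: treat the framing as honest reporting, i.e. much more
    # likely to appear if the claim is true than if it is false.
    naive = posterior(prior, 0.9, 0.2)       # ~0.82

    # Bias-aware reading: a source that would push this framing almost
    # regardless of the facts supplies very little evidence, so the
    # posterior stays near the prior.
    bias_aware = posterior(prior, 0.9, 0.8)  # ~0.53

    print(naive, bias_aware)

The specific numbers don’t matter; the point is that the size of the update shrinks as the source’s output becomes less sensitive to the truth.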

If I know that English language media has a particular bias, I need to counteract that bias before reaching my final conclusion.

II.ii What bias does the English language media have?

In the book Manufacturing Consent, Noam Chomsky and Edward S. Herman lay out a propaganda model of American mass media.  Chomsky later described this model succinctly with the analogy, “propaganda is to a democracy what the bludgeon is to a totalitarian state.”  In a country like the United States, whose government maintains its legitimacy through public approval, control is maintained through control of information.  Yes, the government rules with the consent of the people, but consent can be manufactured through propaganda.

Of course, the American media is not owned or operated by the state, and the US has possibly the strongest free speech protection in the world, which raises the question: If the media is independent, how does the government ensure that it publishes propaganda?

The answer is that the government and the media are beholden to the same larger corporate system.  Most news organizations are for-profit institutions that are either part of a large conglomerate or privately owned by a powerful individual (for example, the Washington Post is owned by Jeff Bezos).  Furthermore, most of the revenue for these mass media companies comes from advertising, meaning that advertisers hold sway over what kinds of ideas are presented.

These perverse incentives mean that the media is not beholden to the truth, or to the well-being of the public, but instead serves the interests of the powerful elite.  Journalists with opinions that are more convenient to the elites are more likely to be hired.  Stories that serve the elite are more likely to be headlined.  Dissident thinkers, on the other hand, are less likely to be hired, and stories that are inconvenient to the central narrative are relegated to the less-read sections of the newspapers.  When a dissident thinker achieves any sort of popularity, they receive flak from the rest of the media.

Through this media control, as well as through lobbying, special interests are able to influence elections, which ultimately means that they control the government.  The government, like the media, then, acts in accordance with these special interests. Therefore, the government and the media are mostly in alignment, and accordingly the “independent” media pumps out pro-government propaganda.

(Here’s an animated video that sensationally explains the model in greater detail over dramatic music.  It’s a mostly accurate representation of Chomsky’s ideas, although it’s incomplete, and doesn’t delve into any of the evidence that backs up the model.)

In Manufacturing Consent, Chomsky and Herman attempt to test the propaganda model using scientific tools.  The propaganda model predicts general alignment between media narratives and pre-existing government policy, while the more standard model of media as a check to government predicts the opposite.  Chomsky and Herman perform case studies, using quantitative and qualitative data, to assess how each model holds up.

One of their findings is that the American mainstream media overwhelmingly paints US client states as legitimate, and states opposed to US influence as illegitimate.  Chomsky and Herman compare the media treatment of the 1984 Nicaraguan general election, held by an anti-US government, with the treatment of the US-sponsored Latin American elections in El Salvador and Guatemala.  Their findings are summarized in the book’s introduction:

In El Salvador in the 1980s, the U.S. government sponsored several elections to demonstrate to the U.S. public that our intervention there was approved by the local population; whereas when Nicaragua held an election in 1984, the Reagan administration tried to discredit it to prevent legitimation of a government the administration was trying to overthrow. The mainstream media cooperated, finding the Salvadoran election a “step toward democracy” and the Nicaraguan election a “sham,” despite the fact that electoral conditions were far more compatible with an honest election in Nicaragua than in El Salvador. We demonstrate that the media applied a remarkable dual standard to the two elections in accord with the government’s propaganda needs.

In brief, the set of standards that the media uses to decide whether an election is free and fair depends on whether the election serves US interests.  For just one example, voter turnout in the US-sponsored elections of El Salvador and Guatemala was interpreted by the media as support for the election.  In Nicaragua, where turnout was also large, the media instead focused on coerced participation.  Of course, coerced participation should be a matter of concern in all elections.  Chomsky and Herman illustrate the hypocrisy:

…the elections in El Salvador were held under conditions of military rule where mass killings of “subversives” had taken place and a climate of fear had been established. If the government then sponsors an election and the local military authorities urge people to vote, a significant part of the vote should be assumed to be a result of built-in coercion. A propaganda model would anticipate that the U.S. mass media make no such assumption, and they did not.

In El Salvador in 1982 and 1984, voting was also required by law. The law stipulated that failure to vote was to be penalized by a specific monetary assessment, and it also called on local authorities to check out whether voters did in fact vote. This could be done because at the time of voting one’s identification card (ID, cédula) was stamped, acknowledging the casting of a vote. Anybody stopped by the army and police would have to show the ID card, which would quickly indicate whether the individual had carried out his or her patriotic duty. Just prior to the March 1982 election, Minister of Defense Garcia warned the population in the San Salvador newspapers that the failure to vote would be regarded as an act of treason…. Given the climate of fear, the voting requirement, the ID stamp, the army warning, and the army record in handling “traitors,” it is evident that the coercive element in generating turnout in Salvadoran elections has been large….

In Nicaragua, while registration was obligatory, voting was not required by law. Voter-registration cards presented on election day were retained by election officials, so that the failure to vote as evidenced by the lack of a validated voter credential could not be used as the basis of reprisals.  Most of the voters appeared to LASA observers to be voting under no coercive threat—they did not have to vote by law; they were urged to vote but not threatened with the designation of “traitors” for not voting; there were no obvious means of identifying nonvoters; and the government did not kill dissidents, in contrast to the normal practice in El Salvador and Guatemala. In sum, Nicaragua did not have a potent coercion package at work to help get out the vote—as did the Salvadoran and Guatemalan governments.

II.iii So what does all of this tell us about Bolivia?

Whenever we see a story similar to what has happened in Bolivia, one of our first instincts should be to check whether the leader in question opposes US influence, and the answer to that question should influence our interpretation of the information.  We know that the media is biased toward portraying US client states as legitimate, and socialist states as illegitimate.  And we have to accept that we’re human and prone to being swayed by our constant media exposure.  Therefore, we have to adjust our own conclusions to counteract media bias.

This does not mean that we should ignore the news, or jump to the conclusion that any government being condemned in the US media is good.  Instead it means that we need to be vigilant against known media biases that infect our beliefs.  (Notice, for example, that the headlines on recent news about Bolivia have rarely highlighted the violence of the protests, whereas any violent protest against a US client state is sensationalized.)

I’m not really informed enough to have a clear judgement on who is in the right in Bolivia.  Election rigging (edit: if the election was rigged) is definitely bad.  Military takeovers are also usually bad.  My sense here is that deposing Morales was more anti-democratic than democratic, but I don’t know for sure.

But I am confident that the news media would not hesitate to call a similar resignation a coup if it happened in a US-sponsored state.  Therefore, I call the Bolivian resignation a coup in order to maintain consistency.  It is more important to be consistent than to be correct in any individual instance.  The mainstream media will try to push us toward believing that such-and-such election is a step toward democracy, or that such-and-such government is anti-democratic.  And they will do it convincingly.  If we try to draw conclusions based only on the evidence that the media provides us, without correcting for its known biases, then we fall victim to a system that is doing its best to use our voting power to support special interests.  Supporting states that the media portrays as democratic, and opposing states that the media portrays as anti-democratic, does not serve democracy.  Instead, it serves a group that is pushing for the global supremacy of American elites.

American Meritocracy is a Sham (Class Mobility)

American hierarchy justifies itself with a narrative of merit.  People who work hard and take smart risks, who have a knack for knowing what goods and services are wanted where, naturally rise up the ranks, become wealthy and powerful, and are able to use their wealth to do social good (and to further enrich themselves in the process).

Setting aside the question of whether this meritocratic system is a good way to organize society, let’s address a different question: Does the US actually resemble the meritocratic system it claims to emulate?

(The answer is no. America is not a meritocracy.)

Class Mobility

The most basic argument against this notion of American meritocracy is the lack of inter-generational economic mobility in the US.  A person with poor parents in the US has a 47% chance of being poor as an adult.  In Canada, that number is only 19%.  Here’s a figure showing this number for seven more countries.  As you can see, the US is much less mobile than comparable nations.

From Wikimedia Commons: “The results of a study on how much of the advantages of having a parent with a high income are passed on to the next generation. The fraction indicates how many children of poor parents grow up to be poor adults; higher numbers mean less intergenerational economic mobility.”  Created by BoogaLouie, 2012.

On the upper end of the wealth spectrum, information is harder to come by, but as far as I can tell, in America the (familial) rich stay rich.  According to respected economist Richard Reeves:

There is intergenerational ‘stickiness’ at the bottom of the income distribution; but there is at least as much at the other end, and some evidence that the U.S. shows particularly low rates of downward mobility from the top.

This lack of social mobility seems to suggest that wealthy Americans are wealthy more as a result of familial wealth than because of their own merit (in the words of Elon Musk “Probability of progeny being equally excellent at capital allocation is not high”).

Still, one could argue that merit, in fact, is heritable: low social mobility is the system working as it should; the poor stay poor because they’re lazy; the rich stay rich because they work hard; Canada’s higher mobility is evidence of injustice (affirmative action gone awry); and so on.  Social Darwinist arguments like these are repugnant and incorrect, but they explain America’s lack of social mobility as well as any other theory, so different evidence is necessary to refute them.  I’ll address this in another post.

Edit: A case against social Darwinism

Who do we blame for California’s wildfires?

With fires raging throughout California, people are being displaced, personal property is being destroyed, and it’s tempting to try to find someone or something to blame.  Fox News anchor Tucker Carlson and right-wing YouTuber Dave Rubin blame the fires on California’s “going woke.”  They don’t provide a coherent argument to back up their assessment.

Others are blaming the fires on climate change (and on a system that enriches individuals at the cost of severe environmental degradation), or on PG&E’s poor infrastructure maintenance (and on a system that privatizes profits from essential utilities while failing to force private companies to actually provide those same essential utilities we pay them to provide).

Both of these takes are, I think, pretty fair.

One source of blame that I haven’t seen much discussion of is a particular mixture of colonialism and bad science.  California is, of course, a settler colonial state.  US Americans have only controlled the region known as California for the last 175 years, give or take—in ecological terms, a very short time.  When colonizers come to a new region, they bring certain ideas about how nature should behave, along with land management technology that was developed for their native region.  They then go about trying to force the natural world to conform to their expectations, so that they can live familiar lives, eating familiar crops, in this new and vastly different place.

This is a general phenomenon.  Recent imperial powers have done it (I’ll probably post at some point soon about British colonial river policy), and much older expansionist states have done it too (ancient Roman or Hellenistic settlement was generally followed by river delta growth as their environmental policies caused increased erosion upstream).

The settlement of California by US Americans meant both the displacement of indigenous peoples (who up until this point had been using their own land management technologies) and the implementation of land-management technologies that had never been tested in any environment similar to California’s semi-arid chaparral.

Chaparral, if left without human intervention, goes through relatively frequent burn-and-regrowth cycles (with a recurrence interval of about a decade).  It’s clear to anyone who knows much about Californian ecology that the plants are specifically adapted to frequent fires.  Our giant sequoias, for example, are unusually fire resistant, and they use fire as a cue to germinate, so that the seedlings won’t be suffocated by underbrush.

How did American Indians deal with this type of ecosystem?  They basically took over the natural cycle, with controlled burns that cleared undergrowth without the risk to human life that comes along with a real wildfire.

For the first several decades of US American presence in California, the indigenous nations continued their land management practices.  Then, in 1911, the US federal government banned controlled burning on public lands and started cracking down on “renegade Indian[s]” who were setting fires through “pure cussedness.”  US fire policy became a strict policy of fire suppression.  In the past 100 years, without fires to clear out plant growth, the understory of California’s forests has grown thick, choking out large fauna like the black bear.  Giant sequoias have stopped germinating.  Our undeveloped land is covered in dry brush.

Throughout this whole period, American Indians have voiced their anxieties about the potential for wildfire. They’ve pointed out that the state of California is basically covered in kindling. They’ve been ignored.  When they’ve tried to continue their old land management practices, even on their own land, they’ve been arrested, fined, or both.

And now, finally, California is having huge destructive wildfires, basically every year.

The United States policy of fire suppression is wrong.  Morally wrong.  And also factually wrong: The policy is based on factually incorrect assumptions.  The US acted, and continues to act, with the assumption of technological and scientific superiority.  Fire is clearly dangerous; fire suppression is clearly a good idea; the US is bringing development and progress to California, and these American Indians who think that fire management should be done differently clearly have no idea what they’re talking about.

Powerful institutions, like the US federal government, are able to grant themselves academic legitimacy.  The US federal government is able to act under a veneer of scientific correctness, even though of course there is nothing scientific about ignoring place-specific ecology.  This veneer of science is part of what allowed (and continues to allow) the US to justify its subjugation of indigenous nations.  But it comes with unforeseen costs: This veneer of scientific legitimacy, and the coupled denigration of American Indian knowledge, enabled a policy of land management that has caused displacement, destruction, and loss of life.

The Ironic Cultural Misrecollection of Sherlock Holmes

A couple years ago, sick of all the heavy literature I had been reading, I picked up the complete Sherlock Holmes by Sir Arthur Conan Doyle, expecting, in essence, a fun romp.  Sherlock Holmes was, I thought, the original puzzle fiction: solvable whodunit (or howdunit) mysteries that encourage the reader to exercise their reasoning skills and that maybe teach something about perception applicable to everyday life.  I did not expect themes that resonated with me deeply.  Is Sherlock Holmes the best work of rationalist fiction ever written?  Objectively yes.  I’ve read all the fiction, and Sherlock Holmes is the best.

The primary criticisms I’ve heard of the original Sherlock Holmes are based on a particular conception of what the work should be, rather than what it actually is.  It’s not good puzzle fiction, and it’s not good genre mystery: The solutions to the mysteries are often things that the reader could never have guessed.  And it’s not good “let’s admire the prowess of this fictional human mind” fiction either—it’s easy to write fiction with a character who makes accurate inferences; it’s harder to write a character who is actually compellingly intelligent.  Compelling intelligence requires accurate inferences that the character reaches through valid and non-obvious reasoning, and it requires a world that punishes characters for fallacious reasoning (Death Note and Worm are both good examples of fiction with characters who meet these requirements for compelling intelligence).  Holmes’ inferences often seem outlandish and unlikely, but are overwhelmingly correct.  Therefore, on the surface at least, he’s not really a compellingly intelligent character.

The idea that we should be interacting with Sherlock Holmes according to genre-mystery or compelling-intelligence standards is, I think, based on a cultural misrecollection of what Sherlock Holmes is.  Sherlock Holmes is not a puzzle, and it’s not wish fulfillment, and it’s not instruction on how to use deductive reasoning to wow your peers and solve crime.  It’s an exploration of the power of scientific reasoning, and a prescient look at the implications of applying rational modes of thought to societal problems.

The basic question that Sherlock Holmes attempts to answer is “What if we used science to solve crimes?”  The answer is that we’d probably solve more crimes.  Holmes is good at what he does.  Hypothesis testing combined with deduction is an effective way to know things, and if you introduce science somewhere it’s never been applied before, of course you will quickly make intellectual progress.  Other characters, who don’t understand the power of science, are then constantly amazed by Holmes’ intellectual abilities, and seem to view him as possessing some sort of superhuman intelligence.  Because the stories are narrated by Watson, this sense of Holmes as superhuman is impressed upon the reader.  He’s always a few steps ahead.  He understands what’s going on well before anyone else.  He holds all the cards.

Sometimes it seems like this superhuman aspect of Holmes is all our cultural memory of the character retains from the original stories.  Take, for example, Holmes’ catchphrase in many lesser adaptations: “Elementary, my dear Watson.”  What does this catchphrase communicate?  Holmes is intellectually superior.  Watson struggles through some difficult intellectual question and reaches a non-obvious conclusion, but Holmes was there ages ago.  To Holmes these conclusions are basic; they’re so evident from the available information that they hardly merit saying.

This conception of knowledge is anti-scientific.  Knowledge doesn’t come from isolating yourself from the outside world and thinking really hard about available information.  Knowledge comes from collecting and assimilating information and then collecting more information.  The idea that Holmes can just look at a situation and know infinitely more than anyone else exemplifies the limiting ways we collectively think about intelligence.

The original Sherlock Holmes actually pushes hard against this superhuman conception of science and intelligence.  Conan Doyle’s Holmes never says “Elementary, my dear Watson.”  He calls a conclusion “elementary” once, in “The Crooked Man,” but with the opposite intended implication.

“Excellent!” I cried.

“Elementary,” said he. “It is one of those instances where the reasoner can produce an effect which seems remarkable to his neighbour, because the latter has missed the one little point which is the basis of the deduction. The same may be said, my dear fellow, for the effect of some of these little sketches of yours, which is entirely meretricious, depending as it does upon your retaining in your own hands some factors in the problem which are never imparted to the reader.”

This type of comment from Holmes on Watson’s depiction of him is not rare in Conan Doyle’s stories.  The stories portray the awe that intelligence can inspire in us.  And then they tell us over and over again that intelligence should not be impressive.  Knowledge does not come from superior intuition.  It comes from collecting information.  Watson’s instinct to marvel at Holmes’ intellect is wrong, and it works to limit his own intellectual ability.  The belief that knowledge comes from within rather than from without in the best case stifles curiosity in favor of futile contemplation, and in the worst case stifles thought in favor of intellectual learned helplessness.

Holmes goes on to tell us a healthier way to think about knowledge:

“Now, at present I am in the position of these same readers, for I hold in this hand several threads of one of the strangest cases which ever perplexed a man’s brain, and yet I lack the one or two which are needful to complete my theory. But I’ll have them, Watson, I’ll have them!” His eyes kindled and a slight flush sprang into his thin cheeks. For an instant only. When I glanced again his face had resumed that red-Indian composure which had made so many regard him as a machine rather than a man.

To Holmes, all things are understandable.  Events that seem surprising or nonsensical are not incomprehensible; they’re just not yet understood.  They should be taken as indications that there is more to learn.  It is this attitude that drives Holmes to constantly collect information, and this attitude is what ultimately enables his seemingly superhuman intellect.

It is with this perspective that we should understand the literary value of Holmes’ seeming failure to conform to standards of compelling intelligence.  Holmes draws apparently nonsensical conclusions that turn out to be correct not because of superior intuition, but because he has information that the reader doesn’t.  The work implores us to extend this understanding of Holmes to intelligent people in the real world.  When someone is able to quickly reach conclusions that seem out of reach to us, it’s not because of some vastly superior natural intellect.  It’s because they have information and experience that we don’t.

With this understanding of the work, the cultural memory (or at least the portion of cultural memory I received based on general exposure and a few modern adaptations) of Sherlock Holmes is an ironic vindication of its message.  The original Sherlock Holmes warns us that we have a tendency to think about intelligence and knowledge in this anti-scientific, innate-ability way.  And true to this warning, we’ve collectively forgotten that Holmes is effective not because of an innate ability but because of his application of scientific principles to new problems.  Instead, in shows like BBC’s Sherlock, he stands for exactly what he wanted to stand against: He personifies the ideal of Smart Person with Unattainable Superhuman Intellect.

Spiritual Connection to Nature is Punk AF

In her biography of Alexander von Humboldt (the influential 18th/19th-century scientist who has been largely neglected by history), Andrea Wulf convincingly argues that Humboldt’s study of American biology was an important factor in the formation of a pan-American identity that led to Latin American independence from Spain.  Until Humboldt’s observations indicated otherwise, Europeans generally believed (without evidence) that New World flora and fauna, when compared with those of the Old World, were smaller, weaker, and generally inferior.

Colonial control depends on maintaining colonists’ attachment to a faraway motherland over an (arguably much more natural) sense of local community, and one tool employed to maintain this artificial attachment, in the case of the American colonies, was denigration of the land itself.  European land, with its big and strong plants and animals, is better, and so an attachment to a European country is superior to an attachment to any America-based community.  Humboldt’s refutation of European biome superiority allowed Americans to develop a spiritual connection to the land, which enabled the formation of a real sense of local identity and pride, which in turn led to the social cohesion necessary for revolution.

In this way, spiritual attachment to land can be radical and anti-imperial. (Of course, attachment to land can also lead to horrible oppressive behavior—no aspect of the human experience with so much power can be purely good.)  This quality of spiritual attachment remains true today.  Community gardens are punk and anti-authoritarian.  City parks help foster a sense of love and municipal kinship.  And for me personally, the time I felt most patriotic was while watching the sunset in Yosemite.

Creation Story

The table with the big computer at the elementary school’s science expo is relatively empty. The grad students running it are sitting there feeling slightly awkward. They thought this was going to be for high schoolers.

They’ve created a computer simulation of the universe. Not the real universe of course, but a 3+1 dimensional universe with quantum fields instead of super quantum physics. It’s a little beyond the cognitive abilities of second graders.

They’ve tried to put the thing in terms the kids will understand. “Here, look. See this loop? In our ten dimensions, it can’t form a knot, but because it’s confined to just three dimensions it can never come undone.” The kids aren’t biting.

A kindergartener lingers by the table. He’s enchanted by the bright colors of the three dimensional screen wriggling and expanding in 4 dimensional space.

“Do you want to destroy the universe?” asks the male grad student. “Click here.” He places the mouse pointer on a black hole, and points the kid toward the mouse.

The kid looks terrified. “Destroy the universe? For real?”

“No, not for real. Only pretend,” says the female grad student.

She considers her words. Philosophically, is this simulation a universe? Perhaps something lifelike could exist within the up spins and down spins of the quantum computation. Maybe there’s even something there resembling consciousness? But then, what worth does a consciousness have if it’s only aware of an artificial world? She smirks at the thought of the sad half-existence of a hypothetical three dimensional being.

She amends her statement. “Well, only in the computer.”

The kid is mollified. He clicks.