Online Polarization: Suffocated By My Overgrown Green Beard

[Content note: this post contains brief discussion of a suicide attempt]

In 2007, Eliezer Yudkowsky wrote an essay, “Evaporative Cooling of Group Beliefs,” in which he argues for a social physics understanding of group extremism.  Why do cults often become stronger after the failure of a prophecy?  How do online forums become echo chambers?  Yudkowsky makes a physical analogy to evaporative cooling:

Take a can of air duster sitting at room temperature.  Inside the can is a collection of molecules, flowing past each other randomly at varying speeds, all held together in the liquid phase by electromagnetic forces and by the pressure of the can.  Now open the nozzle and spray out some air.  The pressure in the can drops, and the liquid inside starts evaporating quickly: particles on the surface of the liquid escape the electromagnetic hold of the other particles and become gaseous.  But not all particles have an equal chance of escaping.  Instead, particles with higher kinetic energy (i.e. faster-moving particles) escape preferentially.  This leads to a lower average kinetic energy in the liquid; in other words, the remaining molecules are moving more slowly, and the temperature of the liquid drops.  The can of air duster becomes cold to the touch.

Similarly, Yudkowsky argues, the inciting event of a failed prophecy can cause a religious group to eject its less devoted believers, causing the group’s beliefs to become more uniform and more extreme.

In Festinger, Riecken, and Schachter’s classic When Prophecy Fails, one of the cult members walked out the door immediately after the flying saucer failed to land. Who gets fed up and leaves first? An average cult member? Or a relatively skeptical member, who previously might have been acting as a voice of moderation, a brake on the more fanatic members?

This analogy is, I think, spot-on.  Or, as spot-on as these kinds of social physics analogies can be.  That is, reductive to the point of being a little insulting, but still holding some sort of general truth.  An analogy between temperature and ideological diversity makes a lot of sense, and I’ve seen this kind of evaporative cooling of group beliefs play out over and over again online.  With the advent of some really objectionable social-justice-oriented content, for example, a lot of online atheist spaces became places to critique bad social justice arguments, which in turn highlighted political differences among atheists, alienating the more left-leaning group members and moving the group as a whole to the right.1  Similarly, /r/TopMindsOfReddit used to be a fairly politically neutral place to poke fun at conspiracy theorists, but, with the rise of Trump and the increasing relevance of right-wing conspiracy theories, anti-right-wing content became more prevalent, causing right-leaning members to leave the subreddit.  The subreddit is now decidedly left/liberal.

This kind of “evaporative cooling of group beliefs” is not always a bad thing.  Right-leaning online atheist spaces have become vitriolic and anti-science, but /r/TopMindsOfReddit has remained a pretty decent (i.e. factually accurate and open to internal disagreement) online community throughout its shift to the left.  One shortcoming of the evaporative cooling analogy is that it would seem to imply that groups that undergo this process necessarily become less diverse.  Kinetic energy is simple and one-dimensional: if you kick out all the high-energy particles, you’ll get a smaller range of kinetic energies.  Ideology, on the other hand, is complex and fractal.  If you kick out all the right-leaning (or left-leaning) group members, you open yourself up to a deeper leftist (or rightist) conversation, potentially ultimately increasing diversity of thought.2
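
To see the physics half of the analogy in numbers, here is a minimal toy simulation (my own sketch, not anything from Yudkowsky’s essay; the exponential energy distribution and the linear escape rule are arbitrary illustrative assumptions).  When faster particles escape preferentially, both the average kinetic energy and the spread of kinetic energies in what remains go down:

```python
import random
import statistics

# Toy model of evaporative cooling (illustrative assumptions only): each
# round, particles escape with probability proportional to their kinetic
# energy, so the fast ones leave first and the rest "cool."
random.seed(0)
energies = [random.expovariate(1.0) for _ in range(10_000)]  # arbitrary units

for round_number in range(6):
    mean = statistics.mean(energies)
    spread = statistics.pstdev(energies)
    print(f"round {round_number}: mean {mean:.3f}, spread {spread:.3f}, "
          f"{len(energies)} particles remain")
    hottest = max(energies)
    # Escape probability rises linearly with energy: the hottest particle
    # always escapes, the slowest ones almost never do.
    energies = [e for e in energies if random.random() > e / hottest]
```

That narrowing spread is exactly the part of the analogy that ideology, being multidimensional, does not have to follow.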

So we need to ask the question: Why does this ideological evaporation process sometimes lead to toxic, crazy, uniform, cult-like belief, and sometimes lead to a healthy and diverse (but shifted) belief?

The “green beard effect” is a concept in evolutionary biology that allows for gene-based kin selection:  A gene, or a group of linked genes, can increase its evolutionary success if it facilitates cooperation among individuals who have that gene.  If I have a gene that causes a conspicuous trait, like green facial hair, and also causes me to sacrifice myself to help others with that conspicuous trait, then my own individual reproductive success may be decreased, but my sacrifice increases the gene’s success.   If green facial hair is a good indicator that another individual shares the green beard gene, then my sacrifice helps my green beard gene proliferate.3
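
To make the arithmetic of the effect concrete, here is a bare-bones replicator-dynamics sketch (my own simplification with made-up cost and benefit values, not a model from the literature): carriers pay a cost to direct a benefit exclusively at other carriers, whom they recognize by the beard, and as long as the benefit exceeds the cost, the allele’s frequency climbs even though each act of helping is individually costly.

```python
# Bare-bones replicator model of a green-beard allele (hypothetical numbers;
# a mean-field simplification, not a model of any real organism).
cost, benefit = 0.05, 0.15   # each carrier gives `cost` and, on average, receives `benefit`
p = 0.01                     # starting frequency of the green-beard allele

for generation in range(101):
    if generation % 25 == 0:
        print(f"generation {generation:3d}: green-beard frequency {p:.3f}")
    # Beards let carriers aim help only at other carriers, so carriers both
    # pay the cost and receive the benefit; non-carriers get neither.
    carrier_fitness = 1.0 + benefit - cost
    other_fitness = 1.0
    mean_fitness = p * carrier_fitness + (1 - p) * other_fitness
    p = p * carrier_fitness / mean_fitness   # standard replicator update
```

In this toy setup the allele climbs to near fixation within about a hundred generations; the catch, as the next paragraph notes, is that real genes rarely manage to encode both the marker and the favoritism at once.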

Green beards are not especially common in biology.  A single gene, or a single set of linked genes, tends to be too simple and too messy to code both for a conspicuous marker and for cooperation based on that marker.  Still, there are examples of green beard traits in a wide variety of organisms, from yeast to ants to rodents.

Where green beards are really commonly found is in memetics.  Memetic evolution, or the evolution of ideas through Darwinian selection with “memes” as the basic unit of information instead of “genes,” is more susceptible to green beard effects because memeplexes (that is, sets of linked memes) tend to be much more complex and coherent than sets of linked genes, making the requisite coding of both a conspicuous marker and preferential treatment of those with the conspicuous marker much more likely.  As a result, green beards are widespread in human ideology: Religions often incorporate easily identifiable dietary restrictions, political ideologies call for surface-level changes in language, and subcultures outwardly signify with style of dress (fittingly, certain more left-leaning subcultures today actually dye their hair, lending more literal meaning to the green beard).45

So, when are green beard markers relevant?  If I live in a small enough community that I know my entire family tree, then I don’t need any help from green beards to decide who I’m willing to sacrifice how much for.6  Similarly, within small communities, when I personally know other community members, I can identify my ideological kin based on a robust understanding of their point of view, instead of relying on fallible markers like hairstyle and word choice.  If I know my neighbors well, then green beards are completely irrelevant.  Ideological green beards become important when we’re interacting with strangers.  And the internet has increased our ideological interaction with strangers at least tenfold.

Speaking from personal experience, internet communities are far more prone to toxic echo-chamber effects than are in-person communities.  And there are a lot of reasons for this, including the much-discussed fact that people feel emboldened when they’re behind a screen and anonymous—if I make myself look horrible by being cruel to strangers on reddit, that cruelty doesn’t follow me around.  Reputation is a useful tool for norm enforcement, and reputation barely exists online.

But what is often overlooked is the social-physics “evaporative cooling” outcome of the lack of positive reputation online.  As I’ve already noted, there is nothing wrong with wanting to talk to people who you know share some of your priors.  There is obvious value to online forums where people who all agree that unregulated capitalism is the best economic system can discuss the best way to provide universal needs like roads and firefighting.  There is obvious value to online forums where people who all agree that there is an omnipotent, omnibenevolent, and omniscient God can discuss the problem of evil.  There is obvious value to online forums where people who all agree that human activity has an alarming effect on Earth’s climate can discuss the relative benefits of carbon taxes and alternative energy research.  Of course, it is a problem if we are completely isolated from other points of view, but not every conversation about (right-)libertarianism needs to tolerate liberal points of view, not every conversation about Christianity needs to tolerate atheists, and not every conversation about climate change needs to tolerate climate “skeptics.”  Forums do not become cult-like echo chambers just because they exclude certain points of view.  Rather, they become toxic and cult-like when they start preventing insiders from thinking for themselves.  They become echo chambers when it becomes impossible to disagree with the party line.

In 2015, a tumblr artist, whose restyling of cartoon characters was deemed problematic, attempted suicide following targeted harassment by members of the Steven Universe fandom.  Even after the artist announced (in a video filmed in the hospital) what had happened, she continued to face harassment from extremist Steven Universe fans who accused her of faking the suicide attempt and hospital visit.  In many ways, this campaign of harassment was an unfortunately typical example of internet cruelty, but what makes it odd is that Steven Universe is a children’s TV show that celebrates diversity, kindness, and acceptance.  How does a community based around such loving values become so vitriolic?

Several years ago, I was an active participant on a few feminist subreddits.  Feminist subreddits are by and large very difficult places to have a good conversation.  They upvote and downvote so much that it feels like you’re talking in front of a crowd who cheers or boos everything anyone says.  They do not tolerate dissent.  And the moderators tend to be very ban-happy.  I remember one interaction7 with another redditor (who I’ll call Tony), who was arguing that pro-consent messaging often vilifies male sexuality.  Of course Tony was massively downvoted, and I replied to his comment and argued that his data was cherry-picked.  Shortly thereafter I got a personal message from Tony saying that he had wanted to reply to my comment but had been banned for “trolling.”  He seemed genuinely distraught about the way other redditors had responded to his comment, and at this point I checked his user page and learned that he was a young (high-school-aged) guy with a large, generally feminist-aligned posting history.

This encounter was a bit of a red flag for me about the way feminist subreddits conduct conversations.  For one, Tony seemed thoughtful and like he really cared about women’s rights, and alienating thoughtful and caring young men is pretty contrary to the goals of feminism (at least the strains of feminism that I agree with).  For another, he and I ended up having a very productive conversation in private messages—I convinced him that the types of vilification he was worried about were less common than he initially thought, and he convinced me that this kind of vilification is worth worrying about even if the extreme cases are rare—and it is unfortunate that this conversation was not allowed to happen on a public forum where other people might have benefited from it.

So, what social dynamics caused Tony to get banned?

For reasons I will not interrogate within this blog post, online feminist forums are frequented by a lot of people who seem to whole-heartedly disagree with the central tenets of feminism.  As a result, maintaining a focused conversation requires ejecting certain participants—an intervention that comes in the form of top-down censorship by moderators and in the form of peer-level community censure.  Yudkowsky cautions against these kinds of interventions, saying that although it is true that you have to “exclude trolls” to some extent in order to have a good conversation,

It’s the articulate trolls that you should be wary of ejecting, on this theory—they serve the hidden function of legitimizing less extreme disagreements…. If you have one person around who is the famous Guy Who Disagrees With Everything, anyone with a more reasonable, more moderate disagreement won’t look like the sole nail sticking out.

But I don’t really agree.  It is possible to articulately argue that patriarchal family structures are morally good, and that our government, media, and culture should seek to strengthen heterosexual marriages between a male breadwinner and a female housewife; but somebody who consistently argues against women in the workplace obviously does not belong on a feminist forum, no matter how “articulate” those arguments are.  The presence of a “Guy Who Disagrees With Everything” on an anonymous forum does not legitimize smaller disagreements.  Instead it creates an environment where users view small disagreements with suspicion.  In a less anonymous setting, other participants could have identified Tony as a good faith actor based on his feminist comment history, but in an environment without reputation, where the Guy Who Disagrees With Everything is all around, users have to rely on extremely local green beards to filter out bad actors, and people like Tony who seem to echo an anti-feminist talking point are ejected.

Like Tony, I’ve faced community censure on a subreddit when I’ve been misidentified as an outsider.  A left-leaning political humor subreddit was discussing the censorship of hate speech on social media platforms, and a highly upvoted comment argued that because social media platforms are private companies, free speech arguments are irrelevant.  I replied to this comment, attempting to argue that free speech is relevant because social media corporations are so powerful.  I knew that I was going against the grain of the conversation, so I tried to signal my group membership.  I called social media companies “psychopathic corporations motivated only by profit,” in essence, stroking my bushy verdant anti-capitalist facial hair.8  But my comment was generally downvoted, and to be honest, I didn’t do a very good job clearly expressing my opinion in this comment, so the downvoting was pretty fair.  What was interesting about this online interaction was that I got a highly upvoted reply that called me anti-feminist and homophobic.  I’ve never been called homophobic before or since, and I think the reason I haven’t been called homophobic is that I’m not homophobic.  My comment didn’t say anything about gay people, and in fact, part of the reason I’m concerned about online corporate censorship is because both YouTube9 and TikTok10 have histories of suppressing queer content.  Evidently I didn’t flaunt my green beard hard enough though, or perhaps I chose the wrong green beard to draw attention to.  If we had been interacting in a less anonymous forum, the other participants would have known that we are generally in agreement on gay rights issues, and perhaps I would have been asked to explain my perspective further.  Instead, they got to go on with their day thinking that anybody who is critical of corporate censorship is a huge bigot.

And to be clear, I’m not innocent here.  I am sure I have misidentified many well-meaning redditors as bigoted trolls.   I usually don’t stick around long enough to figure out that I’m wrong, but there was one time recently when a seemingly racist comment got me so annoyed that I forced myself to take some time to gently explain why AAVE is not “improper grammar,” but instead a variety of English that is just as sophisticated and valid as academic English.  In writing my comment, I was hoping to perhaps convince some onlookers.  Mostly I just didn’t want to leave such a wrong statement dangling and unaddressed.  But the original commenter actually responded to my comment, thanking me, and saying that they had spent about an hour reading about AAVE and it had opened their eyes to how certain types of grammar policing can be really racist.  I was surprised, and I was embarrassed about how surprised I was.  In principle, I know that not everyone is always on the same page as me.  Sometimes I know more about linguistics than someone else, and if a person has not yet deconstructed certain linguistic supremacist narratives, it doesn’t mean that that person is full of hatred or incapable of learning.  In principle, I know that people can be very wrong in small local ways without having a hugely misguided point of view, but somehow I had forgotten.

A lack of positive reputation, especially when people (rationally or not) are suspicious of other points of view, forces a reliance on green beard markers.  On social media platforms where users have constant brief one-off ideological interactions with strangers, the green beard effect dominates.  Anyone who expresses a view that is outside the group norm is an outsider who should be banned or downvoted or shouted down.  And in accordance with Yudkowsky’s “evaporative cooling of group beliefs,” this kind of group norm enforcement spirals into toxic echo chambers where beliefs become more and more uniform as dissenters are ejected, and members (for fear of rejection) become less and less open to changing their minds.  People who don’t share your green beards are the enemy.  Somebody whose fan art depicts a Steven Universe character as thinner than she appears in the show is not only fatphobic—she is the enemy.  She deserves to be harassed, and when she says she was hospitalized for a suicide attempt, she is lying, because she is evil.

Image: “skinnywashed” Rose Quartz fan art vs. how she appeared on Steven Universe.

And this is where I want to say the solution is obvious.  We should give people the benefit of the doubt.  We should talk to people who we disagree with and maybe we’ll find common ground.  But no.  This is terrible advice, because although I’ve had some good conversations with people who at first seemed to be trolls, it has more often gone in the opposite direction.  A lot of people on the internet who seem to be sexist or homophobic or racist actually are sexist and homophobic and racist, and they are trying to upset you and waste your time.

Perhaps there is another solution to online polarization, then.  Our reliance on green beard markers is caused by a lack of positive reputation, so maybe the answer is to re-introduce some degree of positive reputation.  Small reputation-based barriers to entering forums seem to do a lot to protect a space from bad-faith actors, and therefore allow users to engage with each other without suspicion.  r/FeMRADebates, for example, manages to maintain a mostly polite conversation and a huge diversity of opinion by requiring that commenters have an account that is at least 30 days old and has at least 100 “karma.”11 Wikipedia also uses reputation to maintain pages that are more controversial or prone to vandalism, granting these pages “semi-protected” status, which prevents new and anonymous users from making edits.  Solutions like these give me hope.  People can be good if only we make small, simple changes that facilitate natural human goodness.
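
For a sense of how small such a barrier really is, here is roughly what that rule amounts to in code (a minimal sketch; the thresholds mirror the stated r/FeMRADebates rule, but the function and its inputs are hypothetical, not part of reddit’s actual API):

```python
from datetime import datetime, timezone

# Minimal sketch of a reputation gate like the one described above.
MIN_ACCOUNT_AGE_DAYS = 30
MIN_KARMA = 100

def may_participate(account_created: datetime, karma: int) -> bool:
    age_days = (datetime.now(timezone.utc) - account_created).days
    return age_days >= MIN_ACCOUNT_AGE_DAYS and karma >= MIN_KARMA

# A brand-new account is turned away no matter how much karma it has.
print(may_participate(datetime.now(timezone.utc), karma=500))  # False
```

It is a trivially low bar for anyone participating in good faith, but it is enough to filter out throwaway accounts created just to troll.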

  1. This video essay by YouTuber Big Joel covers this progression in more detail.
  2. It is, of course, important that we engage with a range of perspectives, and talk to people whose ideas we find challenging.  That said, it is sometimes more intellectually useful to talk to people who already share many of our beliefs.  When we talk to people who we mostly disagree with, we challenge our beliefs, and when we talk to people who we mostly agree with, we expand our beliefs.  An intellectually healthy world should have plenty of both types of conversation.
  3. This gene-level view of evolution has some pretty disturbing and likely untrue implications (e.g. racism is merely a result of natural kin-selection), so I want to explicitly note here that this view of evolution is incomplete.
  4. To be clear, my goal here is not to argue that these types of green beard markers are good, or bad, or even arbitrary.  Sometimes they do seem arbitrary (e.g. the Jewish ban on eating pork), but sometimes they’re closely associated with other values and beliefs of the memeplex (e.g. Buddhist vegetarianism).
  5. Is the call to memetics really necessary here?  Maybe not.  There are other explanations for how these ideological green beards could arise.  Suffice it to say, whether or not memetic evolution is at play, ideologies have visible markers that facilitate cooperation within that ideology.
  6. A fully gene-level understanding of evolution would dictate that green beard effects would still be relevant in this scenario, as under the gene-level view, green beards are not about facilitating kin selection, but instead about a gene facilitating its own selection.  But evolution is a mix of gene selection, individual/kin selection, and group selection, and it seems to me that individual/kin-level selection would become more dominant in a world where kin selection was easier.
  7. This interaction, as well as the two other reddit interactions in this post, has been slightly modified for simplicity, and to preserve the anonymity of my reddit account.
  8.
  9. https://www.vice.com/en/article/vvjqqj/youtubes-restricted-mode-is-a-disaster-for-queer-youth
  10. https://slate.com/technology/2019/12/tiktok-disabled-users-videos-suppressed.html
  11. Karma is reddit’s system for quantifying how popular your posts are, roughly equal to the cumulative number of upvotes your posts and comments have gotten, minus the number of downvotes.
