
Not your Tibetan Buddhism


Behind the beatific image of Tibetan Buddhism lies a dark, complicated reality. But is it one the Western gaze wants to see?

By Mark Hay

Read at Aeon


5 Ancient Stoic Tactics for Modern Life


Stoicism emerged as a philosophy, a way of life — similar to a religion, really — most famously in ancient Rome around AD 50–100, though it was the Greeks who pioneered the thinking centuries earlier.

Two millennia later, the philosophy is enjoying a revival of sorts, and it’s not hard to understand why.

The primary goal of ancient Stoicism was to figure out the best way to live; as modern philosopher Lawrence Becker writes: “Its central, organizing concern is about what one ought to do or be to live well — to flourish.” And this question of how to live is perhaps humanity’s most enduring — becoming especially acute in ages in which a sense of shared meaning has atrophied and every individual is left to find meaning on his own. Stoicism’s answers, its fundamental tenets — what many modern writers and thinkers have deemed the “art of living” — thus feel just as relevant now as they did a couple thousand years ago.

While we’ve covered some tenets of Stoicism on the Art of Manliness before (and given an introduction to it in a podcast interview), we’ve never laid out its more concrete practices — the tactics that lead both to personal joy and the betterment of society. It’s my aim to present five ways you can start to inject Stoicism into your life today, and begin experiencing more happiness and fulfillment.

These aren’t just abstract ideas that I’ll be presenting to you. Rather, they’re based on firsthand experience. Since I first read Marcus Aurelius’ Meditations last year, I’ve been rather intrigued by the philosophy he espoused. So I’ve studied up, read a handful of books — both ancient source material and contemporary guidebooks — and have incorporated a number of new habits into my own daily routines.  

While there are many more practices and principles that can be gleaned and applied from Stoicism, my goal with this article is to provide those that have most impacted my own life (providing plenty of personal anecdotes to that end), and which I believe can most impact the lives of other men as well. These are things to do on a daily and weekly basis (even if some of them are more psychological in nature). While Stoicism also offers an outline of how to react and respond in a number of different situations — from anger and anxiety, to disability and death — that isn’t in the purview of this piece (though perhaps it will be in another article later on).

What’s especially appealing about Stoicism is that it’s what Massimo Pigliucci calls an “ecumenical philosophy.” Its precepts complement those of many other philosophies, religions, and ways of life. You can practice elements of Stoicism and still pursue Christianity, Judaism, atheism, and a number of other isms or non-isms out there. It’s about finding joy, fulfillment, and tranquility, and making society a better place for everyone in it. Isn’t that something we can all get behind?

Without further ado, I present 5 ways to make Stoicism a daily practice:

1. Visualize Your Life Without the Things You Love

“He robs present ills of their power who has perceived their coming beforehand.” —Seneca

William Irvine argues that “the single most valuable technique in the Stoics’ psychological toolkit” is a tactic he calls “negative visualization.” To fully appreciate your blessings — the immaterial and material alike — imagine your life without them.

For example, if you live in a tornado-prone region, imagine your house being destroyed, along with all your possessions. Obviously sort of a sad thought experiment, but chances are good that you’ll actually come to better appreciate your home, and the stuff in it, if you can really visualize what life might be like without it.

This practice might make it seem like Stoics are lifelong pessimists, but this couldn’t be further from the truth. Stoics are in fact the ultimate optimists. Consider the image of a 16oz drinking glass holding 8oz of water. It’s of course either half full or half empty, right? The Stoic, though, would actually just be grateful that there was any water at all! And that there was a vessel to hold that water to boot. The Stoic takes nothing for granted.

This exercise is of course harder to practice with your loved ones, but it’s well worth it. When I drive to daycare in the afternoon to pick up my son, I briefly meditate on the fact that each day really is a gift, and that anything can happen. He might not be around tomorrow, so I better live and love and parent to my fullest, most joyful abilities today.

Now, I’m not consumed with anxiety that my kids aren’t long for this earth (Irvine notes the important difference between contemplating and worrying). I know the odds of that are extremely slim. It’s more an acknowledgment that you just never know when the things and people you love might not be there anymore. It’s truly made a difference in my mindset, my general gratitude, and mostly — as is perhaps to be expected in this young-kids phase — my patience. Whether my toddler son is taking forever to brush his teeth, or my 1-month-old daughter decides she won’t sleep unless she’s held and rocked, I seem better able to cope when I briefly imagine a life without them.

It should also be noted that this exercise hasn’t made me sad or mopey, as you might expect; rather, it makes me swell with gratitude for the days we are given, and I can honestly say I’ve come to better appreciate all the blessings life has to offer, from my wife and kids to the cheerful song of a bird out my window on a nice spring day.

As Seneca noted at the top of this section, bad things — which inevitably happen to all of us — are robbed of at least some of their power when we’ve anticipated their possibility, and consequently taken full advantage of each day, hour, and moment given us. The grief of loss isn’t quite as acute when we can truthfully state that we squeezed every ounce of joy out of what we own and who we love when they were with us. As the Reverend William Sloane Coffin said in giving a eulogy for his 24-year-old son, Alex:

“there is much by way of consolation. Because there are no rankling unanswered questions, and because Alex and I simply adored each other, the wound for me is deep, but clean. I know how lucky I am!”

2. Memento Mori — Meditate on Death

“Let us prepare our minds as if we’d come to the very end of life. Let us postpone nothing. Let us balance life’s books each day. . . . The one who puts the finishing touches on their life each day is never short of time.” —Seneca

While related to the above point, memento mori is about meditating on your death rather than that of your loved ones. Whereas negative visualization is about imagining life without the things you love, memento mori asks you to meditate and be aware that you will not, in fact, live forever. Death comes for us all, including you, dear reader.

We live in a pretty death-averse culture though. At large, we’re terribly afraid of it. The Stoics would argue, though, that if you’ve lived a life of purpose and meaning, you shouldn’t have any fear of something that has naturally befallen each and every human being (and every other living creature) since time immemorial.

Now, meditating on your own death is not the same as asking something like “If you knew this was your last day on Earth, what would you do?” In that scenario, I’d play hooky, make my friends and family do the same, and do something memorable with them. I’d eat a ton of tasty but bad-for-you food, drink some whiskey, stay up all night, etc. Those aren’t things you can do on a daily basis, though. Rather, the question is more like “If you don’t wake up in the morning, would you be satisfied with how your last day was spent?” Did you engage fully at work? Did you love your family and your friends? Did you add to society’s greater good at all? Did you make virtuous decisions?

When I ask myself this question, as with the previous point, it’s not a depression- or anxiety-inducing meditation. I realize the likelihood of my dying tomorrow is very slim; I am simply countenancing the fact that it is possible. And this possibility isn’t demoralizing, but invigorating. It makes me far less likely to waste time. If I’m gone tomorrow, I’d much rather have spent time baking a loaf of bread than playing games on my phone. I’d much rather have spent time reading stories to my son at bedtime (all the words) rather than speeding through it to watch another episode of Nailed It (which is great, don’t get me wrong). 

As you go through the day, or just at the end of it, reflect on your activities and decisions. Both the good and the bad. If this day was your last, would you be satisfied with its outcome? What would you have done differently? How would you have changed your interactions with others? How can you use this information to make better decisions and engage in more worthwhile activities tomorrow? Make it actionable. As the Stoics themselves would have asked, what good is philosophy if there’s no impact on how we live day to day?

I’ve also found it’s good to occasionally read memoirs about death and dying. One of my all-time favorite books is When Breath Becomes Air by Paul Kalanithi. He wrote the book as he was dying of lung cancer in his late 30s, married and with a young child. I’ve read it twice — when both of my children were just days old. He provides an unmatched perspective on what it means to not only die well, but to acknowledge its reality: “The fact of death is unsettling. Yet there is no other way to live.” Even in his waning months, he maintained an incredible sense of positivity: “Even if I’m dying, until I actually die, I am still living.” If the words of dying people don’t inspire you to live more fully each day, then nothing will! A few more good books are The Bright Hour, Dying: A Memoir, and The Last Lecture.

3. Set Internal Goals and Detach Yourself From Outcomes

“Some things are within our power, while others are not. Within our power are opinion, motivation, desire, aversion, and, in a word, whatever is of our own doing; not within our power are our body, our property, reputation, office, and, in a word, whatever is not of our own doing.” —Epictetus

One of the pillars of Stoic philosophy is not letting circumstances outside your control disturb your equilibrium. Such externally dictated circumstances include things we’re used to thinking of as being out of our hands, like the weather, traffic, and our health (and that of our loved ones). But they also include things we often erroneously believe we have full personal control over, like the outcomes of contests and the success or failure of business ventures.

As a help in grasping a truth we inveterate bootstrappers often resist, Irvine gives the example of a tennis match. You might set a goal of winning the match. Seems perfectly reasonable, no? But when you really think about it, you can’t control many of the factors that determine the contest’s outcome: the weather is poor and wind gusts aren’t favoring you; you experience equipment failure (like a broken string) that isn’t disastrous but is a distraction nonetheless; your opponent is simply better prepared than you (or perhaps just better, period); you sprain an ankle partway through the match and can’t continue. If your goal is to win, and any of these things happen, you’ll be rather upset.

Recognizing that much of life is out of your control doesn’t mean giving up your sense of agency; instead, it means focusing it on the only areas where you do have full control: your own actions.

Instead of focusing on results — which are impacted by external circumstances outside your control — set goals strictly related to your own efforts. Instead of setting a goal to win the match, make it a goal to prepare as best you can, practice as hard as you can, and then play to the best of your abilities. If you do those things, and still lose, there’s just nothing more you could have done, so why fret?

Rather than setting a goal of getting the job you’re interviewing for, make it your goal to prepare well, dress right, and answer every question as best you can. If you do all that and don’t get the job, it wasn’t meant to be (or so the Stoics would argue).

Rather than setting a goal of getting a girlfriend, prioritize making yourself a good catch. Eat well, work out, have a stable job, dress nicely, and make it a goal to ask someone out X times a month until you get a yes.

My own hope regarding this article shouldn’t be, and truly isn’t, that it gets shared or retweeted X number of times. I can’t control what goes viral and what doesn’t. The whims of the internet aren’t worth thinking or worrying about. Instead, my true goal was that I would do all the research I could, and write, organize, and edit the article to the best of my abilities so that those who read it have the best possible chance of engaging it meaningfully and putting something into practice.

When you set goals, attach them to what you can control — your own efforts and attitude — and detach them from what you cannot — their ultimate outcome.

4. Welcome Discomfort

“Nature has intermingled pleasure with necessary things — not in order that we should seek pleasure, but in order that the addition of pleasure may make the indispensable means of existence attractive to our eyes. Should it claim rights of its own, it is luxury. Let us therefore resist these faults when they are demanding entrance, because, as I have said, it is easier to deny them admittance than to make them depart.” —Seneca

One practice the Stoics famously embraced was welcoming a certain degree of discomfort into their lives. They’d go without, for a time, certain pleasures — food, drink, sex. They’d immerse themselves in poor weather conditions (and with few clothes to boot). They’d eschew riches (and even praise) so as not to learn to cling to those things. They’d even deliberately subject themselves to ridicule. These practices were rather contrary to the Epicurean view of things, which was to ultimately pursue pleasure. The Stoics knew, though, that in welcoming challenge, they were actually far more content and fulfilled than their Epicurean peers.

To be Epicurean — one who basically just seeks the things in life that feel the best — you have to be constantly experiencing pleasure. You’re essentially living off constant dopamine hits. But those senses get dulled after a while, and you need ever bigger and more pervasive doses to keep your pleasure sensors activated at the same level. Once you start running on the “hedonic treadmill,” real contentedness becomes frustratingly elusive.

Let’s show this with a quick little thought exercise. It’s simple: you want to stay cool when it’s hot outside. It’s a natural inclination. So you turn on the AC at home to a chilly 65 degrees while it’s a sizzling 95 outside. Ahhh, feels nice, doesn’t it? You get used to that sense of comfort, and even pleasure, of staying so cool. But now, to feel comfortable, you also need to feel that cool wherever you go. You need to start your car 10 minutes early so that it cools down enough for you to be comfortable, otherwise you’ll just be miserable. You need your workplace, your favorite restaurant, heck, every establishment you enter, to be that chilled. If, God forbid, the AC goes out, you’re royally screwed. A friend invites you to an outdoor ball game? You’ll go, but you won’t enjoy it because it’ll be too stinkin’ hot. It’s all you’ll be able to focus on.

Consider the alternate scenario. Yes, you turn on the AC at home, but in the car, you just roll the windows down and let yourself be a little warm if it’s hot outside. Rather than work out in your refrigerator of a basement, you take a ruck outside in order to break a sweat. In some regards, you embrace being hot every now and then so that you can feel content in any situation. AC goes out? No biggie, you can adjust. Invited to a ball game in a heat wave? Heck yes! You love baseball, and you’re happy to just be at the game, regardless of the weather. You are a tranquil man who isn’t bothered merely by what the mercury reads on the thermometer.

Isn’t that a better way to live?

It’s sort of a silly and shallow example, but the principle holds for just about any pleasure in life. If your enjoyment and comfort relies too much on it, you’ll turn into a fragile, petulant curmudgeon when you don’t have it.  

Irvine lays out three specific benefits of sometimes welcoming discomfort and intentionally foregoing pleasures (with an example of how a particular practice — periodically abstaining from alcohol — could play out):

  1. It hardens us to whatever misfortunes may come in the future. (If your health turns, and the doctor forbids you imbibing alcohol, having practiced regular periods of sobriety will help you to easily get through that period.)
  2. The idea of those misfortunes won’t cause us anxiety, because we know we can withstand and even be content in just about any scenario. (You can look forward to a birthday party with friends where you know the booze will be flowing; you won’t be downtrodden about not being able to have any fun, because you know you can enjoy things just fine without alcohol.)
  3. It helps us appreciate the pleasures we do have, when we have them. (If you then receive a clean bill of health, you’ll be far more appreciative of the dram of whiskey you can enjoy with friends.)

This is one of the practices most associated with Stoicism, and there are a number of specific things you can do to welcome discomfort into your life and harden your general resolve:

  • Enroll in The Strenuous Life (embrace the motto of “Do Hard Things”)
  • Take cold showers
  • Hold/try to calm a crying baby while staying completely cool
  • Exercise outside in inclement weather (perhaps without shirt, shoes, etc.)
  • Keep your house at a higher temp in the summer, and a lower temp in the winter (don’t freeze out your family though; be reasonable!)
  • Eat nothing but rice/beans for a week (or a month)
  • Fast from food completely for 24 hours once a month
  • Embrace challenging situations in which you aren’t comfortable (travel/vacation with your kids, go to an event you don’t want to attend, make small talk with strangers, volunteer at a soup kitchen)
  • Do manual labor around your house instead of hiring it out

There are innumerable ways to embrace some semblance of discomfort in your life, and it can and will be different for each person. Find yours, and tackle it head on. As Irvine astutely observes, “The act of forgoing pleasure can itself be pleasant.” Embrace the grind!

5. Vigorously Pursue Character and Virtue

“Every day I reduce the number of my vices.” —Seneca

To the Stoics, the best way to live well was to pursue virtue. William Irvine even writes: “What, then, must a person do to have what the Stoics would call a good life? Be virtuous!” In becoming a better person — a man of great character — we’ll naturally find fulfillment, but also make greater contributions to society as a whole in the process. How might that happen, you ask? If you’re committed to virtue, won’t you volunteer more? Be more likely to help a stranger in need? Won’t you take on the role of Neighborhood Watch leader or Little League coach? Will you be more likely to say “Yes!” when a favor is asked? These are all things that improve our communities, and are natural byproducts of attaining greater personal virtue and character.

How does one become more virtuous though? How do you develop your character and exercise it in daily life? Luckily, there are a number of good options (many of which we’ve previously covered in-depth):

Regularly ask yourself: “What would my best self do in this situation?” Father James Martin brought up this idea in his book The Jesuit Guide to (Almost) Everything and in his interview with Brett on our podcast. All of us have an ideal version of ourselves in our head. That version eats better, exercises more, is a little more patient with his wife and kids, doesn’t waste time at work, etc. To more consistently act in ways that align with this ideal, simply ask what your best self would do, or how that best self would decide, in any given scenario:

Would my best self take two minutes to floss in the morning?

Would my best self choose a hard-boiled egg to snack on, or a Girl Scout cookie?

Would my best self call his parents and grandparents just a little more often?

Would my best self watch porn?

Would my best self write more letters to old friends as a way to stay in touch?

Would my best self have a little more patience with his kids’ drawn-out bedtime routines? 

Would my best self yell and flip the bird to the guy who cut him off on the freeway?

Would my best self take work time to dink around with his fantasy football team?

Would my best self read a book on the Kindle app, or play another level of Candy Crush?

Would my best self pursue romancing his wife, or spend another conversation-less night watching TV on the couch?

Would my best self have yet another drink?

Would my best self attend the far-away funeral of a dear friend’s parent?

Would my best self volunteer to clean up a park on a weekend morning, or would he sleep in?

It’s such a simple question to ask, but remarkably powerful. And these aren’t just theoretical examples. Some of these are the very questions I’ve been asking myself since I read Fr. Martin’s book late last year. And while I don’t always follow through on what I know my best self would do (particularly when it comes to Girl Scout cookies), I’ve made enormous strides in making more virtuous decisions on a consistent basis, and am slowly getting closer to that ideal.

Follow Benjamin Franklin’s virtue plan. As a 20-year-old, Franklin set a lofty goal for himself: attain moral perfection. To do so, he developed a 13-week plan to improve himself in 13 areas or virtues. He’d particularly focus on one each week, while also keeping track of his progress with the others. We’ve written about the program in-depth here, and we have also created a unique journal that acts as a virtue tracker based on this 13-week plan. While Franklin never did attain perfection, over time he saw his missteps decrease, and had this to say about his program later in his life:

“Tho’ I never arrived at the perfection I had been so ambitious of obtaining, but fell far short of it, yet I was, by the endeavour, a better and a happier man than I otherwise should have been if I had not attempted it.”

Ask “What good shall I do this day?” Another of Franklin’s ideas on his own pursuit toward being more virtuous. Every morning he’d ask himself this question, and every evening he’d reflect with “What good have I done today?” This question will have you focus less on your pie-in-the-sky “I want to change the world” ideas, and more on doing daily kindnesses to and for your fellow humans. Whether it’s writing a letter home, helping an elderly woman with her groceries, or maybe even just giving someone (your wife, a stranger, anyone!) a compliment, sometimes going smaller to change the world accomplishes much more. Read more about this idea here.

Develop a code of principles. How can you pursue virtue if you aren’t sure of your life’s guiding principles? Massimo Pigliucci writes in How to Be a Stoic: “the question of how to live is central. How should we handle life’s challenges and vicissitudes? How should we conduct ourselves in the world and treat others?” You need some sort of guide in order to best answer those questions; the answers aren’t going to come out of thin air.

The Stoics thought there was one universal Truth which could be discovered by contemplating the laws of Nature. You may choose a different course of study. Whether from religious texts, philosophical ideas, or some combination thereof arrived at through your own rigorous reading and reflection (à la Winston Churchill), it should be your aim to acquire a defined set of principles and values you’ll adhere to in your daily life. If you aren’t sure where to start, dig into classic religious texts. From there dive into various schools of philosophy. What resonates in your soul? What are some practices and/or spiritual disciplines your ideal self would commit to? Speaking of disciplines . . .

Regularly practice the spiritual disciplines. While called “spiritual” because their original purpose was to bring the practitioner closer to God, these disciplines can be used by anyone in order to develop character and “train the soul.” From fasting, to pursuing solitude, to doing service and practicing gratitude, there are a number of disciplines that have guided and strengthened higher-purpose-minded people for thousands of years. Read our series on the topic, and decide which you’d like to take up in daily, weekly, monthly, and annual cycles. You’re guaranteed to come out on the other side more centered, virtuous, and fulfilled.

Pick one of these ideas, stick with it, and see what happens. The only thing holding you back from attaining greater character and virtue is yourself. If you truly and wholeheartedly pursue the task — making it a goal to in fact get veritably drunk on virtue — you’re bound to make strides, and as noted above, you’ll improve your community at the same time.  

Stoicism is a rich philosophy, but it’s not just for contemplation. Full of ancient truths, it’s got myriad modern applications. Put it into action, and practice the art of living.



Further reading:

A Guide to the Good Life by William Irvine (the best modern guidebook, in my opinion)

How to Be a Stoic by Massimo Pigliucci

The Daily Stoic by Ryan Holiday

Meditations by Marcus Aurelius

Letters from a Stoic by Seneca

Discourses by Epictetus

The post 5 Ancient Stoic Tactics for Modern Life appeared first on The Art of Manliness.


Saturday Morning Breakfast Cereal - Hubris



If you're more irritated about the geographical location of the penguins than the fact that the penguins can talk, I have nothing to say to you.


Smells Like Teen Spirit in a major key is an upbeat pop-punk song


This bent my brain a little: if you re-tune Nirvana’s Smells Like Teen Spirit in a major key, it sounds like an upbeat pop-punk song. Like, Kurt Cobain actually sounds happy when he sings “oh yeah, I guess it makes me smile” and the pre-chorus — “Hello, hello, hello, how low” — is downright joyous. Although I guess it shouldn’t be super surprising…in a 1994 interview with Rolling Stone, Cobain admitted that the song was meant to be poppy.

I was trying to write the ultimate pop song. I was basically trying to rip off the Pixies. I have to admit it [smiles]. When I heard the Pixies for the first time, I connected with that band so heavily I should have been in that band — or at least in a Pixies cover band. We used their sense of dynamics, being soft and quiet and then loud and hard.

“Teen Spirit” was such a clichéd riff. It was so close to a Boston riff or “Louie, Louie.” When I came up with the guitar part, Krist looked at me and said, “That is so ridiculous.” I made the band play it for an hour and a half.

If, like me, you don’t know a whole lot about music, here’s the difference between major and minor chords & scales.

The difference between major and minor chords and scales boils down to a difference of one essential note — the third. The third is what gives major-sounding scales and chords their brighter, cheerier sound, and what gives minor scales and chords their darker, sadder sound.
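To make that one-note difference concrete, here's a minimal sketch in Python. The `NOTE_NAMES` table and `triad` helper are my own illustrative code (not from any music library): a triad is a root plus a third plus a fifth, measured in semitones, and nudging the third down a single semitone flips the chord from major to minor.

```python
# Pitch classes in semitone order, starting from C (sharps only, for brevity).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root: int, quality: str) -> list[str]:
    """Return the note names of a major or minor triad built on `root`
    (0 = C, 9 = A, ...). A major third is 4 semitones above the root,
    a minor third is 3; the fifth (7 semitones) is the same in both."""
    third = 4 if quality == "major" else 3
    return [NOTE_NAMES[(root + offset) % 12] for offset in (0, third, 7)]

print(triad(9, "minor"))  # A minor: ['A', 'C', 'E']
print(triad(9, "major"))  # A major: ['A', 'C#', 'E']
```

Only the middle note changes — C versus C# — which is exactly the "brighter versus darker" difference described above.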

You can also listen to the song on Soundcloud.

See also this falling shovel sounds exactly like Smells Like Teen Spirit.


Overcoming Us vs. Them


As a kid, I saw the 1968 version of Planet of the Apes. As a future primatologist, I was mesmerized. Years later I discovered an anecdote about its filming: At lunchtime, the people playing chimps and those playing gorillas ate in separate groups.

It’s been said, “There are two kinds of people in the world: those who divide the world into two kinds of people and those who don’t.” In reality, there’s lots more of the former. And it can be vastly consequential when people are divided into Us and Them, ingroup and outgroup, “the people” (i.e., our kind) and the Others.

The core of Us/Them-ing is emotional and automatic.

Humans universally make Us/Them dichotomies along lines of race, ethnicity, gender, language group, religion, age, socioeconomic status, and so on. And it’s not a pretty picture. We do so with remarkable speed and neurobiological efficiency; have complex taxonomies and classifications of ways in which we denigrate Thems; do so with a versatility that ranges from the minutest of microaggression to bloodbaths of savagery; and regularly decide what is inferior about Them based on pure emotion, followed by primitive rationalizations that we mistake for rationality. Pretty depressing.

But crucially, there is room for optimism. Much of that is grounded in something distinctly human: we all carry multiple Us/Them divisions in our heads. A Them in one case can be an Us in another, and it can take only an instant for that identity to flip. Thus, there is hope that, with science’s help, clannishness and xenophobia can lessen, perhaps even so much so that Hollywood-extra chimps and gorillas can break bread together.

The Strength of Us Versus Them

Considerable evidence suggests that dividing the world into Us and Them is deeply hard-wired in our brains, with an ancient evolutionary legacy. For starters, we detect Us/Them differences with stunning speed. Stick someone in a “functional MRI”—a brain scanner that indicates activity in various brain regions under particular circumstances. Flash up pictures of faces for 50 milliseconds—a 20th of a second—barely at the level of detection. And remarkably, with even such minimal exposure, the brain processes faces of Thems differently than Us-es.

This has been studied extensively with the inflammatory Us/Them of race. Briefly flash up the face of someone of a different race (compared with a same-race face) and, on average, there is preferential activation of the amygdala, a brain region associated with fear, anxiety, and aggression. Moreover, other-race faces cause less activation than do same-race faces in the fusiform cortex, a region specializing in facial recognition; along with that comes less accuracy at remembering other-race faces. Watching a film of a hand being poked with a needle causes an “isomorphic reflex,” where the part of the motor cortex corresponding to your own hand activates, and your hand clenches—unless the hand is of another race, in which case less of this effect is produced.

The brain’s fault lines dividing Us from Them are also shown with the hormone oxytocin. It’s famed for its pro-social effects—oxytocin prompts people to be more trusting, cooperative, and generous. But, crucially, this is how oxytocin influences behavior toward members of your own group. When it comes to outgroup members, it does the opposite.

The automatic, unconscious nature of Us/Them-ing attests to its depth. This can be demonstrated with the fiendishly clever Implicit Association Test. Suppose you’re deeply prejudiced against trolls, consider them inferior to humans. To simplify, this can be revealed with the Implicit Association Test, where subjects look at pictures of humans or trolls, coupled with words with positive or negative connotations. The couplings can support the direction of your biases (e.g., a human face and the word “honest,” a troll face and the word “deceitful”), or can run counter to your biases. And people take slightly longer, a fraction of a second, to process discordant pairings. It’s automatic—you’re not fuming about clannish troll business practices or troll brutality in the Battle of Somewhere in 1523. You’re processing words and pictures, and your anti-troll bias makes you unconsciously pause, stopped by the dissonance linking troll with “lovely,” or human with “malodorous.”

We’re not alone in Us/Them-ing. It’s no news that other primates can make violent Us/Them distinctions; after all, chimps band together and systematically kill the males in a neighboring group. Recent work, adapting the Implicit Association Test to another species, suggests that even other primates have implicit negative associations with Others. Rhesus monkeys would look at pictures either of members of their own group or strangers, coupled with pictures of things with positive or negative connotations. And monkeys would look longer at pairings discordant with their biases (e.g., pictures of members of their own group with pictures of spiders). These monkeys don’t just fight neighbors over resources. They have negative associations about them—“Those guys are like yucky spiders, but us, us, we’re like luscious fruit.”

Thus, the strength of Us/Them-ing is shown by: the speed and minimal sensory stimuli the brain requires to process group differences; the tendency to group according to arbitrary differences, and then imbue those differences with supposedly rational power; the unconscious automaticity of such processes; and its rudiments in other primates. As we’ll see now, we tend to think of Us, but not Thems, fairly straightforwardly.

The Nature of Us

Across cultures and throughout history, people who comprise Us are viewed in similarly self-congratulatory ways—We are more correct, wise, moral, and worthy. Us-ness also involves inflating the merits of our arbitrary markers, which can take some work—rationalizing why our food is tastier, our music more moving, our language more logical or poetic.

Us-ness also carries obligations toward the other guy—for example, in studies in sports stadiums, a researcher posing as a fan, complete with sweatshirt supporting one of the teams and in need of help with something, is more likely to be helped by a fellow fan than by an opposing one.

Ingroup favoritism raises a key question—at our core, do we want Us to do “well,” by maximizing absolute levels of well-being, or merely “better than,” by maximizing the gap between Us and Them?

We typically claim to wish for the former, but can smolder with desire for the latter. This can be benign—in a tight pennant race, a loss for the hated rival to a third party is as good as a win for the home team, and for sectarian sports fans, both outcomes similarly activate brain pathways associated with reward and the neurotransmitter dopamine. But sometimes, choosing “better than” over “well” can be disastrous. It’s not a great mindset to think you’ve won World War III if afterward Us have two mud huts and three fire sticks and They have only one of each.

Among the most pro-social things we do for ingroup members is readily forgive them for transgressions. When a Them does something wrong, it reflects essentialism—that’s the way They are, always have been, always will be. When an Us is in the wrong, however, the pull is toward situational interpretations—we’re not usually like that, and here’s the extenuating circumstance to explain why he did this. Situational explanations for misdeeds are the reason why defense lawyers want jurors who will view the defendant as an Us.

Something interesting and different can happen when someone’s transgression airs Us’s dirty laundry, affirming a negative stereotype. Ingroup shame can provoke intense punishment for the benefit of outsiders. Consider Rudy Giuliani, growing up in Brooklyn in an Italian-American enclave dominated by organized crime (Giuliani’s father served time for armed robbery and then worked for a mob loan shark). Giuliani gained prominence in 1985 as the attorney prosecuting the “Five Families” in the Mafia Commission Trial, effectively destroying them. He was strongly motivated to counter the stereotype of “Italian-American” as synonymous with organized crime—“If [the successful prosecution is] not enough to remove the Mafia prejudice, then there probably could not be anything you could do to remove it.” If you want someone to ferociously prosecute Mafiosi, get a proud Italian-American outraged by the stereotypes generated by the mob.

Thus, being an Us carries an array of ingroup expectations and obligations. Is it possible to switch from one category of Us to another? That’s easy in, say, sports—when a player is traded he doesn’t serve as a fifth column, throwing games in his new uniform to benefit his old team. The core of such a contractual relationship is the fungibility of employer and employee.

At the other extreme are Us memberships that are not fungible, transcending negotiation. People aren’t traded from the Shiites to the Sunnis, or from the Iraqi Kurds to the Sami herders in Finland. It’s a rare Kurd who wants to be Sami, and her ancestors would likely turn over in their graves when she nuzzled her first reindeer. Converts are often subject to retribution by those they left—consider Meriam Ibrahim, sentenced to death in Sudan in 2014 for converting to Christianity—and suspicion from those they joined.

The Nature of Them

Do we think or feel our way toward disliking Them?

Us/Them-ing is readily framed cognitively. Ruling classes do cognitive cartwheels to justify the status quo. Likewise, it’s a cognitive challenge to accommodate the celebrity Them, the neighborly Them who has saved our keister—“Ah, this Them is different.”

Viewing Thems in certain threatening ways requires cognitive subtlety. Being afraid that some Them will rob you is rife with affect and particularism. But fearing that those Thems will take our jobs, manipulate the banks, dilute our bloodlines, etc., requires thoughts about economics, sociology, and pseudoscience.

Despite that role of cognition, the core of Us/Them-ing is emotional and automatic, as summarized by when we say, “I can’t put my finger on why, but it’s just wrong when They do that.” Jonathan Haidt of New York University has shown that often, cognitions are post-hoc justifications for feelings and intuitions, to convince ourselves that we have indeed rationally put our finger on why.

This can be shown with neuroimaging studies. As noted, when fleetingly seeing the face of a Them, the amygdala activates. Critically, this comes long before (on the time scale of brain processing) more cognitive, cortical regions are processing the Them. The emotions come first.

Dividing the world into Us and Them is deeply hard-wired.

The strongest evidence that abrasive Them-ing originates in emotional, automatic processes is that supposed rational cognitions about Thems can be unconsciously manipulated. Consider this array of findings:

Show subjects slides about some obscure country; afterward, they will have more negative attitudes toward the place if, between slides, pictures of faces with fearful expressions appeared at subliminal speeds.

Sitting near smelly garbage makes people more socially conservative about outgroup issues (e.g., heterosexuals’ attitudes toward gay marriage).

Christians express more negative attitudes toward non-Christians if they’ve just walked past a church.

In one study, commuters at train stations in predominantly white suburbs filled out questionnaires about political views. Then, at half the stations, a pair of young Mexicans, conservatively dressed and chatting quietly, appeared daily on the platform for two weeks, after which commuters filled out second questionnaires. Remarkably, the presence of such pairs made people more supportive of decreasing legal immigration from Mexico and of making English the official language, and more opposed to amnesty for undocumented immigrants (without changing attitudes about Asian-Americans, African-Americans, or Middle Easterners).

Women, when ovulating, have more negative attitudes about outgroup men.

In other words, our visceral, emotional views of Thems are shaped by subterranean forces we’d never suspect. And then our cognitions sprint to catch up with our affective selves, generating the minute factoid or plausible fabrication that explains why we hate Them. It’s a kind of confirmation bias: remembering supportive better than opposing evidence; testing things in ways that can support but not negate your hypothesis; skeptically probing outcomes you don’t like more than ones you do.

The Heterogeneity of Thems

Of course, different types of Thems evoke different feelings (and different neurobiological responses). Most common is to view Them as threatening, angry, and untrustworthy. In economic games people implicitly treat other-race individuals as less trustworthy or reciprocating. Whites judge African-American faces as angrier than white faces, and are more likely to categorize racially ambiguous angry faces as the other race.

But Thems do not solely evoke a sense of menace; sometimes, it’s disgust. This brings up one fascinating brain region, the insula. In mammals, it responds to the taste or smell of something rotten, and triggers stomach lurching and gag reflexes. In other words, it protects animals from poisonous food. Crucially, in humans the insula not only mediates such sensory disgust, but also moral disgust—have subjects recount something rotten they’ve done, show them pictures of morally appalling things (e.g., a lynching), and the insula activates. It’s why it’s not just metaphorical that sufficiently morally disgusting material makes us feel sick to our stomachs. And Thems that typically evoke a sense of disgust (e.g. drug addicts) activate the insula at least as much as the amygdala.

Having viscerally negative feelings about abstract features of Thems is challenging; being disgusted by another group’s abstract beliefs isn’t easy for the insula. Us/Them markers provide a stepping-stone. Feeling disgusted by Them because they eat repulsive, sacred, or adorable things, slather themselves with rancid scents, dress in scandalous ways—this the insula can sink its teeth into. In the words of the psychologist Paul Rozin of the University of Pennsylvania, “Disgust serves as an ethnic or outgroup marker.” Deciding that They eat disgusting things facilitates deciding that They also have disgusting ideas about, say, deontological ethics.

Then there are Thems who are ridiculous, i.e., subject to ridicule, humor as hostility. Outgroups mocking the ingroup is a weapon of the weak, lessening the sting of subordination. But when an ingroup mocks an outgroup, it solidifies negative stereotypes and reifies the hierarchy.

Thems are also frequently viewed as more homogeneous than Us, with simpler emotions and less sensitivity to pain. For example, whether in ancient Rome, medieval England, imperial China, or the antebellum South, the elite had system-justifying stereotypes of slaves as simple, childlike, and incapable of independence.

Thus, different Thems come in different flavors with immutable, icky essences—threatening and angry, disgusting and repellent, ridiculous, primitive, and undifferentiated.

Cold and/or Incompetent

Important work by Susan Fiske of Princeton University explores the taxonomies of Thems we carry in our heads. She finds that we tend to categorize Thems along two axes: “warmth” (is the individual or group a friend or foe, benevolent or malevolent?) and “competence” (how effectively can the individual or group carry out their intentions?).

The axes are independent. Ask subjects to assess someone; priming them with cues about the person’s status alters ratings of competence but not of warmth. Priming about the person’s competitiveness does the opposite. These two axes produce a matrix with four corners. We rate ourselves as high in both warmth and competence (H/H), naturally. Americans typically rate good Christians, African-American professionals, and the middle class this way.

There’s the other extreme, low in both warmth and competence (L/L). Such ratings go to the homeless or addicts.

Then there’s the high-warmth/low-competence (H/L) realm—the mentally disabled, people with handicaps, infirm elderly. Low warmth/high competence (L/H) is how people in the developing world tend to view the Europeans who colonized them (“competence” here is not about skill at rocket science, but rather the efficacy those people had when getting it into their heads to, say, steal your ancestral lands), and how many minority Americans view whites. It’s the hostile stereotype of Asian-Americans by white America, of Jews in Europe, of Indo-Pakistanis in East Africa, of Lebanese in West Africa, of ethnic Chinese in Indonesia, and of the rich by the poor most everywhere—they’re cold, greedy, clannish but, dang, go to one who is a doctor if you’re seriously sick.

Each extreme tends to evoke consistent feelings. For H/H (i.e., Us), there’s pride. L/H—envy and resentment. H/L—pity. L/L—disgust. Viewing pictures of L/L people activates the amygdala and insula, but not the fusiform face area; this is the same profile evoked by a picture of, say, a maggot-infested wound. In contrast, viewing L/H or H/L individuals activates emotional and cognitive parts of the frontal cortex.

The places between the extremes evoke their own characteristic responses. Individuals who evoke a reaction between pity and pride evoke a desire to help. Floating between pity and disgust is a desire to exclude and demean. Between pride and envy is a desire to associate, to derive benefits from. And between envy and disgust are our most hostile urges to attack.
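Fiske’s two-axis taxonomy can be rendered as a small lookup table. This is a schematic of the findings described above, with labels paraphrased from the text; it is an illustration, not an instrument from the research:

```python
# Fiske's stereotype-content model: each (warmth, competence) quadrant
# maps to the characteristic emotion it tends to evoke.
FEELING = {
    ("high", "high"): "pride",                # Us
    ("low",  "high"): "envy and resentment",  # e.g., resented elites
    ("high", "low"):  "pity",                 # e.g., the infirm elderly
    ("low",  "low"):  "disgust",              # e.g., the homeless, addicts
}

def evoked_feeling(warmth, competence):
    """Look up the characteristic emotion for a quadrant."""
    return FEELING[(warmth, competence)]
```

Recategorization, in these terms, is just a change of key: the same person moved from one quadrant to another evokes a different emotion, which is what the shifts described below illustrate.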

What fascinates me is when someone’s categorization changes. Most straightforward are shifts from high-warmth/high-competence (H/H) status:

H/H to H/L: A parent declining into dementia, evoking poignant protectiveness.

H/H to L/H: The business partner who turns out to have embezzled for decades. Betrayal.

H/H to L/L: The rare instance of that successful acquaintance, where “something happened” and now he’s homeless. Disgust mingled with bafflement—what went wrong?

Then there’s L/L to L/H. When I was a kid in the ’60s, the parochial American view of Japan was the former—World War II’s shadow generating dislike and contempt, and “Made in Japan” was about cheap plastic gewgaws. Then, suddenly, “Made in Japan” meant outcompeting American automakers.

When a homeless guy does cartwheels to return someone’s lost wallet—and you realize he’s more decent than your friends—that’s L/L to H/L.

Most interesting to me is L/H to L/L, which invokes gleeful gloating, helping to explain why persecution of L/H groups usually involves degrading and humiliating them to L/L status. During China’s Cultural Revolution, resented elites were first paraded in dunce caps before exile to labor camps. Nazis eliminated the mentally ill, already viewed as L/L, by unceremoniously murdering them; in contrast, pre-murder treatment of the L/H Jews involved forcing them to wear degrading yellow armbands, cutting one another’s beards, scrubbing sidewalks with toothbrushes before jeering crowds. When Idi Amin expelled tens of thousands of L/H Indo-Pakistani citizens from Uganda in the 1970s, he first invited his army to rob, beat, and rape them. Turning L/H Thems into L/L Thems accounts for some of our worst savagery.

Complexities in our categorization of Thems abound. There’s the phenomenon of grudging respect, even a sense of camaraderie, with an enemy—the perhaps apocryphal picture of World War I flying aces, where a glimmer of Us-ness is shared with someone trying to kill you (“Ah, monsieur, if it were another time, I would delight in discussing aeronautics with you over some good wine.” “Baron, it is an honor that it is you who shoots me out of the sky”). And there are the intricacies of differing feelings about economic versus cultural enemies, new versus ancient ones, or the distant alien enemy versus the familiar one next door (consider Ho Chi Minh, rejecting the offer of help from Chinese troops during the Vietnam War with words to the effect of “The Americans will leave in a year or a decade, but the Chinese will stay for a thousand years if we let them in”).

And then there is the profoundly strange phenomenon of the self-hating ________ (take your pick of the outgroup member), who has bought into the negative stereotypes and favors the ingroup. This was shown by psychologists Kenneth and Mamie Clark in their heart-breaking “doll studies,” in the 1940s, demonstrating how African-American children, along with white children, preferred playing with white dolls over black ones, ascribing more positive attributes to them (e.g., nice, pretty). That this effect was most pronounced in black kids in segregated schools was cited in Brown v. Board of Education. Or consider the scenario of the strident crusader against gay rights who turns out to be closeted—the Möbius-strip pathology of accepting that you are an inferior Them. We put monkeys, even with their complexities of associating alien monkeys with spiders, to shame when it comes to the psychological vagaries of dividing the world into Us and Them.

Multiple Us-es

We also recognize that other individuals belong to multiple categories, and shift which we consider most relevant. Not surprisingly, lots of that literature concerns race, exploring whether it is an Us/Them categorization that trumps all others.

The primacy of race has folk-intuition appeal. First, race is a biological attribute, a conspicuous fixed identity that readily prompts essentialist thinking. Moreover, humans evolved under conditions where different skin color conspicuously signals that someone is a distant Them. Furthermore, a large percentage of cultures, long before Western contact, make status distinctions by skin color.

And yet, evidence is to the contrary. First, while there are obvious biological contributions to racial differences, “race” is a biological continuum rather than discrete categories—for example, unless you cherry-pick the data, genetic variation within race is generally as great as between races. And this really is no surprise when looking at the range of variation within a racial rubric—go compare, say, Sicilians with Swedes.

Moreover, race fails as a fixed classification system. At various times in U.S. census history, “Mexican” and “Armenian” were considered races; southern Italians and northern Europeans were classified differently; someone with one black great-grandparent and seven white ones was “white” in Oregon but not Florida. This is race as a cultural construct.

So it’s not surprising that racial Us/Them dichotomies are frequently trumped by other classifications. In one study, subjects saw pictures of individuals, each black or white, each associated with a statement, and then had to recall which face went with which statement. There was automatic racial categorization—if subjects misattributed a quote, the correct and incorrect faces were likely to be of the same race. Next, half the blacks and whites pictured wore the same distinctive yellow shirt; the other half wore gray. Now subjects most often confused statements by shirt color. Furthermore, gender reclassification particularly overrides unconscious racial categorization. After all, while races have evolved relatively recently in hominid history (probably over the course of just a few tens of thousands of years), our ancestors, almost all the way back to when they were paramecia, cared about Boy or Girl.

Important research by Mary Wheeler along with Fiske showed how categorization is shifted, studying other-race/amygdala activation. When subjects are instructed to look for a distinctive dot in each picture, other-race faces don’t activate the amygdala; face-ness wasn’t being processed. Judging whether each face looked older than some age wasn’t a recategorization that could eliminate the other-race amygdaloid response. But for a third group of subjects, a vegetable was displayed before each face; subjects judged whether the person liked that vegetable. And the amygdala didn’t respond to other-race faces.

Why? You look at the Them, thinking about what food she’d like. You picture her shopping, or ordering a meal in a restaurant. Best case scenario, you decide you and she share some vegetable preference—a smidgen of Us-ness. Worst case, you decide you two differ, a relatively benign Them—history is not stained with blood spilled by animosities between partisans for broccoli versus cauliflower. Most importantly, as you imagine her sitting at dinner, enjoying that food, you are thinking of her as an individual, the surest way to weaken automatic categorization of someone as a Them.

Rapid recategorizations can occur in the most brutal, unlikely, and intensely poignant circumstances:

In the Battle of Gettysburg, Confederate general Lewis Armistead was mortally wounded. As he lay on the battlefield, he gave a secret Masonic sign, hoping it would be recognized by a fellow Mason. It was, by Union officer Hiram Bingham, who protected him, and got him to a Union field hospital. In an instant the Us/Them of Union/Confederate faded before Mason/non-Mason.

During World War II, British commandos kidnapped German general Heinrich Kreipe in Crete, followed by a dangerous 18-day march to the coast to rendezvous with a British ship. One day the party saw the snows of Crete’s highest peak. Kreipe mumbled to himself the first line (in Latin) of an ode by Horace about a snowcapped mountain. At which point the British commander, Patrick Leigh Fermor, continued the recitation. The two men realized that they had, in Leigh Fermor’s words, “drunk at the same fountains.” A recategorization. Leigh Fermor had Kreipe’s wounds treated and personally ensured his safety. The two stayed in touch after the war and were reunited decades later on Greek television. “No hard feelings,” said Kreipe, praising their “daring operation.”

And finally there is the World War I Christmas truce, where opposing trench soldiers spent the day singing, praying, and partying together, playing soccer, and exchanging gifts, where soldiers up and down the lines struggled to extend the truce. It took all of one day for British-versus-German to yield to something more important—all of us in the trenches versus the officers in the rear who want us to kill each other.

We all have multiple dichotomies in our heads, and ones that seem inevitable and crucial can, under the right circumstances, evaporate in an instant.

Lessening the Impact of Us/Them-ing

So how can we make these dichotomies evaporate? Some thoughts:

Contact: Prolonged contact between groups can lessen Us/Them-ing. In the 1950s the psychologist Gordon Allport proposed “contact theory.” Inaccurate version: bring Us-es and Thems together (say, teenagers from two hostile nations in a summer camp) and animosities disappear, similarities start to outweigh differences, everyone becomes an Us. More accurate version: put Us-es and Thems together under narrow circumstances and something sort of resembling that happens, but you can also blow it and worsen things.

Some of the effective narrower circumstances: each side has roughly equal numbers; everyone’s treated equally and unambiguously; contact is lengthy and on neutral territory; there are “superordinate” goals where everyone works together on a meaningful task (say, summer campers turning a meadow into a soccer field).

Even then, effects are typically limited—Us-es and Thems quickly lose touch, changes are transient and often specific—“I hate those Thems, but I know one from last summer who’s actually a good guy.” Where contact really causes fundamental change is when it is prolonged. Then we’re making progress.

Approaching the implicit: If you want to lessen an implicit Us/Them response, one good way is priming beforehand with a counter-stereotype (e.g., a reminder of a beloved celebrity Them). Another approach is making the implicit explicit—show people their implicit biases. Another is a powerful cognitive tool—perspective taking. Pretend you’re a Them and explain your grievances. How would you feel? Would your feet hurt after walking a mile in their shoes?

Replace essentialism with individuation: In one study, white subjects were asked about their acceptance of racial inequalities. Half were first primed toward essentialist thinking, being told, “Scientists pinpoint the genetic underpinnings of race.” Half heard an anti-essentialist prime—“Scientists reveal that race has no genetic basis.” The latter made subjects less accepting of inequalities.

Flatten hierarchies: Steep ones sharpen Us/Them differences, as those on top justify their status by denigrating the have-nots, while the latter view the ruling class as low warmth/high competence. For example, the cultural trope that the poor are more carefree, in touch with and able to enjoy life’s simple pleasures while the rich are unhappy, stressed, and burdened with responsibility (think of miserable Scrooge and those happy-go-lucky Cratchits). Likewise with the “they’re poor but loving” myth of framing the poor as high warmth/low competence. In one study of 37 countries, the greater the income inequality, the more the wealthy held such attitudes.

Some Conclusions

From massive barbarity to pinpricks of microaggression, Us versus Them has produced oceans of pain. Yet I don’t think our goal should be to “cure” us of all Us/Them dichotomizing (apart from that being impossible, unless you have no amygdala).

I’m fairly solitary—I’ve spent a lot of my life living alone in a tent in Africa, studying another species. Yet some of my most exquisitely happy moments have come from feeling like an Us, feeling accepted, safe, and not alone, feeling part of something large and enveloping, with a sense of being on the right side and doing both well and good. There are even Us/Thems that I—eggheady, meek, and amorphously pacifistic—would kill or die for.

If we accept that there will always be sides, it’s challenging to always be on the side of angels. Distrust essentialism. Remember that supposed rationality is often just rationalization, playing catch-up with subterranean forces we never suspect. Focus on shared goals. Practice perspective taking. Individuate, individuate, individuate. And recall how often, historically, the truly malignant Thems hid themselves while making third parties the fall guy.

Meanwhile, give the right-of-way to people driving cars with the “Mean people suck” bumper sticker, and remind everyone that we’re in this together against Lord Voldemort and House Slytherin.

Robert Sapolsky is a professor of biology, neurology, and neurosurgery at Stanford University, and author of A Primate’s Memoir, Why Zebras Don’t Get Ulcers, and Behave: The Biology of Humans at Our Best and Worst, his newest book.

From Behave: The Biology of Humans at Our Best and Worst by Robert M. Sapolsky, published on May 2, 2017 by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2017 by Robert M. Sapolsky.

This article was originally published in our “The Absurd” issue in June, 2017.

DRM's Dead Canary: How We Just Lost the Web, What We Learned from It, and What We Need to Do Next

EFF has been fighting against DRM and the laws behind it for a decade and a half, intervening in the US Broadcast Flag, the UN Broadcasting Treaty, the European DVB CPCM standard, the W3C EME standard and many other skirmishes, battles and even wars over the years. With that long history behind us, there are two things we want you to know about DRM:

  1. Everybody on the inside secretly knows that DRM technology is irrelevant, but DRM law is everything; and
  2. The reason companies want DRM has nothing to do with copyright.

These two points have just been demonstrated in a messy, drawn-out fight over the standardization of DRM in browsers, and since we threw a lot of blood and treasure at that fight, one thing we hope to salvage is an object lesson that will drive these two points home and provide a roadmap for the future of DRM fighting.


Here's how DRM works, at a high level: a company wants to provide a customer (you) with a digital asset (like a movie, a book, a song, a video game or an app), but they want to control what you do with that file after you get it.

So they encrypt the file. We love encryption. Encryption works. With relatively little effort, anyone can scramble a file so well that no one will ever be able to decrypt it unless they're provided with the key.

Let's say this is Netflix. They send you a movie that's been scrambled and they want to be sure you can't save it and watch it later from your hard-drive. But they also need to give you a way to view the movie, too. At some point, that means unscrambling the movie. And there's only one way to unscramble a file that's been competently encrypted: you have to use the key.

So Netflix also gives you the unscrambling key.

But if you have the key, you can just unscramble the Netflix movies and save them to your hard drive. How can Netflix give you the key but control how you use it?

Netflix has to hide the key somewhere on your computer, like in a browser extension or an app. This is where DRM's technological bankruptcy comes in. Hiding something well is hard. Hiding something well in a piece of equipment that you give to your adversary to take away with them and do anything they want with is impossible.

Maybe you can't find the keys that Netflix hid in your browser. But someone can: a bored grad student with a free weekend, a self-taught genius decapping a chip in their basement, a competitor with a full-service lab. One tiny flaw in any part of the fragile wrapping around these keys, and they're free.

And once that flaw is exposed, anyone can write an app or a browser plugin that does have a save button. It's game over for the DRM technology. (The keys escape pretty regularly, just as fast as they can be revoked by the DRM companies.)
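The core asymmetry can be shown with a toy cipher. A simple XOR scramble stands in here for the real ciphers DRM systems use, and the key, "movie" bytes, and names are all hypothetical — but the point holds for any symmetric scheme: once a program holds the decryption key, nothing in the math distinguishes decrypting for playback from decrypting to disk.

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: applying it twice with the same key round-trips.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"hidden-in-the-player"      # hypothetical key shipped to the client
movie = b"frame1 frame2 frame3"    # stand-in for the video data
scrambled = xor_crypt(movie, key)  # what the service sends over the wire

# "Authorized" playback and "unauthorized" saving are the same operation:
played = xor_crypt(scrambled, key)
assert played == movie
```

A real player and a rogue "save button" plugin would make exactly the same call with exactly the same key — which is why the restriction has to come from law, not from the technology.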

DRM gets made over the course of years, by skilled engineers, at a cost of millions of dollars. It gets broken in days, by teenagers, with hobbyist equipment. That's not because the DRM-makers are stupid, it's because they're doing something stupid.

Which is where the law comes in. DRM law gives rightsholders more forceful, far-ranging legal powers than laws governing any other kind of technology. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), whose Section 1201 provides for felony liability for anyone commercially engaged in bypassing a DRM system: 5 years in prison and a $500,000 fine for a first offense. Even noncommercial bypass of DRM is subject to liability. It also makes it legally risky to even talk about how to bypass a DRM system.

So the law shores up DRM systems with a broad range of threats. If Netflix designs a video player that won't save a video unless you break some DRM, they now have the right to sue -- or sic the police -- on any rival that rolls out an improved alternative streaming client, or a video-recorder that works with Netflix. Such tools wouldn't violate copyright law any more than a VCR or a Tivo does, but because that recorder would have to break Netflix DRM, they could use DRM law to crush it.

DRM law goes beyond mere bans on tampering with DRM. Companies also use Section 1201 of the DMCA to threaten security researchers who discover flaws in their products. The law becomes a weapon they can aim at anyone who wants to warn their customers (still you) that the products you're relying on aren't fit for use. That includes warning people about flaws in DRM that expose them to being hacked.

It's not just the USA and not just the DMCA, either. The US Trade Representative has "convinced" countries around the world to adopt a version of this rule.


DRM law has the power to do untold harm. Because it affords corporations the power to control the use of their products after sale, the power to decide who can compete with them and under what circumstances, and even who gets to warn people about defective products, DRM laws represent a powerful temptation.

Some things that aren't copyright infringement: buying a DVD while you're on holiday and playing it when you get home. It is obviously not a copyright infringement to go into a store in (say) New Delhi and buy a DVD and bring it home to (say) Topeka. The rightsholder made their movie, sold it to the retailer, and you paid the retailer the asking price. This is the opposite of copyright infringement. That's paying for works on the terms set by the rightsholder. But because DRM stops you from playing out-of-region discs on your home player, the studios can invoke copyright law to decide where you can consume the copyrighted works you've bought, fair and square.

Other not-infringements: fixing your car (GM uses DRM to control who can diagnose an engine, and to force mechanics to spend tens of thousands of dollars for diagnostic information they could otherwise determine themselves or obtain from third parties); refilling an ink cartridge (HP pushed out a fake security update that added DRM to millions of inkjet printers so that they'd refuse remanufactured or third-party cartridges), or toasting home-made bread (though this hasn't happened yet, there's no reason that a company couldn't put DRM in its toasters to control whose bread you can use).

It's also not a copyright infringement to watch Netflix in a browser that Netflix hasn't approved. It's not a copyright infringement to record a Netflix movie to watch later. It's not a copyright infringement to feed a Netflix video to an algorithm that can warn you about upcoming strobe effects that can trigger life-threatening seizures in people with photosensitive epilepsy.
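That last example is concrete enough to sketch. Here is a hypothetical flash detector operating on per-frame luminance values that a decoder could supply; the function, the threshold, and the three-flashes-per-second cutoff loosely echo seizure-safety guidelines like WCAG's, and are illustrative rather than a real tool:

```javascript
// Hypothetical sketch: scan decoded frame luminance (values in 0..1) for
// rapid flashing, the kind of read-only accessibility analysis a
// DRM-locked stream forbids. Thresholds are illustrative, loosely
// following the "more than three flashes per second" rule in
// seizure-safety guidelines.
function findStrobeRisk(frameLuminance, fps, minDelta = 0.1) {
  const risky = [];
  // Slide a one-second window over the frames, counting large swings.
  for (let start = 0; start + fps <= frameLuminance.length; start++) {
    let flashes = 0;
    for (let i = start + 1; i < start + fps; i++) {
      if (Math.abs(frameLuminance[i] - frameLuminance[i - 1]) > minDelta) {
        flashes++;
      }
    }
    if (flashes > 3) {
      risky.push(start); // first frame of a risky one-second window
    }
  }
  return risky;
}
```

The point of the sketch is that the analysis itself is trivial; what DRM forbids is the step before it, decrypting frames so a tool like this can look at them.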


The W3C is the world's foremost open web standards body, a consortium whose members (companies, universities, government agencies, civil society groups and others) engage in protracted wrangles over the best way for everyone to deliver web content. They produce "recommendations" (W3C-speak for "standards") that form the invisible struts that hold up the web. These agreements, produced through patient negotiation and compromise, represent an agreement by major stakeholders about the best (or least-worst) way to solve thorny technological problems.

In 2013, Netflix and a few other media companies convinced the W3C to start work on a DRM system for the web. This DRM system, Encrypted Media Extensions (EME), represented a sharp departure from the W3C's normal business. First, EME would not be a complete standard: the organization would specify an API through which publishers and browser vendors would make DRM work, but the actual "content decryption module" (CDM) wouldn't be defined by the standard. That means that EME was a standard in name only: if you started a browser company and followed all the W3C's recommendations, you still wouldn't be able to play back a Netflix video. For that, you'd need Netflix's permission.
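To make the shape of that incompleteness concrete, here is a hedged sketch of how a page invokes EME. The method names are the ones the recommendation actually defines, but the key-system string and the configuration object are illustrative:

```javascript
// Sketch of the client side of EME. requestMediaKeySystemAccess,
// createMediaKeys and setMediaKeys are the entry points the W3C
// recommendation defines; the key-system string and configuration are
// illustrative. Decryption itself happens inside the proprietary CDM,
// which the spec leaves undefined.
async function setUpDrm(video) {
  const access = await navigator.requestMediaKeySystemAccess(
    'com.widevine.alpha', // a vendor key system; not part of the standard
    [{
      initDataTypes: ['cenc'],
      videoCapabilities: [
        { contentType: 'video/mp4; codecs="avc1.42E01E"' },
      ],
    }]
  );
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);
  return mediaKeys; // license exchange then runs per-session, inside the CDM
}
```

A browser that implements every line of the recommendation can run this code, but the call still fails unless that browser ships a CDM the content owner and its DRM vendor have approved.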

It's hard to overstate how weird this is. Web standards are about "permissionless interoperability." The standards for formatting text mean that anyone can make a tool that can show you pages from the New York Times' website; images from Getty; or interactive charts on Bloomberg. The companies can still decide who can see which pages on their websites (by deciding who gets a password and which parts of the website each password unlocks), but they don't get to decide who can make the web browsing program you type the password into in order to access the website.

A web in which every publisher gets to pick and choose which browsers you can use to visit their sites is a very different one from the historical web. Historically, anyone could make a new browser by making sure it adhered to W3C recommendations, and then start to compete. And while the web has always been dominated by a few browsers, the dominant browsers have changed every decade or so, as new companies and even nonprofits like Mozilla (who make Firefox) overthrew the old order. Technologies that have stood in the way of this permissionless interoperability -- for instance, patent-encumbered video -- have been seen as impediments to the idea of the open web, not standardization opportunities.

When the W3C starts making technologies that only work when they're blessed by a handful of entertainment companies, they're putting their thumbs -- their fists -- on the scales in favor of ensuring that the current browser giants get to enjoy a permanent reign.

But that's the least of it. Until EME, W3C standards were designed to give the users of the web (e.g. you) more control over what your computer did while you were accessing other peoples' websites. With EME -- and for the first time ever -- the W3C is designing technology that takes away your control. EME is designed to allow Netflix -- and other big companies -- to decide what your browser does, even (especially) when you disagree about what that should be.

Since the earliest days of computing, there's been a simmering debate about whether computers exist to control their users, or vice versa (as the visionary computer scientist and education specialist Seymour Papert put it, "children should be programming the computer rather than being programmed by it" -- and that applies equally well to adults). Every W3C standard until 2017 was on the side of people controlling computers. EME breaks with that. It is a subtle but profound shift.


Ay yi yi. That is the three billion user question.

The W3C version of the story goes something like this. The rise of apps has weakened the web. In the pre-app days, the web was the only game in town, so companies had to play by web rules: open standards, open web. But now that apps exist and nearly everyone uses them, big companies can boycott the web, forcing their users into apps instead. That just accelerates the rise of apps, and weakens the web even more. Apps are used to implement DRM, so DRM-using companies are moving to apps. To keep entertainment companies from killing the web outright, the Web must have DRM too.

Even if those companies don't abandon the web altogether, continues this argument, getting them to make their DRM at the W3C is better than letting them make it on an ad-hoc basis. Left to their own devices, they could make DRM that made no accommodations for people with disabilities, and without the W3C's moderating influence, these companies would make DRM that would be hugely invasive of web users' privacy.

The argument ends with a broad justification for DRM: companies have the right to protect their copyrights. We can't expect an organization to spend fortunes creating or licensing movies and then distribute them in a way that lets anyone copy and share them.

We think that these arguments don't hold much water. The web does indeed lack some of its earlier only-game-in-town muscle, but the reality is that companies make money by going where their customers are, and every potential customer has a browser, while only existing customers have a company's apps. The more hoops a person has to jump through in order to become your customer, the fewer customers you'll have. Netflix is in a hyper-competitive market with tons of new entrants (e.g. Disney), and being "that streaming service you can't use on the web" is a serious deficit.

We also think that the media companies and tech companies would struggle to arrive at a standard for DRM outside of the W3C, even a really terrible one. We've spent a lot of time in the smoke-filled rooms of DRM standardization and the core dynamic there is the media companies demanding full-on lockdown for every frame of video, and tech companies insisting that the best anyone can hope for is an ineffectual "speed-bump" that they hope will mollify the media companies. Often as not, these negotiations collapse under their own weight.

Then there's the matter of patents: companies that think DRM is a good idea also love software patents, and the result is an impenetrable thicket of patents that make getting anything done next to impossible. The W3C's patent-pooling mechanism (which is uniquely comprehensive in the standards world and stands as an example of the best way to do this sort of thing) was essential to making DRM standardization possible. What's more, there are key players in the DRM world, like Adobe, who hold significant patent portfolios but are playing an ever-dwindling role in the world of DRM (the avowed goal of EME was to "kill Flash"). If the companies involved had to all sit down and negotiate a new patent deal without the W3C's framework, any of these companies could "turn troll" and insist that all the rest would have to shell out big dollars to license their patents -- they have nothing to lose by threatening the entire enterprise, and everything to gain from even a minuscule per-user royalty for something that will be rolled out into three billion browsers.

Finally, there's no indication that EME had anything to do with protecting legitimate business interests. Streaming video services like Netflix rely on customers subscribing to a whole library, with new material constantly added and a recommendation engine to help them navigate the catalog; a leaked copy of any single title does little to dent that business, because what these services sell is the service, not the files.

DRM for streaming video is all about preventing competition, not protecting copyrights. The purpose of DRM is to give companies the legal tools to prevent activities that would otherwise be allowed. The DRM part doesn't have to "work" (in the sense of preventing copyright infringement) so long as it allows for the invocation of the DMCA.

To see how true this is, just look at Widevine, Google's version of EME. Google bought the company that made Widevine in 2010, but it wasn't until 2016 that an independent researcher actually took a close look at how well it prevented videos from leaking. That researcher, David Livshits, found that Widevine was trivial to circumvent, and had been since its inception, and that the errors that made Widevine so ineffective were obvious to even a cursory examination. If the millions of dollars and the high-powered personnel committed to EME had been allocated to create a technology that effectively prevented copyright infringement, you'd think that Netflix or one of the other media companies in the negotiations would have diverted some of those resources to a quick audit to make sure the stuff actually worked as advertised.

(Funny story: Livshits is an Israeli researcher at Ben Gurion University, and Israel happens to be the rare country that doesn't ban breaking DRM, meaning that Israelis are among the only people who can do this kind of research without fear of legal retaliation.)

But the biggest proof that EME was just a means to shut down legitimate competitors -- and not an effort to protect copyright -- is what happened next.


When EFF joined the W3C, our opening bid was "Don't make DRM."

We put the case to the organization, describing the way that DRM interferes with the important copyright exceptions (like those that allow people to record and remix copyrighted works for critical or transformative purposes) and the myriad problems presented by the DMCA and laws like it around the world.

The executive team of the W3C basically dismissed all arguments about fair use and user rights in copyright as an unfortunate casualty of the need to keep Netflix from ditching the web in favor of apps. As for the DMCA, they said that they couldn't do anything about this crazy law, but they were sure that the W3C's members were not interested in abusing it; they just wanted to keep their high-value movies from being shared on the internet.

So we changed tack, and proposed a kind of "controlled experiment" to find out what the DRM fans at the W3C were trying to accomplish.

The W3C is a consensus body: it makes standards by getting everyone in a room to compromise, moving toward a position that everyone can live with. Our ideal world was "No DRM at the W3C," and DRM is a bad enough idea that it was hard to imagine much of a compromise from there.

But after listening closely to the DRM side's disavowals of DMCA abuse, we thought we could find something that would represent an improvement on the current status quo and that should fit with their stated views.

We proposed a kind of DRM non-aggression pact, through which W3C members would promise that they'd only sue people under laws like DMCA 1201 if there was some other law that had been broken. So if someone violates your copyright, or incites someone to violate your copyright, or interferes with your contracts with your users, or misappropriates your trade secrets, or counterfeits your trademarks, or does anything else that violates your legal rights, you can throw the book at them.

But if someone goes around your DRM and doesn't violate any other laws, the non-aggression pact means that you couldn't use the W3C-standardized DRM as a route to legally shut them down. That would protect security researchers, it would protect people analyzing video to add subtitles and other assistive features, it would protect archivists who had the legal right to make copies, and it would protect people making new browsers.

If all you care about is making an effective technology that prevents lawbreaking, this agreement should be a no-brainer. For starters, if you think DRM is an effective technology, it shouldn't matter if it's illegal to criticize it.

And since the nonaggression pact kept all other legal rights intact, there was no risk that agreeing to it would allow someone to break the law with impunity. Anyone who violated copyrights (or any other rights) would be square in the DMCA's crosshairs, and companies would have their finger on the trigger.


Of course, they hated this idea.

The studios, the DRM vendors and the large corporate members of the W3C participated in a desultory, brief "negotiation" before voting to terminate further discussion and press on. The W3C executive helped them dodge discussions, chartering further work on EME without any parallel work on protecting the open web, even as opposition within the W3C mounted.

By the time the dust settled, EME was published after the most divided vote the W3C had ever seen, with the W3C executive unilaterally declaring that issues of security research, accessibility, archiving and innovation had been dealt with as much as they could be (despite the fact that literally nothing binding was done about any of them). The "consensus" process of the W3C had been so thoroughly hijacked that EME's publication was supported by only 58% of the members who voted in the final poll, and many of those members expressed regret that they had been cornered into voting for something they objected to.

When the W3C executive declared that any protections for the open web were incompatible with the desires of the DRM-boosters, it was a kind of ironic vindication. After all, this is where we'd started, with EFF insisting that DRM wasn't compatible with security disclosures, with accessibility, with archiving or innovation. Now, it seemed, everyone agreed.

What's more, they all implicitly agreed that DRM wasn't about protecting copyright. It was about using copyright to seize other rights, like the right to decide who could criticize your product -- or compete with it.

DRM's sham cryptography means that it only works if you're not allowed to know about its defects. This proposition was conclusively proved when a W3C member proposed that the Consortium should protect disclosures that affected EME's "privacy sandbox" and opened users to invasive spying, and within minutes, Netflix's representative said that even this was not worth considering.

In a twisted way, Netflix was right. DRM is so fragile, so incoherent, that it is simply incompatible with the norms of the marketplace and science, in which anyone is free to describe their truthful discoveries, even if they frustrate a giant company's commercial aspirations.

The W3C tacitly admitted this when they tried to convene a discussion group to come up with some nonbinding guidelines for when EME-using companies should use the power of DRM law to punish their critics and when they should permit the criticism.


They called this "responsible disclosure," but it was far from the kinds of "responsible disclosure" we see today. In current practice, companies offer security researchers enticements to disclose their discoveries to vendors before going public. These enticements range from bug-bounty programs that pay out cash, to leaderboards that provide glory to the best researchers, to binding promises to act on disclosures in a timely way, rather than crossing their fingers, sitting on the newly discovered defects, and hoping no one else re-discovers them and exploits them.

The tension between independent security researchers and corporations is as old as computing itself. Computers are hard to secure, thanks to their complexity. Perfection is elusive. Keeping the users of networked computers safe requires constant evaluation and disclosure, so that vendors can fix their bugs and users can make informed decisions about which systems are safe enough to use.

But companies aren't always the best stewards of bad news about their own products. As researchers have discovered -- the hard way -- telling a company about its mistakes may be the polite thing to do, but it's very risky behavior, apt to get you threatened with legal reprisals if you go public. Many's the researcher who told a company about a bug, only to have the company sit on that news for an intolerably long time, putting its users at risk. Often, these bugs only come to light when they are independently discovered by bad actors, who figure out how to exploit them, turning them into attacks that compromise millions of users, so many that the bug's existence can no longer be swept under the rug.

As the research world grew more gun-shy about talking to companies, companies were forced to make real, binding assurances that they would honor researchers' discoveries by taking swift action in a defined period, by promising not to threaten researchers over presenting their findings, and even by bidding for researchers' trust with cash bounties. Over the years, the situation has improved, with most big companies offering some kind of disclosure program.

But the reason companies offer those bounties and assurances is that they have no choice. Telling the truth about defective products is not illegal, so researchers who discover those truths are under no obligation to play by companies' rules. That forces companies to demonstrate their goodwill with good conduct, binding promises and pot-sweeteners.

Companies definitely want to be able to decide who can tell the truth about their products and when. We know that because when they get the chance to flex that muscle, they flex it. We know it because they said so at the W3C. We know it because they demanded that they get that right as part of the DRM package in EME.

Of all the lows in the W3C DRM process, the most shocking was when the historic defenders of the open web tried to turn an effort to protect the rights of researchers to warn billions of people about harmful defects in their browsers into an effort to advise companies on when they should hold off on exercising that right -- a right they wouldn’t have without the W3C making DRM for the web.


From the first days of the DRM fight at the W3C, we understood that the DRM vendors and the media companies they supplied weren't there to protect copyright, they were there to grab legally enforceable non-copyright privileges. We also knew that DRM was incompatible with security research: because DRM relies on obfuscation, anyone who documents how DRM works also makes it stop working.

This is especially clear in terms of what wasn't said at the W3C: when we proposed that people should be able to break DRM to generate subtitles or conduct security audits, the arguments were always about whether that was acceptable, but it was never about whether it was possible.

Recall that EME is supposed to be a system that helps companies ensure that their movies aren't saved to their users' hard-drives and shared around the internet. For this to work, it should be, you know, hard to do that.

But in every discussion of when people should be allowed to break EME, it was always a given that anyone who wanted to could do so. After all, when you hide secrets in software you give to people who you want to keep them secret from, you are probably going to be disappointed.

From day one, we understood that we would arrive at a point in which the DRM advocates at the W3C would be obliged to admit that the survival of their plan relied on being able to silence people who examined their products.

However, we did hold out hope that when this became clear to everyone, they would understand that DRM couldn't peacefully co-exist with the open web.

We were wrong.


The success of DRM at the W3C is a parable about market concentration and the precarity of the open web. Hundreds of security researchers lobbied the W3C to protect their work, UNESCO publicly condemned the extension of DRM to the web, and the many crypto-currency members of the W3C warned that using browsers for secure, high-stakes applications like moving around peoples' life-savings could only happen if browsers were subjected to the same security investigations as every other technology in our life (except DRM technologies).

There is no shortage of businesses that want to be able to control what their customers and competitors do with their products. When the US Copyright Office held hearings on DRM in 2015, they heard about DRM in medical implants and cars, farm equipment and voting machines. Companies have discovered that adding DRM to their products is the most robust way to control the marketplace, a cheap and reliable way to convert commercial preferences about who can repair, improve, and supply their products into legally enforceable rights.

The marketplace harms from this anti-competitive behavior are easy to see. For example, the aggressive use of DRM to prevent independent repair shops ends up diverting tons of e-waste to landfill or recycling, at the cost of local economies and of people's ability to get full use out of their property. A phone that you recycle instead of repairing is a phone you have to pay to replace -- and repair creates many more jobs than recycling (recycling a ton of e-waste creates 15 jobs; repairing it creates 150). Repair jobs are local, entrepreneurial jobs, because you don't need a lot of capital to start a repair shop, and customers want to bring their gadgets to someone local for service (no one wants to send a phone to China for repairs -- let alone a car!).

But those economic harms are only the tip of the iceberg. Laws like DMCA 1201 incentivize DRM by promising the power to control competition, but DRM's worst harms are in the realm of security. When the W3C published EME, it bequeathed to the web an unauditable attack-surface in browsers used by billions of people for their most sensitive and risky applications. These browsers are also the control panels for the Internet of Things: the sensor-studded, actuating gadgets that can see us, hear us, and act on the physical world, with the power to boil, freeze, shock, concuss, or betray us in a thousand ways.

The gadgets themselves have DRM, intended to lock out independent repairs and third-party consumables, meaning that everything from your toaster to your car is becoming off-limits to scrutiny by independent researchers who can give you unvarnished, unbiased assessments of the security and reliability of these devices.

In a competitive market, you'd expect non-DRM options to proliferate in answer to this bad behavior. After all, no customer wants DRM: no car-dealer ever sold a new GM by boasting that it was a felony for your favorite mechanic to fix it.

But we don't live in a competitive market. Laws like DMCA 1201 undermine the competition that might counter their worst effects.

The companies that fought DRM at the W3C -- browser vendors, Netflix, tech giants, the cable industry -- all trace their success to business strategies that shocked and outraged established industry when they first emerged. Cable started as unlicensed businesses that retransmitted broadcasts and charged for it. Apple's dominance started with ripping CDs and ignoring the howls of the music industry (just as Firefox got where it is by blocking obnoxious ads and ignoring the web-publishers who lost millions as a result). Of course, Netflix's revolutionary red envelopes were treated as a form of theft.

These businesses started as pirates and became admirals, and treat their origin stories as legends of plucky, disruptive entrepreneurs taking on a dinosauric and ossified establishment. But they treat any disruption aimed at them as an affront to the natural order of things. To paraphrase Douglas Adams, any technology invented in your adolescence is amazing and world-changing; anything invented after you turn 30 is immoral and needs to be destroyed.


Most people don't understand the risks of DRM. The topic is weird, technical, and esoteric, and it takes too long to explain. The pro-DRM side wants to make the debate about piracy and counterfeiting, and those are easy stories to tell.

But people who want DRM don't really care about that stuff, and we can prove it: just ask them if they'd be willing to promise not to use the DMCA unless someone is violating copyright, and watch them squirm and weasel about why policing copyright involves shutting down competitive activities that don't violate copyright. Point out that they didn't even question whether someone could break their DRM, because, of course, DRM is so technologically incoherent that it only works if it's against the law to understand how it works, and it can be defeated just by looking closely at it.

Ask them to promise not to invoke the DMCA against people who have discovered defects in their products and listen to them defend the idea that companies should get a veto over publication of true facts about their mistakes and demerits.

These inconvenient framings at least establish what we're fighting about, dispensing with the disingenuous arguments about copyright and moving on to the real issues: competition, accessibility, security.

This won't win the fight on its own. These are still wonky and nuanced ideas.

One thing we've learned from 15-plus years fighting DRM: it's easier to get people to take notice of procedural issues than substantive ones. We labored in vain to get people to take notice of the Broadcasting Treaty, a bafflingly complex and horribly overreaching treaty from WIPO, a UN specialized agency. No one cared until someone started stealing piles of our handouts and hiding them in the toilets so no one could read them. That was global news: it's hard to figure out what something like the Broadcast Treaty is about, but it's easy to call shenanigans when someone tries to hide your literature in the toilet so delegates don’t see the opposing view.

So it was that four years of beating the drum about DRM at the W3C barely broke the surface, but when we resigned from the W3C over the final vote, everyone sat up and took notice, asking how they could help fix things. The short answer is, "It's too late: we resigned because we had run out of options."

But the long answer is a little more hopeful. EFF is suing the US government to overturn Section 1201 of the DMCA. As we proved at the W3C, there is no appetite for making DRM unless there's a law like DMCA 1201 in the mix. DRM on its own does nothing except provide an opportunity for competitors to kick butt with innovative offerings that cost less and do more.

The Copyright Office is about to hold fresh hearings about DMCA 1201.

The W3C fight proved that we could shift the debate to the real issues. The incentives that led to the W3C being colonized by DRM are still in play and other organizations will face this threat in the years to come. We'll continue to refine this tactic there and keep fighting, and we'll keep reporting on how it goes so that you can help us fight. All we ask is that you keep paying attention. As we learned at the W3C, we can't do it without you.
