
Smells Like Teen Spirit in a major key is an upbeat pop-punk song

This bent my brain a little: if you re-tune Nirvana’s Smells Like Teen Spirit into a major key, it sounds like an upbeat pop-punk song. Like, Kurt Cobain actually sounds happy when he says “oh yeah, I guess it makes me smile” and the pre-chorus — “Hello, hello, hello, how low” — is downright joyous. Although I guess it shouldn’t be super surprising…in a 1994 interview with Rolling Stone, Cobain admits that the song was meant to be poppy.

I was trying to write the ultimate pop song. I was basically trying to rip off the Pixies. I have to admit it [smiles]. When I heard the Pixies for the first time, I connected with that band so heavily I should have been in that band — or at least in a Pixies cover band. We used their sense of dynamics, being soft and quiet and then loud and hard.

“Teen Spirit” was such a clichéd riff. It was so close to a Boston riff or “Louie, Louie.” When I came up with the guitar part, Krist looked at me and said, “That is so ridiculous.” I made the band play it for an hour and a half.

If, like me, you don’t know a whole lot about music, here’s the difference between major and minor chords and scales.

The difference between major and minor chords and scales boils down to a difference of one essential note — the third. The third is what gives major-sounding scales and chords their brighter, cheerier sound, and what gives minor scales and chords their darker, sadder sound.
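The third’s role can be sketched in a few lines of code. In equal temperament, a note n semitones above a root has frequency root × 2^(n/12); a major triad stacks a third of 4 semitones on the root, a minor triad a third of 3, so only the middle note differs. (The function below is just an illustration, not from the original post.)

```python
# Minimal sketch: a major and a minor triad share their root and fifth;
# only the third differs (4 semitones above the root vs. 3).
# Equal-temperament frequency: f = root * 2**(semitones / 12).

def triad(root_hz, quality):
    """Return the three frequencies (Hz) of a major or minor triad."""
    intervals = {"major": (0, 4, 7), "minor": (0, 3, 7)}[quality]
    return [round(root_hz * 2 ** (s / 12), 1) for s in intervals]

print(triad(440.0, "major"))  # A, C#, E -> [440.0, 554.4, 659.3]
print(triad(440.0, "minor"))  # A, C,  E -> [440.0, 523.3, 659.3]
```

Shift that one note down a semitone and a bright chord turns dark, which is essentially what the re-tuned version of the song does in reverse.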

You can also listen to the song on Soundcloud.

See also: this falling shovel sounds exactly like Smells Like Teen Spirit.

Tags: music, Nirvana, remix, video

Overcoming Us vs. Them

As a kid, I saw the 1968 version of Planet of the Apes. As a future primatologist, I was mesmerized. Years later I discovered an anecdote about its filming: At lunchtime, the people playing chimps and those playing gorillas ate in separate groups.

It’s been said, “There are two kinds of people in the world: those who divide the world into two kinds of people and those who don’t.” In reality, there’s lots more of the former. And it can be vastly consequential when people are divided into Us and Them, ingroup and outgroup, “the people” (i.e., our kind) and the Others.

Humans universally make Us/Them dichotomies along lines of race, ethnicity, gender, language group, religion, age, socioeconomic status, and so on. And it’s not a pretty picture. We do so with remarkable speed and neurobiological efficiency; have complex taxonomies and classifications of ways in which we denigrate Thems; do so with a versatility that ranges from the minutest of microaggression to bloodbaths of savagery; and regularly decide what is inferior about Them based on pure emotion, followed by primitive rationalizations that we mistake for rationality. Pretty depressing.

But crucially, there is room for optimism. Much of that is grounded in something distinctly human, which is that we all carry multiple Us/Them divisions in our heads. A Them in one case can be an Us in another, and it can take only an instant for that identity to flip. Thus, there is hope that, with science’s help, clannishness and xenophobia can lessen, perhaps even so much so that Hollywood-extra chimps and gorillas can break bread together.

The Strength of Us Versus Them

Considerable evidence suggests that dividing the world into Us and Them is deeply hard-wired in our brains, with an ancient evolutionary legacy. For starters, we detect Us/Them differences with stunning speed. Stick someone in a “functional MRI”—a brain scanner that indicates activity in various brain regions under particular circumstances. Flash up pictures of faces for 50 milliseconds—a 20th of a second—barely at the level of detection. And remarkably, with even such minimal exposure, the brain processes faces of Thems differently than Us-es.

This has been studied extensively with the inflammatory Us/Them of race. Briefly flash up the face of someone of a different race (compared with a same-race face) and, on average, there is preferential activation of the amygdala, a brain region associated with fear, anxiety, and aggression. Moreover, other-race faces cause less activation than do same-race faces in the fusiform cortex, a region specializing in facial recognition; along with that comes less accuracy at remembering other-race faces. Watching a film of a hand being poked with a needle causes an “isomorphic reflex,” where the part of the motor cortex corresponding to your own hand activates, and your hand clenches—unless the hand is of another race, in which case less of this effect is produced.

The brain’s fault lines dividing Us from Them are also shown with the hormone oxytocin. It’s famed for its pro-social effects—oxytocin prompts people to be more trusting, cooperative, and generous. But, crucially, this is how oxytocin influences behavior toward members of your own group. When it comes to outgroup members, it does the opposite.

The automatic, unconscious nature of Us/Them-ing attests to its depth. This can be demonstrated with the fiendishly clever Implicit Association Test. Suppose you’re deeply prejudiced against trolls and consider them inferior to humans. To simplify, subjects look at pictures of humans or trolls, coupled with words with positive or negative connotations. The couplings can support the direction of your biases (e.g., a human face and the word “honest,” a troll face and the word “deceitful”), or can run counter to your biases. And people take slightly longer, a fraction of a second, to process discordant pairings. It’s automatic—you’re not fuming about clannish troll business practices or troll brutality in the Battle of Somewhere in 1523. You’re processing words and pictures, and your anti-troll bias makes you unconsciously pause, stopped by the dissonance of linking troll with “lovely,” or human with “malodorous.”
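The scoring logic behind such a test can be sketched simply: bias shows up as a longer mean reaction time on pairings that run counter to one’s associations. A toy illustration with invented reaction times (the numbers and variable names are mine, not from any real study):

```python
# Hypothetical IAT-style scoring: the implicit "effect" is the extra
# time taken on bias-discordant pairings. All numbers are invented.

from statistics import mean

concordant_rt = [612, 598, 605, 590, 601]   # ms, e.g., human + "honest"
discordant_rt = [655, 671, 648, 660, 666]   # ms, e.g., human + "malodorous"

iat_effect_ms = mean(discordant_rt) - mean(concordant_rt)
print(f"{iat_effect_ms:.0f} ms slower on discordant pairings")
```

A fraction of a second, exactly as the text describes, and far too fast to be the product of deliberate thought.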

We’re not alone in Us/Them-ing. It’s no news that other primates can make violent Us/Them distinctions; after all, chimps band together and systematically kill the males in a neighboring group. Recent work, adapting the Implicit Association Test to another species, suggests that even other primates have implicit negative associations with Others. Rhesus monkeys would look at pictures either of members of their own group or strangers, coupled with pictures of things with positive or negative connotations. And monkeys would look longer at pairings discordant with their biases (e.g., pictures of members of their own group with pictures of spiders). These monkeys don’t just fight neighbors over resources. They have negative associations about them—“Those guys are like yucky spiders, but us, us, we’re like luscious fruit.”

Thus, the strength of Us/Them-ing is shown by the: speed and minimal sensory stimuli required for the brain to process group differences; tendency to group according to arbitrary differences, and then imbue those differences with supposedly rational power; unconscious automaticity of such processes; and rudiments of it in other primates. As we’ll see now, we tend to think of Us, but not Thems, fairly straightforwardly.

The Nature of Us

Across cultures and throughout history, people who comprise Us are viewed in similarly self-congratulatory ways—We are more correct, wise, moral, and worthy. Us-ness also involves inflating the merits of our arbitrary markers, which can take some work—rationalizing why our food is tastier, our music more moving, our language more logical or poetic.

Us-ness also carries obligations toward the other guy—for example, in studies in sports stadiums, a researcher posing as a fan, complete with sweatshirt supporting one of the teams and in need of help with something, is more likely to be helped by a fellow fan than by an opposing one.

Ingroup favoritism raises a key question—at our core, do we want Us to do “well” by maximizing absolute levels of well being, or merely “better than,” by maximizing the gap between Us and Them?

We typically claim to wish for the former, but can smolder with desire for the latter. This can be benign—in a tight pennant race, a loss for the hated rival to a third party is as good as a win for the home team, and for sectarian sports fans, both outcomes similarly activate brain pathways associated with reward and the neurotransmitter dopamine. But sometimes, choosing “better than” over “well” can be disastrous. It’s not a great mindset to think you’ve won World War III if afterward Us have two mud huts and three fire sticks and They have only one of each.

Among the most pro-social things we do for ingroup members is readily forgive them for transgressions. When a Them does something wrong, it reflects essentialism—that’s the way They are, always have been, always will be. When an Us is in the wrong, however, the pull is toward situational interpretations—we’re not usually like that, and here’s the extenuating circumstance to explain why he did this. Situational explanations for misdeeds are the reason why defense lawyers want jurors who will view the defendant as an Us.

Something interesting and different can happen when someone’s transgression airs Us’s dirty laundry, affirming a negative stereotype. Ingroup shame can provoke intense punishment for the benefit of outsiders. Consider Rudy Giuliani, growing up in Brooklyn in an Italian-American enclave dominated by organized crime (Giuliani’s father served time for armed robbery and then worked for a mob loan shark). Giuliani gained prominence in 1985 as the attorney prosecuting the “Five Families” in the Mafia Commission Trial, effectively destroying them. He was strongly motivated to counter the stereotype of “Italian-American” as synonymous with organized crime—“If [the successful prosecution is] not enough to remove the Mafia prejudice, then there probably could not be anything you could do to remove it.” If you want someone to ferociously prosecute Mafiosi, get a proud Italian-American outraged by the stereotypes generated by the mob.

Thus, being an Us carries an array of ingroup expectations and obligations. Is it possible to switch from one category of Us to another? That’s easy in, say, sports—when a player is traded he doesn’t serve as a fifth column, throwing games in his new uniform to benefit his old team. The core of such a contractual relationship is the fungibility of employer and employee.

At the other extreme are Us memberships that are not fungible, transcending negotiation. People aren’t traded from the Shiites to the Sunnis, or from the Iraqi Kurds to the Sami herders in Finland. It’s a rare Kurd who wants to be Sami, and her ancestors would likely turn over in their graves when she nuzzled her first reindeer. Converts are often subject to retribution by those they left—consider Meriam Ibrahim, sentenced to death in Sudan in 2014 for converting to Christianity—and suspicion from those they joined.

The Nature of Them

Do we think or feel our way toward disliking Them?

Us/Them-ing is readily framed cognitively. Ruling classes do cognitive cartwheels to justify the status quo. Likewise, it’s a cognitive challenge to accommodate the celebrity Them, the neighborly Them who has saved our keister—“Ah, this Them is different.”

Viewing Thems in certain threatening ways requires cognitive subtlety. Being afraid that some Them will rob you is rife with affect and particularism. But fearing that those Thems will take our jobs, manipulate the banks, dilute our bloodlines, etc., requires thoughts about economics, sociology, and pseudoscience.

Despite that role of cognition, the core of Us/Them-ing is emotional and automatic, as summarized by when we say, “I can’t put my finger on why, but it’s just wrong when They do that.” Jonathan Haidt of New York University has shown that often, cognitions are post-hoc justifications for feelings and intuitions, to convince ourselves that we have indeed rationally put our finger on why.

This can be shown with neuroimaging studies. As noted, when fleetingly seeing the face of a Them, the amygdala activates. Critically, this comes long before (on the time scale of brain processing) more cognitive, cortical regions are processing the Them. The emotions come first.

The strongest evidence that abrasive Them-ing originates in emotional, automatic processes is that supposed rational cognitions about Thems can be unconsciously manipulated. Just consider this array of findings: Show subjects slides about some obscure country; afterward, they will have more negative attitudes toward the place if, between slides, pictures of faces with expressions of fear appeared at subliminal speeds. Sitting near smelly garbage makes people more socially conservative about outgroup issues (e.g., attitudes toward gay marriage among heterosexuals). Christians express more negative attitudes toward non-Christians if they’ve just walked past a church. In another study, commuters at train stations in predominantly white suburbs filled out questionnaires about political views. Then, at half the stations, a pair of young Mexicans, conservatively dressed and chatting quietly, appeared daily on the platform for two weeks. Then commuters filled out second questionnaires. Remarkably, the presence of such pairs made people more supportive of decreasing legal immigration from Mexico and making English the official language, and more opposed to amnesty for undocumented immigrants (without changing attitudes about Asian-Americans, African-Americans or Middle Easterners). Women, when ovulating, have more negative attitudes about outgroup men.

In other words, our visceral, emotional views of Thems are shaped by subterranean forces we’d never suspect. And then our cognitions sprint to catch up with our affective selves, generating the minute factoid or plausible fabrication that explains why we hate Them. It’s a kind of confirmation bias: remembering supportive better than opposing evidence; testing things in ways that can support but not negate your hypothesis; skeptically probing outcomes you don’t like more than ones you do.

The Heterogeneity of Thems

Of course, different types of Thems evoke different feelings (and different neurobiological responses). Most common is to view Them as threatening, angry, and untrustworthy. In economic games people implicitly treat other-race individuals as less trustworthy or reciprocating. Whites judge African-American faces as angrier than white faces, and are more likely to categorize racially ambiguous angry faces as the other race.

But Thems do not solely evoke a sense of menace; sometimes, it’s disgust. This brings up one fascinating brain region, the insula. In mammals, it responds to the taste or smell of something rotten, and triggers stomach lurching and gag reflexes. In other words, it protects animals from poisonous food. Crucially, in humans the insula not only mediates such sensory disgust, but also moral disgust—have subjects recount something rotten they’ve done, show them pictures of morally appalling things (e.g., a lynching), and the insula activates. It’s why it’s not just metaphorical that sufficiently morally disgusting material makes us feel sick to our stomachs. And Thems that typically evoke a sense of disgust (e.g. drug addicts) activate the insula at least as much as the amygdala.

Having viscerally negative feelings about abstract features of Thems is challenging; being disgusted by another group’s abstract beliefs isn’t easy for the insula. Us/Them markers provide a stepping-stone. Feeling disgusted by Them because they eat repulsive, sacred, or adorable things, slather themselves with rancid scents, dress in scandalous ways—this the insula can sink its teeth into. In the words of the psychologist Paul Rozin of the University of Pennsylvania, “Disgust serves as an ethnic or outgroup marker.” Deciding that They eat disgusting things facilitates deciding that They also have disgusting ideas about, say, deontological ethics.

Then there are Thems who are ridiculous, i.e., subject to ridicule, humor as hostility. Outgroups mocking the ingroup is a weapon of the weak, lessening the sting of subordination. But when an ingroup mocks an outgroup, it solidifies negative stereotypes and reifies the hierarchy.

Thems are also frequently viewed as more homogeneous than Us, with simpler emotions and less sensitivity to pain. For example, whether in ancient Rome, medieval England, imperial China, or the antebellum South, the elite had system-justifying stereotypes of slaves as simple, childlike, and incapable of independence.

Thus, different Thems come in different flavors with immutable, icky essences—threatening and angry, disgusting and repellent, ridiculous, primitive, and undifferentiated.

Cold and/or Incompetent

Important work by Susan Fiske of Princeton University explores the taxonomies of Thems we carry in our heads. She finds that we tend to categorize Thems along two axes: “warmth” (is the individual or group a friend or foe, benevolent or malevolent?) and “competence” (how effectively can the individual or group carry out their intentions?).

The axes are independent. Ask subjects to assess someone; priming them with cues about the person’s status alters ratings of competence but not of warmth. Priming about the person’s competitiveness does the opposite. These two axes produce a matrix with four corners. We rate ourselves as high in both warmth and competence (H/H), naturally. Americans typically rate good Christians, African-American professionals, and the middle class this way.

There’s the other extreme, low in both warmth and competence (L/L). Such ratings go to the homeless or addicts.

Then there’s the high-warmth/low-competence (H/L) realm—the mentally disabled, people with handicaps, infirm elderly. Low warmth/high competence (L/H) is how people in the developing world tend to view the Europeans who colonized them (“competence” here is not about skill at rocket science, but rather the efficacy those people had when getting it into their heads to, say, steal your ancestral lands), and how many minority Americans view whites. It’s the hostile stereotype of Asian-Americans by white America, of Jews in Europe, of Indo-Pakistanis in East Africa, of Lebanese in West Africa, of ethnic Chinese in Indonesia, and of the rich by the poor most everywhere—they’re cold, greedy, clannish but, dang, go to one who is a doctor if you’re seriously sick.

Each extreme tends to evoke consistent feelings. For H/H (i.e., Us), there’s pride. L/H—envy and resentment. H/L—pity. L/L—disgust. Viewing pictures of L/L people activates the amygdala and insula, but not the fusiform face area; this is the same profile evoked by a picture of, say, a maggot-infested wound. In contrast, viewing L/H or H/L individuals activates emotional and cognitive parts of the frontal cortex.

The places between the extremes evoke their own characteristic responses. Individuals who evoke a reaction between pity and pride evoke a desire to help. Floating between pity and disgust is a desire to exclude and demean. Between pride and envy is a desire to associate, to derive benefits from. And between envy and disgust are our most hostile urges to attack.
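The four corners and their characteristic emotions, as described above, amount to a small lookup table. A toy encoding (the boolean keys and function name are mine):

```python
# Fiske's stereotype-content quadrants, keyed by (warmth, competence)
# and mapped to the characteristic emotion each evokes per the text above.

EMOTION = {
    (True,  True):  "pride",    # H/H: how we rate Us
    (False, True):  "envy",     # L/H: e.g., the resented elite
    (True,  False): "pity",     # H/L: e.g., the infirm elderly
    (False, False): "disgust",  # L/L: e.g., the homeless
}

def quadrant_emotion(warm: bool, competent: bool) -> str:
    return EMOTION[(warm, competent)]

print(quadrant_emotion(False, True))  # -> envy
```

The categorization shifts discussed next are just moves between keys of this table, each with its own emotional payload.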

What fascinates me is when someone’s categorization changes. Most straightforward are shifts from high-warmth/high-competence (H/H) status:

H/H to H/L: A parent declining into dementia, evoking poignant protectiveness.

H/H to L/H: The business partner who turns out to have embezzled for decades. Betrayal.

H/H to L/L: The rare instance of that successful acquaintance, where “something happened” and now he’s homeless. Disgust mingled with bafflement—what went wrong?

Then there’s L/L to L/H. When I was a kid in the ’60s, the parochial American view of Japan was the former—World War II’s shadow generating dislike and contempt, and “Made in Japan” was about cheap plastic gewgaws. Then, suddenly, “Made in Japan” meant outcompeting American automakers.

When a homeless guy does cartwheels to return someone’s lost wallet—and you realize he’s more decent than your friends—that’s L/L to H/L.

Most interesting to me is L/H to L/L, which invokes gleeful gloating, helping to explain why persecution of L/H groups usually involves degrading and humiliating them to L/L status. During China’s Cultural Revolution, resented elites were first paraded in dunce caps before exile to labor camps. Nazis eliminated the mentally ill, already viewed as L/L, by unceremoniously murdering them; in contrast, pre-murder treatment of the L/H Jews involved forcing them to wear degrading yellow armbands, cutting one another’s beards, scrubbing sidewalks with toothbrushes before jeering crowds. When Idi Amin expelled tens of thousands of L/H Indo-Pakistani citizens from Uganda in the 1970s, he first invited his army to rob, beat, and rape them. Turning L/H Thems into L/L Thems accounts for some of our worst savagery.

Complexities in our categorization of Thems abound. There’s the phenomenon of the grudging respect, even a sense of camaraderie with an enemy, the perhaps apocryphal picture of World War I flying aces, where a glimmer of Us-ness is shared with someone trying to kill you (“Ah, monsieur, if it were another time, I would delight in discussing aeronautics with you over some good wine.” “Baron, it is an honor that it is you who shoots me out of the sky”). And there’s the intricacies of differing feelings about economic versus cultural enemies, new versus ancient ones, or the distant alien enemy versus the familiar one next door (consider Ho Chi Minh, rejecting the offer of help from Chinese troops during the Vietnam War, stating to the effect of “The Americans will leave in a year or a decade, but the Chinese will stay for a thousand years if we let them in”).

And then there is the profoundly strange phenomenon of the self-hating ________ (take your pick of the outgroup member), who has bought into the negative stereotypes and favors the ingroup. This was shown by psychologists Kenneth and Mamie Clark in their heartbreaking “doll studies” of the 1940s, demonstrating how African-American children, along with white children, preferred playing with white dolls over black ones, ascribing more positive attributes to them (e.g., nice, pretty). That this effect was most pronounced in black kids in segregated schools was cited in Brown v. Board of Education. Or consider the scenario of the strident crusader against gay rights who turns out to be closeted—the Möbius strip pathology of accepting that you are an inferior Them. We put monkeys, even with their complexities of associating alien monkeys with spiders, to shame when it comes to the psychological vagaries of dividing the world into Us and Them.

Multiple Us-es

We also recognize that other individuals belong to multiple categories, and shift which we consider most relevant. Not surprisingly, lots of that literature concerns race, exploring whether it is an Us/Them categorization that trumps all others.

The primacy of race has folk-intuition appeal. First, race is a biological attribute, a conspicuous fixed identity that readily prompts essentialist thinking. Moreover, humans evolved under conditions where different skin color conspicuously signals that someone is a distant Them. Furthermore, a large percentage of cultures, long before Western contact, make status distinctions by skin color.

And yet, evidence is to the contrary. First, while there are obvious biological contributions to racial differences, “race” is a biological continuum rather than discrete categories—for example, unless you cherry-pick the data, genetic variation within race is generally as great as between races. And this really is no surprise when looking at the range of variation within a racial rubric—go compare, say, Sicilians with Swedes.

Moreover, race fails as a fixed classification system. At various times in U.S. census history, “Mexican” and “Armenian” were considered races; southern Italians and northern Europeans were classified differently; someone with one black great-grandparent and seven white ones was “white” in Oregon but not Florida. This is race as a cultural construct.

So it’s not surprising that racial Us/Them dichotomies are frequently trumped by other classifications. In one study, subjects saw pictures of individuals, each black or white, each associated with a statement, and then had to recall which face went with which statement. There was automatic racial categorization—if subjects misattributed a quote, the correct and incorrect faces were likely to be of the same race. Next, half the blacks and whites pictured wore the same distinctive yellow shirt; the other half wore gray. Now subjects most often confused statements by shirt color. Furthermore, gender reclassification particularly overrides unconscious racial categorization. After all, while races have evolved relatively recently in hominid history (probably over the course of just a few tens of thousands of years), our ancestors, almost all the way back to when they were paramecia, cared about Boy or Girl.

Important research by Mary Wheeler along with Fiske showed how categorization is shifted, studying other-race/amygdala activation. When subjects are instructed to look for a distinctive dot in each picture, other-race faces don’t activate the amygdala; face-ness wasn’t being processed. Judging whether each face looked older than some age wasn’t a recategorization that could eliminate the other-race amygdaloid response. But for a third group of subjects, a vegetable was displayed before each face; subjects judged whether the person liked that vegetable. And the amygdala didn’t respond to other-race faces.

Why? You look at the Them, thinking about what food she’d like. You picture her shopping, or ordering a meal in a restaurant. Best case scenario, you decide you and she share some vegetable preference—a smidgen of Us-ness. Worst case, you decide you two differ, a relatively benign Them—history is not stained with blood spilled by animosities between partisans for broccoli versus cauliflower. Most importantly, as you imagine her sitting at dinner, enjoying that food, you are thinking of her as an individual, the surest way to weaken automatic categorization of someone as a Them.

Rapid recategorizations can occur in the most brutal, unlikely, and intensely poignant circumstances:

In the Battle of Gettysburg, Confederate general Lewis Armistead was mortally wounded. As he lay on the battlefield, he gave a secret Masonic sign, hoping it would be recognized by a fellow Mason. It was, by Union officer Henry Bingham, who protected him, and got him to a Union field hospital. In an instant the Us/Them of Union/Confederate faded before Mason/non-Mason.

During World War II, British commandos kidnapped German general Heinrich Kreipe in Crete, followed by a dangerous 18-day march to the coast to rendezvous with a British ship. One day the party saw the snows of Crete’s highest peak. Kreipe mumbled to himself the first line (in Latin) of an ode by Horace about a snowcapped mountain. At which point the British commander, Patrick Leigh Fermor, continued the recitation. The two men realized that they had, in Leigh Fermor’s words, “drunk at the same fountains.” A recategorization. Leigh Fermor had Kreipe’s wounds treated and personally ensured his safety. The two stayed in touch after the war and were reunited decades later on Greek television. “No hard feelings,” said Kreipe, praising their “daring operation.”

And finally there is the World War I Christmas truce, where opposing trench soldiers spent the day singing, praying, and partying together, playing soccer, and exchanging gifts, where soldiers up and down the lines struggled to extend the truce. It took all of one day for British-versus-German to yield to something more important—all of us in the trenches versus the officers in the rear who want us to kill each other.

We all have multiple dichotomies in our heads, and ones that seem inevitable and crucial can, under the right circumstances, evaporate in an instant.

Lessening the Impact of Us/Them-ing

So how can we make these dichotomies evaporate? Some thoughts:

Contact: Prolonged contact between groups can blunt Us/Them-ing. In the 1950s the psychologist Gordon Allport proposed “contact theory.” Inaccurate version: bring Us-es and Thems together (say, teenagers from two hostile nations in a summer camp), animosities disappear, similarities start to outweigh differences, everyone becomes an Us. More accurate version: put Us-es and Thems together under narrow circumstances and something sort of resembling that happens, but you can also blow it and worsen things.

Some of the effective narrower circumstances: each side has roughly equal numbers; everyone’s treated equally and unambiguously; contact is lengthy and on neutral territory; there are “superordinate” goals where everyone works together on a meaningful task (say, summer campers turning a meadow into a soccer field).

Even then, effects are typically limited—Us-es and Thems quickly lose touch, changes are transient and often specific—“I hate those Thems, but I know one from last summer who’s actually a good guy.” Where contact really causes fundamental change is when it is prolonged. Then we’re making progress.

Approaching the implicit: If you want to lessen an implicit Us/Them response, one good way is priming beforehand with a counter-stereotype (e.g., a reminder of a beloved celebrity Them). Another approach is making the implicit explicit—show people their implicit biases. Another is a powerful cognitive tool—perspective taking. Pretend you’re a Them and explain your grievances. How would you feel? Would your feet hurt after walking a mile in their shoes?

Replace essentialism with individuation: In one study, white subjects were asked about their acceptance of racial inequalities. Half were first primed toward essentialist thinking, being told, “Scientists pinpoint the genetic underpinnings of race.” Half heard an anti-essentialist prime—“Scientists reveal that race has no genetic basis.” The latter made subjects less accepting of inequalities.

Flatten hierarchies: Steep ones sharpen Us/Them differences, as those on top justify their status by denigrating the have-nots, while the latter view the ruling class as low warmth/high competence. Consider the cultural trope that the poor are more carefree, in touch with and able to enjoy life’s simple pleasures, while the rich are unhappy, stressed, and burdened with responsibility (think of miserable Scrooge and those happy-go-lucky Cratchits). Likewise, the “they’re poor but loving” myth frames the poor as high warmth/low competence. In one study of 37 countries, the greater the income inequality, the more the wealthy held such attitudes.

Some Conclusions

From massive barbarity to pinpricks of microaggression, Us versus Them has produced oceans of pain. Yet I don’t think our goal should be to “cure” us of all Us/Them dichotomizing (aside from it being impossible, unless you have no amygdala).

I’m fairly solitary—I’ve spent a lot of my life living alone in a tent in Africa, studying another species. Yet some of my most exquisitely happy moments have come from feeling like an Us, feeling accepted, safe, and not alone, feeling part of something large and enveloping, with a sense of being on the right side and doing both well and good. There are even Us/Thems that I—eggheady, meek, and amorphously pacifistic—would kill or die for.

If we accept that there will always be sides, it’s challenging to always be on the side of angels. Distrust essentialism. Remember that supposed rationality is often just rationalization, playing catch-up with subterranean forces we never suspect. Focus on shared goals. Practice perspective taking. Individuate, individuate, individuate. And recall how often, historically, the truly malignant Thems hid themselves while making third parties the fall guy.

Meanwhile, give the right-of-way to people driving cars with the “Mean people suck” bumper sticker, and remind everyone that we’re in this together against Lord Voldemort and House Slytherin.

Robert Sapolsky is a professor of biology, neurology, and neurosurgery at Stanford University, and author of A Primate’s Memoir, Why Zebras Don’t Get Ulcers, and Behave: The Biology of Humans at Our Best and Worst, his newest book.

From Behave: The Biology of Humans at Our Best and Worst by Robert M. Sapolsky, published on May 2, 2017 by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2017 by Robert M. Sapolsky.

This article was originally published in our “The Absurd” issue in June, 2017.


DRM's Dead Canary: How We Just Lost the Web, What We Learned from It, and What We Need to Do Next


EFF has been fighting against DRM and the laws behind it for a decade and a half, intervening in the US Broadcast Flag, the UN Broadcasting Treaty, the European DVB CPCM standard, the W3C EME standard and many other skirmishes, battles and even wars over the years. With that long history behind us, there are two things we want you to know about DRM:

  1. Everybody on the inside secretly knows that DRM technology is irrelevant, but DRM law is everything; and
  2. The reason companies want DRM has nothing to do with copyright.

These two points have just been demonstrated in a messy, drawn-out fight over the standardization of DRM in browsers, and since we threw a lot of blood and treasure at that fight, one thing we hope to salvage is an object lesson that will drive these two points home and provide a roadmap for the future of DRM fighting.

DRM IS TECHNOLOGICALLY BANKRUPT; DRM LAW IS DEADLY

Here's how DRM works, at a high level: a company wants to provide a customer (you) with a digital asset (like a movie, a book, a song, a video game or an app), but they want to control what you do with that file after you get it.

So they encrypt the file. We love encryption. Encryption works. With relatively little effort, anyone can scramble a file so well that no one will ever be able to decrypt it unless they're provided with the key.

Let's say this is Netflix. They send you a movie that's been scrambled and they want to be sure you can't save it and watch it later from your hard-drive. But they also need to give you a way to view the movie, too. At some point, that means unscrambling the movie. And there's only one way to unscramble a file that's been competently encrypted: you have to use the key.

So Netflix also gives you the unscrambling key.

But if you have the key, you can just unscramble the Netflix movies and save them to your hard drive. How can Netflix give you the key but control how you use it?

Netflix has to hide the key, somewhere on your computer, like in a browser extension or an app. This is where the technological bankruptcy comes in. Hiding something well is hard. Hiding something well in a piece of equipment that you give to your adversary to take away with them and do anything they want with is impossible.

Maybe you can't find the keys that Netflix hid in your browser. But someone can: a bored grad student with a free weekend, a self-taught genius decapping a chip in their basement, a competitor with a full-service lab. One tiny flaw in any part of the fragile wrapping around these keys, and they're free.

And once that flaw is exposed, anyone can write an app or a browser plugin that does have a save button. It's game over for the DRM technology. (The keys escape pretty regularly, just as fast as they can be revoked by the DRM companies.)
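The bind described above can be sketched in a few lines of code. This is a toy stream cipher, not real cryptography and not how any actual DRM system is built; the names and values are invented for illustration. The point it demonstrates is the symmetry at the heart of the problem: once the key is on the viewer's machine, "playing" and "saving" are literally the same computation.

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Yield an endless stream of pseudorandom bytes derived from the key (toy cipher)."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def crypt(key: bytes, data: bytes) -> bytes:
    """XOR data against the keystream -- the same call scrambles and unscrambles."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"hidden-in-your-player"        # the secret the vendor must smuggle onto your machine
movie = b"frame 1, frame 2, frame 3"  # stand-in for the video stream

scrambled = crypt(key, movie)         # what gets sent over the wire
assert scrambled != movie             # useless without the key...

played = crypt(key, scrambled)        # ...but with the key, "playing" recovers the plaintext
saved = crypt(key, scrambled)         # and "saving" is the identical operation
assert played == movie and saved == movie
```

Real DRM wraps its keys in layers of obfuscation, but this symmetry is why those layers are the only defense: once one researcher digs the key out of the player, any rival program can grow a save button.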

DRM gets made over the course of years, by skilled engineers, at a cost of millions of dollars. It gets broken in days, by teenagers, with hobbyist equipment. That's not because the DRM-makers are stupid, it's because they're doing something stupid.

Which is where the law comes in. DRM law gives rightsholders more forceful, far-ranging legal powers than laws governing any other kind of technology. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), whose Section 1201 provides for felony liability for anyone commercially engaged in bypassing a DRM system: 5 years in prison and a $500,000 fine for a first offense. Even noncommercial bypass of DRM is subject to liability. It also makes it legally risky to even talk about how to bypass a DRM system.

So the law shores up DRM systems with a broad range of threats. If Netflix designs a video player that won't save a video unless you break some DRM, they now have the right to sue -- or sic the police -- on any rival that rolls out an improved alternative streaming client, or a video-recorder that works with Netflix. Such tools wouldn't violate copyright law any more than a VCR or a Tivo does, but because that recorder would have to break Netflix DRM, they could use DRM law to crush it.

DRM law goes beyond mere bans on tampering with DRM. Companies also use Section 1201 of the DMCA to threaten security researchers who discover flaws in their products. The law becomes a weapon they can aim at anyone who wants to warn their customers (still you) that the products you're relying on aren't fit for use. That includes warning people about flaws in DRM that expose them to being hacked.

It's not just the USA and not just the DMCA, either. The US Trade Representative has "convinced" countries around the world to adopt a version of this rule.

DRM HAS NOTHING TO DO WITH COPYRIGHT

DRM law has the power to do untold harm. Because it affords corporations the power to control the use of their products after sale, the power to decide who can compete with them and under what circumstances, and even who gets to warn people about defective products, DRM laws represent a powerful temptation.

Some things that aren't copyright infringement: buying a DVD while you're on holiday and playing it when you get home. It is obviously not a copyright infringement to go into a store in (say) New Delhi and buy a DVD and bring it home to (say) Topeka. The rightsholder made their movie, sold it to the retailer, and you paid the retailer the asking price. This is the opposite of copyright infringement. That's paying for works on the terms set by the rightsholder. But because DRM stops you from playing out-of-region discs on your home player, the studios can invoke copyright law to decide where you can consume the copyrighted works you've bought, fair and square.

Other not-infringements: fixing your car (GM uses DRM to control who can diagnose an engine, and to force mechanics to spend tens of thousands of dollars for diagnostic information they could otherwise determine themselves or obtain from third parties); refilling an ink cartridge (HP pushed out a fake security update that added DRM to millions of inkjet printers so that they'd refuse remanufactured or third-party cartridges), or toasting home-made bread (though this hasn't happened yet, there's no reason that a company couldn't put DRM in its toasters to control whose bread you can use).

It's also not a copyright infringement to watch Netflix in a browser that Netflix hasn't approved. It's not a copyright infringement to record a Netflix movie to watch later. It's not a copyright infringement to feed a Netflix video to an algorithm that can warn you about upcoming strobe effects that can trigger life-threatening seizures in people with photosensitive epilepsy.

WHICH BRINGS US TO THE W3C

The W3C is the world's foremost open web standards body, a consortium whose members (companies, universities, government agencies, civil society groups and others) engage in protracted wrangles over the best way for everyone to deliver web content. They produce "recommendations" (W3C-speak for "standards") that form the invisible struts that hold up the web. These agreements, produced through patient negotiation and compromise, represent an agreement by major stakeholders about the best (or least-worst) way to solve thorny technological problems.

In 2013, Netflix and a few other media companies convinced the W3C to start work on a DRM system for the web. This DRM system, Encrypted Media Extensions (EME), represented a sharp departure from the W3C's normal business. First, EME would not be a complete standard: the organization would specify an API through which publishers and browser vendors would make DRM work, but the actual "content decryption module" (CDM) wouldn't be defined by the standard. That means that EME was a standard in name only: if you started a browser company and followed all the W3C's recommendations, you still wouldn't be able to play back a Netflix video. For that, you'd need Netflix's permission.

It's hard to overstate how weird this is. Web standards are about "permissionless interoperability." The standards for formatting text mean that anyone can make a tool that can show you pages from the New York Times' website; images from Getty; or interactive charts on Bloomberg. The companies can still decide who can see which pages on their websites (by deciding who gets a password and which parts of the website each password unlocks), but they don't get to decide who can make the web browsing program you type the password into in order to access the website.

A web in which every publisher gets to pick and choose which browsers you can use to visit their sites is a very different one from the historical web. Historically, anyone could make a new browser by making sure it adhered to W3C recommendations, and then start to compete. And while the web has always been dominated by a few browsers, which browsers dominate has changed every decade or so, as new companies and even nonprofits like Mozilla (who make Firefox) overthrew the old order. Technologies that have stood in the way of this permissionless interoperability -- for instance, patent-encumbered video -- have been seen as impediments to the idea of the open web, not standardization opportunities.

When the W3C starts making technologies that only work when they're blessed by a handful of entertainment companies, they're putting their thumbs -- their fists -- on the scales in favor of ensuring that the current browser giants get to enjoy a permanent reign.

But that's the least of it. Until EME, W3C standards were designed to give the users of the web (e.g. you) more control over what your computer did while you were accessing other peoples' websites. With EME -- and for the first time ever -- the W3C is designing technology that takes away your control. EME is designed to allow Netflix -- and other big companies -- to decide what your browser does, even (especially) when you disagree about what that should be.

Since the earliest days of computing, there's been a simmering debate about whether computers exist to control their users, or vice versa (as the visionary computer scientist and education specialist Seymour Papert put it, "children should be programming the computer rather than being programmed by it" -- an observation that applies equally well to adults). Every W3C standard until 2017 was on the side of people controlling computers. EME breaks with that. It is a subtle but profound shift.

WHY WOULD THE W3C DO THIS?

Ay yi yi. That is the three billion user question.

The W3C version of the story goes something like this. The rise of apps has weakened the web. In the pre-app days, the web was the only game in town, so companies had to play by web rules: open standards, open web. But now that apps exist and nearly everyone uses them, big companies can boycott the web, forcing their users into apps instead. That just accelerates the rise of apps, and weakens the web even more. Apps are used to implement DRM, so DRM-using companies are moving to apps. To keep entertainment companies from killing the web outright, the Web must have DRM too.

Even if those companies don't abandon the web altogether, continues this argument, getting them to make their DRM at the W3C is better than letting them make it on an ad-hoc basis. Left to their own devices, they could make DRM that made no accommodations for people with disabilities, and without the W3C's moderating influence, these companies would make DRM that would be hugely invasive of web users' privacy.

The argument ends with a broad justification for DRM: companies have the right to protect their copyrights. We can't expect an organization to spend fortunes creating or licensing movies and then distribute them in a way that lets anyone copy and share them.

We think that these arguments don't hold much water. The web does indeed lack some of its earlier only-game-in-town muscle, but the reality is that companies make money by going where their customers are, and every potential customer has a browser, while only existing customers have a company's apps. The more hoops a person has to jump through in order to become your customer, the fewer customers you'll have. Netflix is in a hyper-competitive market with tons of new entrants (e.g. Disney), and being "that streaming service you can't use on the web" is a serious deficit.

We also think that the media companies and tech companies would struggle to arrive at a standard for DRM outside of the W3C, even a really terrible one. We've spent a lot of time in the smoke-filled rooms of DRM standardization and the core dynamic there is the media companies demanding full-on lockdown for every frame of video, and tech companies insisting that the best anyone can hope for is an ineffectual "speed-bump" that they hope will mollify the media companies. Often as not, these negotiations collapse under their own weight.

Then there's the matter of patents: companies that think DRM is a good idea also love software patents, and the result is an impenetrable thicket of patents that make getting anything done next to impossible. The W3C's patent-pooling mechanism (which is uniquely comprehensive in the standards world and stands as an example of the best way to do this sort of thing) was essential to making DRM standardization possible. What's more, there are key players in the DRM world, like Adobe, who hold significant patent portfolios but are playing an ever-dwindling role in the world of DRM (the avowed goal of EME was to "kill Flash"). If the companies involved had to all sit down and negotiate a new patent deal without the W3C's framework, any of these companies could "turn troll" and insist that all the rest would have to shell out big dollars to license their patents -- they have nothing to lose by threatening the entire enterprise, and everything to gain from even a minuscule per-user royalty for something that will be rolled out into three billion browsers.

Finally, there's no indication that EME had anything to do with protecting legitimate business interests. Streaming video services like Netflix rely on customers to subscribe to a whole library with constantly added new materials and a recommendation engine to help them navigate the catalog.

DRM for streaming video is all about preventing competition, not protecting copyrights. The purpose of DRM is to give companies the legal tools to prevent activities that would otherwise be allowed. The DRM part doesn't have to "work" (in the sense of preventing copyright infringement) so long as it allows for the invocation of the DMCA.

To see how true this is, just look at Widevine, Google's version of EME. Google bought the company that made Widevine in 2010, but it wasn't until 2016 that an independent researcher actually took a close look at how well it prevented videos from leaking. That researcher, David Livshits, found that Widevine was trivial to circumvent, and had been since its inception, and that the errors that made Widevine so ineffective were obvious to even a cursory examination. If the millions of dollars and the high-power personnel committed to EME were allocated to create a technology that would effectively prevent copyright infringement, then you'd think that Netflix or one of the other media companies in the negotiations would have diverted some of those resources to a quick audit to make sure that the stuff actually worked as advertised.

(Funny story: Livshits is an Israeli at Ben Gurion University, and Israel happens to be the rare country that doesn't ban breaking DRM, meaning that Israelis are among the only people who can do this kind of research without fear of legal retaliation.)

But the biggest proof that EME was just a means to shut down legitimate competitors -- and not an effort to protect copyright -- is what happened next.

A CONTROLLED EXPERIMENT

When EFF joined the W3C, our opening bid was "Don't make DRM."

We put the case to the organization, describing the way that DRM interferes with the important copyright exceptions (like those that allow people to record and remix copyrighted works for critical or transformative purposes) and the myriad problems presented by the DMCA and laws like it around the world.

The executive team of the W3C basically dismissed all arguments about fair use and user rights in copyright as a kind of unfortunate casualty of the need to keep Netflix from ditching the web in favor of apps, and as for the DMCA, they said that they couldn't do anything about this crazy law, but they were sure that the W3C's members were not interested in abusing the DMCA, they just wanted to keep their high-value movies from being shared on the internet.

So we changed tack, and proposed a kind of "controlled experiment" to find out what the DRM fans at the W3C were trying to accomplish.

The W3C is a consensus body: it makes standards by getting everyone in a room to compromise, moving toward a position that everyone can live with. Our ideal world was "No DRM at the W3C," and DRM is a bad enough idea that it was hard to imagine much of a compromise from there.

But after listening closely to the DRM side's disavowals of DMCA abuse, we thought we could find something that would represent an improvement on the current status quo and that should fit with their stated views.

We proposed a kind of DRM non-aggression pact, through which W3C members would promise that they'd only sue people under laws like DMCA 1201 if there was some other law that had been broken. So if someone violates your copyright, or incites someone to violate your copyright, or interferes with your contracts with your users, or misappropriates your trade secrets, or counterfeits your trademarks, or does anything else that violates your legal rights, you can throw the book at them.

But if someone goes around your DRM and doesn't violate any other laws, the non-aggression pact means that you couldn't use the W3C standardised DRM as a route to legally shut them down. That would protect security researchers, it would protect people analyzing video to add subtitles and other assistive features, it would protect archivists who had the legal right to make copies, and it would protect people making new browsers.

If all you care about is making an effective technology that prevents lawbreaking, this agreement should be a no-brainer. For starters, if you think DRM is an effective technology, it shouldn't matter if it's illegal to criticize it.

And since the nonaggression pact kept all other legal rights intact, there was no risk that agreeing to it would allow someone to break the law with impunity. Anyone who violated copyrights (or any other rights) would be square in the DMCA's crosshairs, and companies would have their finger on the trigger.

NOT SURPRISED BUT STILL DISAPPOINTED

Of course, they hated this idea.

The studios, the DRM vendors and the large corporate members of the W3C participated in a desultory, brief "negotiation" before voting to terminate further discussion and press on. The W3C executive helped them dodge discussions, chartering further work on EME without any parallel work on protecting the open web, even as opposition within the W3C mounted.

By the time the dust settled, EME was published after the most divided votes the W3C had ever seen, with the W3C executive unilaterally declaring that issues for security research, accessibility, archiving and innovation had been dealt with as much as they could be (despite the fact that literally nothing binding was done about any of these things). The "consensus" process of the W3C had been so thoroughly hijacked that EME's publication was only supported by 58% of the members who voted in the final poll, and many of those members expressed regret that they were cornered into voting for something they objected to.

When the W3C executive declared that any protections for the open web were incompatible with the desires of the DRM-boosters, it was a kind of ironic vindication. After all, this is where we'd started, with EFF insisting that DRM wasn't compatible with security disclosures, with accessibility, with archiving or innovation. Now, it seemed, everyone agreed.

What's more, they all implicitly agreed that DRM wasn't about protecting copyright. It was about using copyright to seize other rights, like the right to decide who could criticize your product -- or compete with it.

DRM's sham cryptography means that it only works if you're not allowed to know about its defects. This proposition was conclusively proved when a W3C member proposed that the Consortium should protect disclosures that affected EME's "privacy sandbox" and opened users to invasive spying, and within minutes, Netflix's representative said that even this was not worth considering.

In a twisted way, Netflix was right. DRM is so fragile, so incoherent, that it is simply incompatible with the norms of the marketplace and science, in which anyone is free to describe their truthful discoveries, even if they frustrate a giant company's commercial aspirations.

The W3C tacitly admitted this when they tried to convene a discussion group to come up with some nonbinding guidelines for when EME-using companies should use the power of DRM law to punish their critics and when they should permit the criticism.

"RESPONSIBLE DISCLOSURE" ON OUR TERMS, OR JAIL

They called this "responsible disclosure," but it was far from the kinds of "responsible disclosure" we see today. In current practice, companies offer security researchers enticements to disclose their discoveries to vendors before going public. These enticements range from bug-bounty programs that pay out cash, to leaderboards that provide glory to the best researchers, to binding promises to act on disclosures in a timely way, rather than crossing their fingers, sitting on the newly discovered defects, and hoping no one else re-discovers them and exploits them.

The tension between independent security researchers and corporations is as old as computing itself. Computers are hard to secure, thanks to their complexity. Perfection is elusive. Keeping the users of networked computers safe requires constant evaluation and disclosure, so that vendors can fix their bugs and users can make informed decisions about which systems are safe enough to use.

But companies aren't always the best stewards of bad news about their own products. As researchers have discovered -- the hard way -- telling a company about its mistakes may be the polite thing to do, but it's very risky behavior, apt to get you threatened with legal reprisals if you go public. Many's the researcher who told a company about a bug, only to have the company sit on that news for an intolerably long time, putting its users at risk. Often, these bugs only come to light when they are independently discovered by bad actors, who figure out how to exploit them, turning them into attacks that compromise millions of users, so many that the bug's existence can no longer be swept under the rug.

As the research world grew more gunshy about talking to companies, companies were forced to make real, binding assurances that they would honor the researchers' discoveries by taking swift action in a defined period, by promising not to threaten researchers over presenting their findings, and even by bidding for researchers' trust with cash bounties. Over the years, the situation has improved, with most big companies offering some kind of disclosure program.

But the reason companies offer those bounties and assurances is that they have no choice. Telling the truth about defective products is not illegal, so researchers who discover those truths are under no obligation to play by companies' rules. That forces companies to demonstrate their goodwill with good conduct, binding promises and pot-sweeteners.

Companies definitely want to be able to decide who can tell the truth about their products and when. We know that because when they get the chance to flex that muscle, they flex it. We know it because they said so at the W3C. We know it because they demanded that they get that right as part of the DRM package in EME.

Of all the lows in the W3C DRM process, the most shocking was when the historic defenders of the open web tried to turn an effort to protect the rights of researchers to warn billions of people about harmful defects in their browsers into an effort to advise companies on when they should hold off on exercising that right -- a right they wouldn’t have without the W3C making DRM for the web.

DRM IS THE OPPOSITE OF SECURITY

From the first days of the DRM fight at the W3C, we understood that the DRM vendors and the media companies they supplied weren't there to protect copyright, they were there to grab legally enforceable non-copyright privileges. We also knew that DRM was incompatible with security research: because DRM relies on obfuscation, anyone who documents how DRM works also makes it stop working.

This is especially clear in terms of what wasn't said at the W3C: when we proposed that people should be able to break DRM to generate subtitles or conduct security audits, the arguments were always about whether that was acceptable, but it was never about whether it was possible.

Recall that EME is supposed to be a system that helps companies ensure that their movies aren't saved to their users' hard-drives and shared around the internet. For this to work, it should be, you know, hard to do that.

But in every discussion of when people should be allowed to break EME, it was always a given that anyone who wanted to could do so. After all, when you hide secrets in software you give to people who you want to keep them secret from, you are probably going to be disappointed.

From day one, we understood that we would arrive at a point in which the DRM advocates at the W3C would be obliged to admit that the survival of their plan relied on being able to silence people who examined their products.

However, we did hold out hope that when this became clear to everyone, that they would understand that DRM couldn't peacefully co-exist with the open web.

We were wrong.

THE W3C IS THE CANARY IN THE COALMINE

The success of DRM at the W3C is a parable about market concentration and the precarity of the open web. Hundreds of security researchers lobbied the W3C to protect their work, UNESCO publicly condemned the extension of DRM to the web, and the many crypto-currency members of the W3C warned that using browsers for secure, high-stakes applications like moving around peoples' life-savings could only happen if browsers were subjected to the same security investigations as every other technology in our life (except DRM technologies).

There is no shortage of businesses that want to be able to control what their customers and competitors do with their products. When the US Copyright Office held hearings on DRM in 2015, they heard about DRM in medical implants and cars, farm equipment and voting machines. Companies have discovered that adding DRM to their products is the most robust way to control the marketplace, a cheap and reliable way to convert commercial preferences about who can repair, improve, and supply their products into legally enforceable rights.

The marketplace harms from this anti-competitive behavior are easy to see. For example, the aggressive use of DRM to prevent independent repair shops ends up diverting tons of e-waste to landfill or recycling, at the cost of local economies and the ability of people to get full use out of their property. A phone that you recycle instead of repairing is a phone you have to pay to replace -- and repair creates many more jobs than recycling (recycling a ton of e-waste creates 15 jobs; repairing it creates 150 jobs). Repair jobs are local, entrepreneurial jobs, because you don't need a lot of capital to start a repair shop, and your customers want to bring their gadgets to someone local for service (no one wants to send a phone to China for repairs -- let alone a car!).

But those economic harms are only the tip of the iceberg. Laws like DMCA 1201 incentivize DRM by promising the power to control competition, but DRM's worst harms are in the realm of security. When the W3C published EME, it bequeathed to the web an unauditable attack-surface in browsers used by billions of people for their most sensitive and risky applications. These browsers are also the control panels for the Internet of Things: the sensor-studded, actuating gadgets that can see us, hear us, and act on the physical world, with the power to boil, freeze, shock, concuss, or betray us in a thousand ways.

The gadgets themselves have DRM, intended to lock out independent repairs and third-party consumables, meaning that everything from your toaster to your car is becoming off-limits to scrutiny by independent researchers who can give you unvarnished, unbiased assessments of the security and reliability of these devices.

In a competitive market, you'd expect non-DRM options to proliferate in answer to this bad behavior. After all, no customer wants DRM: no car-dealer ever sold a new GM by boasting that it was a felony for your favorite mechanic to fix it.

But we don't live in a competitive market. Laws like DMCA 1201 undermine the competition that might counter their worst effects.

The companies that fought DRM at the W3C -- browser vendors, Netflix, tech giants, the cable industry -- all trace their success to business strategies that shocked and outraged established industry when they first emerged. Cable started as unlicensed businesses that retransmitted broadcasts and charged for it. Apple's dominance started with ripping CDs and ignoring the howls of the music industry (just as Firefox got where it is by blocking obnoxious ads and ignoring the web-publishers who lost millions as a result). Of course, Netflix's revolutionary red envelopes were treated as a form of theft.

These businesses started as pirates and became admirals, and treat their origin stories as legends of plucky, disruptive entrepreneurs taking on a dinosauric and ossified establishment. But they treat any disruption aimed at them as an affront to the natural order of things. To paraphrase Douglas Adams, any technology invented in your adolescence is amazing and world-changing; anything invented after you turn 30 is immoral and needs to be destroyed.

LESSONS FROM THE W3C

Most people don't understand the risks of DRM. The topic is weird, technical, and esoteric, and takes too long to explain. The pro-DRM side wants to make the debate about piracy and counterfeiting, and those are easy stories to tell.

But people who want DRM don't really care about that stuff, and we can prove it: just ask them if they'd be willing to promise not to use the DMCA unless someone is violating copyright, and watch them squirm and weasel about why policing copyright involves shutting down competitive activities that don't violate copyright. Point out that they didn't even question whether someone could break their DRM, because, of course, DRM is so technologically incoherent that it only works if it's against the law to understand how it works, and it can be defeated just by looking closely at it.

Ask them to promise not to invoke the DMCA against people who have discovered defects in their products and listen to them defend the idea that companies should get a veto over publication of true facts about their mistakes and demerits.

These inconvenient framings at least establish what we're fighting about, dispensing with the disingenuous arguments about copyright and moving on to the real issues: competition, accessibility, security.

This won't win the fight on its own. These are still wonky and nuanced ideas.

One thing we've learned from 15-plus years fighting DRM: it's easier to get people to take notice of procedural issues than substantive ones. We labored in vain to get people to take notice of the Broadcasting Treaty, a bafflingly complex and horribly overreaching treaty from WIPO, a UN specialized agency. No one cared until someone started stealing piles of our handouts and hiding them in the toilets so no one could read them. That was global news: it's hard to figure out what something like the Broadcasting Treaty is about, but it's easy to call shenanigans when someone tries to hide your literature in the toilet so delegates don't see the opposing view.

So it was that four years of beating the drum about DRM at the W3C barely broke the surface, but when we resigned from the W3C over the final vote, everyone sat up and took notice, asking how they could help fix things. The short answer is, "It's too late: we resigned because we had run out of options."

But the long answer is a little more hopeful. EFF is suing the US government to overturn Section 1201 of the DMCA. As we proved at the W3C, there is no appetite for making DRM unless there's a law like DMCA 1201 in the mix. DRM on its own does nothing except provide an opportunity for competitors to kick butt with innovative offerings that cost less and do more.

The Copyright Office is about to hold fresh hearings about DMCA 1201.

The W3C fight proved that we could shift the debate to the real issues. The incentives that led to the W3C being colonized by DRM are still in play and other organizations will face this threat in the years to come. We'll continue to refine this tactic there and keep fighting, and we'll keep reporting on how it goes so that you can help us fight. All we ask is that you keep paying attention. As we learned at the W3C, we can't do it without you.


Creating an Autonomous System for Fun and Profit


At its core, the Internet is an interconnected fabric of separate networks. Each network which makes up the Internet is operated independently and only interconnects with other networks in clearly defined places.

For smaller networks like your home, the interaction between your network and the rest of the Internet is usually pretty simple: you buy an Internet service plan from an ISP (Internet Service Provider), they give you some kind of hand-off through something like a DSL or cable modem, and give you access to "the entire Internet". Your router (which is likely also a WiFi access point and Ethernet switch) then only needs to know about two things; your local computers and devices are on one side, and the ENTIRE Internet is on the other side of that network link given to you by your ISP.

For most people, that's the extent of what's needed to be understood about how the Internet works. Pick the best ISP, buy a connection from them, and attach computers needing access to the Internet. And that's fine, as long as you're happy with only having one Internet connection from one vendor, who will lend you some arbitrary IP address(es) for the extent of your service agreement, but that starts not being good enough when you don't want to be beholden to a single ISP or a single connection for your connectivity to the Internet.

That also isn't good enough if you *are* an Internet Service Provider, so you are literally a part of the Internet. You can't assume that the entire Internet is off in one direction when half of the Internet is actually in the other direction.

This is when you really have to start thinking about the Internet and treating the Internet as a very large mesh of independent connected organizations instead of an abstract cloud icon on the edge of your local network map.

Almost no one needs to consider the Internet at this level. The long flight of steps from DSL for your apartment up to needing to be an integral part of the Internet means that pretty much regardless of what level of Internet service you need for your projects, you can probably pay someone else to provide it and don't need to sit down and learn how deep this rabbit hole is.

To become your own Internet Service Provider with customers who pay you to access the Internet, or be your own web hosting provider with customers who pay you to be accessible from the Internet, or your own transit provider who has customers who pay you to move their customers' packets to other people's customers, you need a few things: a public autonomous system number (ASN), a block of public IP addresses you're allowed to advertise, a router that speaks BGP, and other networks willing to peer with you.

Once your router tells other networks that you're now the home of some specific range of IP addresses, and that advertisement propagates out through the rest of the Internet, everyone else's routers will have an entry in their routing tables so if they see any packets with your address on them, they know which direction to send them so they eventually end up at your door step.
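The propagation described above can be sketched as a toy path-vector flood, where each network passes the advertisement along and records a path back toward the origin. The AS numbers and topology below are made up (drawn from the private-use range) purely for illustration; real BGP has far more machinery (policies, best-path selection, withdrawals) than this sketch.

```python
# Toy sketch of BGP-style route propagation: each AS passes an
# advertisement on to its neighbors, and every AS ends up knowing
# a path of AS numbers leading back to the origin of the prefix.
# AS numbers and topology are invented for illustration.
from collections import deque

topology = {           # adjacency list: AS number -> peer AS numbers
    64512: [64513, 64514],
    64513: [64512, 64515],
    64514: [64512, 64515],
    64515: [64513, 64514],
}

def propagate(origin_as, prefix, topology):
    """Flood an advertisement for `prefix` from origin_as;
    return the AS path each network ends up with (toy rule:
    the first path learned wins)."""
    paths = {origin_as: [origin_as]}
    queue = deque([origin_as])
    while queue:
        asn = queue.popleft()
        for peer in topology[asn]:
            if peer not in paths:
                paths[peer] = [peer] + paths[asn]
                queue.append(peer)
    return paths

paths = propagate(64512, "203.0.113.0/24", topology)
# every AS now has a path whose last hop is the originating AS
```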

So why doesn't your home router need to speak BGP or you need to own public IP space to be reachable from the rest of the Internet? Because your ISP takes care of that for you. In addition to funding the wiring from their data center to your house, the $50/month you pay to your ISP funds them getting address space allocated for you, advertising it to the rest of the Internet, and getting enough connectivity to the rest of the Internet that your packets can get where they're headed.

If you've made it this far, you're probably pretty curious why I'm talking about BGP at all, and what this blog post is leading up to. 

So... I recently set up my own autonomous system... and I don't really have a fantastic justification for it...

So admittedly, my justification for going through the additional trouble to set up this single rack of servers as an AS is a little more tenuous. I will readily admit that, more than anything else, this was a "hold my beer" sort of engineering moment, and not something that is at all needed to achieve what we actually needed (a rack to park all our servers in).


But what the hell; I've figured out how to do it, so I figured it would make an entertaining blog post. So here's how I set up a multi-homed autonomous system on a shoe-string budget:



Step 1. Found a Company

You're going to need a legal entity of some sort for a few steps here, so you're going to need a business name. I already happened to have one from other projects, so at the minimum you'll want to go to your local city hall and get a business license. My business license cost me the effort to come up with a kick-ass company name and about $33/year, and I've never even gotten around to doing anything fancy like incorporating it, so it's really just a piece of paper that hangs in my hallway and allows me to file 1099-MISC forms on my tax returns within the city of Sunnyvale, CA. In the context of this project, this business license primarily just needs to look official enough to get me approvals when I apply for an autonomous system number needed to set up my own network.



Step 2. Get Yourself Public Address Space

This step is, unfortunately, probably also the most difficult. You need to get yourself a block of public IP addresses big enough to be advertised over BGP (there's generally agreed upon minimums to keep the global routing table from getting ridiculous) and allocated for you to advertise over BGP yourself. You'll probably want both IPv4 addresses, which have to be at least a /24 subnet (256 addresses) and IPv6 addresses, which have to be at least a /48 subnet (65536 subnets of /64).
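Those minimum sizes are easy to check mechanically. Here's a sketch using Python's standard ipaddress module, with documentation prefixes standing in for real allocations:

```python
# Check whether a prefix meets the commonly-cited minimum sizes for
# global BGP advertisement (/24 for IPv4, /48 for IPv6). The example
# prefixes are documentation ranges, not real allocations.
import ipaddress

def advertisable(prefix):
    net = ipaddress.ip_network(prefix)
    limit = 24 if net.version == 4 else 48
    return net.prefixlen <= limit    # smaller prefixlen = bigger block

print(advertisable("203.0.113.0/24"))   # IPv4 /24: just big enough
print(advertisable("203.0.113.0/25"))   # /25: too small to advertise
print(advertisable("2001:db8::/48"))    # IPv6 /48: just big enough

# and a /48 really does hold 2**(64-48) = 65,536 /64 subnets
print(2 ** (64 - 48))
```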


The big problem is that there are no IPv4 addresses left. There were only 4 billion of them in the first place, and we've simply run out of them, so the "normal" procedure of going to your local Internet numbers organization like ARIN isn't that productive. If all you need is IPv6 space (which is unlikely) and you happen to be in North America, you actually can still go to ARIN and request resources, but IPv4 addresses are generally still really needed. There are other solutions like buying IPv4 space on the second-hand market, but that's getting expensive, so here's probably the least helpful part of this whole blog post:



I just borrowed some address space from my friends.

For example, I've got another friend who, for a different project, got a /32 IPv6 allocation from ARIN, which is a metric TON of addresses, so I asked him if I could have a (relatively small) /48 sub-allocated from his /32, so he drafted me an all official looking "Letter of Authorization" on his company letterhead that literally just says:


"Dear Sirs,
"Please accept this letter of authority on behalf of [FRIEND'S COMPANY NAME] to permit the BGP announcement of [/48 IPv6 SUBNET INSIDE HIS /32 SUBNET] by [KENNETH'S COMPANY NAME].
"Sincerely, [FRIEND'S SIGNATURE]"

It's not as impressive as having IP space with my name on it in ARIN's database, but it's also a whole hell of a lot cheaper than even the smallest address allocation you can get from ARIN (a couple beers vs $250/year).


This letter of authorization is also the first instance where learning about how the Internet *actually* works gets a little weird. That letter is literally all it took for me to take control of a sub-block of someone else's public address space and get it routed to my network instead of theirs. Some of my network peers later asked me to provide this LoA when we were setting up my network links, but that means I just sent them a PDF scan of a letter with my friend's signature on it. And I mean an actual signature; not some kind of fancy cryptographic signature, but literally a blue scribble on a piece of paper.


To be fair, the letterhead looked very official.



Step 3. Find Yourself Multiple Other Autonomous Systems to Peer With

So the name of the game with the Internet is that you need to be interconnected with at least one other part of it to be able to reach any of it, but that isn't necessarily good enough here. If you were only peering with one other autonomous system, you probably wouldn't even need to run BGP, and if you did, you could even do it using a "private" autonomous system number (ASN) which your upstream provider could just replace with their own before passing your routes on to the rest of the Internet.


But that's not good enough here. I didn't want to use some kind of lousy non-public ASN! This project was a personal challenge from a friend and the network engineering equivalent of driving a pickup with a lift kit, so we need a *public* ASN. We're going to later need to apply to ARIN to get one allocated, and we'll need to provide at least two other autonomous systems we're going to be peering with to justify the "multi-homed" routing policy we're using to justify ARIN allocating us an ASN.


This multi-homed policy where we're peering with multiple other networks is also kind of neat because it means that if one of our upstream providers decides to take the day off, or only provide us a certain amount of bandwidth to the rest of the Internet, we have alternatives we can use from our peering links into other autonomous systems.


This whole concept of peering and all the different types of peering policies you might want for your network is a pretty deep rabbit hole, so I actually ended up buying a whole book just on peering, which was very helpful: The 2014 Internet Peering Playbook, by William B. Norton. He also has a website, which is a significant fraction of the content of his book in a less curated form.


Peering is definitely one of these "how the sausage gets made" sorts of topics that a lot of networks tend not to like to talk about. Exactly how well connected one network is to other networks is hugely important to their customers' quality of service, so everyone wants to make it appear that they're extremely well connected without showing their hand and letting others see their weaknesses. This means the peering community is rife with quiet backroom handshake deals that are never publicly disclosed, and you can spend hours digging through online looking glass servers that show you the global BGP tables trying to figure out what in the world networks are doing with their peering links.


Long story short, I'm getting a "paid transit" peering link from Hurricane Electric due to renting one of their cabinets, and managed to find a few friends in the Hurricane Electric FMT2 data center who had spare Ethernet ports on their routers and were willing to set up free peering links for what little traffic happens to go directly between our own networks. Free peering links tend to be pretty common when both networks are at about the same level in the network provider / customer hierarchy: tier 1 transit providers peer for free to make the Internet happen, and lower-tier small networks peer for free so that neither side needs to pay a higher-level ISP to transit traffic they can move directly. But if either network thinks it can charge the other, that might happen as well.


This is obviously where human networking becomes exceedingly important in computer networking, so start making friends with the peering coordinators for other networks which you expect to be trading a lot of traffic with. Every packet I'm able to shed off onto one of these lateral peering links with another AS is traffic that doesn't tie up my primary 1Gb hand-off from HE and makes my network faster.



Step 4. Apply for an Autonomous System Number

There are five Internet number organizations world-wide, and since I'm in North America the one I care about is ARIN, so I created an account on ARIN's website and:


  1. Created a Point of Contact Record for myself - Pretty much just a public profile for my identity: "Kenneth Finnegan, address, phone number, etc etc"
  2. Requested an Organization Identifier for "[MY COMPANY NAME]" and tied my point of contact record to it - This was by opening a ticket and attaching my business license to prove that my company actually exists. Since my company isn't its own legal identity, ARIN internally tracks it as "Kenneth Finnegan (doing business as) [MY COMPANY NAME]", but this doesn't show up on the public listing, so it wasn't a big deal.
  3. Requested an ASN for my Organization Identifier - This is where I needed to be able to list at least two other ASes I was planning on peering with. 
  4. Paid the $550 fee for getting an ASN issued per ARIN's current fee schedule for resources.

The whole process took about a week between setting up the orgID and requesting the ASN, mainly because I didn't quite get either support ticket request right on the first try due to me not quite knowing what I was doing, but in the end ARIN ultimately took my money and issued me an ASN all my own.

Step 5. Source a Router Capable of Handling the Entire Internet Routing Table

Remember how your home router only needs two routes? One for your local computers and one for the rest of the Internet, so the two routes are probably something like "192.168.1.0/24 (local)" and "0.0.0.0/0 (WAN)".
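That two-entry table is enough to demonstrate longest-prefix matching, the rule routers use to pick the most specific route that covers a destination. A minimal sketch in Python:

```python
# The home-router routing table from the text: one local route and a
# default route. Longest-prefix match picks the most specific entry
# (the one with the largest prefix length) that contains the address.
import ipaddress

routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "local"),
    (ipaddress.ip_network("0.0.0.0/0"), "WAN"),
]

def lookup(addr):
    addr = ipaddress.ip_address(addr)
    matches = [(net, via) for net, via in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("192.168.1.40"))  # local — the /24 beats the default route
print(lookup("8.8.8.8"))       # WAN — only the default route matches
```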

Processing a full BGP Internet routing table is a little more involved than that, and means you need a much more powerful router than one you can go buy at Office Depot. You could also probably build a router yourself out of a server running a router operating system like pfSense, or just your favorite Linux distro with the right iptables voodoo and a BGP daemon like Quagga, but that wasn't part of the originally thrown gauntlet challenge for this project.

The challenge was to use a real Cisco router capable of handling the entire Internet routing table, and I wanted one that can switch it at line speed. Hurricane Electric alone is giving us a 1Gb Ethernet hand-off, not including all the additional bandwidth out of our rack available due to other peering links, so we wanted a router that could definitely handle at least 1Gbps.

Meet the Cisco Catalyst 6506. Yes, it's rack mount, on a 19" rack. And it's 12U high, which, since a U is 1.75", means that this router is almost two feet tall. And 150 pounds. And it burns 1.2kW.


Yes. Its size is ridiculous. Which, for this project, isn't entirely out of line.


But it's also kind of a really awesome router, particularly for being a nearly two-decade-old product. The 6500 series is a line of switch/router chassis which support 3, 4, 6, 9, or 13 modular line cards/router supervisors. In the early 2000s this was the best switch that money could buy, and it is definitely showing its age now, but that's perfect. Network engineers love to hate their 6500s because they're so old, but its relatively limited "only" 30 million packets per second of throughput is plenty for an autonomous system that fits in a single rack, and its age means I was able to pick up a 6506 configured with dual supervisors and three (3!) x 48 port Gigabit Ethernet line cards on eBay for $625 shipped!


I probably could have found a more reasonably sized router for what I needed, but the 6506 has the appeal that it definitely has more switching horsepower than I'll ever need for this project, and its biggest downsides are its size and power, which are both not *that* big of an issue since I've got a whole 44U rack for just a few servers and I don't get billed for my power usage. More desirable routers have the big downside that they're actually desirable, so other people are willing to spend a few thousand dollars on them, where I didn't really want to drop $2k on a well kitted out 3945.

The 6506 probably deserves blog posts of its own, but the main thing is that low-end configurations of it like this are cheap on eBay, with the one disadvantage that they don't come with a supervisor card with enough memory to handle a full Internet table. This means I did need to scrounge a sup720-BXL supervisor that can handle 1 million routes in its routing table. Another few hundred bucks on eBay, or a friend with access to the right kind of e-waste pile, solves this problem.


Granted, a million routes isn't actually that much. The public IPv4 table is about 675,000 routes, and IPv6 is another 45,000, and they're both growing fast, so in another 2-3 years the Internet is going to exceed the capabilities of this router. When that happens, I'm going to need to either replace it with something more advanced than this ancient beast or start using some tricks to trim down the full table. If you'd like to follow along at home and watch the IPv4 routing table march towards the demise of the cat6500, you can find a link to a bunch of graphs here.
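As a back-of-the-envelope check on that 2-3 year estimate, here's the arithmetic with the route counts from the text and an assumed combined growth rate (the growth figure is my assumption for illustration; the graphs the author links to are the authoritative source):

```python
# Back-of-the-envelope check of the "2-3 years" claim, using the
# route counts given in the text. The growth rate is an assumption
# for illustration; the real rate varies year to year.
current = 675_000 + 45_000      # IPv4 + IPv6 routes, per the text
capacity = 1_000_000            # sup720-BXL routing-table ceiling
growth_per_year = 100_000       # assumed combined IPv4+IPv6 growth

years = (capacity - current) / growth_per_year
print(f"~{years:.1f} years until the table outgrows the router")
```

With those assumptions the table crosses the million-route mark in just under three years, which lines up with the author's estimate.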


I also added a 4 port 10Gb line card, because 10Gb is awesome, and took one of the 48x1Gb line cards out because I really didn't need 144 Ethernet ports on this thing. That's just a ridiculous number of Ethernet ports for a single rack.


So the final configuration is:



  1. 48x1Gb line card for my four copper peering links with other autonomous systems, including Hurricane Electric
  2. 4x10Gb line card for my peering link with one of my friends who happened to have a spare 10Gb port on his router, and who also thinks 10Gb is awesome. This will probably also serve some local 10Gb links in the rack once I grow beyond one server.
  3. A blankoff plate instead of my third 48x1Gb line card to save power.
  4. 48x1Gb line card for the local servers in the cabinet. Since we've only got two servers installed so far, there's currently only a 2x1Gb bonded link to my server and a 4x1Gb bonded link + out of band management to my friend's server.
  5. The sup720-BXL which does the actual router processing and makes this whole mess a BGP router. The one cable from this card runs up to a console server letting me manage this beast remotely from the comfort of my apartment without standing in a cold data center.
  6. One of my spare sup720s (not XL), which can't handle the full Internet table, pulled out an inch so it doesn't power up, because this seemed like the best place to store it until I figure out what to do with it.

Step 6. Turn it All On and Pray

Wait, I mean, carefully engineer your brand new network and swagger into the data center confident that your equipment is all correctly configured.


But seriously, I found a few textbooks on BGP network design and happened to have a 13 hour flight to China and back to take a crash course in BGP theory, and spent a week in my apartment with ear plugs in taking a crash course in how to interact with Cisco devices more sophisticated than just setting up VLANs on an Ethernet switch, which is about all my experience with Cisco IOS before this month.


After spending a week lovingly hand crafting my router configuration (while listening to networking podcasters bagging on how ridiculous it is that we still lovingly hand craft our routing configurations), I was ready to deploy my router plus all of our servers in the data center.


When I signed my service agreement with Hurricane Electric, it consisted of:



  • One full 44U rack with threaded posts.
  • Two single-phase 208V 20A feeds.
  • A single 1Gbps copper network drop.

The network operations center then also emailed me and asked how many of Hurricane's IP addresses I needed, which was two: one for my router's uplink interface, and a second for a serial port console server so if I ever manage to really bork my router's configuration I can still reach its console port without having to trek over to the data center and stand there in the cold. This means that my hand-off from HE is a /29, so I actually have 5 usable addresses, but that Ethernet drop goes into a fixed eight-port GigE switch which breaks out the console server, then plugs into the 6506 for the majority of my Internet traffic.
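The /29 arithmetic works out like this: 8 addresses, minus the network and broadcast addresses, minus the ISP's gateway, leaves 5. A quick sketch using a documentation prefix rather than the real assignment:

```python
# Why a /29 hand-off yields 5 usable addresses: 8 total, minus the
# network and broadcast addresses leaves 6 hosts, and one of those
# is the ISP's gateway. The prefix here is a documentation range,
# not the real assignment.
import ipaddress

net = ipaddress.ip_network("203.0.113.8/29")
hosts = list(net.hosts())           # excludes network and broadcast

print(net.num_addresses)            # 8 total addresses
print(len(hosts))                   # 6 host addresses
print(len(hosts) - 1)               # 5 left once the gateway takes one
```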

Once I confirmed that my network feed from HE was live, I then opened a support ticket with HE saying "My BGP router is on [IPv4 ADDRESS] and [IPv6 ADDRESS] and will be advertising these specific routes per attached letters of authorization" and waited for them to set it up on their side, which took less than an hour before I got an email from them saying "we turned it on, and your router connected, so it looks good from here."

And we're off to the races.

At this point, Hurricane Electric is feeding us all ~700k routes for the Internet, we're feeding them our two routes for our local IPv4 and IPv6 subnets, and all that's left to do is order all our cross-connects to other ASes in the building willing to peer with us (mostly for fun) and load in all our servers to build our own personal corner of the Internet.

The only major goof so far has been accidentally feeding the full IPv6 table to our first other peer that we turned on, but thankfully he has a much more powerful supervisor than the Sup720-BXL, so he just sent me an email to knock that off, a little fiddling with my BGP egress policies, and we were all set.
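The fix amounts to an egress filter: only announce your own prefixes to a peer, rather than re-advertising everything you learned upstream. A toy sketch of the idea, with illustrative documentation prefixes in place of real ones (a real router expresses this with prefix lists and route maps, not Python):

```python
# Sketch of an egress policy: announce only your own prefixes to a
# lateral peer instead of leaking the full table learned upstream.
# Prefixes are documentation ranges, invented for illustration.
OWN_PREFIXES = {"203.0.113.0/24", "2001:db8:1234::/48"}

def egress_filter(learned_routes):
    """Return only the routes we are allowed to announce to a peer."""
    return [r for r in learned_routes if r in OWN_PREFIXES]

table = ["8.8.8.0/24", "203.0.113.0/24", "2001:db8:1234::/48", "1.1.1.0/24"]
print(egress_filter(table))   # just our own two prefixes survive
```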

In the end, setting up my own autonomous system wasn't exactly simple, and it was definitely not justified, but sometimes in life you just need to take the more difficult path. And there's a certain amount of pride in being able to claim that I'm part of the actual Internet. That's pretty neat.

And of course, thanks to all of my friends who variously contributed parts, pieces, resources, and know-how to this on-going project. I had to pull in a lot of favors to pull this off, and I appreciate it.


Ancient data, modern math and the hunt for 11 lost cities of the Bronze Age - The Washington Post


A typical passage from the clay tablets, translated by the team, reads something like this:

From Durhumit until Kaneš I incurred expenses of 5 minas of refined (copper), I spent 3 minas of copper until Wahšušana, I acquired and spent small wares for a value of 4 shekels of silver

Most tantalizing to archaeologists are the mentions in the tablets of ancient cities and settlements — some of which have been located, others of which remain unknown. In the record above, for instance, Kaneš (Kanesh) has been located and excavated, while Durhumit is, at present, lost to history.

Traditionally, historians and archaeologists have analyzed texts like these for bits of qualitative information that might locate a site — descriptions of landscape features, for instance, or indications of distance or direction from other, known cities.

But Barjamovic and his co-authors had a different idea: What if they analyzed the quantitative data contained in the tablets instead? In the passage above, for instance, you have a record of three separate cargo shipments: Durhumit to Kanesh, Kanesh to Wahshushana, and Durhumit to Wahshushana.

If you analyze thousands of tablets and tally up each record of a cargo shipment contained therein, you end up with a remarkably comprehensive picture of trade among the cities around Kanesh 4,000 years ago. Barjamovic did exactly that, translating and parsing 12,000 clay tablets, extracting information on merchants' trade itineraries.

What they had, in the end, was a record of hundreds of trade interactions among a total of 26 ancient cities: 15 whose locations were known and 11 that remain lost.

Here's where things get really interesting: In the ancient world, trade was strongly dependent on geographic distance. Moving goods from Point A to Point B was a lot more difficult at a time when roads were rough, goods had to be transported on the backs of donkeys and robbers lurked everywhere.

Cities located closer together traded more, while those farther apart traded less. This is the key insight driving the entire paper. Let's say we have an ancient city, such as Kanesh, that we know the location of. We also have two lost cities, Kuburnat and Durhumit. If we know Kanesh traded more with Kuburnat than with Durhumit, we can reasonably assume that Kuburnat is closer to Kanesh than Durhumit is.

Conceptual illustration by the Washington Post

The figure above is a conceptual illustration of this idea. Kanesh is in the center, Kuburnat is somewhere in the inner light-blue region, and Durhumit is somewhere farther out, in the dark-blue area.

If you have decent data on trade volume (from, say, thousands of clay tablets), you can do one better than this: You can actually plug the trade data into an algorithm that uses other pieces of known data, such as commodity prices and population size, to estimate the distance between two given cities, given the volume of trade between them.
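A toy version of that inversion: in a gravity model, trade falls off with distance, so with assumed city "sizes" and a distance-decay exponent you can solve for distance from observed trade volume. All of the numbers below are invented; the paper's actual estimator is more sophisticated:

```python
# Toy gravity model of trade: flow between two cities scales as
# T = s_a * s_b / d**theta, so observed trade can be inverted to
# estimate distance: d = (s_a * s_b / T)**(1/theta). Sizes, trade
# volumes, and theta are all made-up numbers for illustration.
def estimated_distance(size_a, size_b, trade, theta=2.0):
    return (size_a * size_b / trade) ** (1.0 / theta)

# Bigger trade flow implies shorter distance, all else equal:
near = estimated_distance(100, 100, trade=400)   # heavy trade
far = estimated_distance(100, 100, trade=25)     # light trade
print(near, far)
```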

Conceptual illustration by the Washington Post

Updating our example illustration, you can see that if we know the rough distance between two cities, we can narrow our concentric circles down to concentric rings.

That still leaves a large area to search if we're trying to find these lost cities. But recall: the clay tablet data set includes trade volumes for 14 other known cities in addition to Kanesh. We can run our trade algorithm for any given lost city, such as Durhumit, against any other city we already know the location of. That gives us an estimate of the distance to Durhumit from each of those cities.

If a number of those distance estimates overlap in the same region, that's a pretty strong indicator that Durhumit would have been located in that region.
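That overlap test can be sketched as a crude multilateration: grid-search for the point whose distances to the known cities best match the estimated distances. The coordinates and distance estimates below are invented for illustration; the paper uses a proper statistical estimator rather than a grid search:

```python
# Crude multilateration: find the point whose distances to several
# known cities best match the estimated distances (where the rings
# overlap). Coordinates and distances are made up for illustration.
import math

# (x, y) of a known city, and the estimated distance to the lost city
constraints = [((0.0, 0.0), 5.0), ((10.0, 0.0), 5.0), ((5.0, 8.0), 5.0)]

def misfit(x, y):
    """Sum of squared errors between actual and estimated distances."""
    return sum((math.hypot(x - cx, y - cy) - d) ** 2
               for (cx, cy), d in constraints)

# Exhaustive search over a coarse grid; good enough for a sketch.
best = min(((x / 10, y / 10) for x in range(101) for y in range(101)),
           key=lambda p: misfit(*p))
print(best)   # roughly where the three distance rings overlap
```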

Conceptual illustration by the Washington Post

In the end, the trade data contained on 12,000 ancient clay tablets allowed Barjamovic and his co-authors to estimate the locations of the 11 lost cities mentioned therein. As a sanity check, they mapped their own estimates against some qualitative guesses produced by historians over the years. In some cases, the qualitative and quantitative estimates were in precise agreement. In others, the quantitative model lends credence to one historical assessment vs. another. In others, the model suggests that the historians previously got it completely wrong.

“For a majority of cases, our quantitative estimates are remarkably close to qualitative proposals made by historians,” the authors conclude. “In some cases where historians disagree on the likely site of lost cities, our quantitative method supports the suggestions of some historians and rejects that of others.”

As a final check, the authors ran the model against the location of known ancient cities to see whether its results matched the actual archaeological record.

On two out of three of the known cities they tested against, the model nailed it. But it whiffed on the third.

The authors suspect their algorithm performs better for cities located near the center of the Assyrian trade network. The "estimation of the location of lost cities is reliable for central cities, but less precise for peripheral cities," they write. Whether you're a Bronze Age merchant or a modern-day economist, long distances remain treacherous.

Still, the authors say their approach for finding lost cities can be used to supplement more traditional methods, helping historians fill in gaps of knowledge in the archaeological record. Beyond that, the paper is a fascinating illustration of how modern knowledge can breathe new life into numbers inscribed on clay tablets 4,000 years ago.


Kids? Just say no


You don’t have to dislike children to see the harms done by having them. There is a moral case against procreation

By David Benatar

Read at Aeon
