For example, quite by accident, we found there a museum with a personality all its own: the Museum of Theatrical Costume (http://museum.kostromadrama.ru/) at the Kostroma A. Ostrovsky Drama Theater.
Plato. Was it not you, Socrates, who said that from a single drop of water a man who knows how to reason logically can infer the possible existence of the Atlantic Ocean?
Socrates. Not I. That was Sherlock Holmes.
Plato. Can you suppose, Socrates, that in the lands called Ukraine there will someday appear a certain mechanism that allows one to write letters to other people, even those living at the edge of the world, and to receive an answer from them within a minute?
Socrates. Let us suppose it.
Plato. Then can you, Socrates, as a man who knows how to reason logically, say what they will be writing about there?
Socrates. If I could write to, say, my friend Xenophon, we would speak of good and evil, giving birth to truth through debate.
Plato. And I would polemicize with my adversary Diogenes, who now drags out the wretched life of a slave in Macedonia, arguing with him about the nature of things.
Socrates. But that is us; what, then, are those people in Ukraine doing?
Plato. They are debating whether Shabunin hit Filimonenko from the right or from the left.
Socrates. Holy fucking shit. Are you serious, Plato?
Plato. Socrates, I give you my word as an Athenian that it is so.
Socrates. Fuck. Fuck...
Plato. And what is more, they have been arguing about it for three days now.
Socrates. Fucking hell, o gods, woe is me. Waiter, one hemlock, no ice.
The debate over Confederate monuments has been framed by President Donald Trump — and some who share his views — as a fight between those who wish to preserve history and those who would “erase” it. But let us linger on what history we’ll be preserving as long as Confederate memorials stand.
The Confederate monuments in New Orleans; Charlottesville, Virginia; Durham, North Carolina, and elsewhere did not organically pop up like mushrooms. The installation of the 1,000-plus memorials across the US was the result of the orchestrated efforts of white Southerners and a few Northerners with clear political objectives: They tended to be erected at times when the South was fighting to resist political rights for black citizens. The preservation of these monuments has likewise reflected a clear political agenda.
It is going to take equal energy and focus to remove them from the national landscape.
But the story of the monuments is even stranger than many people realize. Few if any of the monuments went through any of the approval procedures that we now commonly apply to public art. Typically, groups like the United Daughters of the Confederacy (UDC), which claimed to represent local community sentiment (whether they did or did not), funded, erected, and dedicated the monuments. As a consequence, contemporaries, especially African Americans, who objected to the erection of monuments had no realistic opportunity to voice their opposition.
Most Confederate monuments were, in short, the result of private groups colonizing public space.
Over the past decade, Southern legislatures have passed laws requiring approval from state legislatures before any historical monuments can be moved, removed, or altered — thereby freezing those private decisions in place.
A controversy in Reidsville, North Carolina, in 2011, which failed to attract any national attention, offers a window into the origins of Confederate monuments and their contested “ownership.” That year, an errant driver plowed into the generic Confederate soldier memorial that stood precariously beside a major street in the small town, 25 miles north of Greensboro.
Because other motorists had previously hit the monument, the UDC, which had funded and erected the monument in 1910, decided the sculpture would be safer if it was moved to a nearby cemetery. But in a strange twist, the plan was blocked when the Sons of Confederate Veterans, another Confederate heritage organization, sued the UDC to prevent the relocation of the monument. Eventually, the UDC prevailed and the restored monument was rededicated in the cemetery in 2014. The city itself was a spectator in this legal fight.
Had the dispute flared after 2015, when the state legislature passed a law effectively blocking the removal of monuments, the UDC would have had to tangle not only with neo-Confederates but also with state legislators.
A smaller number of monuments, like the one recently toppled in Durham, were indeed funded with public money — but an asterisk must be attached to the word “public.” In 1922, Confederate veterans in Durham persuaded the state legislature to allocate $5,000 of county taxes to fund the monument. No one asked black residents, who were denied the right to vote by Jim Crow laws, whether they supported spending their tax dollars on this public, political statement.
Let us acknowledge that the architectural landscapes we have inherited are neither sacred nor unchanging. The timing of the proliferation of the monuments themselves illustrates this point. In the years immediately after the Civil War, North Carolina Confederates understandably mourned their dead, yet the state erected fewer than 30 memorials between 1865 and 1890. Then, during the next half century, they dedicated more than 130.
It is hardly coincidence that the cluttering of the state’s landscape with Confederate monuments coincided with two major national cultural projects: first, the “reconciliation” of the North and the South, and second, the imposition of Jim Crow and white supremacy in the South. As part of the process of national reconciliation, white Northerners agreed to tolerate the commemoration of Confederates, and they contributed both moral support and funds to the veneration of a few Confederate figures in particular, especially Robert E. Lee.
Lee became a convenient icon of reconciliation who was depicted as having reluctantly fought to protect his native state — not slavery — and then after the war devoted himself to the uplift of the South and to binding the nation’s wounds. For white Northerners, Lee was a military hero who could be venerated without having to embrace the Confederate cause in its totality. (This impulse explains the monuments to Lee in the US Capitol, at the City University of New York, and other sites outside of the former Confederacy.)
Meanwhile, white Southerners used the commemoration of the Confederacy to promote a degree of white cultural unity that had never existed in the region either before or during the Civil War. An observer scanning the commemorative landscape of North Carolina will see little evidence of the tens of thousands of white North Carolinians who fought for the Union, the even larger number of white North Carolinians who actively opposed the Confederacy, or the tens of thousands of African Americans who escaped slavery and joined the Union army.
Confederate commemorators suppressed these unwelcome blemishes to their preferred version of history while simultaneously making the Confederate cause virtually sacred. White Southerners who questioned the Confederate narrative faced ostracism or worse.
The pursuit of white cultural unity through Confederate commemoration went hand-in-hand with the promotion of white supremacy. The Confederate monuments themselves were sometimes explicitly linked to the cause of white supremacy by the notables who spoke at their dedication. For instance, at the 1913 dedication of an on-campus monument honoring University of North Carolina students who fought for the Confederacy, white industrialist Julian Carr unambiguously urged his audience to devote themselves to the maintenance of white supremacy with the same vigor that their Confederate ancestors had defended slavery.
During the dedication speech, Carr praised Confederate soldiers not just for their wartime valor but also for their defense “of the Anglo Saxon race during the four years after the war” when “their courage and steadfastness saved the very life of the Anglo Saxon race in the South.” The “four years after the war” was a clear reference to the period in which the Ku Klux Klan, a white paramilitary organization, terrorized blacks and white Republicans who threatened the traditional white hierarchy in the state. Then he boasted that “one hundred yards from where we stand” — and within months of Lee’s 1865 surrender — “I horse whipped a negro wench until her skirts hung in shreds because she had maligned and insulted a Southern lady.”
Carr admittedly was uncommonly explicit about conflating Confederate memorialization with white supremacy, but Southern memorials inherently celebrated the slave South and white power along with the heroism of Confederate soldiers.
We topple old buildings, move or rename streets, and engage in creative destruction all the time — which is inevitable when the needs of the people living in contemporary landscapes change. The somewhat comical events in Reidsville (in which the United Daughters of the Confederacy concluded it would be for the best if fewer drivers crashed into their statue) provide just one example of a decision to move a memorial for a practical reason.
Elsewhere, communities have had other reasons to act. Wilson, North Carolina, for example, has been home since 1926 to a memorial commemorating both the Revolution and the Confederacy: It originally featured a massive central column depicting the Stars and Stripes and the flag of the Confederate States of America, flanked by two water fountains — one for whites, one for blacks. The arrangement apparently wore out its welcome sometime during the 1960s. Without fanfare, the monument was moved from the courthouse to an inconspicuous park, and the fountains were replaced by small granite caps. Today you would be unlikely to recognize it as a one-time segregated water fountain.
So how should we move forward to dismantle the Confederate commemorative landscape? We should begin by acknowledging that the American South is now a pluralist society for the first time in its history. Whereas the current commemorative landscape of the South is a product of white privilege and power, the future landscape should be crafted after inclusive public debate and through democratic procedures. New Orleans and Baltimore, which conducted public conversations about the removal of monuments, can serve as models for other communities. New Orleans Mayor Mitch Landrieu has provided an exceptionally articulate justification for the removal of Confederate memorials.
A crucial step in many Southern states will be to repeal laws constraining the removal or alteration of historic monuments, such as North Carolina’s two-year-old Historic Artifact Management and Patriotism Act. Let there be no doubt about the intent of this or similar “heritage preservation” laws: They “protect” and perpetuate the racist commemorative landscape that currently exists. Why shouldn’t the citizens of Durham have had the choice to preserve, move, or remove the Confederate monument there? Local choice may allow some communities to keep “their” Confederate monuments. So be it. Let them defend their decision if they do so.
We are also sure to hear calls to add monuments (honoring African Americans, for example) as an alternative to removing those we find offensive, and thereby “erasing” history. But removing — or moving — Confederate monuments is not historical erasure. The same logic could have been used to justify maintaining, after 1964, signs that identified “Negro water fountains,” “Colored waiting room,” and the other markers of Southern segregation.
In an ideal world with unlimited resources, a proposal to add monuments might make sense. But given the vast number of monuments to the Confederacy across the United States it would take decades, and millions of dollars, to add enough statuary to create a more inclusive commemorative landscape. And is there any reason to believe that state legislators are going to appropriate sufficient money for that purpose? Perhaps the defenders of Confederate monuments will demonstrate their good faith by pressing for funding for new monuments to Southerners, white and black, who fought on behalf of the Union or otherwise opposed the Confederacy. Until then, I will view their devotion to heritage preservation with skepticism.
This is hardly the first time that a society has confronted the issue of dealing with art harnessed to objectionable causes. Art museums are filled with medieval and early modern Western art that is offensive to many of our contemporary values — depicting rape, the slaughter of Muslims, or demeaning images of non-Europeans. Like those works of art, those Confederate monuments that have aesthetic significance can and should be preserved in museums where they can be properly interpreted by curators and docents. In such settings, they will serve as historical artifacts rather than civic monuments.
But many Confederate monuments were essentially “mail order” sculptures mass produced by Northern and Southern foundries during the late 19th and early 20th centuries. Whatever value they have as historical artifacts, they were not the work of some latter-day Michelangelo.
Before any Confederate monuments are removed, they should be carefully photographed and measured so that the historical record of the monuments in situ can be preserved and made available for historians and art historians in the future. Then they can be transferred to the archives, museums — or the trash heap of history.
W. Fitzhugh Brundage is the William B. Umstead Professor of History at the University of North Carolina at Chapel Hill. He is the scholarly adviser to the Commemorative Landscapes of North Carolina project.
The Big Idea is Vox’s home for smart discussion of the most important issues and ideas in politics, science, and culture — typically by outside contributors. If you have an idea for a piece, pitch us at email@example.com.
An anatomy of what made “Despacito” the most popular song of the year.
On the way home from the beach last weekend, as we got into the car and turned on the radio, I immediately heard the familiar plucks of the cuatro, a steel-strung Puerto Rican guitar, on Luis Fonsi, Daddy Yankee, and Justin Bieber’s “Despacito” remix.
When the song ended and the station went to commercial, we switched to another station, and within minutes the falling melody of the cuatro came on again. Having just heard the song, we tried another station. And another. And then we realized that we’d run out of pop stations before going 10 minutes without hearing “Despacito.”
The sweltering pop reggaeton-love ballad hybrid has been everywhere this summer, playing in cities and suburbs, at house parties and barbecues, at wedding receptions and department stores, in people’s headphones during their commute.
“Despacito” is inescapable and inevitable. You couldn’t avoid the song if you tried.
“It’s massively popular. It’s sort of unprecedented to have a song do so well in so many formats simultaneously,” Tom Poleman, the chief programming officer of iHeartRadio, told Vox. He explained that the song’s popularity spans a wide range of listening categories, including Top 40, Adult Contemporary, and Spanish Contemporary: “If you look at what we call total audience spins or total impressions — ‘Despacito’ has 1.8 billion total audience spins. That’s massive,” he said.
The original song and its music video were released in January; the video has since become the most watched YouTube video of all time, with more than 3 billion views. The remix, which features Justin Bieber, came out in April — and the two versions of the song combined have earned “Despacito” the distinction of “most streamed song in history.”
In May, the remix hit No. 1 on Billboard’s Hot 100 chart, where it has remained for the past 14 weeks. It’s only the third Spanish-language song in history to reach No. 1 in America — the first since 1996’s “Macarena,” and before that, Los Lobos's 1987 cover of “La Bamba.” And now it’s tied with a handful of other songs for the title of second longest-leading Billboard Hot 100 No. 1.
“Despacito” is equal parts heartbeat, heat, sweat, and skin, making it perfect for summer. But it’s become much more than the song of summer 2017, more than the result of stretching a human voice on top of music, more than a beat that sits at your hips and a melody that hits you in your chest.
Quite simply, “Despacito” is magic.
To have a whole country singing along and connecting to a song whose words so many of us don’t know is a feat. “Despacito” appeals to each of us in our own way, and that’s the greatest thing about it.
On a technical level, we can look at its chord progressions and melody and identify a few reasons why the song is so beloved. Audiences seem to be craving something that’s different than what they’ve been hearing, yet still familiar, and “Despacito” offers that.
But the song also represents something you can’t find in the notes and melodies and lyrics. “Despacito” now occupies a special place in recorded musical history. It represents incredible potential. It’s a reflection of its culture, and the appreciation it can bring to that culture. And to some, its popularity and crossover appeal have even become a political message of defiance against the status quo and the summer of 2017.
To fully understand why people love “Despacito,” you have to understand the current state of pop music in America. “Despacito” is a fusion of pop and reggaeton, a style of music that originated in Puerto Rico. But for the past five or six years, American pop music has become nearly synonymous with electronic dance music (EDM), with not just EDM artists, producers, and DJs crossing over, but also major pop stars embracing the features and structures of the genre. And when everything begins to sound the same, people start to crave something new.
Beginning in late 2010 and continuing throughout 2011, pop music began to fuse with EDM. Rihanna and Calvin Harris’s 2011 single “We Found Love” became an absolutely huge hit, spending 10 weeks at No. 1 on the Billboard Hot 100; the song introduced some classic EDM elements (or dumbed them down, if you’re talking to EDM purists) to mainstream audiences. Among those elements were the manipulation of vocals and the tweaking of more traditional song structures, as well as one that’s specifically known as the drop — the moment in a dance track where the music coils around itself, building and building until it bursts, then unspools in a glorious, tempestuous release as the beat kicks in (in “We Found Love,” the drop comes about a minute and seven seconds into the song).
Success begets success, and EDM producers, DJs, and artists began to notice that there was a mainstream audience for a pop version of EDM. If a song could mimic “We Found Love” or David Guetta’s “Titanium,” particularly in its vocals, build-up, and drops, it could find the same audience.
Since then, many have forecasted the death of EDM. Yet its influence on different genres of music, particularly pop, has continued for years. Bieber’s 2016 album Purpose, along with popular collaborations between pop and EDM artists — think Selena Gomez and Ariana Grande’s songs with Zedd, or DJ Snake’s and Lil Jon’s “Turn Down For What” — are a testament to that. The popularity of dubstep, along with Skrillex’s mainstream success and that “wub-wub” sound you hear in so many pop songs, is evidence of it too. And earlier this year, Lady Gaga released “The Cure,” which features a chirping chimera-like synth that mimics the music of the Chainsmokers, whom she was trolling on Twitter just last year.
As a result, American listeners and even artists seem to be burned out on that sound and are craving something new, something that doesn’t sound like anything we’ve been hearing lately.
The “Despacito” remix — which features a verse sung in English by Justin Bieber at the start of the song, followed by Fonsi’s swooning vocals and Daddy Yankee’s grit — helps to satisfy those sonic cravings. In particular, it focuses on intimate vocals, and shifts away from high-energy choppy vocal synths and swirling drops.
“Between the smoothness of its backing instrumentals, its midtempo groove, and its repetitive and very familiar chord progression, it’s as if they’ve removed anything that could distract us from the interaction of the voice, the melody, and the language,” says Alex Reed, an assistant professor of music theory, history, and composition at Ithaca College. “The fact that it’s three men alternating verses makes it a showcase for subtle differences in vocal timbre.”
This up-front approach to vocals is something pop artists have begun experimenting with of late. Chris Harding, a songwriter and co-creator of the Switched on Pop podcast, explained to me that songs with “much more restrained, close-up, nice vocals that feel intimate and feel more minimalist” — like Bieber’s verse on “Despacito,” as well as Selena Gomez’s “Bad Liar” and Julia Michaels’s “Issues” — have been growing in popularity.
But this isn’t to say that the only reason “Despacito” hit No. 1 in America is that it sounds different and enjoyed some fortuitous timing. There are a lot of great songs out there that are popular but sound similar to other hits, and there are a lot of great songs out there that are sonically different but will never find a huge audience.
“Despacito” is a scorcher of a tune — the experts I talked to all agree. And standing out from recent pop music is only the start of what it has going for it.
In addition to Bieber’s buttery vocals and the contrast between its reggaeton-inspired style and the EDM-inspired pop dance music of the last few years, the song’s most pronounced feature is a thumping downbeat — what the Atlantic has called the “boom-ch-boom-chick” beat. And the opening and chorus of “Despacito” sink their teeth into you via a perpetual rise and drop.
“If you want to geek out over the melody, it does a similar thing [as] the chorus, it keeps climbing in thirds,” Reed says. “An important part of the rhythm is its syncopation on offbeats, which make it feel kind of open, giving the listener and dancer a lot of space to move around — it ends up feeling free, evocative, and sensual.”
To really hear the difference, listen to the melody in the opening verse of the “Despacito” remix, and compare it to the chorus of Taylor Swift’s “Welcome to New York.” The chorus of “Welcome to New York” feels like it wants to keep you at one moment or one level, while “Despacito” wants to keep climbing.
“One thing that stands out about ‘Despacito’ is that ‘Despacito’ opens on melodic movement,” Harding says. “What ‘Despacito’ is doing is, instead of having a rise to this epic big moment, it's constantly moving — it's forcing us to feel different emotions.”
Harding explains that Bieber’s vocals sort of sound like the beginning to a pop song. But then the rise and drop of “Despacito” become really noticeable when Fonsi’s swooning voice comes swooping in, shifting the song from pop to love ballad. And then, there’s another aural surprise when the downbeat kicks in, and the song assumes its reggaeton-pop form.
“The cool thing about where it goes from the pre-chorus to the chorus, it’s kinda like this buildup, this suspense that’s building, and then all of a sudden, it’s like you’re there and then you go, ‘Despacito,’” Fonsi said in his commentary about the song on Genius. “We even slowed down the track just to give it a little bit more of a dramatic feel.”
Perhaps the most beguiling thing about “Despacito” is the way it surprises our ears — in both its melodies and the fact that it’s a Spanish-language song in the American pop music ecosystem — yet still folds in the familiar.
“The chord progression is the most common one of the last 20 years: It’s what Marc Hirsh called the ‘sensitive female chord progression’ in 2008,” Reed told me.
The chord progression Reed mentions (vi-IV-I-V) was dubbed the sensitive female chord progression because it appeared in a bevy of pop songs sung by women in the late ’90s and early 2000s. Beneath the surface differences of those songs is a feeling of yearning, a kind of ache that never quite feels resolved.
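For the musically curious, the Roman-numeral shorthand is easy to make concrete: number the seven notes of a major scale, build a triad on each degree, and vi-IV-I-V simply picks out the sixth, fourth, first, and fifth. A minimal Python sketch (mine, not the article’s — the note spellings and example keys are just for illustration):

```python
# Spell out the vi-IV-I-V progression in any major key via pitch-class arithmetic.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of scale degrees 1-7
# Triad quality on each degree of a major scale: I, IV, V are major;
# ii, iii, vi are minor; vii is diminished.
QUALITY = {1: "", 2: "m", 3: "m", 4: "", 5: "", 6: "m", 7: "dim"}

def progression(key, degrees=(6, 4, 1, 5)):
    """Return chord names for the given scale degrees in a major key."""
    tonic = NOTES.index(key)
    return [NOTES[(tonic + MAJOR_SCALE[d - 1]) % 12] + QUALITY[d] for d in degrees]

print(progression("C"))  # ['Am', 'F', 'C', 'G']
print(progression("D"))  # ['Bm', 'G', 'D', 'A']
```

In C major the loop comes out as Am-F-C-G; in D major, Bm-G-D-A — the same four-chord cycle under different names, which is part of why it sounds so familiar across so many songs.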
One of the most well-known examples of the sensitive female chord progression is in the chorus of 1995’s “One of Us,” by Joan Osborne.
It’s also present in Beyoncé’s 2008 song “If I Were a Boy.”
"What [the sensitive female chord progression] allows is for [a song] to be very fluid. You're really not centered anywhere,” Rob Kapilow, a conductor and former host of NPR’s What Makes It Great series, said in 2008. “What it does is not have that kind of resolution, that kind of firm, declarative ‘We're here.’”
“Despacito” also fits this description. “It repeats cyclically in a way that feels always rolling forward, without a clear beginning or end,” Reed says. That repetitive, rolling quality is especially apparent in the chorus.
The chord progression suits “Despacito,” since the song itself is open-ended. It’s a question and an invitation without a response from whoever Bieber, Fonsi, and Yankee are singing it to.
“Despacito” is all about lingering in that moment of connection between two people — not what comes before or after. The song’s title literally translates to “slowly.” And when you dig deeper into the lyrics, it becomes about seduction: all the things you’d want to do to someone you’re madly attracted to. It’s not about prelude or resolution, but about being locked into a moment of infatuation.
It’s funny that a song as sexy and passionate as “Despacito” is using the same chord progression that made the music of the late ’90s and early 2000s so folky and sad. But the key to “Despacito” sounding so different is that it puts that chord progression into the frame of reggaeton.
“Compared to the pop genres where the progression is common, it appears less often in reggaeton and Latin music, so it’s a synthesis of different modes of pop,” Reed says.
“Despacito” also features another common sound. According to Harding, the song actually relies on a principle that a lot of hit songs over the past couple of years employ: “harmonically ambiguous or modally ambiguous chord progressions,” where “the listener is being pulled between a predominantly minor sound, and a predominantly major sound.”
Or what I, as a tone-deaf pop music fan, might call the “minor sad.”
In plain English: “Despacito” and a lot of hit songs of the past couple of years use notes that aren’t definitively upbeat, which makes it hard to pinpoint whether a song is happy or sad.
“Minor sad” examples in pop music include the Chainsmokers’ “Closer,” and a lot of the Weeknd’s songs — songs that feel like they’re danceable, but aren’t necessarily outright “happy.” They can also make you question why you’re dancing in the first place.
“Whether or not we are musically literate, we hear major [chords] as happier and more optimistic, and minor [chords] as more sad and sorrowful, solemn, maybe introspective,” Harding explained, noting that adding a minor sound has been used in dance music to make songs, which can be repetitive, feel less so.
I don’t think there’s a point in “Despacito” that feels sorrowful or solemn. But it does feel like a song that isn’t obviously happy or sad. We don’t really know if the singer’s seduction is successful. At the end of the remix, everything cuts out and all you’re left with is Bieber’s feathery voice singing “Des-pa-cito” with a blush of yearning — an end that fits seamlessly with the beginning of the song.
“Despacito” is a song that’s lived two lives. Long before the remix hit No. 1 in the United States, the original version of the song was a global hit — one that Bieber heard in a club while touring in Colombia earlier this year.
“About two weeks ago, the song took another step because Justin Bieber did a feature on it, and that gave the song a different dimension,” Fonsi told Forbes at the beginning of May. “The story behind it was he was on tour in Bogotá, Colombia, and he went out to a club and he heard the song, and he saw how people went crazy over it and started singing it, so he contacted us through his management.”
According to iHeartRadio’s Poleman, Bieber’s vocals and credit on the remix are what helped it achieve mainstream success and the No. 1 spot on the Billboard Hot 100.
“The song was a hit in the Latin community before Justin Bieber was added to it,” Poleman said. “But he has that magic touch in pop for the last couple of years. Adding him made a huge difference.”
In those last couple years Poleman is talking about, Bieber’s music has functioned more as a showcase for producers, DJs, and trends than anything uniquely Bieber. His hit songs feel like aural kaleidoscopes that highlight the neat things producers, songwriters, and Bieber can do with his breezy vocals.
In 2015, the Skrillex-Diplo collaboration Jack Ü chopped Bieber’s voice into jagged, mewing bits for the emo-EDM track “Where Are Ü Now.” Those vocal synths and beats, combined with tropical house (which borrows rhythms from dancehall and reggaeton), showed up in Bieber’s 2015 album Purpose, which featured hits like “What Do You Mean” and the breakup bop “Sorry.” In 2016, Bieber teamed up with DJ Snake for “Let Me Love You,” an existential love letter that sometimes sounds like a humpback whale laced up in a booming electro-pop corset.
As a result of all this experimentation, Bieber has become something of a gateway for mainstream pop fans, allowing them to experience sounds they weren’t listening to before. And by attaching his name to “Despacito,” he introduced his fans to the song.
Ironically, however, the extended success that “Despacito” has enjoyed in the US as a result of Bieber’s involvement in the remix underlines an unfortunate reality about the state of the American Top 40: It’s not even remotely diverse. “Despacito” being the third Spanish-language song to hit No. 1 in the US is a triumph, but it’s also a sign of how flat American listening tastes can be. A potential hit could be all around us and people might not embrace it if Bieber isn’t involved.
Adding insult to injury is the fact that, despite his fluent-sounding Spanish in the “Despacito” remix, the Canadian star has forgotten the words to the song during multiple live performances — occasionally singing “blah blah blah” without any hint of embarrassment. In one video, you can hear Bieber flub the chorus to the song, admit not knowing what he’s singing, and then swap the lyrics for Spanish you’d see at a Taco Bell drive-thru. He eventually stopped performing the song live.
Still, in the coming months, we’ll see if “Despacito” bucks American pop’s history with Spanish-language songs and ushers in a new wave of appreciation for Latin music and the artists who create it. DJ Khaled and Rihanna’s “Wild Thoughts,” which includes a heavy sample of and homage to Carlos Santana’s “Maria Maria,” is arguably the second biggest song of the summer — possibly a sign of the effect “Despacito” has already had in creating that appreciation.
Poleman believes “Despacito” can bring true change in a way that’s more lasting and significant than the faddish late-’90s craze for the “Macarena” or the early-2000s Latin-pop trend.
“It’s going to certainly change the desire for record labels to sign Latin artists that they think can cross over,” he told me. “It’s already happening. The barrier has been broken. People have seen that a Spanish song can be a mainstream hit. I don’t think it’s going to completely change the complexion [of the charts] right away, but I think it opens the door.”
My favorite origin story about the popularity of “Despacito” is that the song has been a huge hit among people who do Zumba, a popular dance workout at health clubs across the country. Daddy Yankee, speaking at a conference in France earlier this summer, said: “Zumba is a huge platform as well, and it relates to the music we’re making. They reach out to millions of people in their platform and that’s another tool we have to promote our music. I’m taking advantage of many platforms.”
I’m not familiar with the latest Zumba trends, but anecdotally I can say that whenever an instructor drops “Despacito” in one of the SoulCycle classes I frequent, everyone in the studio — predominantly white women in Spandex — loses their collective shit. Eyes squint, hair is tossed, Caucasian selves are felt.
I’m not exempt; I absolutely lose my shit too, mouthing along to the lyrics that I still don’t really know (prior to writing this story, I just knew “Despacito” was about doing sexual things slowly to someone). By the end of the three minutes and 48 seconds, I’m ready to name my firstborn child “Suave Suavecito.”
Reed says this is natural.
“When we hear songs in foreign languages, our hearing is connotative, and not denotative — and actually we often prefer it that way, since music itself is more about evoking ideas than dictating them,” he explains. “Even when we hear songs in English, we rarely really latch onto the lyrics in an expository, textual way.”
Harding says this idea illustrates the idiom of “getting lost in the music” — where you are more interested in how a song makes you feel, as all of its components work in unison to create something bigger than just lyrics or a melody.
If you don’t know the meaning of the words to “Despacito,” you can still pick up on the images and feelings it’s creating. It’s still possible to appreciate the way the lyrics sound, how they flow from verse to verse. For listeners who don’t speak Spanish, their “appreciation” of the song and the feelings it conjures up come from their individual experiences with the song’s roots.
“Those connotations are ones that are suggested to us by our background knowledge of the language and its culture, which is why ‘Despacito’ seems to resonate for a lot of people,” Reed says. “It plays up existing cultural stereotypes of Puerto Rico as sensual, bodily, passionate — stereotypes that you can find way back in West Side Story.”
But the cultural stereotypes and touchstones present in “Despacito” and its music video don’t necessarily have to be taken negatively. In an age where the president of the United States flattens entire peoples into exaggerated, inaccurate caricatures, a song like “Despacito” could give listeners an appreciation for the cultures and people who created it.
“Well, I think it’s ironic,” says Enrique Santos, an on-air radio personality at iHeartRadio and the chairman of iHeartLatino. “But [the song and the appreciation it brings to Latin art] is a great thing when you have had such negative rhetoric being tossed around. It shows that we’re much more than what some people have portrayed us to be as drug dealers or rapists — no. We’re musicians, we’re artists, we’re mothers, dads, brothers, sisters.”
In that sense, “Despacito” can be an act of defiance.
In early August, Moises Velasquez-Manoff wrote a column for the New York Times about how “Despacito” is undeniably politically relevant to him. The song’s success in the time of Donald Trump doesn’t necessarily mean that it will conquer people’s tribalistic instincts or topple any of his administration’s actions. But to Velasquez-Manoff, the song’s success and Americans’ appreciation of it make it an anthem that celebrates a natural inclination of the human spirit.
“We have this other side that’s curious, that doesn’t cringe from difference so much as find inspiration in it,” Velasquez-Manoff writes. “A transcendent side that takes joy in bringing together disparate parts, in creation, in play … The song is a fusion, an amalgam. As such, it doesn’t just illustrate the genius of pop music but also serves as a model of how creativity works generally.”
The common thread among many of the music experts I spoke to is that “Despacito” is more than just a song about a certain kind of slow lovemaking; it’s also very much about a specific kind of human love. Music’s most powerful magic is its ability to connect people.
“Despacito” and other popular songs like it give us something to feel even if we don’t know what’s happening in the melody, the chord progressions, or the words. For three minutes and 48 seconds, it can change our lives. It can be a passionate love song, a beacon for humanity, or an inventive fusion — all at once or not at all.
Because all we want to do is listen just one more time.
I hate this study already. Some psychologists attempted to develop a psychological profile of the alt-right by interviewing them and using a questionnaire. Fine. There’s nothing unexpected in their results.
A lot of the findings align with what we intuit about the alt-right: This group is supportive of social hierarchies that favor whites at the top. It’s distrustful of mainstream media and strongly opposed to Black Lives Matter. Respondents were highly supportive of statements like, “There are good reasons to have organizations that look out for the interests of white people.” And when they look at other groups — like black Americans, Muslims, feminists, and journalists — they’re willing to admit they see these people as “less evolved.”
It’s that last bit that bugs me. One of the survey questions primed respondents with a bad pseudoscientific image, then asked them to rate various groups of people on how “evolved” they are.
That question makes no sense. It starts by leading people to think an invalid, linear model of progressive evolution is scientifically reasonable, and then asks them to indulge in rating human beings. It doesn’t surprise me that Nazis are willing to dehumanize, but is it fair to miseducate in the process of figuring that out?
Here’s the average of the answers they got.
If they’d asked me this question, I would have slammed every slider straight to 100%, and then aborted the whole survey and told the investigators that their methodology was poisonous. But that’s me.
They’re trying to measure dehumanization, and I can appreciate that this might be an effective way to do it, but really, do we need to spread more misinformation in the process? They got a strong distinction, but I’m also annoyed by the comparison group.
The comparison group, on the other hand, scored all these groups in the 80s or 90s on average. (In science terms, the alt-righters were nearly a full standard deviation more extreme in their responses than the comparison group.)
How can you be 80% evolved? How can you even argue that different groups of Homo sapiens are “evolved” to different degrees? None of this makes any sense.
Though the result that Trump’s favorite Nazis think he is less evolved than women in general has got to burn.
Also, they determined that racists are not more economically stressed than other people. They are just goddamned racists. No surprise there.
The English word eclipse comes from the Greek ἔκλειψις, ekleípō: disappearance, abandonment. A solar eclipse is the moment in which the sun disappears, abandoning the world. It’s like being forsaken by a god.
The ancient Greeks thought of a solar eclipse as an act of abandonment, a terrible crisis and an existential threat. It meant that the king would fall, that terrible misfortunes would rain down on the world, or that demons had swallowed the sun.
Yet not everyone thought of the eclipse as a horrible threat. For some cultures, the eclipse was an act of creation: The sun and moon were coupling, and would create more stars. For others, it was a random and chaotic act by a trickster or a mischievous boy, causing trouble just for the sake of it.
On Monday, a solar eclipse is coming to America. In the 21st century, a solar eclipse means eclipse parties. It means buying specialty glasses and building pinhole boxes and preparing to see “the most beautiful sight you can see in nature,” as one cartographer put it.
But for much of human history, that’s not how people reacted to eclipses, even after they were able to predict them accurately (around 206 AD for the Chinese and 150 BC for the Greeks). Here’s a rundown of some of history’s most pervasive eclipse folklore.
In many cultures, the darkening of the sun meant the gods were very, very angry with humanity, and about to inflict some punishment. Often, that meant that in order to appease them, you had to kill someone.
In Transylvania, people believed an eclipse was caused by the sun turning its back on the sins of humanity, creating a poisonous dew. The Inca viewed eclipses as a sign that the sun god Inti was angry, and required appeasement with offerings. For the Native American Tewa tribe, an eclipse meant that the angry sun was leaving the sky to go visit his home in the underworld.
Aztec priests predicted that if there was a solar eclipse accompanied by an earthquake on the date 4 Ollin, the world would end, so every year on 4 Ollin they would perform a ritual human sacrifice. (As the priests likely knew — they were sophisticated astronomers — there would be no solar eclipse on 4 Ollin until the 21st century.) Solar eclipses on other dates were also met with human sacrifices. According to some, the Aztecs mostly sacrificed fair-skinned prisoners to appease the gods on eclipse days, but that report comes from a 16th-century Spanish missionary, so take it with a grain of salt.
The Greeks thought an eclipse meant that the gods were about to rain punishment down on a king, so in the days before an eclipse, they would choose prisoners or peasants to stand in as the king in the hopes that they’d get the eclipse punishment and the real king would be saved. Once the eclipse was over, the substitute king was executed.
The idea that a solar eclipse meant a demon was swallowing the sun shows up in eclipse folklore across the globe, and if you look at pictures of a partial solar eclipse, you can see why: It’s easy to imagine that some giant creature is slowly taking bite after bite out of the sun. In ancient China, the earliest word for eclipse, shih, meant to eat, and eclipses were believed to be caused by a dragon eating the sun. In Vietnam, the sun eater was a frog. For the Native American Pomo, it was a bear. In Yugoslavia it was a werewolf, and in Siberia a vampire.
In ancient Egypt, Apep, the serpent of chaos and death, opposed Ra, the sun god, and was always trying to reach Ra’s skyboat to devour the sundisc — but in the end, Ra was always able to fight him off, and the sun would come back.
In ancient India, Rahu was an immortal demigod with a severed head. He had a grudge against both sun and moon — they were the ones who convinced Lord Vishnu to chop off Rahu’s head in the first place, after he drank the nectar of immortality — so he chased them endlessly across the sky, and sometimes caught them. But whenever he managed to swallow either sun or moon, his victory was short-lived: They’d pass out of the stump of his throat shortly thereafter.
In Norse mythology, the sky wolves Hati and Skoll chase the sun and the moon endlessly, waiting for Ragnarok, when they can finally swallow their prey and plunge the earth into darkness, heralding the final destruction of the Viking gods. It’s not entirely clear whether the Vikings thought of eclipses as near misses at Ragnarok, with Hati and Skoll nearly capturing their prey, but many scholars believe there’s a pretty strong possibility that they did.
Generally, across the globe, when a demon is trying to eat the sun, there’s only one thing to do: make as much noise as possible until it gets scared and flees. Then you survive until the next eclipse.
Eclipses weren’t always seen as a cosmic calamity. Sometimes they just meant that the sun and the moon, who were usually understood to be a married couple, were working out their issues. Celestial marriage counseling.
For the Tlingit tribes of North America, as well as some Australian aboriginal cultures, an eclipse meant that the sun and moon were having more children: the stars and planets that became apparent in the darkness of an eclipse but weren’t otherwise visible.
For the Batammaliba people of Togo and Benin in Africa, an eclipse meant the sun and moon were fighting with each other. So to encourage them to come to peace, people would approach eclipses as an opportunity to resolve their feuds and put away old grudges.
For the Inuits, the sun and moon weren’t a married couple but brother and sister. At the beginning of the world they quarreled, and the sun goddess Malina walked away from her brother, the moon god Anningan. Anningan continued to chase after her, and whenever he caught up to her, there was an eclipse.
The Kalina of Suriname also thought of the sun and moon as brother and sister, but their version of the relationship between the two heavenly bodies was a little more violent. An eclipse meant one of them had knocked the other one out.
Sometimes eclipses don’t happen because the gods are angry or because terrible things are going to happen or because a demon is hungry or because cosmic bodies are working through their feelings. They happen because some random trickster figure feels like being a dick.
In ancient Persia, eclipses happened when the trickster pari decided to blot out the sun for fun. In the legends of multiple Native American tribes — the Cree, the Choctaw, and the Menomini — an eclipse happens because a little boy has trapped the sun in a net, usually to get revenge on the sun for burning him. The boy refuses to release the sun, and an animal has to chew the net open.
Of all the eclipse myths, the trickster stories perhaps come the closest to the way we think about eclipses in modern America: There is no particular moral judgment at work here, and no dark omen. The eclipse simply comes, inevitable and unstoppable, without caring what we think of it, and there is absolutely nothing we can do about it.
“You get an overwhelming sense of humbleness and how small and petty we really are compared to the mechanics of the solar system, the clockwork of the universe,” says retired NASA astrophysicist and eclipse chaser Fred Espenak. “These events that are taking place, that in no way can we affect or stop.”
People can no longer afford to move to opportunity.
America in the Gilded Age was a starkly unequal place, not just in terms of inequality between people but inequality between regions. Long-settled, fast-industrializing states in the Northeast were far richer than those of the West or the South, which had far fewer factories, railroads, and other kinds of capital goods that allowed for productive work and high wages. But around 1880 that began to change, and for 100 years, income gaps between states slowly converged at a rate of about 1.8 percent per year.
But since 1980, that process has begun to slow, and over the past decade it’s essentially stopped entirely. Today, Massachusetts’s GDP per capita is about double what you find in Mississippi — roughly equivalent to the gap between Switzerland and Slovakia — and it’s not getting any narrower.
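That 1.8 percent figure compounds very slowly, which is why convergence took a century to do its work. Here’s a rough back-of-envelope sketch of what the rate implies — a toy geometric-decay model of the log income gap, purely illustrative, not the convergence regressions economists actually run:

```python
import math

def years_to_close(gap_ratio, rate=0.018, target=1.1):
    """Years until a rich/poor income ratio falls below `target`,
    assuming the log of the gap shrinks by `rate` each year.
    (Toy model of convergence; illustrative only.)"""
    log_gap = math.log(gap_ratio)
    years = 0
    while math.exp(log_gap) > target:
        log_gap *= (1 - rate)
        years += 1
    return years

# A 2x gap (roughly Massachusetts vs. Mississippi today) takes about
# a century to mostly close at the historical rate:
print(years_to_close(2.0))  # 110
```

In other words, even at the pre-1980 pace, a gap the size of today’s Massachusetts–Mississippi divide would take generations to narrow — and the pace is now roughly zero.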
Phillip Longman of New America’s Open Markets program has been arguing for some years now that Reagan-era shifts in the federal government’s attitude toward corporate concentration are to blame. This is one of several arguments that’s helped inspire Democrats to start calling for a rethink of federal antitrust policy. But new empirical research from Peter Ganong of the University of Chicago and Daniel Shoag of Harvard’s Kennedy School of Government suggests the issue is more complicated than that. After all, even as the richest cities have gotten richer on a per capita basis, their share of aggregate national output has stagnated because their populations are growing slowly.
Ganong and Shoag argue that the slowing population growth in rich cities and the slowing of regional income convergence are intimately linked trends.
Less skilled workers used to move to rich states to increase their wages. That lowered average income in the rich states while raising it in the poor ones, as people’s natural tendency to move toward economic opportunity helped drive nationwide convergence of wages and incomes. But in the contemporary United States, zoning restrictions that prevent adequate levels of house building mean that much of the higher incomes earned in rich states simply pass through in the form of higher housing costs.
For skilled workers, this trade-off is worth it, but for the working class, it generally isn’t. Consequently, working-class people have begun to move out of the rich states and toward the cheap ones — throwing the pattern of convergence into reverse.
This set of four charts in Ganong and Shoag’s paper tells the fundamental story — in the old days, there was a strong tendency for poor states’ per capita incomes to grow faster than those of rich ones and an equally strong tendency for people to move away from poor states to go live in rich ones. But in recent years, the income convergence trend has slowed and the migration pattern has reversed.
People move, of course, for non-economic reasons. You can see clearly on these charts that the warm weather of Nevada and Arizona causes those states to punch above their weight in terms of migration in both eras. But the overall pattern is striking. Lots of people used to move to rich places like California, Maryland, and the tri-state area around New York City. These days, very few people move there, even though the typical resident of the South or Midwest could earn more by moving to a rich city.
The reason is that these states are also more expensive, and for working-class people the higher costs are no longer worth the higher wages.
This chart shows that until 1990 or so, both skilled and unskilled workers could improve their standard of living, even considering housing costs, by moving to a high-income state. But the net gains for unskilled workers began to diminish sharply, and by 2010 a typical low-skill household was actually worse off in a high-income state due to the even higher housing costs.
Traditionally, in other words, both lawyers and janitors earned more in the New York City area than they did in the Deep South. Today, “lawyers continue to earn much more in the New York area in both nominal terms and net of housing costs, but janitors now earn less in the New York area after subtracting housing.”
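The lawyer/janitor comparison reduces to simple arithmetic on wages net of housing costs. Here’s a stylized sketch — every dollar figure below is hypothetical, chosen only to illustrate the mechanism, not taken from Ganong and Shoag’s paper:

```python
# Stylized wages-net-of-housing comparison. All dollar figures are
# hypothetical illustrations, not numbers from the paper.

def net_of_housing(wage, housing):
    """Annual income left over after housing costs."""
    return wage - housing

# Skilled worker: the New York wage premium more than covers
# New York housing costs.
lawyer_ny = net_of_housing(180_000, 45_000)     # 135,000
lawyer_south = net_of_housing(110_000, 12_000)  #  98,000

# Unskilled worker: the New York wage premium is swallowed
# (and then some) by housing costs.
janitor_ny = net_of_housing(38_000, 30_000)     #   8,000
janitor_south = net_of_housing(28_000, 10_000)  #  18,000

print(lawyer_ny > lawyer_south)    # True: moving still pays for lawyers
print(janitor_ny > janitor_south)  # False: moving now costs janitors
```

The sign flip in that second comparison is the whole story: once housing eats the wage premium, the migration flow that drove convergence reverses.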
The result is that less skilled workers now tend to eschew the highest-wage, highest-cost locations — creating a powerful counterpressure to other forces that would otherwise drive regional income convergence.
This and other lines of recent research tend to indicate that the gains to increasing the housing supply (whether through zoning changes to allow more market-rate housing or through the direct construction of social housing) would produce large economic benefits. Regional inequality would be reduced, as the pattern of state-level income convergence restarted. Ganong and Shoag also believe that about 8 percent of the increase in individual-level inequality can be explained through this mechanism. Meanwhile, overall GDP would be about 9.5 percent higher, and the structural increase in the capital share of national income would be greatly reduced.
In short, with more elastic housing supply, the United States would be richer on average, and the gains would be disproportionately concentrated among poorer people and poorer states.
But there is a paradoxical aspect to this. The housing fix for regional inequality entails more rather than less concentration of economic activity in rich coastal metro areas. The mechanism is that with a greater supply of housing, the working-class share of the population of these metro areas would grow disproportionately — dragging per capita incomes down while pulling them up in poorer places. Sunbelt and Rust Belt cities would be richer but smaller, while coastal ones would be bigger.
This would leave almost everyone better off, but it’s not exactly the political solution to the problem of regional inequality that elected officials are looking for. To get that job done, politicians may need to look at more direct solutions like moving white-collar government work to cities that have suffered population decline or creating new universities in declining areas.
White supremacy is an American cultural value — not a fringe ideology.
After an image of Peter Cvjetanovic’s rage-filled face, illuminated by a tiki torch, was snapped at last Friday’s white nationalist march in Virginia and subsequently spread across the internet, Cvjetanovic talked to his local news station to defend himself and the cause he traveled to Charlottesville to support: the preservation of a public statue of Confederate Gen. Robert E. Lee.
“I came to this march for the message that white European culture has a right to be here just like every other culture,” the 20-year-old from Reno, Nevada, told Channel 2 News. “I do believe that the replacement of the statue will be the slow replacement of white heritage within the United States and the people who fought and defended and built their homeland.”
In July, a Ku Klux Klan member told the Washington Post’s Joe Helm much the same thing about the city’s plans to remove the Lee memorial. “The liberals are taking away our heritage,” said James Moore from North Carolina. “By taking these monuments away, that’s what they’re working on. They’re trying to erase the white culture right out of the history books.”
President Trump agrees. In a Tuesday press conference, he defended the Charlottesville protesters (there were “innocent” and “very fine people” marching alongside neo-Nazis and KKK, he insisted), demonized the anti-racist counterprotesters (who he argued shared equal blame for the violence), and used the same barely coded language KKK members use to defend their cause.
“You’re changing culture,” he said, addressing those who want to get rid of public Confederate monuments like the Lee statue. (He reiterated this point in a later tweet, writing, “Sad to see the history and culture of our great country being ripped apart with the removal of our beautiful statues and monuments.”)
The response to the protests by many across the political spectrum has been to deny the basic legitimacy of this view. Sen. Mark Warner tweeted in reaction to the protests that “hate has no place in Va.” In a written statement, Warner’s fellow Virginian Sen. Tim Kaine promised that “this is not who we are.” This notion was emblazoned across counterprotesters’ signs. Former Deputy Attorney General Sally Yates and Republican pundit Ana Navarro shared the general sentiment, along with other public and private figures.
Denying the Americanness of the racists who descended on Charlottesville holds a compelling patriotic power. Being drawn to that message in this dark moment is understandable, as is the compulsion of decent people to distance themselves from the countrymen they don’t recognize and the hate they don’t hold in their own hearts. And it’s very true that this isn’t what America should be. It doesn’t hold up to the promise of our ideals.
But as wrong as white supremacists are about most everything, they’re right about this: White supremacy is our culture — not just theirs, but all of America’s. It lives in our hearts and minds and institutions, and in public parks and highways across the country. Hate has a home here, and it always has. This newly empowered white nationalist movement, and the president’s unabashed alignment with it, shows we’ve never fulfilled the promise of our ideals.
Trump has said many things that were false and offensive about the protests. But in asserting that revering the Confederacy — and the monuments to white supremacy erected in its honor — is an American cultural value, he’s acknowledging a fact about our country that many people horrified by what they saw in Charlottesville can’t seem to.
In his 1963 “Letter from a Birmingham Jail,” Martin Luther King Jr. wrote that “shallow understanding from people of goodwill is more frustrating than absolute misunderstanding from people of ill will.” To denounce this emboldened alt-right’s actions while continuing to deny the direct link between their movement and our country’s worst unresolved sins is the precise sort of shallow understanding King was lamenting. And if so many persist in looking at events like the Charlottesville uprising as an aberration instead of a logical extension of our country’s pervasive and powerful tradition of white supremacy, we never will.
A 2016 Southern Poverty Law Center study found that there are at least 1,500 public spaces in the US honoring Confederates and the Confederacy, primarily in the South. And as the momentum to get rid of these memorials picks up — in Charlottesville and New Orleans and other cities across the South — so does the pushback from powerful people who want to protect them.
In Alabama, for example, Gov. Kay Ivey recently signed a law forbidding local governments from removing Confederate monuments from public property or renaming public schools that have been around for longer than 40 years. And, of course, our highest-ranking elected official just aligned himself with a white nationalist mob in support of this cause.
When you consider how and when these Confederate monuments came to be, the arguments that they’re points of cultural pride become more and more illuminating. In its study, the SPLC noted that most of the Confederate memorials were erected in the first two decades of the 20th century and during the civil rights movement.
These periods coincide with the 50th and 100th anniversaries of the Civil War. But they also overlap with two of the most heinous periods of racial terror in American history: the post-Reconstruction era, when white people moved decisively and violently to disenfranchise black Americans under Jim Crow, and the civil rights era, when white Southerners were desperate to keep that disenfranchisement in place.
More than historical markers commemorating a war, the purpose of these memorials is to celebrate a cultural heritage of white supremacy and affirm its enduring place in America. These are literal monuments to the racist ideology that underpinned slavery and Jim Crow, and they serve as a powerful message, not just to the white people of the South but to the black people of the South as well: Laws may change, but white supremacy remains.
It’s not hard to imagine that in the 1950s and ’60s, when white America’s carefully constructed framework of legal racial supremacy was threatened, there arose a great swell of pride for the Confederate States of America — a symbol of the last true moment that white Americans had absolute legal power over black bodies and put everything on the line to preserve it.
It’s not hard to compare that segregationist backlash to what’s happened over the past couple of years: As our country moved toward becoming majority minority, as our first black president finished his last term, and as more and more marginalized groups asserted their voices and rights, a powerful movement of racist, sexist, anti-Jewish, and anti-Muslim white nationalists emerged. And then their dream candidate became president of the United States. And now, white supremacists descend upon Charlottesville to wreak mayhem and murder over a statue of Robert E. Lee, shocking a nation that wouldn’t be shocked if it were paying attention.
But today, and always, too many would rather turn a blind eye to the depths of the racism that infects American culture than do the uncomfortable work of facing it.
This willful ignorance is one reason why a majority of Americans might be shocked at what happened in Charlottesville but the majority of black Americans probably aren’t. To be a black person in the US is to feel constantly gaslit by our fellow citizens. There are things we know to be true because we experience them daily — we get harassed by police for nothing more than driving down a street; we make less money than our white peers; our lives are so devalued that it causes a national backlash when we argue that they matter; our states and towns still proudly honor the treasonous movement that fought to keep our ancestors enslaved.
And these things conflict directly with what is supposed to be true about our country: In America, all people are created equal and there is the same opportunity for everyone. In America, all lives really do matter. In America, the past has long since been reckoned with and the playing field is now level.
So when we bring up racism and inequality, we’re told we’re overreacting. Or we’re exaggerating. Or these are isolated incidents. Or we’re just being special snowflakes. Or that Confederate nostalgia is about heritage, not hate.
But the events of the past week have shown that you can’t disconnect the hate from Confederate nostalgia. You can’t untangle the nation’s unresolved racial sins from the extremists in Charlottesville. And when our president stands before the press and provides comforting words to those extremists, you can’t say, “This isn’t America.”
I was raised in Virginia, and the idolatry of Gen. Robert E. Lee and the love for the Confederacy that underpinned the weekend’s protests are part of the state’s identity. Drive around the state for a couple of hours, and it’s clear the belief that Confederate history is a point of cultural pride is a mainstream one. In Virginia, symbols of the Confederacy are everywhere. And Lee is a favorite son.
There are Robert E. Lee Elementary Schools and a Lee Highway. Richmond boasts a Robert E. Lee Memorial Bridge. Washington and Lee, a prominent private college where Lee served as president after the war, bears his name. From 1983 to 2000, Virginia had a state holiday called Lee-Jackson-King Day — created when legislators decided it made sense to merge Lee-Jackson Day (a state holiday honoring Lee and his Confederate comrade Stonewall Jackson) with the newly minted federal holiday of Martin Luther King Jr. Day. After a couple of decades, it was determined that this made Virginia look pretty bad, so the two holidays were separated, with Lee-Jackson Day observed the Friday before MLK Day. Most state workers still get both days off.
Observing all of this, a visitor to Virginia who is ignorant of history might be surprised to learn that Lee’s true legacy is one of terror. Of course, there are persistent myths that Lee was anti-slavery and simply a victim of his time who was loyal to his homeland. If you want to read about how none of that is true, the Atlantic’s Adam Serwer has done a thorough myth-busting, detailing, among other things, Lee’s opinion that slavery was a good thing for black people and the cruelty with which Lee treated his slaves.
Serwer wrote, “Lee had beaten or ordered his own slaves to be beaten for the crime of wanting to be free, he fought for the preservation of slavery, his army kidnapped free blacks at gunpoint and made them unfree — but all of this, he insisted, had occurred only because of the great Christian love the South held for blacks.”
Lee wasn’t a good man, but the idea that Americans should honor him for his terrible deeds isn’t just the off-the-wall notion of tiki torch–toting white supremacists. It is a widespread belief, one that is still sanctioned and supported by local and state governments over an entire region of the US and endorsed by the president of the United States.
In an interview with Vox’s Ezra Klein earlier this year, Equal Justice Initiative founder Bryan Stevenson, a native Alabaman, said that the way a country approaches and memorializes its darkest moments says a lot about its values — in Germany, for instance, you won’t find any monuments to Adolf Hitler. On the other hand, he said, “the American South is littered with the iconography of the Confederacy. We are celebrating the architects and defenders of slavery. I don't think we understand what that means for our commitment to equality and fairness and justice.”
Ignoring the horrors that people like Lee — who were traitors to this country besides — wrought upon my ancestors so that white people can feel good about their own isn’t something only “white culture” warriors do. This is a belief system that many of the people who are terrified by what they saw in Charlottesville probably share with the protesters. Putting white people first, it turns out, is pretty damn American: A May 2017 Rasmussen poll found that 69 percent of likely voters oppose tearing down Confederate memorials.
“In this country, we don't talk about slavery,” Stevenson told Klein. “We don't talk about lynching. Worse, we've created the counternarrative that says we have nothing about which we should be ashamed. Our past is romantic and glorious.”
Even in a much more perfect United States, there would be evil people and evil ideas. But until those who hate what they saw in Charlottesville, as well as Trump’s response to it, move beyond their shallow understanding of the racism in America’s present and past, these people and their ideas will have a home in our country. Until they accept that the racist anti-Semites who terrorized Charlottesville for two days didn’t pop up out of nowhere, and are instead deeply connected to a foundation of white supremacy that Americans at large have refused to adequately reckon with — and, indeed, still celebrate in the public square — their movement will further spread and take hold while those in denial about America’s fatal flaws turn their backs. And we’ll remain as far away as ever from the idyllic country too many claim already exists.
Not getting sick and dying from pollution is worth quite a bit, it turns out.
Wind and solar power are subsidized by just about every major country in the world, either directly or indirectly through tax breaks, mandates, and regulations.
The main rationale for these subsidies is that wind and solar produce, to use the economic term of art, “positive externalities” — benefits to society that are not captured in their market price. Specifically, wind and solar power reduce pollution, which reduces sickness, missed work days, and early deaths. Every wind farm or solar field displaces some other form of power generation (usually coal or natural gas) that would have polluted more.
Subsidies for renewables are meant to remedy this market failure, to make the market value of renewables more accurately reflect their total social value.
This raises an obvious question: Are renewable energy subsidies doing the job? That is to say, are they accurately reflecting the size and nature of the positive externalities?
That turns out to be a devilishly difficult question to answer. Quantifying renewable energy’s health and environmental benefits is super, super complicated. Happily, researchers at the Lawrence Berkeley Lab have just produced the most comprehensive attempt to date. It contains all kinds of food for thought, both in its numbers and its uncertainties.
(Quick side note: Just about every country in the world also subsidizes fossil fuels. Globally, fossil fuels receive far more subsidies than renewables, despite the lack of any policy rationale whatsoever for such subsidies. But we’ll put that aside for now.)
The researchers studied the health and environmental benefits of wind and solar in the US between 2007 (when the market was virtually nothing) and 2015 (after years of explosive market growth).
Specifically, they examined how much wind and solar reduced emissions of four main pollutants — sulfur dioxide (SO2), nitrogen oxides (NOx), fine particulate matter (PM2.5), and carbon dioxide (CO2) — over that span of years. The goal was to understand not only the size of the health and environmental benefits, but their geographical distribution and how they have changed over time.
To cut to the chase, let’s review the top-line conclusions:
So, if you add up those central estimates, wind and solar saved Americans around $88 billion in health and environmental costs over eight years. Not bad.
That number is worth reflecting on, but first let’s talk a second about how they came up with it.
Tallying up these benefits is difficult for all sorts of reasons.
First, you have to figure out which sources are displaced, when and where, which meant researchers had to build a power system model that covered the country and produced hourly estimates.
Second, you have to figure out just how much of the primary pollutants — SO2, NOx, PM2.5, and CO2 — were avoided by displacing that power generation. To do that, researchers used EPA’s AVoided Emissions and geneRation Tool (AVERT) model. (Don’t ask.)
Third, you have to figure out the avoided impacts, and their value, of the local air pollutants (SO2, NOx, and PM2.5) that were prevented. To do that, researchers used a “suite of air quality, exposure and health impact models” from EPA and elsewhere. (Not all pollutants or impacts were included — impacts from other parts of the power plant lifecycle, like mining, were excluded, for instance. See the paper itself for many more caveats.)
Fourth, you have to figure out the avoided impacts, and their value, of the carbon dioxide emissions that were prevented. To do that, you need to know the “social cost of carbon” (the total quantified benefits of an avoided ton of CO2). Researchers used a wide range of estimates for the SCC.
In all those steps, there are uncertainties and ranges, some having to do with the limitations of models, some having to do with the limitations of our understanding of the impacts of pollution, some having to do with difficult-to-quantify intangibles like the value of a human life.
These uncertainties explain the wide range of estimates involved: premature mortalities range from 3,000 to 12,700; local pollution impacts from $30 to $113 billion; CO2 climate impacts from $5 to $107 billion. (It’s worth saying that there are good reasons to think most SCC estimates are lowballing — certainly $5 billion is ludicrous.)
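To see how the range of social-cost-of-carbon estimates alone can produce such a spread, here's a toy calculation. The avoided-tonnage figure and the SCC values below are invented for illustration; they are not taken from the paper:

```python
# Invented figure: suppose wind and solar avoided 500 million tons of CO2
# over the study period. The wide range of climate-benefit estimates then
# follows directly from the wide range of social-cost-of-carbon (SCC)
# values found in the literature.
avoided_tons_co2 = 500e6

# Illustrative SCC values in dollars per ton (not the study's numbers)
scc_estimates = {"low": 10.0, "central": 40.0, "high": 150.0}

# Climate benefit = avoided tons x dollar value per ton
climate_benefits = {
    name: avoided_tons_co2 * scc for name, scc in scc_estimates.items()
}

for name, dollars in climate_benefits.items():
    print(f"{name}: ${dollars / 1e9:.0f} billion")
```

With these made-up inputs, the same physical quantity of avoided CO2 is worth anywhere from $5 billion to $75 billion, a fifteen-fold spread driven entirely by the choice of SCC.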
These ranges reflect the simple fact that different models weigh things differently, from the physiological impacts of pollution to the value of missed work. This is part of what muddies the politics of environmental regulation: Costs are specific and concentrated; benefits are uncertain and diffuse.
If you dig into the paper, you find that the most interesting data has to do with the variations in benefits across regions and over time.
It’s complex, but in a nutshell, the health and environmental benefits of wind and solar vary depending on what other sources are being displaced, and how much, and when.
For example, fuel shifting (coal to gas) and various pollution regulations have meant that the average pollution of conventional power plants fell over the years of the study. If conventional plants are emitting less, then displacing them avoids less. So on average, early wind and solar displaced more local pollutants per-MWh than later.
It’s slightly different with CO2. The average CO2 emissions of the power sector fell as well, thanks to fuel shifting, but not as fast — not fast enough to offset the explosive growth of wind and solar. So the amount of CO2 displacement per-MWh has remained roughly steady.
Here’s what that looks like graphically — these are the benefits over time. CO2 is on the upper left:
Wind and solar’s positive effects on local pollution have, on a per-MWh basis, fallen over time as other power plants have cleaned up somewhat. But their positive effects on CO2 pollution have remained steady. If it isn’t already, CO2 displacement will soon become wind and solar’s most valuable positive externality.
Wind and solar effects also varied widely by region, because some regions have cleaner power sectors than others. In California, wind and solar are mostly displacing natural gas. In the upper Midwest and mid-Atlantic regions, which rely more heavily on coal, wind and solar have greater impact.
Here’s what that looks like graphically. The first two rows show the marginal benefits of wind and solar by region; the bottom two rows show the total benefits by region:
A few things jump out here.
First, wind power has had enormous air-quality benefits in the upper Midwest and the mid-Atlantic. Yuge!
Second, poor California is building the shit out of renewable energy, but it’s mostly displacing natural gas, which has relatively low levels of local pollution. That means the state is getting relatively little air-quality benefit from wind and solar — its shift to renewables is mostly benefiting the climate (look at that solar spike at the bottom).
Third, the Southeast, one of the regions that benefits most from every marginal addition of wind and solar (thanks to the prevalence of coal power), has built the least. That’s dumb.
Fourth, I hadn’t realized how big solar was in the mid-Atlantic region! It’s producing almost as much air-quality benefit as wind.
In something of a curious coincidence, the central-estimate health and environmental benefits of wind and solar in 2015 — 7.3¢/kWh and 4.0¢/kWh respectively — are “comparable to estimates of total federal and state financial support” for wind and solar.
So for all the various subsidies and tax breaks for wind and solar, we’re getting roughly what we paid for. (If you believe the central estimates. Of course, “central” does not mean “most likely,” so we still don’t really know exactly what we’re getting, but we’ll put that aside too.)
However, while the absolute level of subsidies might match the absolute level of benefits, they do not line up on a granular level. The health and environmental benefits of wind and solar vary widely by time and region, but most policy incentives for wind and solar do not. Federal tax incentives treat all wind and solar projects the same. And when subsidies do vary, as in state-level policy, it’s rarely connected to their varying benefits.
The conclusion the researchers draw from this subsidy mismatch is that “addressing air quality and climate change through policies directly supporting wind and solar is not necessarily the most cost-effective approach.”
That’s true, as far as it goes, though I think there are still plenty of good reasons to support wind and solar. What’s fun, though, is to think about what it might look like if state and federal supports for wind and solar did vary by time and region.
How might that work? Take the same pot of money and, instead of flat, capacity-based subsidies, offer time-varying, per-kWh subsidies based on the pollution intensity of the power being displaced. That would be computationally difficult, but not theoretically impossible. (If you want to really nerd out, the Brattle Group proposed something roughly similar for energy markets in a white paper.)
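Here's a rough sketch of how such a subsidy could be computed, assuming made-up marginal emissions rates, damage values, and a flat SCC; none of these numbers come from the study, and a real implementation would need hourly grid data:

```python
# Hypothetical time/location-varying subsidy: pay each kWh of wind or
# solar in proportion to the damage cost of the generation it displaces.
# All rates and prices below are invented for illustration.

# Marginal CO2 rate of the displaced generation, tons per MWh
MARGINAL_CO2_TONS_PER_MWH = {
    "california": 0.4,     # mostly displacing natural gas
    "upper_midwest": 0.9,  # mostly displacing coal
}

SOCIAL_COST_PER_TON_CO2 = 40.0  # dollars; one of many SCC estimates

# Avoided local-pollution damages (SO2, NOx, PM2.5), dollars per MWh
LOCAL_DAMAGES_PER_MWH = {"california": 2.0, "upper_midwest": 25.0}

def subsidy_cents_per_kwh(region: str) -> float:
    """Subsidy (cents/kWh) set equal to avoided damages per kWh."""
    co2_damage = MARGINAL_CO2_TONS_PER_MWH[region] * SOCIAL_COST_PER_TON_CO2
    total_per_mwh = co2_damage + LOCAL_DAMAGES_PER_MWH[region]
    return total_per_mwh / 1000 * 100  # $/MWh -> cents/kWh

for region in ("california", "upper_midwest"):
    print(region, round(subsidy_cents_per_kwh(region), 2))
```

Under these invented inputs, a kWh of wind in the coal-heavy region earns more than three times the subsidy of a kWh in the gas-heavy one, which is exactly the signal a flat capacity-based subsidy fails to send.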
Time- and location-sensitive subsidies would attract wind and solar investment to the regions where it will do the most air-quality and atmospheric good, increasing their impact. And as a bonus, those regions often overlap with regions badly in need of blue collar jobs and regions where the fight against climate change could use a political boost, so it could increase their sociopolitical impact as well.
California wouldn’t like that much. But the upper Midwest sure would!
In this case, as in all such cases, it is somewhat misleading to simply compare total subsidies with total health and environmental benefits. The total amounts are not all that matters. It also matters how costs and benefits are distributed — i.e., equity matters as well.
To put it bluntly: A dollar in federal taxes is not equivalent to a dollar of avoided health and environmental costs. The latter dollar is worth more than the former dollar.
Why is that? Simple: Federal taxes come disproportionately from the wealthy, via our progressive federal income tax, but health and environmental benefits disproportionately help the poor. And as any good economist will tell you, the same dollar is worth more to a poor person than it is to a rich person.
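The diminishing-marginal-utility point can be made concrete with a toy calculation, using log utility (a standard economist's simplification) and invented incomes:

```python
def marginal_utility(income: float) -> float:
    """Marginal value of one more dollar under log utility:
    u(w) = ln(w), so u'(w) = 1/w."""
    return 1.0 / income

# Invented incomes for illustration
poor_income = 20_000.0
rich_income = 200_000.0

# Under log utility, a marginal dollar is worth more to whoever has
# less: the ratio of marginal utilities equals the income ratio.
ratio = marginal_utility(poor_income) / marginal_utility(rich_income)
print(ratio)
```

With these numbers, a dollar of benefit landing on the lower-income household is worth ten times as much, in welfare terms, as the same dollar taxed from the higher-income one.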
This is something that often gets lost in discussions of environmental regulations. It’s not just that their total benefits almost always exceed their direct costs. It’s that those benefits are uniquely egalitarian and progressive.
In the case of climate change, any reduction in CO2 emissions benefits everyone on Earth (egalitarian), while disproportionately helping the poor, who suffer earliest and most from climate impacts (progressive).
In the case of local air-quality benefits, cleaner air benefits everyone in the region who breathes (egalitarian), while disproportionately helping the poor, who are more likely to live in close proximity to fossil fuel power plants (progressive).
In terms of equity, converting a dollar of wealthy people’s money into a dollar of health for low-income communities seems like a good deal to me. And if you can get multiple dollars of low-income health benefit for every dollar of high-income taxes, well, that’s a no brainer.
Everybody breathes. Any dollar of federal income taxes used to produce a dollar of air and climate benefits is a net gain for justice.
We need to talk about homegrown extremism too.
There were many horrible sights and sounds at the Charlottesville, Virginia, protests over the weekend, from Nazi and Ku Klux Klan iconography to chants of “Jews will not replace us” and the Nazi slogan “blood and soil.” But perhaps the most horrifying of all came after the protests were technically over, when a 20-year-old Nazi sympathizer sped his car into a crowd of counterprotesters. In just a few seconds, he killed a woman and injured at least 19 others.
For many Americans, the realization came as a shock. This wasn’t supposed to happen in 2017. But it’s true: America has a white supremacist problem.
It’s not just Charlottesville. In another horrific terrorist attack by another young white supremacist, Dylann Roof shot and killed nine people in a predominantly black church in Charleston, South Carolina, in 2015. Generally, more terrorist attacks in the US are perpetrated by right-wing extremists than by Islamists, according to data from Reveal (although, overall, there are still very few terrorist acts in the US).
It raises the question: What is leading these people to such extremes?
I turned to experts on radicalization and terrorism for answers. One thing that surprised me: They consistently said that the processes of radicalization are similar across ideologies, whether the person is a jihadist, a white supremacist, or an adherent of some other extremist cause.
“The processes are pretty much the same,” Mary Beth Altier, an expert on radicalization at New York University, told me. “There aren’t really distinctions between joining a group like the KKK and ISIS.”
Now, there are differences between having radical beliefs, joining a radical group, and actually committing violence or terrorism. Someone may hold white supremacist beliefs but may not join the KKK or any other racist group. And someone may be part of a racist group but never engage in racist violence.
In fact, one tough lesson is that there’s no one archetype for extremists. A 2014 study from researchers Paul Gill, John Horgan, and Paige Deckert looked at 119 lone-actor terrorists, and concluded that “[t]here was no uniform profile of lone-actor terrorists.” Basically every major demographic factor except gender (most were male), from age to educational level to marital status, varied widely.
Still, experts have identified some common contributors to radicalization. Knowing these is crucial to potentially preventing tragic attacks like those in Charlottesville and Charleston: By understanding what puts someone on the path to radicalization, that path can be cut short before it becomes a potentially violent threat.
One thing experts emphasize: There is no single pathway to radicalization, and there are many contributors to radicalization. But generally, radicalization takes root when someone has some sort of problem — whether about his own life, society at large, or something else entirely — and a radical ideology or group provides an answer to that problem. He may seek out that radical ideology himself, or a group will come to him.
J.M. Berger, an expert on terrorism and author of Jihad Joe: Americans Who Go to War in the Name of Islam, explained in a talk that sources of grievances can be broadly broken into two categories: personal issues and social issues.
For personal, he cited economic insecurity, loss of a loved one, exposure to violence, relocation, religious conversion, and some kinds of mental illness.
For social, he cited war and insurgency, rapidly changing demographics, swift changes in civil society or civil rights, watershed changes in communication technology, efforts to foment uncertainty by state actors, and economic upheaval.
Many of these issues are present for some white Americans. Manufacturing jobs have been shipped overseas. The opioid epidemic is killing family and friends, and addiction is on the rise. Meanwhile, demographic statistics show that white people will no longer be the majority in the US in a few decades. Facebook, Twitter, Reddit, and other social media tools are giving white Americans an outlet to voice these concerns. President Donald Trump is giving voice to many people’s uncertainty with his own racist rhetoric.
Arlie Hochschild, a sociologist and author of Strangers in Their Own Land, provided an apt analogy for how many white Americans feel: As they see it, they’re all in a line toward a hill with prosperity at the top. But over the past few years, globalization and income stagnation have caused the line to stop moving. And from their perspective, other groups — black and brown Americans, women — are now cutting in the line, because they’re getting new (and more equal) opportunities through new anti-discrimination laws and policies like affirmative action.
One of the things that makes this so complicated is that many of these factors can be present in someone’s life and still not lead to radicalization. And when they do contribute to radicalization, it’s often not just one but multiple factors playing a role simultaneously, often alongside unique, individualized issues.
So in the case of white Americans, many of them share the concerns noted by Hochschild. But only a very small number of white Americans will become radicalized and, in even rarer situations, commit a violent act as a result.
As Mia Bloom, an expert on radicalization at Georgia State University, told me, “There is no simple explanation.” There are just some broad, common contributors.
A common thread among people who are radicalized is a lack of purpose in life, which radical views — especially if a person acts on them — can help fill. “People tend to be seeking some meaning in their lives,” Bloom said. “They want to be part of something bigger than themselves.”
Peter Bergen, an expert on radicalization at New America, put this bluntly: They’re people we would often consider losers. “If you look at the attackers in this country, that is not a bad description,” Bergen said. “They are often people whose lives aren’t going well.”
He pointed to Omar Mateen, who killed 49 people in a mass shooting at a gay nightclub in Orlando, Florida: “He was going nowhere in life. He was working as a security guard at a golf retirement resort. He had dreams of being a cop; he tried to get in a police academy, and failed. By his first and second wives’ accounts, he had abused both of them. By suddenly quote-unquote becoming a soldier of ISIS, even though he had nothing to do with ISIS, he became the heroic figure that he believed himself to be.”
This applies to white supremacist terrorists as well. James Fields, the man accused of killing a woman in Charlottesville with his car, reportedly had trouble making friends, left the military after only four months of service (“due to a failure to meet training standards”), and until a few months ago lived with his mom. It’s hard to say for sure, but those issues may have contributed to his radicalization.
In other cases, it might not be personal grievances but rather political ones that lead to radicalization. For example, a white man may have concerns about immigration and, specifically, white Americans losing majority status in the coming decades. He also might feel like he can’t bring up these issues in public discourse without being quickly dismissed as racist. So as he sees these issues go unaddressed or become worse, he might dig deeper for any answer — and that might, over time, lead him to extremism.
“Rather than personal meaning, someone might deeply feel the political grievances that are being articulated and are drawn into the movement through that articulation,” Daveed Gartenstein-Ross, a counterterrorism scholar at the Foundation for Defense of Democracies, told me.
Groups take advantage of any of these types of issues to try to recruit people — often with downright devious tactics.
Altier gave an example she saw in her research on white supremacists: “I interviewed one fellow. He said they would go into schools and they would put things — racist fliers — in black children’s lockers. The black kids would think it was certain white kids doing it. Although the white kids weren’t actually putting the fliers in, [the black kids] would beat them up. Then the white supremacists would come in and protect them.”
Once they lock someone in, these groups can then foster radicalization. “You may start interacting with a group before you’re radicalized,” Altier explained. “And then because you’re hanging out with those people, you might become radicalized. … Once you’re actually in the group, you’re constantly subjected to the ideology. It’s reinforced by the people you’re around. And you may cut yourself off from other people, so it becomes this self-reinforcing mechanism.”
This is why various extremist groups, from ISIS to white supremacists, look for people who are socially isolated and lack purpose in life. Once these groups reel people in by giving them some sense of purpose, they can then begin to radicalize them.
It’s also one reason, Altier argued, that some people eventually get out of these groups after realizing they don’t really believe in what they’re doing, particularly in cases that involve a mental illness or some other issue that an extremist group took advantage of. Of course, there are plenty of true, hardcore believers in such causes as well, but that’s not always the case.
This only covers some of the contributors to self-radicalization and how groups can take advantage of people to radicalize them. There is still a lot of debate about what leads to radicalization, which common factors are more important, and so on. But the examples above reflect something of a rough consensus among the experts I spoke to, showing the many kinds of issues that can lead someone to extremist views and acts.
If radicalization is a result of messaging that extremists deploy to attract people with specific grievances, then one way to prevent radicalization may be to develop countermessaging that addresses those grievances in a way that avoids radicalization.
In the context of white supremacists, part of addressing this may mean expanding the Overton window — meaning what’s acceptable to talk about in public discourse. “The more we put things off limits, the more we empower bad actors who will talk about things other people aren’t willing to,” Gartenstein-Ross said.
For instance, right now it’s difficult for a white man to bring up concerns about changing racial demographics without getting labeled as racist. But maybe his concerns don’t have anything to do with race. He may be concerned that as the group he belongs to loses status, he will as well — economically, socially, and so on. A good response to this could point out that, for example, New York City is very diverse and still people, including white men, lead prosperous lives (and it has a below-average crime rate, contrary to what some dog whistles may suggest).
But if that person never has that kind of discussion because he’s dismissed as a racist, his concerns about changing demographics won’t go away. So he might search for answers outside the mainstream, and that might lead him to an extremist group. That is especially true if he experiences what sociologists call “white fragility”: When white people are asked to answer for potential racism, some become defensive — pushing them into denial that they’ve done anything wrong and, in some cases, hardening their racist attitudes. (Much more on that in a previous piece I wrote about this research.)
This doesn’t mean people can’t call out racism when it’s in front of them. But it does suggest that public and political discourse about race may need to better address the underlying concerns that lead people to racism while also making it clear that racism is unacceptable. It may not be a comfortable conversation, but it’s potentially necessary.
Gartenstein-Ross pointed to President Donald Trump’s rise as a less extreme example of this. For much of the 2016 election, Trump was considered a long-shot candidate — someone who held far too many extreme, unconventional views to become president. But it may be those same extreme, unconventional views that made him successful; by reflecting the concerns some people have about immigrants and Muslims, he appealed in a way other candidates did not. And the underlying concerns behind those racist views weren’t addressed by Trump’s opponents; instead, they often just dismissed Trump as crude, racist, or insane.
Another example of countermessaging several experts pointed to: Life After Hate. This organization, largely made up of former members of the far-right movement, directly intervenes with radicalized individuals to help them leave extremist organizations and lifestyles.
The organization explains: “Through personal experience and highly unique skill sets, we have developed a sophisticated understanding about what draws individuals to extremist groups and, equally important, why they leave. Compassion is the opposite of judgment and we understand the roles compassion and empathy play in healing individuals and communities.”
Countermessaging can also involve robbing extremist messages of a platform. For example, Twitter can ban people with explicitly racist views, or those who are trolling people of color and getting others to harass them. That makes it much harder, if not impossible, for someone to use that platform to get his message out.
“To me, the issue is not radicalization; the issue is violence,” Bergen said. “There are a lot of people with radical ideas — very stupid ideas — all over the country. But very few of them are going to commit acts of violence.”
To this end, there was one issue that several experts raised: People need to be vigilant in their communities and even families, watching for signs of radicalism and potential violence. And they’ll need to give that information to authorities when necessary.
Consider this chilling statistic, from the 2014 study by Gill, Horgan, and Deckert: “In 64% of cases, family and friends were aware of the individual’s intent to engage in a terrorism-related activity because the offender verbally told them.” (Although right-wing offenders were, compared to others, “less likely to … make verbal statements to friends and family about their intent or beliefs.”)
“These findings suggest that friends, family, and coworkers can play important roles in efforts that seek to prevent or disrupt lone-actor terrorist plots,” the researchers concluded. “In many cases, those aware of the individual’s intent to engage in violence did not report this information to the relevant authorities.”
Or consider the warning signs before the attack in Charlottesville. Based on reports, we now know that Fields, the alleged perpetrator, had a history of violence — leading his own mother to call 911 twice. In one situation, he struck her in the head and put his hands over her mouth. In another, he brandished a 12-inch knife. He also showed a fondness for Adolf Hitler, and apparently was fairly vocal about his racist views. And he told his mom that he would go to a rally for the “alt-right” (an umbrella term for white nationalists) in Charlottesville.
If authorities, family, and peers took the warning signs and threats more seriously in these kinds of cases, the attack could have — although it’s hard to say for certain — been prevented.
Experts readily acknowledge this will be difficult. People don’t, after all, want to report their children, family, or friends to the police. But in some cases, it’s truly necessary.
Part of this involves taking right-wing terrorism more seriously. “It’s a very serious threat, and one that’s often underplayed,” Altier said. “There’s an inherent bias in how we frame Islamist versus far right-wing, white supremacist terrorism.” She pointed to the Charlottesville car attack as an example: “The guy ran people over with a car. If a Muslim deliberately ran people over with a car, immediately it would be a terrorist attack.”
By downplaying far-right violence in this way, society diminishes the sense of urgency that people might otherwise have when they hear of their relatives hinting at acting out violently. And that makes it harder to actually prevent these attacks.
Some experts also suggested addressing root causes of radicalization and terrorism, particularly the socioeconomic, mental health, and other issues that turn into the kinds of grievances that can lead to such extremism. But that gets into fairly weedy discussion about how, exactly, you do that: Do you create a stronger social safety net? More jobs? A better mental health care system? What, exactly, is the best prioritization of resources here?
Other experts are skeptical that solutions to root causes would have much effect. There are, after all, people with jobs and families who end up committing terrorist attacks — just look at the perpetrators of the 2015 San Bernardino shooting, who were married and had a child; the husband also held a job. And there are plenty of white supremacists who are not mentally ill, received a good education, and maintain solid jobs — many of them even showed up in Charlottesville. (As my colleague Dylan Matthews explained, there is no good evidence that more jobs can combat racism.)
As with many policy issues, then, adequately confronting radicalization will require a variety of ideas and solutions. Even the best policy approaches won’t stop every attack. But they could, at the very least, help make events like Charlottesville less likely.
An unapologetic ode to gated reverb drums.
There are a handful of clearly recognizable sounds in music that are always pinned to a genre and decade. The surf guitar pioneered by Dick Dale, the wall of sound of Phil Spector, the bass slap of Larry Graham, the boom bap of the golden age of hip hop. These classic sounds are revered, and some of them miraculously transcended the decade in which they were first developed.
But there’s one sound that will always be timestamped to the 1980s and people just love to hate it. It’s called gated reverb.
Over the past few years, a general nostalgia for the ’80s has infiltrated music, film, and television. In pop music, producers have enthusiastically applied gated reverb to drums to create that punchy percussive sound — used by every artist from Phil Collins to Prince — to pay homage to their favorite artists of the 1980s.
I unapologetically love gated reverb, and so for my second episode of Vox Pop’s Earworm I spoke with two Berklee College of Music professors, Susan Rogers and Prince Charles Alexander, to figure out just how that sound came to be, what makes it so damn punchy, and why it’s back.
The video above tells the story of gated reverb and the playlist below — a curated mix of gated reverb drenched songs from the 1980s and today — is just a little Friday gift from me to you.
Subscribe to our YouTube Channel to see all Vox videos and to be notified when the next six episodes of Earworm go live!
Mano Singham is unenthused about the eclipse. Same here. It’s neat, would be an interesting phenomenon to observe, but I’m not going to travel out of my way to witness a few minutes of darkness. I also wouldn’t be seeing it as a scientist, but as a tourist, nothing more.
Fortunately, xkcd seems to share our views.
So if you’re going to make an effort to see the eclipse, have fun! Take pictures!
If you’re not going to see the eclipse, have fun! Enjoy a nice August day!
That’s the myth, anyway. Repressive governments will stir up a rising of protest music and street art and great poetry and literature so we’ll at least have that. But wouldn’t you know it, Donald John Trump is fucking that up, too.
I am surprised to learn that there is a Nazi death metal scene in Minneapolis. It’s small but growing, led in his spare time by a local patent lawyer named Aaron Wayne Davis, through a website called Behold Barbarity Records and Distro.
The site sold a customary catalog headlined by name bands like Slayer and King Diamond. But closer inspection reveals an exhaustive selection of more obscure titles, with album covers sprinkled with permutations of neo-Nazi symbols like swastikas and iron crosses.
Take Deathkey, whose 2010 album is called Behead the Semite. Then there’s Aryanwulf, whose songs include “Kill the Jews” and “At the Dawn of a New Aryan Empire.” There’s also the Raunchous Brothers, whose rhyming poetics include such passages as, “You’re of no use to me, you disgraceful fucking dyke, so I’ll shove you in the oven like the glorious Third Reich.”
There are plenty of similar lyrics quoted at the link, so I’ll spare you. I did learn how to write a Nazi death metal song, at least. It isn’t hard.
He describes the power of hate music: Drop the slogan “White people awake, save our great race” a couple times in a chorus, then quadruple it per song, and you have listeners nodding along to it with every step and stumble of their day.
OK, that’s the dark side. There has to be a light side to oppose it, right? There must be some musical genre that has arisen to oppose Nazi death metal. It’s like some core principle of the universe. And out of the darkness rises a gleaming bright beam of beauty and light.
It’s Insane Clown Posse and the Juggalos. On 16 September, there will simultaneously be two rallies on the mall in DC, one of white nationalists, another of juggalos. They are expected to clash.
Save for this one issue [the FBI has labeled Juggalos a “loosely organized hybrid gang”], ICP is not an explicitly political band, and there are some pro-Trump Juggalos. But the overlap between the Juggalo March and rabid Trumpies is likely to be minimal. Juggalos view their community as a loving family that accepts everyone just as they are, which is the opposite of what Nazi pricks—or, as they prefer to be known, “white nationalists”—advocate. And, in the unlikely venue of a Time magazine editorial on last year’s wave of creepy clown sightings, ICP’s Violent J had this to say about the clowns in Washington:
These clowns threaten the very fabric on which our nation was supposedly founded upon—and for some f—ing crazy-a– reason, they’re getting away with it. From keystone-cop clowns shooting unarmed citizens, to racist clowns burning down Islamic centers or clowns in the NSA spying on us through our cell phones and laptops, America has turned into something far more terrifying than Insane Clown Posse’s Dark Carnival.
So perhaps it shouldn’t be too much of a surprise that radical leftist Juggalos are mobilizing online in opposition to the Trump supporters who are giving clowns a bad name.
I guess I should have expected this, given the nature of their fanbase. But gosh, this could be interesting, come September. The Nazis ought to be worried; they’ll have sticks, but Juggalos have hatchets. Of course, the ICP can be neutralized if the Nazis think to deploy magnets.
The American Civil Liberties Union (ACLU) took a new stance on firearms Thursday, announcing a change in policy that it would not represent hate groups who demonstrate with firearms.
ACLU Executive Director Anthony Romero told the Wall Street Journal that the group would have stricter screenings and take legal requests from white supremacist groups on a case-by-case basis.
“The events of Charlottesville require any judge, any police chief and any legal group to look at the facts of any white-supremacy protests with a much finer comb,” Romero told the Journal. “If a protest group insists, ‘No, we want to be able to carry loaded firearms,’ well, we don’t have to represent them. They can find someone else.”
I’m sure Jay Sekulow’s law firm – The American Center for Initials That Look a Bit Like the ACLU – would love to help them out.
While I agree with the ACLU’s decision, I can’t see how they’ll implement such a policy. If a bunch of thugs are willing to commit acts of violence to get their way, they’re not going to balk at lying to the ACLU. And in open carry states they can claim the guys who showed up toting heavy weaponry aren’t with them. But perhaps the existence of a gun-unfriendly policy will be enough to repel them on its own.