The latest (somewhat random) collection of essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.
The US is now betraying the Kurds for the eighth time
Jon Schwarz, The Intercept, 7 October 2019
The U.S. has now betrayed the Kurds a minimum of eight times over the past 100 years. The reasons for this are straightforward.
The Kurds are an ethnic group of about 40 million people centered at the intersection of Turkey, Syria, Iran, and Iraq. Many naturally want their own state. The four countries in which they live naturally do not want that to happen.
On the one hand, the Kurds are a perfect tool for U.S. foreign policy. We can arm the Kurds in whichever of these countries is currently our enemy, whether to make trouble for that country’s government or to accomplish various other objectives. On the other hand, we don’t want the Kurds we’re utilizing to ever get too powerful. If that happened, the other Kurds — i.e., the ones living just across the border in whichever of these countries are currently our allies — might get ideas about freedom and independence.
Here’s how that dynamic has played out, over and over and over again, since World War I.
1 — Like many other nationalisms, Kurdish nationalism blossomed during the late 1800s. At this point, all of the Kurdish homeland was ruled by the sprawling Ottoman Empire, centered in present-day Turkey. But the Ottoman Empire collapsed after fighting on the losing side of World War I. This, the Kurds understandably believed, was their moment.
The 1920 Treaty of Sèvres completely dismembered the Ottoman Empire, including most of what’s now Turkey, and allocated a section for a possible Kurdistan. But the Turks fought back, making enough trouble that the U.S. supported a new treaty in 1923, the Treaty of Lausanne. The Treaty of Lausanne allowed the British and French to carve off present-day Iraq and Syria, respectively, for themselves. But it made no provision for the Kurds.
This was America’s first, and smallest, betrayal of the Kurds. At this point, the main Kurdish betrayals were handled by the British, who crushed the short-lived Kingdom of Kurdistan in Iraq during the early 1920s. A few years later, the British were happy to see the establishment of a Kurdish ‘Republic of Ararat,’ because it was on Turkish territory. But it turned out that the Turks were more important to the British than the Kurds, so the United Kingdom eventually let Turkey go ahead and extinguish the new country.
This was the kind of thing that gave the British Empire the nickname ‘perfidious Albion.’ Now America has taken up the perfidious mantle.
2 — After World War II, the U.S. gradually assumed the British role as main colonial power in the Mideast. We armed Iraqi Kurds during the rule of Abdel Karim Kassem, who governed Iraq from 1958 to 1963, because Kassem was failing to follow orders.
We then supported a 1963 military coup — which included a small supporting role by a young Saddam Hussein — that removed Kassem from power. We immediately cut off our aid to the Kurds and, in fact, provided the new Iraqi government with napalm to use against them.
Read the full article in The Intercept.
The UN is leaving migrants to die in Libya
Sally Hayden, Foreign Policy, 10 October 2019
Tens of thousands of refugees and migrants have been locked up indefinitely in Libyan detention centers over the past two and a half years, after they were intercepted by the Libyan coast guard trying to reach Italy across the Mediterranean Sea. Since 2017, the Libyan coast guard has been supported with equipment and training worth tens of millions of dollars by the European Union. This money comes from the Trust Fund for Africa – a multibillion-dollar fund created at the height of the so-called migration crisis, with the aim of preventing migration to Europe by increasing border controls and funding projects in 26 African countries.
The EU’s deal with Libya – a country without a stable government where conflict is raging – has been repeatedly condemned by human rights organizations. They say the EU is supporting the coast guard with the aim of circumventing the international law principle of non-refoulement, which would prohibit European ships from returning asylum-seekers and refugees to a country where they could face persecution.
Inside Libya’s detention centers, thousands of refugees and migrants are deprived of food, sunlight, and water, and many become victims of sexual exploitation and assault, forced labor, and even torture or slaying.
In January, dozens of migrants and refugees were sold directly to human traffickers from the Souq al-Khamis detention center in Khoms, soon after they were delivered there by the Libyan coast guard…
In extensive interviews with Foreign Policy, seven aid officials who currently work in Libya or have worked there in the last two years accuse U.N. agencies of ignoring or downplaying systemic abuse and exploitation in migrant detention centers in order to safeguard tens of millions of dollars of funding from the EU. (Since 2016, an EU spokesperson said, nearly 88 million euros – $96 million – from the Trust Fund for Africa has gone to IOM in Libya, and 47 million euros – $52 million – to UNHCR.)
They say the EU, in turn, is using U.N. agencies to sanitize a brutal system of abuse that its policies are funneling tens of thousands of vulnerable people directly into.
All of these officials wished to stay anonymous for fear of professional repercussions. They all said that while UNHCR and IOM do some important work, they are actively involved in whitewashing the devastating and horrific impacts of hardening European Union policy aimed at keeping refugees and migrants out of Europe. ‘They are constantly watering down the problems that are happening in the detention centers,’ said one aid official. ‘They are encouraging the situation to continue. … They are paid by the EU to do [the EU’s] fucking job.’
Read the full article in Foreign Policy.
The poverty of poor economics
Grieve Chelwa & Seán Muller, Africa is a Country, 17 October 2019
Even though other Nobel prize awards often attract public controversy (peace and literature come to mind), the economics prize has largely flown under the radar, with prize announcements often met with the same shrugging of the shoulders as, for example, the chemistry prize. This year, however, has been different (as was the year that Milton Friedman, that high priest of neoliberalism, won).
A broad section of commentary, particularly from the Global South, has puzzled over the Committee’s decision to not only reward an approach that many consider as suffering from serious ethical and methodological problems, but also extol its virtues and supposed benefits for poor people.
Many of the trio’s RCTs have been performed on black and brown people in poor parts of the world. And here, serious ethical and moral questions have been raised, particularly about the types of experiments that the randomistas, as they are colloquially known, have been allowed to perform. In one study in western Kenya, which is one half of the epicenter of this kind of experimentation, randomistas deliberately gave some villages more money and others less to check whether villages receiving less would become envious of those receiving more. The study’s authors, without any sense of shame, titled their paper ‘Is Your Gain My Pain?’ In another study in India, the other half of the epicenter, researchers installed intrusive cameras in classrooms to police teacher attendance (this study was actually favorably mentioned by the Swedish Academy). There are some superficial rationalizations for this sort of thing, but studies of this kind – and there are many – would never have seen the light of day had the experimental subjects been rich Westerners.
There are also concerns around the extractive nature of the RCT enterprise. To execute these interventions, randomistas rely on massive teams of local assistants (local academics, students, community workers, etcetera) who often make non-trivial contributions to the projects. Similarly, those to be studied (the poor villagers) lend their incalculable emotional labor to these projects (it is often unclear whether they have been adequately consulted or if the randomistas have simply struck deals with local officials). The villagers are the ones that have to deal with all the community-level disruptions that the randomistas introduce and then leave behind once they’ve gone back to their cushy lives in the US and Europe.
Read the full article in Africa is a Country.
For political discourse to survive, we must be more honest about language
Sam Leith, Spectator, 5 October 2019
When I was an English literature undergraduate, we were all very careful to avoid what used to be called the ‘intentional fallacy’. This is the idea that you can use a text to get at what the author ‘really meant’. The so-called New Critics said, quite reasonably, that the text is all you’ve got to go on and, what’s more, it’s impertinent and irrelevant for a critic to start trying to figure out, say, whether Shakespeare is a racist from the evidence in ‘My mistress’ eyes are nothing like the sun’.
This is a useful principle in academic literary criticism (or one sort of academic literary criticism; that’s an argument for another day). But it seems to be trickling out into a place where it is less useful — public life.
An example: the black crime writer Walter Mosley recently quit the writers’ room on Star Trek: Discovery because a fellow writer complained about his use of language. As Mosley reported in a piece for the New York Times, he got a call from human resources. ‘A pleasant-sounding young man said, “Mr Mosley, it has been reported that you used the N-word in the writers’ room.” I replied, “I am the N-word in the writers’ room.”’
That complainant was acting like a true New Critic. He looked at the utterance. He ignored the personal and historical context. He ignored the fact that Mosley was indeed the N-word in the writers’ room, and that he’d been using the word in a piece of reported speech. (Mosley: ‘I had indeed said the word in the room. I hadn’t called anyone it. I just told a story about a cop who explained to me, on the streets of Los Angeles, that he stopped all niggers in paddy neighbourhoods and all paddies in nigger neighbourhoods, because they were usually up to no good. I was telling a true story as I remembered it.’) And he called HR.
Another example — much gone over to this day — is Boris Johnson’s notorious column in which he talked of ‘grinning piccaninnies’ and ‘watermelon smiles’. This is treated by opponents as prima facie evidence of his racism. But the context in which those phrases occurred was one in which Johnson was spoofing a racist-paternalist neocolonial attitude — imagining Tony Blair as a ‘big white chief’ patronisingly gratified by the traditional trappings of imperialism.
From these decontextualised fragments — interpreted by militant amateur critics as if they are pure textual objects rather than part of a conversation — inferences about intention are made. Not just about intention but, in our ferociously identitarian age, about personal essence. This evidence doesn’t just tell us that a person used a racist word or a term associated with racist discourse. This evidence tells us that Boris Johnson is a racist. This evidence tells us that Walter frickin’ Mosley is a racist.
Read the full article in the Spectator.
Fascinated to presume: In defense of fiction
Zadie Smith, New York Review of Books, 24 October 2019
In place of the potential hubris of containment, then, Dickinson offers us something else: the fascination of presumption. This presumption does not assume it is ‘correct,’ any more than I assumed, when I depicted the lives of a diverse collection of people in my first novel, that I was ‘correct.’ But I was fascinated to presume that some of the feelings of these imaginary people – feelings of loss of homeland, the anxiety of assimilation, battles with faith and its opposite – had some passing relation to feelings I have had or could imagine. That our griefs were not entirely unrelated. The joy of writing that book – and the risk of it – was in the uncertainty. I’d never been to war, Bangladesh, or early-twentieth-century Jamaica. I was not, myself, an immigrant. Could I make the reader believe in the imaginary people I placed in these fictional situations? Maybe, maybe not. Depends on the reader. ‘I don’t believe it,’ the reader is always free to say, when confronted with this emotion or that, one action or another. Novels are machines for falsely generating belief and they succeed or fail on that basis. I know I can read the first sentence of a novel and find my reaction is I don’t believe you. And many a reader must surely have turned from White Teeth in exactly the same spirit.
Yet the belief we’re talking about is not empirical. In the writing of that book, I could not be ‘wrong,’ exactly, but I could be—and often was—totally unconvincing. I could fail to make my reader believe, but with the understanding that the belief for which fiction aims is of a very strange kind when we recall that everything in a novel is, by definition, not true. What, then, do we mean by it? In my capacity as a writing teacher, I’ve noticed, in the classroom, the emergence of a belief that fiction can or should be the product of an absolute form of ‘correctness.’ The student explains that I should believe in her character because this is exactly how X type of person would behave. How does she know? Because, as it happens, she herself is X type of person. Or she knows because she has spent a great deal of time researching X type of person, and this novel is the consequence of her careful research. (Similar arguments can be found in the interviews of professional writers.)
As if fiction could argue itself into a reader’s belief system! As if, armed with our collection of facts about what an X type of person feels, is, and does, always and everywhere, a writer could hope to bypass the intimate judgment of a reader, which happens sentence by sentence, moment by moment. Is it this judgment we fear? It’s so uncertain, so risky. You can’t quantify it – it’s not data. It happens between one reader and one writer. It’s a meeting – or sometimes a clash – of sensibilities, which often takes the form, as Dickinson understood, of griefs compared.
Read the full article in the New York Review of Books.
A synagogue attack shocks Germany. But why?
Anna Sauerbrey, New York Times, 10 October 2019
So why are so many people shocked by the Halle attack? It’s not that Germany’s political class isn’t sensitive to the concerns of the country’s Jewish population. But it is still the case that when it comes to concerns about far-right hate crimes, the nation’s attention was focused elsewhere. Why?
One reason is that the most gruesome crimes that have caught the public attention were directed at immigrants, mostly with a Muslim background, or were crimes connected to the immigration crisis. Mr. Lübcke, the politician murdered in June, was known for his pro-immigration views, and nine of the 10 victims of the N.S.U. were from immigrant families (the 10th victim was a police officer).
But there’s more to it than that. As in other countries, German politics are increasingly polarized, with a sober assessment of the facts overtaken by a quick search for easy places to lay blame.
These days, whenever a crime occurs in Germany, social media heats up quickly, as both right-wing-populists and liberals immediately claim the event as a building block in their respective ideological struggles. The far-right Alternative for Germany party is constantly searching for ‘evidence’ to support its belief that immigration is a threat to German society. Liberals, in contrast, look for evidence to exculpate immigrants and cast blame on the right.
A woman and her son are pushed in front of a train in Frankfurt? What’s the nationality of the perpetrator? A man is shot on his porch in Hessen? Was he pro-immigration? It was no different yesterday, after the news of the assault in Halle broke. ‘Was this Arab or German anti-Semitism?’ seems to have been the first question for many, instead of: ‘What really happened, and how could it have been prevented?’
In this vain struggle, one group often goes missing: Germany’s Jews.
The rise in anti-Semitic violence does not fit easily into either narrative; the social media attention is more muted, and the events tend to pass with little comment.
Read the full article in the New York Times.
Repressed memories are back, baby!
Katie Herzog, The Stranger, 8 October 2019
And yet, not that many years after the repressed memory craze ended, it’s coming back. The terms have been updated: Today, ‘repressed memories’ are called ‘traumatic dissociation,’ but the concept is the same, and a new review of the research published in the journal Perspectives on Psychological Science found that 76 percent of clinical psychologists believe traumatic memories can be blocked for many years and then recalled later on, a rate that has grown since the so-called ‘Memory Wars’ of the 1990s. They also found high rates of belief among law enforcement personnel, judges, and jurors. In fact, laypeople were actually less likely to believe in this concept than psychologists and other clinicians. The group least likely to believe it? Scientists working in the field of memory.
One of the more well-known skeptics of repressed memories is Elizabeth Loftus, a former professor at the University of Washington (her time at UW is a story in itself) and a co-author of this latest study. (Loftus also developed the ‘lost in the mall’ experiment, which demonstrates how easily false memories can be implanted. Basically, if you tell people they were lost in the mall as children, they tend to ‘remember’ it, even if the event never happened.)
Today, Loftus serves as an expert witness in abuse cases, and I spoke to her a few months ago, before this latest paper came out. ‘We’ve pretty much demolished the idea of mass repression as having any credible scientific support,’ she told me. ‘So what the clinicians did to try to sell the idea is, they started calling it “dissociation.” You can’t really say there’s no credible evidence against “dissociation” because it’s an umbrella term that includes many phenomena that do exist, things like thinking, “Oh my god, I can’t remember if I shut the garage before I left.” But I don’t care what you call it. Show me any proof that you can be raped for years and be completely unaware of it and reliably recover it later. Just show me the scientific evidence.’
Loftus maintains that memory doesn’t work that way. Instead, she argues, trauma tends to be seared into the memory, and while memories are malleable and can be easily manipulated, evidence suggests that people do tend to remember traumatic, life-changing events.
‘The notion that traumatic events can be repressed and later recovered is the most pernicious bit of folklore ever to infect psychology and psychiatry,’ wrote Harvard psychologist Richard McNally in a letter to the Supreme Court when it was hearing a case that stemmed from recovered memories in 2005. ‘It has provided the theoretical basis for ‘recovered memory therapy’ – the worst catastrophe to befall the mental health field since the lobotomy era.’
Read the full article in The Stranger.
Author’s appearance at Georgia Southern University cancelled after students burn and shred books
Adam Steinbaugh, FIRE, 11 October 2019
On Wednesday, author Jennine Capó Crucet spoke at Georgia Southern University, where first-year students were reading her novel, ‘Make Your Home Among Strangers: A Novel.’ Crucet’s discussion about her award-winning novel, which involves themes of race and class, was — to put it mildly — not well-received by some students, who objected to Crucet’s ‘racism towards white people.’
What was said during the lecture is not entirely clear; the student newspaper, The George-Anne, reported via Twitter that ‘the link that had the full video of Crucet’s lecture has been taken down.’ Crucet’s statement on the matter explained that she had been asked to ‘give a talk on issues concerning diversity and the college experience.’ The George-Anne’s write-up of the event recounted part of the question-and-answer portion of Crucet’s appearance:
‘I noticed that you made a lot of generalizations about the majority of white people being privileged,’ one respondent said into the microphone. ‘What makes you believe that it’s okay to come to a college campus, like this, when we are supposed to be promoting diversity on this campus, which is what we’re taught. I don’t understand what the purpose of this was.’
Crucet immediately responded to the student, to audible reactions from the audience.
‘I came here because I was invited and I talked about white privilege because it’s a real thing that you are actually benefiting from right now in even asking this question,’ Crucet said.
‘What’s so heartbreaking for me and what is so difficult in this moment right now is to literally have read a talk about this exact moment happening and it’s happening again. That is why a different experience, the white experience, is centered in this talk.’
After Crucet’s response, more questions were asked about the novel and Crucet’s experience as a minority in America, and Crucet responded politely.
Other students reportedly walked out.
One student told BuzzFeed News that after the talk, some 20 to 30 students gathered around a fire pit to burn copies of the novel. Students tweeted photos of burning or shredded books at Crucet. Video posted by one student depicts students burning at least one book on a park-style grill on campus.
Read the full article on FIRE.
Myths from a small island: the dangers of a buccaneering view of British history
Robert Saunders, New Statesman, 9 October 2019
What such histories have in common is not the celebration of empire but its erasure from the historical record. That ‘forgetting’ fulfils two important functions in Brexit ideology. First, it establishes a continuity between past and present that is uninterrupted by the loss of Britain’s colonies. It creates a useable history of British greatness, anchored not in vanished imperial structures, but in a set of timeless national characteristics that require only liberation from Brussels to burst once more into bloom. As such, it rejects the importance of decolonisation as a rupture: one that might require a recasting of Britain’s geopolitical ambitions or a more bounded, regional identity.
Second, it enables a synthesis between two visions of British history that might otherwise seem at odds: one that casts Britain as a global titan; another that views it as a small island punching above its weight in the world. It treats the empire as something Britain did, not as something Britain was (and is no longer). It reimagines imperial history as an achievement against the odds; the story, as David Cameron put it in 2011, of ‘a small country that does great things’. It casts smallness as an essential ingredient in Britain’s historic success, not as a condition to which Britain has been reduced by the withdrawing of the imperial tide.
That idea of smallness – even at the peak of Britain’s imperial power – has deep roots in British folk memory. It is the story of Francis Drake, standing heroically against the mighty Spanish empire; of Robert Baden-Powell, bravely holding off the Boers at Mafeking; and of the fishing smacks and pleasure boats that defied the Nazi war machine at Dunkirk. In the most famous cartoon of the Second World War, the lone warrior stands on the British coast, his fist clenched in defiance, as the skies blacken beneath the shadow of the Luftwaffe.
In the high days of empire, such memories were a useful dishonesty. They allowed a military superpower to imagine itself as an embattled champion of freedom, engaged in heroic resistance against forces that willed its destruction. During the two world wars, when defeat was indeed a possibility, they served to bolster morale. Today, by contrast, they are put to more dangerous ends. They invoke a glorious past as a model for the future, while wiping from popular memory everything that made Britain such a formidable power. A story that celebrates smallness, that tells of glorious victories against impossible odds, has little room for Britain’s colossal military machine, unmatched economic power and global empire – whose contribution to the war effort it has shamefully forgotten.
This has at least three destructive consequences. It detaches memories of British greatness from the material conditions that made it possible; it overstates what Britain can achieve in the world as a small nation, ‘standing alone’; and it exaggerates the power of positive thinking as a national strategy. Failure can be blamed on those who refuse to cheer along: on ‘doomsters’, ‘pessimists’ and ‘saboteurs’, who simply refuse to believe with sufficient fervour.
Read the full article in the New Statesman.
How Italians became ‘white’
Brent Staples, New York Times, 12 October 2019
Congress envisioned a white, Protestant and culturally homogeneous America when it declared in 1790 that only ‘free white persons, who have, or shall migrate into the United States’ were eligible to become naturalized citizens. The calculus of racism underwent swift revision when waves of culturally diverse immigrants from the far corners of Europe changed the face of the country.
As the historian Matthew Frye Jacobson shows in his immigrant history ‘Whiteness of a Different Color,’ the surge of newcomers engendered a national panic and led Americans to adopt a more restrictive, politicized view of how whiteness was to be allocated. Journalists, politicians, social scientists and immigration officials embraced the habit, separating ostensibly white Europeans into ‘races.’ Some were designated ‘whiter’ — and more worthy of citizenship — than others, while some were ranked as too close to blackness to be socially redeemable. The story of how Italian immigrants went from racialized pariah status in the 19th century to white Americans in good standing in the 20th offers a window onto the alchemy through which race is constructed in the United States, and how racial hierarchies can sometimes change.
Darker skinned southern Italians endured the penalties of blackness on both sides of the Atlantic. In Italy, Northerners had long held that Southerners — particularly Sicilians — were an ‘uncivilized’ and racially inferior people, too obviously African to be part of Europe.
Racist dogma about Southern Italians found fertile soil in the United States. As the historian Jennifer Guglielmo writes, the newcomers encountered waves of books, magazines and newspapers that ‘bombarded Americans with images of Italians as racially suspect.’ They were sometimes shut out of schools, movie houses and labor unions, or consigned to church pews set aside for black people. They were described in the press as ‘swarthy,’ ‘kinky haired’ members of a criminal race and derided in the streets with epithets like ‘dago,’ ‘guinea’ — a term of derision applied to enslaved Africans and their descendants — and more familiarly racist insults like ‘white nigger’ and ‘nigger wop.’
The penalties of blackness went well beyond name-calling in the apartheid South. Italians who had come to the country as ‘free white persons’ were often marked as black because they accepted ‘black’ jobs in the Louisiana sugar fields or because they chose to live among African-Americans. This left them vulnerable to marauding mobs like the ones that hanged, shot, dismembered or burned alive thousands of black men, women and children across the South.
Read the full article in the New York Times.
The US is wrapping its border wall around the world
Todd Miller, The Nation, 24 September 2019
In a broader sense, in the twenty-first century, the border should no longer be considered just that familiar territory between the United States and Mexico (where President Trump now wants to build that ‘big, fat, beautiful wall’ of his) and the Canadian border to the north. Never mind that, as a start, there already is a wall there, or rather, as US border enforcement officials have long described it, a ‘multi-layered’ enforcement zone. If you were to redefine a wall as obstacles meant to blockade, reroute, and in the end stop (as well as incarcerate) people, then, even before Donald Trump, the equivalent of a wall was that expansive 100-mile-deep zone of defenses. These included sophisticated detection technologies of every sort and increasing numbers of armed border personnel supported by unprecedented budgets over the last 25 years.
In those same years, this country’s borders have, in a sense, undergone a kind of expansion not just into southern Mexico (as I witnessed in 2014), but also into parts of Central America and South America, the Caribbean, and other areas of the world. As Bersin put it, there had been a post-9/11 shift to emphasizing the policing not just of the literal US border but of global versions of the same, a massive, if underreported, ‘paradigm change.’ A US border strategy of ‘prevention through deterrence,’ initiated in 1994, that first militarized and then blockaded urban areas on our actual southern border like Brownsville, El Paso, Nogales, and San Diego, would later spread internationally.
The recent focus on Trump’s wall has hidden such global developments that, since 2003, have, for instance, led to 23 Customs and Border Protection attachés being stationed in places like Bogota, Cairo, New Delhi, Panama City, and Rome. In 2004, CBP commissioner Robert Bonner described this as ‘extending our zone of security where we can… so that American borders are the last line of defense, not the first line of defense.’
In 2003, the 9/11 Commission Report laid out the thinking behind this clearly indeed: ‘9/11 has taught us that terrorism against Americans “over there” should be regarded just as we regard terrorism against Americans “over here.” In this same sense the American homeland is the planet.’
Fourteen years later, retired General John Kelly endorsed just such a strategy at his confirmation hearing as Department of Homeland Security secretary. ‘Border security,’ he assured the senators, ‘cannot be attempted as an endless series of “goal line stands” on the one-foot line at the ports of entry or along the thousands of miles of border between this country and Mexico… I believe the defense of the Southwest border starts 1,500 miles to the south in Peru.’
As it happened, even Kelly was understating just how far the US border already extended into the world.
Read the full article in The Nation.
Now the rich want your pity, too
Richard V Reeves, New York Times, 5 October 2019
Seems like it’s getting tough at the top. The winners in America’s meritocracy are suffering. Children in affluent homes are being hothoused through childhood, stress-tested into elite schools and colleges, and pushed to the brink of suicide or breakdown. Their highly educated mothers and fathers are putting in long hours in their chosen professions: money-rich, perhaps, but time-poor.
The whining of the wealthy is getting louder.
Their new complaint is that they, too, are suffering at the hands of ‘the system.’ The system in question is the same meritocracy that in many cases has elevated them to their high perches. True, they have most of the money, wealth, power and opportunity. But they are working very hard for these advantages and they are working just as hard to secure them for their children.
Staying at the top, it turns out, is exhausting and expensive work. Perhaps sick of being cast as villains, some of the rich and successful have decided to declare themselves victims.
The latest diagnostician of this elite malaise is a Yale law professor named Daniel Markovits. In his widely discussed new book, ‘The Meritocracy Trap,’ he rightly shows how the ideology of meritocracy hurts the millions of people who don’t make it to the top. The idea that successful people deserve the economic rewards flowing from that success is a pernicious myth.
But Professor Markovits also claims that meritocracy is as painful for the people on the top rungs of the ladder as it is for those lower down. ‘The elite and the middle class are not coming apart,’ he writes. ‘Instead, the rich and the rest are entangled in a single shared and mutually destructive economic and social logic.’
The idea of meritocracy has long been used by the rich for self-justification. Now it is becoming fuel for their self-pity.
Read the full article in the New York Times.
Impeachment is regime suicide
Daniel McCarthy, Spectator USA, 1 October 2019
Trump was no mere conventional Republican who happened to beat Hillary Clinton. He was a completely unconventional Republican who first beat the party’s own ideological standard-bearers during the primaries, in the course of which he often said things that no Republican had said for a generation or more. Trump’s message in the primaries and general election boiled down to: they screwed you. ‘They’ being the Bushes, the Clintons, the establishment in both parties, the warmongers, the trade-deal architects, the communist Chinese, free-riding allies, and more.
Trump is no ideologue or political theorist, but he launched a comprehensive attack on the domestic and international liberal order. He campaigned against the system as it has existed since the Cold War ended.
Trump’s enemies are not just the left, they are the ancien régime. Anyone who supports the political and economic dispensation of the post-Cold War era is apt to feel threatened by Trump and even more menaced by what stands behind him — a growing anti-consensus, a force that declares every center of power in this country illegitimate and antithetical to the well-being of the people.
That’s why this impeachment attempt is radically different from the Nixon or Clinton episodes. There is no consensus to save this time; there is only an anti-consensus waiting to be radicalized. Trump’s enemies have been in denial about this since the day he first declared for the White House — they have wrongly assumed that a healthy, old-fashioned, pro-establishment consensus must emerge out of sheer revulsion at Trump. Hence all the appeals on the part of anti-Trump pundits to Republican decency and conscience. They assume that, deep down, for all that Republicans are racists and deplorables, they still love the regime, and they will support it over Trump.
In fact, for most Republicans, certainly at the grassroots, the voice of conscience and their sense of decency command them to support Trump, in spite of his sins, against an absolutely illegitimate and malevolent regime.
Impeachment is a regime counter-attack against a man elected to bring about change. And while impeachment is certainly constitutional, it is an elite procedure not a democratic one. The prestige media has passed the first judgment on whether it’s warranted in this case. (It is, they say.)
Read the full article in the Spectator USA.
Behind the razor wire of Greece’s notorious refugee camp
Daniel Howden & Apostolis Fotiadis, Observer, 5 October 2019
Last week Moria was in mourning. A deadly fire last Sunday (29 September) killed a woman called Faride Tajik, described by UN officials as a widow with a teenage daughter who has now been taken into care outside the camp. Initial reports suggested a baby had been killed in the blaze that may have been started by refugees protesting over conditions. In this account, widely shared on social media, rioters attacked firemen and complicated efforts to tackle the flames, then fought running battles with the police.
However, this account has been shown to be false. There were clashes between residents and the police and fire service but they came after the blaze when people were angry at a perceived failure to help. The Observer has seen and verified a number of time-stamped videos from the fire showing that the first responders were camp residents who brought an emergency firehose to combat the flames engulfing a cluster of stacked containers. In another clip a small crowd can be seen begging a woman, who witnesses identify as Tajik, to jump from the container window as onlookers throw water bottles at the fire in a doomed effort to help. Clouds of black smoke engulf the window. She does not reappear. Details of Tajik’s death are unclear.
Witnesses claim more than one person was killed, but the claims that a baby died appear unfounded. There are several informal missing-person claims. But deaths in Moria often leave unanswered questions. An Egyptian and a Syrian man died within four days of each other in January 2017 in the same tent; the cause of death has never been established.
Eleni Velivasaki, a lawyer with Refugee Support Aegean, says that Moria exists in a grey zone where no one takes responsibility: ‘Days afterwards it is unclear how many people died [in the latest fire] or if there are any missing people. What stage is the investigation now at? Why have media reports about refugees starting the fire themselves not been retracted?’
Moria is the unwilling centrepiece of a bargain the EU struck with Turkey in 2016 at the height of refugee arrivals. The deal was meant to curb flows across the Aegean in return for aid money and the relocation of some Syrian refugees from Turkey to Europe. Greece let its islands be used as a buffer zone to prevent all but the most vulnerable new arrivals reaching the mainland and vowed to return the bulk of asylum seekers to Turkey. The containment centred on five ‘hot spots’, including Moria, where asylum seekers would be registered and provided with shelter.
Read the full article in the Observer.
Algorithms are people
Sidney Fussell, The Atlantic, 18 September 2019
In aviation, the black box is a source of knowledge: a receptacle of crucial data that might help investigators understand catastrophe. In technology, a black box is the absence of knowledge: a catchall term describing an algorithmic system with mechanics its creators can’t—or won’t—explain. Algorithms, in this telling, are unknowable, uncontrollable, and independent of human oversight, even as they promote extremist content, make decisions affecting our health, or act in potential violation of antitrust law.
In investigative reports and international courts, Amazon, Google, and other tech platforms have been accused of tweaking their search algorithms to boost their own profits and sidestep antitrust regulations. Each company denies interfering with its respective search algorithm, and because of the murky mechanics of how search works, proving the allegations is nearly impossible…
Algorithms interpret potentially millions of data points, and the exact path from input to conclusion can be difficult to make plain. But the effects are clear. This is a very powerful asymmetry: Anyone can notice a change in search results, but it’s extremely difficult to prove what caused it. That gives algorithm designers immense deniability.
In 2016, for example, Bloomberg reported that Amazon Prime was much less likely to offer same-day service to predominantly black and Latino suburbs in Boston and New York. The algorithm that determines eligible neighborhoods, Amazon explained, was designed to determine the cost of providing Prime, based on an area’s distance from warehouses, number of Prime members, and so on. That explanation, Bloomberg reported, was a shield for the human designers’ choice to ignore how race and poverty correlate with housing, and how that inequality is replicated in Amazon’s products.
Because of their opacity, algorithms can privilege or discriminate without their creators designing them to do so, or even being aware of it. Algorithms provide those in power with what’s been termed ‘strategic ignorance’ – essentially, an excuse that arises when it’s convenient for the powerful not to know something. The antitrust movement is built on trying to locate humans somewhere within Big Tech’s enormous, corporatized systems. Try as companies might to minimize personal accountability, it is humans who build, train, and deploy algorithms. Human biases and subjectivities are encoded every step of the way.
Read the full article in The Atlantic.
The medieval myth of Notre-Dame
Danny Smith, LA Review of Books, 7 October 2019
As news of the fire spread on social media, many saw in Notre-Dame the loss of France’s past. Pundits and politicians from around the world lamented the fallen spire, and the destruction of the roof, known as la forêt — the forest — for its thick trunks of ancient wood. But the fire also reveals something fundamental about the history of Notre-Dame — that it is a true restoration as Viollet-le-Duc described it. Notre-Dame is a building that has been constantly restored — both architecturally and metaphorically — reestablished in a modern state countless times throughout its history. Notre-Dame has always been as much a construction of the present as it is a monument of the past. As we begin to restore Notre-Dame again, we must consider our own moment within the long history of restorations and reestablishments to the church.
Viollet-le-Duc restored Notre-Dame. Between 1844 and 1864, he and a team spent millions of francs to rebuild the Gothic sacristy — the room where priestly vestments and liturgical instruments are stored — and to crown the building with a spire, a tower taller, thinner, and better engineered than anything a medieval mason could have constructed. It was Viollet-le-Duc’s spire that collapsed on April 15.
A consultant named Nicolas Marang captured footage of the spire’s fall on his smartphone — one of several videos that have circulated widely online since the fire — as he watched the flames from the right bank of the Seine. The sky is gray with smoke and the timbers of the roof and the tower are already exposed by the flames. Suddenly the spire begins to list toward the right, over the nave of the church, and it falls quickly, taking several of the roof timbers of the nave with it as it tumbles. Where the spire had been, a bundle of timbers still stands erect, dark against the flames.
For a moment in Marang’s video, before the whole tower collapses, the structural framework of the spire is visible. The limestone and lead that faced it have largely fallen away, exposing the skeleton of Viollet-le-Duc’s addition. Just before it collapses, the exposed beams of the spire bear an uncanny resemblance to another Parisian icon: the Eiffel Tower. Although Viollet-le-Duc’s tower was a feat of 19th-century engineering, until April 2019, its structure had been concealed beneath lead sheets, sculpted figures, and limestone blocks decorating it in an exaggerated Gothic style. The devastating fire unmasked the modern construction underpinning the apparently medieval building.
Read the full article in the LA Review of Books.
Why can’t we agree on what’s true any more?
William Davies, Guardian, 19 September 2019
The panic surrounding echo chambers and so-called filter bubbles is largely groundless. If we think of an echo chamber as a sealed environment, which only circulates opinions and facts that are agreeable to its participants, it is a rather implausible phenomenon. Research by the Oxford Internet Institute suggests that just 8% of the UK public are at risk of becoming trapped in such a clique.
Trust in the media is low, but this entrenched scepticism long predates the internet or contemporary populism. From the Sun’s lies about Hillsborough to the BBC’s failure to expose Jimmy Savile as early as they might, to the fevered enthusiasm for the Iraq war that gripped much of Fleet Street, the British public has had plenty of good reasons to distrust journalists. Even so, the number of people in the UK who trust journalists to tell the truth has actually risen slightly since the 1980s.
What, then, has changed? The key thing is that the elites of government and the media have lost their monopoly over the provision of information, but retain their prominence in the public eye. They have become more like celebrities, anti-heroes or figures in a reality TV show. And digital platforms now provide a public space to identify and rake over the flaws, biases and falsehoods of mainstream institutions. The result is an increasingly sceptical citizenry, each seeking to manage their media diet, checking up on individual journalists in order to resist the pernicious influence of the establishment.
There are clear and obvious benefits to this, where it allows hateful and manipulative journalism to be called out. It is reassuring to discover the large swell of public sympathy for the likes of Ben Stokes and Gareth Thomas, and their families, who have been harassed by the tabloids in recent days. But this also generates a mood of outrage, which is far more focused on denouncing bad and biased reporting than on defending the alternative. Across the political spectrum, we are increasingly distracted and enraged by what our adversaries deem important and how they frame it. It is not typically the media’s lies that provoke the greatest fury online, but the discovery that an important event has been ignored or downplayed. While it is true that arguments rage over dodgy facts and figures (concerning climate change or the details of Britain’s trading relations), many of the most bitter controversies of our news cycle concern the framing and weighting of different issues and how they are reported, rather than the facts of what actually happened.
The problem we face is not, then, that certain people are oblivious to the ‘mainstream media’, or are victims of fake news, but that we are all seeking to see through the veneer of facts and information provided to us by public institutions. Facts and official reports are no longer the end of the story. Such scepticism is healthy and, in many ways, the just deserts of an establishment that has been caught twisting the truth too many times. But political problems arise once we turn against all representations and framings of reality, on the basis that these are compromised and biased – as if some purer, unmediated access to the truth might be possible instead. This is a seductive, but misleading ideal.
Read the full article in the Guardian.
Document number nine
John Lanchester, London Review of Books, 10 October 2019
If anything unwelcome does get past the multiple layers of censorship and blocking – more like a Giant Onion than a Great Firewall – it runs into the fifty-cent army, the wumao. The effort involved is extensive. An American university study of the Chinese internet counted 448 million fake social media posts in one year, 2016, with the preferred tactic of the fifty-cent army being not to pile on to critics – though they do that too – but to deflect attention, ideally by ‘cheerleading’ for pro-government news. Griffiths quotes the research:
They do not step up to defend the government, its leaders and their policies from criticism, no matter how vitriolic; indeed, they seem to avoid controversial issues entirely. Instead, most posts are about cheerleading and positive discussions of valence issues. We also detect a high level of co-ordination in the timing and content in these posts. A theory consistent with these patterns is that the strategic objective of the regime is to distract and redirect public attention from discussions or events with collective action potential.
These are the pillars of the Chinese internet: ferocious laws; public humiliation as a tool of coercion; a firewall blocking external sites and independent sources of information; a huge, and hugely expensive, army of censors, backed by algorithms and unprecedented levels of surveillance, adding up to the Giant Onion; and a fifty-cent army of trolls and handwavers to pile on, distract and deflect.
The point of the state apparatus is not to silence all debate, but to prevent organisation and co-ordination; the ultimate no-no is the formation of any kind of non-party group. The CCP’s goal is not silence but isolation: you can say things, but you can’t organise. That is why the party has cracked down with such ferocity on the apparently harmless organisation Falun Gong, whose emphasis on collective breathing exercises wouldn’t normally, you would think, represent much of a challenge to CCP control of China. But Falun Gong grew popular, too popular – seventy million by 1999, as many as the CCP itself – and had an unacceptable level of collective organisation. So the party set out to destroy it. Two thousand members of Falun Gong have died in custody since the crackdown began.
Given all this, it is frequently the case that outsiders are surprised by the apparent freedom of the Chinese internet. People do feel able to complain, especially about pollution and food scandals. As Strittmatter puts it, ‘a wide range of competing ideologies continues to circulate on the Chinese internet, despite the blows struck by the censors: Maoists, the New Left, patriots, fanatical nationalists, traditionalists, humanists, liberals, democrats, neoliberals, fans of the USA and various others are launching debates on forums.’ The ultimate goal of this apparatus is to make people internalise the controls, to develop limits to their curiosity and appetite for non-party information. Unfortunately, there is evidence that this approach works: Chinese internet users are measurably less likely to use technology designed to circumvent censorship and access overseas sources of information than they used to be.
Read the full article in the London Review of Books.
Machines beat humans on a reading test. But do they understand?
John Pavlus, Quanta, 17 October 2019
In July 2019, two researchers from Taiwan’s National Cheng Kung University used BERT to achieve an impressive result on a relatively obscure natural language understanding benchmark called the argument reasoning comprehension task. Performing the task requires selecting the appropriate implicit premise (called a warrant) that will back up a reason for arguing some claim. For example, to argue that ‘smoking causes cancer’ (the claim) because ‘scientific studies have shown a link between smoking and cancer’ (the reason), you need to presume that ‘scientific studies are credible’ (the warrant), as opposed to ‘scientific studies are expensive’ (which may be true, but makes no sense in the context of the argument). Got all that?
If not, don’t worry. Even human beings don’t do particularly well on this task without practice: The average baseline score for an untrained person is 80 out of 100. BERT got 77 — ‘surprising,’ in the authors’ understated opinion.
But instead of concluding that BERT could apparently imbue neural networks with near-Aristotelian reasoning skills, they suspected a simpler explanation: that BERT was picking up on superficial patterns in the way the warrants were phrased. Indeed, after re-analyzing their training data, the authors found ample evidence of these so-called spurious cues. For example, simply choosing a warrant with the word ‘not’ in it led to correct answers 61% of the time. After these patterns were scrubbed from the data, BERT’s score dropped from 77 to 53 — equivalent to random guessing. An article in The Gradient, a machine-learning magazine published out of the Stanford Artificial Intelligence Laboratory, compared BERT to Clever Hans, the horse with the phony powers of arithmetic.
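The ‘not’ cue can be made concrete with a short sketch. Nothing here comes from the paper’s actual data or code; the function and examples are invented for illustration, showing how a baseline that never reads the claim or reason can still beat chance when an artifact like the word ‘not’ happens to correlate with the correct answer:

```python
# Toy illustration of a "spurious cue" baseline for a two-choice
# warrant-selection task. The data below is invented for this example,
# not drawn from the actual benchmark.

def cue_baseline(warrant_a, warrant_b):
    """Ignore the claim and reason entirely: pick whichever warrant
    contains the word 'not'; default to the first otherwise."""
    if " not " in f" {warrant_a.lower()} ":
        return 0
    if " not " in f" {warrant_b.lower()} ":
        return 1
    return 0  # no cue present: effectively a guess

# (warrant_a, warrant_b, index of the correct warrant)
examples = [
    ("people do not trust advertising", "people enjoy advertising", 0),
    ("exercise is not harmful", "exercise is expensive", 0),
    ("taxes fund public services", "taxes are not popular", 0),
]

# On this tiny artifact-laden set the cue scores 2 out of 3 -- above
# chance, despite never looking at the argument itself.
accuracy = sum(cue_baseline(a, b) == y for a, b, y in examples) / len(examples)
```

On real data with such annotation artifacts, a baseline of this kind lands well above 50%, which is exactly the signal the authors detected (61% from the ‘not’ cue alone) and then scrubbed from the dataset.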
In another paper called ‘Right for the Wrong Reasons,’ Linzen and his coauthors published evidence that BERT’s high performance on certain GLUE tasks might also be attributed to spurious cues in the training data for those tasks. (The paper included an alternative data set designed to specifically expose the kind of shortcut that Linzen suspected BERT was using on GLUE. The data set’s name: Heuristic Analysis for Natural-Language-Inference Systems, or HANS.)
Read the full article in Quanta.
Out of mind: philosopher Patricia Churchland’s radical approach to the study of human consciousness
Julian Baggini, Prospect, 8 October 2019
Churchland’s work tried to take the philosophical implications of the new brain research seriously without falling into the scientistic traps. It quickly generated a huge amount of interest, from admirers and detractors alike. For her supporters, mostly scientists, studying the brain was essential to understanding how we perceive the world. For her detractors, mostly philosophers, the whole project of ‘neurophilosophy’ was fundamentally naïve and misguided: it was all neuro and no philosophy, reducing humans to mere machines. Churchland still sometimes gets mocked as ‘the Queen of Neuromania,’ as Raymond Tallis acidly described her; Colin McGinn once dismissed her work as ‘neuroscience cheerleading.’
Yet over the years, Churchland has received due recognition for avoiding the traps that lie in each extreme. She was helped by the early endorsement of Francis Crick, one of the discoverers of the double helical structure of DNA, who called Churchland’s first book Neurophilosophy (1986) a ‘pioneering work.’ In 1991 she was honoured with a $500,000 MacArthur Fellowship, widely known as the ‘genius grant,’ and she was President’s Professor of Philosophy at the University of California, San Diego, from 1999 until her retirement in 2013. Now 76, she has little left to prove, and yet she is still publishing, this year with Conscience: The Origins of Moral Intuition. Indeed, she made Prospect’s own list of the world’s top 50 thinkers as recently as July.
Her strength is precisely that she is a rare thinker who can be resolutely scientific without ever being scientistic—a distinction that her critics seem unable to make. She is certainly a materialist who rejects the view that consciousness is some kind of mystery which science should not dare to touch. But she denies the claims that neuroscience leaves the mind, the self and free will as mere illusions. We may have to change how we understand these concepts, but philosophers can only do this credibly if they are properly informed by what the science says.
‘I’ve never said ‘my brain made me do it,’’ she tells me via Skype from Canada, where her family is on holiday. In Conscience she rounds on the ‘self-promoters’ who jumped on the ‘oxytocin bandwagon,’ claiming that the hormone was the solution to social awkwardness, bad behaviour at school, obesity and even ‘Congressional inaction on social policy.’
Read the full article in Prospect.
William P Jones, The Nation, 7 October 2019
Four hundred years ago, ‘about the latter end of August,’ an English pirate ship called the White Lion landed at Point Comfort in the Virginia Colony carrying ‘not anything but 20 and odd Negroes,’ wrote colonist John Rolfe. Though this is often viewed as the starting point of slavery in what would become the United States, the anniversary is somewhat misleading. Africans, both enslaved and free, had lived in St. Augustine, in Spanish Florida, since the 1560s, and since slavery was not legally sanctioned in Virginia until the 1640s, early arrivals would have occupied a status closer to indentured servants. But those ambiguities only point to how essential people of African descent were to the establishment and development of the imperial outposts that became the United States. It was their work, as much as anyone else’s, that helped build the world we live in today.
In his new book, Workers on Arrival, the historian Joe William Trotter Jr. shows that the history of black labor in the United States is thus essential not only to understanding American racism but also to ‘any discussion of the nation’s productivity, politics, and the future of work in today’s global economy.’ At a time when mainstream political rhetoric and analysis related to economic change still tend to center on white men displaced by job loss in manufacturing and mining, similar challenges faced by black workers are often examined through a distinct lens of racial inequality. As a result, Trotter contends, white workers are viewed as the victims of ‘cultural elites and coddled minorities,’ while African American workers suffering from the very same economic and political conditions are treated as ‘consumers rather than producers, as takers rather than givers, and as liabilities rather than assets.’ Reminding us that Africans were brought to the Americas ‘specifically for their labor’ and that their descendants remain ‘the most exploited and unequal component of the emerging modern capitalist labor force,’ Workers on Arrival provides an eloquent and essential correction to contemporary discussions of the American working class.
Trotter acknowledges that he is not the first to offer this critique and cites generously from ‘nearly a century of research’ and prominent African American scholars in order to demonstrate ‘the centrality of the African American working class to an understanding of U.S. history.’ These include W.E.B. Du Bois’s studies of black working-class communities in Philadelphia, Memphis, and other cities during the turn of the 20th century, as well as Sterling Spero and Abram L. Harris’s 1931 book The Black Worker. But Trotter’s achievement is to synthesize this rich body of historical scholarship into a single volume written with an eye to a general audience.
Read the full article in The Nation.
Scientists designed a drug for just one patient. Her name is Mila.
Gina Kolata, New York Times, 9 October 2019
Milasen is believed to be the first drug developed for a single patient (CAR-T cancer therapies, while individualized, are not drugs). But the path forward is not clear, Dr. Yu and his colleagues acknowledged.
There are over 7,000 rare diseases, and over 90 percent have no F.D.A.-approved treatment, according to Rachel Sher, vice president of regulatory and government affairs at the National Organization for Rare Disorders.
Tens of thousands of patients could be in Mila’s situation in the United States alone. But there are nowhere near enough researchers to make custom drugs for all who might want them.
And even if there were, who would pay? Not the federal government, not drug companies and not insurers, said Dr. Steven Joffe, professor of medical ethics and health policy at the University of Pennsylvania.
‘Unfortunately, that leaves it to families,’ he added. ‘It feels awfully uncomfortable, but that is the reality.’
That means custom drugs would be an option only for the very wealthy, those with the skills to raise large sums of money, or those who gain the support of foundations.
Mila’s drug development was mostly paid for by the foundation run by her mother, but she and Dr. Yu declined to say how much was spent.
The idea of custom drugs also leads the F.D.A. into uncharted territory. In an editorial published with Dr. Yu’s paper, Dr. Janet Woodcock, director of the F.D.A.’s Center for Drug Evaluation and Research, raised tough questions:
What type of evidence is needed before exposing a human to a new drug? Even in rapidly progressing, fatal illnesses, precipitating severe complications or death is not acceptable, so what is the minimum assurance of safety that is needed?
She also asked how a custom drug’s efficacy might be evaluated, and how regulators should weigh the urgency of the patient’s situation and the number of patients who could ultimately be treated. None of those questions have an easy answer.
Read the full article in the New York Times.
Will Macron’s move against his alma mater make France’s HE system fairer?
John Morgan, THES, 19 September 2019
France’s public higher education system is tripartite: vocational and technical education; universities (traditionally non-selective and open to any student who passes their high school baccalauréat général); and grandes écoles (where entry is highly selective, usually requiring a two-year classe préparatoire, known colloquially as a ‘prépa’ and itself selective, followed by a concours entry exam). About 5 per cent of each age cohort graduates from a grande école; these institutions receive about one and a half times the funding per student granted to universities…
The grandes écoles are ‘a web’, with ENA simply the most visible and ‘gilded’ part of that network, says Jean-Michel Eymeri-Douzans, a professor of political science at Sciences Po Toulouse and a former vice-rector, who wrote his PhD thesis on the ‘sociology of government énarques’ and who has lectured at ENA. The system of grandes écoles builds on an ‘extremely selective’ secondary school system, offering the final ‘barrier in a system of barriers’ to those from poorer backgrounds, he argues.
Data on the social backgrounds of students reveal a clear class hierarchy between the different elements of the French higher education system. The proportion of students from ‘unskilled labour backgrounds’ in prépas for grandes écoles stood at 6.4 per cent in 2015, compared with 49.5 per cent for those from professional backgrounds, according to Ministry of Education figures. The equivalent figures for universities were 10.8 per cent and 30 per cent; and for university institutes of technology 14.6 per cent and 28.8 per cent.
Although the grandes écoles have gradually become less socially exclusive over time, the richest graduates of these institutions reach higher social echelons than their poorer peers, found a 2018 study by University of Lausanne researchers, based on large-scale data on social background and social destination from the French Labour Forces survey. The study concludes: ‘Despite a clear equalization trend in access to the highest educational levels in France, educational merit remains better rewarded on the labour market among the better off.’
French higher education has long been underpinned by a deep faith in educational ‘merit’, a faith invoked by Macron in his speech. This vaunted exam-based, ‘meritocratic’ system is seen in contrast to class-based systems of inherited privilege and is considered part of the egalitarian ideal of the republic.
Read the full article in THES.
Stronger than a man
Elaine Showalter, TLS, 24 September 2019
Among the memorable stories in Benjamin Moser’s engrossing, unsettling biography of Susan Sontag, an observation by the writer Jamaica Kincaid stands out indelibly. In 1982, Sontag’s beloved thirty-year-old son David Rieff endured a number of major crises: cocaine addiction, job loss, romantic break-up, cancer scare and nervous breakdown. At that point, Moser writes, Sontag ‘scampered off to Italy’ with her new lover, the dancer and choreographer Lucinda Childs. ‘We couldn’t really believe she was getting on the plane’, Kincaid told Moser. She and her husband Allen Shawn took David into their home for six months to recover. Later she searched for words to characterize Sontag’s behaviour: ‘Yes, she was cruel, and so on, but she was also very kind. She was just a great person. I don’t think I ever wanted to be a great person after I met Susan’.
In 2013, when Moser signed up to write Sontag’s authorized biography, he took on a hazardous task: how to recount the eventful life, influential ideas and significant achievements of a legendary public intellectual, and assess the overall legacy of an outrageous, infuriating great person? He was not the first to face these challenges. ‘Disappointment with her…’, he notes, ‘is a prominent theme in memoirs of Sontag.’ She was avid, ardent, driven, generous, narcissistic, Olympian, obtuse, maddening, sometimes loveable but not very likeable. Moser has had the confidence and erudition to bring all these contradictory aspects together in a biography fully commensurate with the scale of his subject. He is also a gifted, compassionate writer…
Interview by interview, anecdote by anecdote, chapter by chapter, I had the distressing sense of the flawed but passionate woman of genius fading away and a genderless disappointing great person taking her place. A monument to that woman of genius may be found in the candid, self-questioning journals David Rieff calls her unwritten autobiographical novel, and in the dynamic interviews in which her intelligence seems to rise direct and vigorous from the page. And perhaps it may be found in this book too, although, as Moser says, she always ‘warned against the mystifications of photographs and portraits, including those of biographers’.
Read the full article in TLS.
Tom Crewe, London Review of Books, 10 October 2019
Yet there has been, along the way, some ‘radical’ constitutional reform, including the removal of the vast majority of hereditary peers from the House of Lords, the coming into effect of the Good Friday Agreement and the establishment of devolved parliaments in Scotland and Wales (all in 1999), the setting up of a UK Supreme Court (2009) and the abolition of male primogeniture as applying to the royal succession (2015) – these in the main cheerfully accepted, when noticed at all. In the case of the Good Friday Agreement, the Scottish Parliament and the Supreme Court, the full significance of these constitutional innovations as constitutional innovations has become apparent only in recent years, or recent weeks (there seems to have been general surprise that the UK possesses a Supreme Court, and some healthy curiosity as to how it works). That is, their significance only became apparent when they came into collision with the hallowed authority of the Westminster Parliament: when the SNP took power in Edinburgh and determined to use it as a lever for breaking the Union; when the provisions of the Good Friday Agreement limited Britain’s Brexit options; and when the government’s decision to prorogue Parliament was taken to the courts.
It could be assumed, if you go by my previous criteria, that the Supreme Court’s decision to declare the prorogation illegal is a sign that politics is again on the move, that change is coming. (A very inexact parallel might be the not-guilty verdict returned for the seven bishops who had refused to read James II’s Declaration of Indulgence for Catholics and Protestant dissenters in 1687, a refutation of royal authority which paved the way for William of Orange’s invasion and the Glorious Revolution.) Yet I have become convinced that what is actually worrying about our present situation is not, as most people seem to think, a superfluity of politics (‘chaos’ and ‘madness’ are popular words these days), but its absence. If we are currently living through a ‘constitutional crisis’ it is a supremely legalistic, procedural one. It has so far involved multiple large-scale parliamentary defeats of a government no longer, as a result of the Fixed Term Parliaments Act, obliged to resign; the announced prorogation of Parliament, and Parliament’s subsequent passage of legislation which members of the government suggested might be prevented from getting onto the statute book (in fact it wasn’t) and whose stipulations members of the government now suggest might be evaded or ignored; successive legal judgments on the prorogation by English, Scottish and Northern Irish courts, and most recently the verdict given by the Supreme Court. There is nothing insignificant about any of these matters: some of them are very serious indeed. But when our great popular complaint is that ‘Boris Johnson lied to the queen,’ it seems to me that what Brexit has done is evacuate politics from its proper place. For if politics is the art of the possible, Brexit as understood by Johnson, Farage and co. is a travesty of the possible. As a cultural project it may yet find a political vehicle (which could be Johnson’s Tory Party or Farage’s Brexit Party) but as a strictly political one it cannot be delivered in its pure form and cannot be agreed on: so it has created administrative, procedural, legal gridlock.
Read the full article in the London Review of Books.
The conservative black nationalism of Clarence Thomas
Corey Robin & Joshua Cohen, Boston Review, 24 September 2019
JC: So, he was a black nationalist first, and his conservatism gets layered in. How, then, would you describe the substance of his nationalism? There are many traditions of black nationalism. What is Thomas’s black nationalism about?
CR: Let me start with the historiography and scholarship on black nationalism that I am drawing on. I was influenced by a couple of books, including Tommie Shelby’s We Who Are Dark (2007). The two traditional emblems of black nationalism are a cultural unity of black people—some shared cultural ethos—and territorial self-determination. Shelby makes a strong case that those two dimensions are important, but often are used in a tactical or pragmatic way to advance the larger program of black solidarity and black self-identification.
Shelby is helpful in understanding Thomas, especially because Thomas becomes politicized in 1968 in the wake of the assassination of Martin Luther King. At that moment, by his own report, Thomas has a realization that nobody is going to do anything for black people. And by nobody, he means white liberals and white leftists.
Thomas is then in college at Holy Cross in Worcester, not far outside of Boston, and he begins thinking about modes of self-organization that won’t be dependent upon white people. So, there’s a very strong consciousness in Thomas that emphasizes the independent and separate organization of black people, that offers a critique of integration and that is suspicious of bringing together the races. There’s a belief in separatism as a necessary condition of black improvement.
There are two other elements as well. One is a belief in black self-defense, including the instruments of violence. This plays a role in his second amendment jurisprudence. And second is his valorization of black men as the saviors and protectors of the black community, which you see a lot in black nationalist thought in the late 1960s.
So, all these pieces are there in his early life, and they continue as he makes his journey to the right in the mid 1970s. Some additional elements come in, but those are the building blocks of his black nationalism—all in place before his conservative turn…
JC: One way to think about Thomas is: this is what a black nationalist project looks like in a world in which you’ve accepted the political defeat of black nationalism.
CR: I think that’s right. I did a panel with Brandon Terry, and he characterized it in a similar way: this is black nationalism in the wake of a great defeat. You have a very strong sense of racial despair and an independent racial consciousness, but the political tools just don’t seem to be there anymore.
Read the full article in the Boston Review.
The painful truth about pain
Travis N. Rieder, Nature, 11 September 2019
On June 16, 2015, I woke up in agony. The day before, I had been in surgery for nearly nine hours, while three different teams of specialists tried to save my foot by carving muscle, fat, skin, artery and nerve out of my thigh and using it to patch together my shattered lower extremity. It was my fifth surgery following a motorcycle accident the month before, and it left me reeling with fiery, electric, boiling pain.
I begged nurses and doctors to increase my pain medication. When they moved too slowly for my liking, I stopped being quite as compliant a patient as I tend to be. I needed more drugs.
I now think of what happened that day as representing the central problem of pain medicine in the United States right now. It’s often said that overprescribing is the root cause of our troubles with opioids. But people experiencing severe pain will be quick to tell you that fear of opioids now leads to underprescribing, and that they are left to deal with untreated pain. What I would eventually realize is that you find both going on all the time: some clinicians aggressively prescribe opioids without good evidence, whereas others withhold opioids out of fear. It is the worst of both worlds.
On that day in 2015, as my requests for more medication became more obnoxious, the intensive-care doctor tending to me regarded me with suspicion and disdain. She responded to my complaints with a curt comment that my request for more medication had been noted. And yet, when I managed to get my plastic surgeon to call in a pain-management consultation, things went very differently. That team medicated me into oblivion, without careful counselling or follow-up — as a result, I formed a dependence on opioids. That dependence, and the withdrawal it eventually precipitated, would come to define my health-care experience, and eventually lead to my scholarly interest in the ethics of pain medicine.
This recognition that clinicians are both overprescribing opioids and undertreating pain is crucial, because it makes it clear that there will be no simple solution to the problem of prescription opioids. We cannot go from the claim that a surplus of prescription opioids helped to spark today’s overdose crisis to the conclusion that we therefore must reduce prescribing. We need to reduce prescribing in the right ways: limit opioids when they really are surplus, but prescribe them when they are the appropriate treatment. This might sound obvious, but think for a moment about what it would take to follow this advice — what it would take, that is, to engage in responsible opioid prescribing.
Read the full article in Nature.
Sex on the brain
Kevin Mitchell, Aeon, 25 September 2019
In a 2015 study that has given rise to the ‘mosaic brain’ hypothesis, the psychologist Daphna Joel at Tel Aviv University and colleagues analysed brain scans from more than 1,400 people, looking for regions of the brain where there was a statistically significant difference in volume between the sexes. They found 10 regions showing such differences, some larger in males, some in females. On the face of it, their findings seemed to support the idea that male and female brains are structurally distinct. However, each of the 10 regions under scrutiny varies in volume across individuals anyway, with the distribution simply shifted slightly higher or lower in one sex relative to the other. Joel’s team found that very few individuals showed extreme ‘male’ or ‘female’ values for all 10 regions; instead, most showed a pattern of values falling mainly in the overlapping zones, with only a general trend towards one end or the other.
The authors concluded that the brains of males and females are not categorically distinct. In other words, there is no such thing as a ‘male brain’ or a ‘female brain’. Rather, they suggest that each individual’s brain is a ‘mosaic’ of masculinised and feminised regions, the implication being that we should not expect biologically driven sex differences in behaviour. Yet, within months, multiple other researchers showed that the same data could very reliably be used to categorise individual brains as male or female. While the volume of any individual area is a terrible predictor of sex, a multivariate analysis gives very good discrimination. On this reading, the brains of males and females are not dimorphic, with two completely different forms, like genitalia; instead, they show a correlated set of shifts in the size of various features, similar to what is observed for male and female faces, which are also readily distinguished.
Another neuroimaging study that drew media attention for the contrary readings it spawned was undertaken in 2014 by the neuroscientist Madhura Ingalhalikar and colleagues at the University of Pennsylvania. They measured the connections between brain regions, and found some sex differences in organisation, with females tending to have more connections between the two hemispheres, and males having slightly more running front-to-back within each hemisphere. The data seemed pretty robust, and fit with prior findings of greater cross-hemispheric connectivity in females. Still, the authors were criticised for how they interpreted the findings. They speculated – rather freely – that ‘male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes’. In the press release for their paper, they claimed that the differences could explain why ‘men are more likely better at learning and performing a single task at hand, like cycling or navigating directions, whereas women have superior memory and social cognition skills, making them more equipped for multitasking and creating solutions that work for a group’.
In the absence of any causal link between the observed differences in brain structure and those in behaviour, such claims are purely speculative. Nor were the chosen examples of supposed sex differences in behaviour particularly convincing (are men really psychologically more suited to cycling?). Claims like these rely on unsupported inferences of there being close links between the size of bits of the brain and performance of complex human behaviours.
Read the full article in Aeon.
The knotty question of when humans made the Americas home
Megan Gannon, Sapiens, 4 September 2019
Humans have long found comfort on Calvert Island, just off the coast of mainland British Columbia. For millennia, they have climbed the island’s rocky outcrops, walked through its rainy conifer forests, and waded through its chilly intertidal pools to collect crabs, mussels, and other marine life.
There, in 2014, a group of Canadian researchers uncovered human footprints pressed into a prehistoric layer of soil. The footprints, 29 in total, are the oldest found in North America. They suggest an intimate scene in which, 13,000 years ago, at least three people may have hopped out of a boat onto the damp shore. One person appears to have slipped as the group walked toward drier land. The footprints also speak to a much larger and contested story – the tale of the humans who first set foot in North America.
North and South America were relatively lonely places for our species 13,000 years ago. The continents were the last major landmasses in the world to be populated by Homo sapiens. But the explanation of how and when this peopling happened has needed to be heavily revised in the last two decades.
‘This field is bonkers right now,’ says anthropological geneticist Jennifer Raff of the University of Kansas. ‘I think there’s a new important paper coming out every three or four months.’ Indeed, no tidy, new framework has arisen to take the place of older theories. Instead, new data, including genetic findings, continue to complicate the story of how these continents came to be peopled.
As San Diego State University archaeologist Todd Braje puts it, ‘We know less … about the peopling of the New World now than we did 20 years ago.’ (Or, as Raff puts it, we know more but are less united in a single consensus model.)
Read the full article in Sapiens.
Harold Bloom, a prolific giant and perhaps the last of a kind
Dwight Garner, New York Times, 15 October 2019
It was impossible to read deeply in Bloom without him flooring you with feeling. ‘Walt Whitman,’ he wrote, ‘overwhelms me, possesses me, as only a few others — Dante, Shakespeare, Milton — consistently flood my entire being.’ In today’s world, there is competition to be more concerned than anyone else. In Bloom’s, there was competition to be the most exactingly delighted. There were more arrows in him, aesthetically, than in St. Sebastian.
He read like a man picking up crumbs with a moistened index finger. He often considered loneliness in literature. You felt he was attracted to loneliness as a theme for the same reasons that Ishmael, in ‘Moby-Dick,’ liked to join funeral processions. It made him feel more open, invigorated and alive.
Bloom’s most important book, ‘The Anxiety of Influence,’ remains a touchstone. He wrote it quickly, after a personal crisis. He investigated the way poets and other writers struggle to create without being smothered by the work of those who made them want to write in the first place.
‘The Anxiety of Influence’ is among those fortunate books in which thesis is embedded in title. The book’s ideas may be complicated and heavy, but there’s a simple handle with which to pick them up.
The title of another of his major books, ‘The Western Canon,’ is also a kind of planted flag. That book examined the work of 26 writers, largely but not entirely male and white — the list included Jane Austen, Emily Dickinson, George Eliot, Virginia Woolf, Jorge Luis Borges, Pablo Neruda and Fernando Pessoa — whom Bloom saw as particularly sublime, and pivotal to our understanding of what it means to be sentient.
It was also an attack, from a crenelated embankment, on what he called the ‘School of Resentment’ — critics and scholars he had previously described, in a 1991 Paris Review interview, as ‘displaced social workers’ and ‘a rabblement of lemmings.’
‘Literature is not an instrument of social change or an instrument of social reform,’ he said in that same interview. ‘It is more a mode of human sensations and impressions, which do not reduce very well to societal rules or forms.’
Read the full article in the New York Times.
The painting of Notre Dame is by Amrita Sher-Gil; the painting of steel-pouring is one of Jacob Lawrence’s Great Migration series (I published the full series in six parts, 1, 2, 3, 4, 5 & 6); the image of Clarence Thomas is by Jose R. Lopez from the New York Times.