The latest (somewhat random) collection of essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.
Aura Bogado, Reveal News, 18 February 2020
A massive Department of Justice seal towers over the bench, flanked by giant windows that allow a glimpse of the downtown skyline. At one table, an attorney representing Immigration and Customs Enforcement faces the judge. Every 10 minutes or so, a new young client makes their way around the table, ready to face the full brunt of the U.S. immigration system. Not one is here with an adult family member. Each time, an attorney steps forward to represent them. Sometimes it’s the same attorney for several clients in a row. The room feels prim, almost quaint, dissonant for a space in which each decision can mean the difference between life and death.
On this cold afternoon in January, there is one girl in particular I’ve come here to see.
The girl, now 17, has been in U.S. immigration custody since she was 10 years old. Since presenting herself at the border and seeking asylum in late 2013, she has been separated from her family, shuttled back and forth between shelters and foster homes across the United States, from Oregon to Massachusetts to Texas to Florida, and back to Texas and Oregon again, from what I’ve been able to piece together.
She’s become a long-term resident of what’s supposed to be a short-term system. I wonder if she’s ever had a friend for more than a few months, if she’s gotten a real education, if she’s learned to speak English. I wonder when she last got a hug from anyone who loves her.
What I do know is that, after all these years, she wants out. She’s come to court today to try and deport herself from the United States…
The entire proceeding lasts around 20 minutes. At the end, Judge Zanfardino gives the 17-year-old what she came here for.
‘You’ve been granted the voluntary departure request that your attorney filed on your behalf,’ he tells her.
The girl is clearly elated. She’s grinning from ear to ear as she stands and turns to face the two rows of seats behind her, which are dotted with other children here for their own hearings. Then she steps outside of the courtroom to confer with her attorney.
But there is something the girl doesn’t know, that I’ve just recently learned. She still has a family in the United States, and they want her home. It’s unlikely that I’ll get to talk with the girl, but her family has given me a message for her just in case. An hour or so after the judge’s ruling, in the court’s elevator lobby, I manage to hand the girl a few pieces of paper, just as her chaperone is rushing her away from me.
‘Take them’, I tell her in Spanish. She does.
Among the papers is a photograph of the relatives she hasn’t seen in more than six years. She sees the photo and makes a hard stop.
‘Son ellas’, she says to her chaperone. It’s them – her family.
Read the full article in Reveal News.
Slouching towards dystopia: the rise of surveillance capitalism and the death of privacy
John Naughton, New Statesman, 26 February 2020
The dynamic interactions between human nature and this toxic business model lie at the heart of what has happened with social media. The key commodity is data derived from close surveillance of everything that users do when they use these companies’ services. Therefore, the overwhelming priority for the algorithms that curate users’ social media feeds is to maximise ‘user engagement’ – the time spent on them – and it turns out that misinformation, trolling, lies, hate-speech, extremism and other triggers of outrage seem to achieve that goal better than more innocuous stuff. Another engagement maximiser is clickbait – headlines that intrigue but leave out a key piece of information. (‘She lied all her life. Guess what happened the one time she told the truth!’) In that sense, social media and many smartphone apps are essentially fuelled by dopamine – the chemical that ferries information between neurons in our brains, and is released when we do things that give us pleasure and satisfaction.
The bottom line is this: while social media users are essential for surveillance capitalism, they are not its paying customers: that role is reserved for advertisers. So the relationship of platform to user is essentially manipulative: he or she has to be encouraged to produce as much behavioural surplus as possible.
A key indicator of this asymmetry is the End User Licence Agreement (EULA) that users are required to accept before they can access the service. Most of these ‘contracts’ consist of three coats of prime legal verbiage that no normal human being can understand, and so nobody reads them. To illustrate the point, in June 2014 the security firm F-Secure set up a free WiFi hotspot in the centre of London’s financial district. Buried in the EULA for this ‘free’ service was a ‘Herod clause’: in exchange for the WiFi, ‘the recipient agreed to assign their first born child to us for the duration of eternity’. Six people accepted the terms. In another experiment, a software firm put an offer of an award of $1,000 at the very end of its terms of service, just to see how many would read that far. Four months and 3,000 downloads later, just one person had claimed the offered sum.
Despite this, our legal systems accept the fact that most internet users click ‘Accept’ as confirmation of informed consent, which it clearly is not. It’s really passive acceptance of impotence. Such asymmetric contracts would be laughed out of court in real life but are still apparently sacrosanct in cyberspace.
Read the full article in the New Statesman.
Immigrants as scapegoats
Chris Dillow, Stumbling & Mumbling, 21 February 2020
Several observers believe that insofar as the government’s points-based immigration system reduces immigration (which as Jonathan says is ‘far from certain’), it will do real economic harm. Ian Dunt says it ‘is a bitterly stupid and small-hearted thing to do.’ And Anthony Painter writes:
The scale of the change and its suddenness risk very significant negative impacts. Businesses and public services could struggle to fill vacancies, seasonal businesses could especially suffer, and costs could escalate impacting business and public service viability.
These harms arise because, to the extent that the new policy is based upon economics at all, it rests upon a fallacy – the idea that, as Iain Duncan Smith told The World at One, immigration has a ‘very negative effect on earnings.’
This is plain false. The nearest thing we have to evidence for it is a paper by Stephen Nickell and Jumana Saleheen which estimates (pdf) that a ‘10 percentage point rise in the proportion of immigrants is associated with a 2 percent reduction in pay.’ A 10 percentage point rise is, however, a humungous number; it’s equivalent to over three million workers across the whole economy. In reality, their estimate implies only a tiny actual effect on the earnings of the low-skilled. As Jonathan says, it suggests that:
the impact of migration on the wages of the UK-born in this sector since 2004 has been about 1 per cent, over a period of 8 years. With average wages in this sector of about £8 an hour, that amounts to a reduction in annual pay rises of about a penny an hour.
This tallies with the general consensus, that migration has little impact on the wages of natives. A survey by the Migration Advisory Committee concluded (pdf):
Migrants have no or little impact on the overall employment and unemployment outcomes of the UK-born workforce…Migration is not a major determinant of the wages of UK-born workers. We found some evidence suggesting that lower-skilled workers face a negative impact while higher-skilled workers benefit, however the magnitude of the impacts are generally small.
This is consistent with my chart. It shows that the share of wages in GDP is higher now than it was in the mid-90s – despite a near-trebling in the number of foreign-born workers during this time.
It’s not just in the UK that migrants don’t depress wages much. Studies of big waves of immigrants in other countries have found a similar thing. For example, the mass migration of Europeans to the US in the 1920s (pdf) ‘had a positive and significant effect on natives’ employment and occupational standing, as well as on economic activity.’ And a study (pdf) of Russians’ migration to Israel after the collapse of the USSR found that it ‘did not have an adverse impact on native Israeli labour market outcomes.’
The evidence, then, seems clear.
But it is also counter-intuitive. Surely, if you increase the supply of something, its price will fall.
Read the full article in Stumbling & Mumbling.
Rebottling the Gini: why this headline measure of inequality misses everything that matters
Angus Deaton & Anne Case, Prospect, 27 January 2020
What really, though, is the Gini coefficient, and where does it come from? Corrado Gini was a creative and wide-ranging sociologist, demographer and statistician—but not the sort of human being who is remembered fondly. An eminent Italian economist told one of us that in some circles, no one speaks his name, instead referring to him as ‘Dr G,’ because it is supposedly unlucky to utter his name. Rather like performers who are spooked by Macbeth and will only refer to the ‘Scottish play,’ some Italian statisticians would seem to be blighted by the curse of Gini.
Dr G published his work on inequality in Italian in 1912. His idea was to assess it by looking at the average difference between every pair of people in the population, measured in terms of mean income. If we both have the same, the difference between us is obviously nothing—that is, zero lots of mean income. But if I have everything, and you have nothing, I have twice the mean, and the difference between us is twice mean income, or 2. To fix the index so that it always lies between 0—perfect equality—and 1—the complete inequality of this scenario, we then have to halve this total. That’s essentially it, but the number of two-way comparisons rises rapidly with the population, because each person has to be compared to every other person. With three people, there are three comparison pairs that we tot up differences across, average and then halve; with four people, there are six such pairs. And so on.
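The pairwise construction described above is straightforward to sketch in code. The following is a minimal illustration, not taken from the article: the function name and its details are my own, but it follows the same recipe — average the absolute difference across every pair of people, express it in units of mean income, then halve it so the index runs from 0 (perfect equality) to 1 (one person has everything).

```python
from itertools import combinations


def gini(incomes):
    """Gini coefficient via the original pairwise construction:
    the average absolute difference across all pairs of people,
    expressed as a multiple of mean income, then halved."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Every person is compared with every other person once:
    # n*(n-1)/2 pairs, so the work grows rapidly with the population.
    pairs = list(combinations(incomes, 2))
    avg_diff = sum(abs(a - b) for a, b in pairs) / len(pairs)
    return avg_diff / (2 * mean)
```

With two people where one has everything, the difference is twice the mean, and halving gives exactly 1, matching the worked example in the text; four identical incomes give 0.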
You may be beginning to see the allure of the Gini. It offers a single calculation for a single summary statistic which, with its 0 to 1 scale, seems as easy to read as a percentage, and which—to the uninitiated—appears to tell you all you need to know about who is getting what income. It is flexible, too. It can be applied not only to income—about 0.35 for Britain, and about 0.47 for the US—but also to wealth, where the Gini is much larger, because differences in income persist and are cumulated into wealth over the course of life and over time by family dynasties. Indeed, it can also be used to describe the spread of anything else—weight, intelligence, or your number of social media followers—that is unequally distributed, not that you would know that from an ‘inequality debate’ that fixates on the finances alone.
The first time Gini introduced his measure in English was in a comment on a 1920 paper by the British economist Hugh Dalton in the Economic Journal where Dalton had introduced the ‘principle of transfers,’ that social welfare would improve whenever resources were transferred from a richer to a poorer person (provided that the transfer did not switch their positions). Dalton went on to become Chancellor of the Exchequer in 1945, in a government that made great steps to implement progressive transfers by laying the foundations of the welfare state.
Read the full article in Prospect.
‘The intelligence coup of the century’
Greg Miller, Washington Post, 11 February 2020
For more than half a century, governments all over the world trusted a single company to keep the communications of their spies, soldiers and diplomats secret.
The company, Crypto AG, got its first break with a contract to build code-making machines for U.S. troops during World War II. Flush with cash, it became a dominant maker of encryption devices for decades, navigating waves of technology from mechanical gears to electronic circuits and, finally, silicon chips and software.
The Swiss firm made millions of dollars selling equipment to more than 120 countries well into the 21st century. Its clients included Iran, military juntas in Latin America, nuclear rivals India and Pakistan, and even the Vatican.
But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company’s devices so they could easily break the codes that countries used to send encrypted messages.
The decades-long arrangement, among the most closely guarded secrets of the Cold War, is laid bare in a classified, comprehensive CIA history of the operation obtained by The Washington Post and ZDF, a German public broadcaster, in a joint reporting project.
The account identifies the CIA officers who ran the program and the company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes how the United States and its allies exploited other nations’ gullibility for years, taking their money and stealing their secrets.
The operation, known first by the code name ‘Thesaurus’ and later ‘Rubicon,’ ranks among the most audacious in CIA history.
‘It was the intelligence coup of the century,’ the CIA report concludes. ‘Foreign governments were paying good money to the U.S. and West Germany for the privilege of having their most secret communications read by at least two (and possibly as many as five or six) foreign countries.’
From 1970 on, the CIA and its code-breaking sibling, the National Security Agency, controlled nearly every aspect of Crypto’s operations — presiding with their German partners over hiring decisions, designing its technology, sabotaging its algorithms and directing its sales targets.
Read the full article in the Washington Post.
Why faces don’t always tell the truth about feelings
Douglas Heaven, Nature, 26 February 2020
Human faces pop up on a screen, hundreds of them, one after another. Some have their eyes stretched wide, others show lips clenched. Some have eyes squeezed shut, cheeks lifted and mouths agape. For each one, you must answer this simple question: is this the face of someone having an orgasm or experiencing sudden pain?
Psychologist Rachael Jack and her colleagues recruited 80 people to take this test as part of a study in 2018. The team, at the University of Glasgow, UK, enlisted participants from Western and East Asian cultures to explore a long-standing and highly charged question: do facial expressions reliably communicate emotions?
Researchers have been asking people what emotions they perceive in faces for decades. They have questioned adults and children in different countries and Indigenous populations in remote parts of the world. Influential observations in the 1960s and 1970s by US psychologist Paul Ekman suggested that, around the world, humans could reliably infer emotional states from expressions on faces — implying that emotional expressions are universal.
These ideas stood largely unchallenged for a generation. But a new cohort of psychologists and cognitive scientists has been revisiting those data and questioning the conclusions. Many researchers now think that the picture is a lot more complicated, and that facial expressions vary widely between contexts and cultures. Jack’s study, for instance, found that although Westerners and East Asians had similar concepts of how faces display pain, they had different ideas about expressions of pleasure.
Researchers are increasingly split over the validity of Ekman’s conclusions. But the debate hasn’t stopped companies and governments accepting his assertion that the face is an emotion oracle — and using it in ways that are affecting people’s lives. In many legal systems in the West, for example, reading the emotions of a defendant forms part of a fair trial. As US Supreme Court judge Anthony Kennedy wrote in 1992, doing so is necessary to ‘know the heart and mind of the offender’.
Decoding emotions is also at the core of a controversial training programme designed by Ekman for the US Transportation Security Administration (TSA) and introduced in 2007. The programme, called SPOT (Screening Passengers by Observation Techniques), was created to teach TSA personnel how to monitor passengers for dozens of potentially suspicious signs that can indicate stress, deception or fear. But it has been widely criticized by scientists, members of the US Congress and organizations such as the American Civil Liberties Union for being inaccurate and racially biased.
Read the full article in Nature.
What do we owe the dead?
Iskra Fileva, New York Times, 27 January 2020
Not all people who die were good people. Yet there is strong social pressure to pretend they were, at least for a period of time. Exactly how long a period of time that should be has never been made clear.
In almost every culture, posthumous praise immediately after a person’s death is the norm. Except in cases of the most obviously evil figures, it is generally accepted that the hours and days after a death should be a time of remembrance, grief and praise.
I believe this is the norm that was violated by the Washington Post reporter Felicia Sonmez, when — shortly after the death of basketball star Kobe Bryant — she tweeted a 2016 article about the sexual assault charge brought against Bryant in 2003. Sonmez reportedly received a swift and intense backlash on social media. Had she written her tweet a week ago, when Bryant was alive and well, I suspect that she would have faced little backlash.
This norm is so firmly ingrained that almost everyone, not only those as widely admired as Bryant, can count on posthumous praise, and there are both fictional and real cases involving people who faked their own deaths in part to gain the benefit of reading their own obituaries. For instance, in Isaac Asimov’s story ‘Obituary,’ a theoretical physicist named Stebbins, frustrated with his own failure to achieve fame, concocts an elaborate plan to fake his own death in an attempt to get publicity and the benefits of an obituary. Stebbins, though not a particularly agreeable character, is right to expect a eulogy.
In general, we pretend that the recently deceased were good or better than they really were. Why? Perhaps we think that we owe it to the deceased person’s family to show deference. This makes sense in cases where the deceased person is a public figure who, while reviled by segments of the public, was deeply loved by his or her own immediate relatives. But it does not tell us what is going on in a large number of other cases. Many of those seen as unsavory characters by outsiders are regarded as morally deficient by their families as well. I have personal knowledge of cases in which the adult children of an abusive now deceased parent were pressured to pretend the deceased were good mothers and fathers. At least one person I know faced harsh criticism for refusing to comply. This suggests that the norm is so strong, we are willing to force people abused in childhood to hide their pain behind a socially acceptable mask.
Read the full article in the New York Times.
Regina Rini, TLS, 1 February 2020
Over the weekend, governments from America to Singapore began banning the arrival of foreigners who have recently visited China. The British Foreign Office has advised against travel to China and British Airways suspended all Chinese flights. Australia’s Monash University will delay the start of the new semester by a week. In China, the city of Wuhan is under military enforced lockdown, with public transport closed and the streets quiet. All of this is meant to prevent spread of a scary new coronavirus. Across the world, our response to the disease reflects a battle between fear and orderliness. But as reports of suspicion toward Chinese people mount, the question is whether we will allow innocent people to become casualties.
Last week the city of Toronto, where I live, announced Canada’s first confirmed case of coronavirus in a fifty-something man who had just returned from visiting Wuhan. He was kept under isolation in hospital; his wife is afflicted as well, though self-quarantining at home. (This weekend the man seems to have recovered enough to be released.) Toronto has a large Chinese Canadian community, particularly in the suburb of Markham. When I heard that someone had arrived with coronavirus from China, I admit that my first thought was to avoid restaurants in Markham for a bit.
My second thought was that my first was dangerous, probably more dangerous than the disease itself. Casually associating illness and fear with members of our communities will quickly drive us towards prejudice. Already, Chinese Canadians in Toronto are reporting fears of racist stigmatization akin to that provoked by the SARS panic nearly 20 years ago. In Ottawa, 400 km from any known case of coronavirus, a Chinese Canadian woman encountered hospital patients gesturing at her and covering their faces. Of course it is not only Canada: on Wednesday, days before the UK’s first case was confirmed, one man from London told Metro he had witnessed a Chinese woman shoved to the ground and told ‘Go back to your country, we don’t want your virus here’.
Racism and fear of disease have been connected for a very long time. Medieval Europeans blamed the Black Death on Jewish communities, and pogroms followed across Germany and Switzerland. In the modern world, with instantaneous global media and daily transcontinental flights, panic can quickly spread far from any realistic danger. In 2014, at the height of the Ebola crisis, a Kentucky teacher was driven out of her job after returning from a trip to Kenya – though Kenya had no cases of the disease and was 5,000 km from the West African outbreak, further than the distance between London and Toronto.
Fear of deadly disease often leads us to make ignorant mistakes like this. Still, you might think: is it really unreasonable to avoid areas where travellers from Wuhan might turn up, like Chinese neighbourhoods in major international cities? After all, if our governments are blocking travel to and from China, aren’t we being told to keep clear? Of course, overt racism is unacceptable, but isn’t it simple self-preservation to avoid increased risk of exposure?
Read the full article in TLS.
One million Britons will be on zero-hour contracts by end of 2020
Brian Stuart Finlay, The Conversation, 27 February 2020
In a speech at the Labour Party conference in 1995, the then leader of the opposition, Tony Blair, highlighted ‘the need to end zero-hours contracts’ in his bid to stop part-time workers being treated like ‘second-class citizens’. But 20 years later the number of people on zero-hours contracts was almost 700,000 and growing. Today that figure stands at 974,000…
The use of zero-hours contracts has exploded over the past decade. In the figures published by the ONS, just under one million working people are on zero-hours contracts in their ‘main job’. That accounts for 3% of employed people in the UK. Although employment figures seem positive in general, to be classed as employed in the UK you need to work only one hour per week.
In the latest ONS figures, one-third of people on zero-hours contracts are aged between 16 and 24, and 100,000 more women are working this way than in 2018. This could have negative effects on families and young people early on in their career, including financial insecurity and poor mental health.
In academia, increasing casualisation of academic work has been cited as a factor in a dispute taking place at universities across the UK. And zero-hours contracts have been the trigger for industrial action in the hospitality sector in recent years.
Zero-hours contracts are often touted by businesses as being flexible for the likes of students, which might be the case for some employees. But similar flexibility can be achieved with part-time contracts, which would provide the employee with job and financial security.
The Chartered Institute for Personnel Development (CIPD) highlights the importance of providing quality jobs for positive employee well-being. But job quality can also increase workforce productivity, innovation and employee engagement. It can also be linked to reducing absence levels and employee turnover. And, as numerous academics have suggested, job security is a fundamental component of job quality. Zero-hours contracts are not only exploitative but they may also make very little business sense in the long term.
In January 2020, Andy Burnham – the Labour mayor of Greater Manchester – launched the Good Employment Charter, an informal pledge that commits businesses to paying employees more than the minimum wage and banning the use of zero-hours contracts in Manchester. Burnham said the programme was set up to help lift workers out of in-work poverty and provide secure employment.
Unfortunately the Greater Manchester Combined Authority, which is headed by Burnham, does not have the necessary legal powers to force employers to ditch zero-hours contracts. So the informal pledge remains an agreement which could be reneged on.
Read the full article in The Conversation.
Why your brain is not a computer
Matthew Cobb, Guardian, 27 February 2020
The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.
By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.
The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.
Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.
Read the full article in the Guardian.
You may have more Neanderthal DNA than you think
Maya Wei-Haas, National Geographic, 30 January 2020
SOME 60,000 YEARS ago, a wave of early humans ventured out of Africa, spreading to every other corner of the world. These travelers were met by a landscape of hominins vastly different from those they left behind.
Neanderthals roamed the lands across Europe and the Middle East. Their sister group, the Denisovans, spread through Asia. And whenever these groups met, it seems, they mated.
The genetic fingerprints of this mixing remain apparent in many populations today. Roughly two percent of the genomes of Europeans and Asians are Neanderthal. Asians also carry additional Denisovan DNA, up to 6 percent in Melanesians. But African populations seemed to have largely been left out of this genetic shakeup.
Now a study, published this week in Cell, presents a striking find: Modern African populations carry more snippets of Neanderthal DNA than once thought, about a third of the amount the team identified for Europeans and Asians. What’s more, the model suggests that Neanderthal ancestry in Europeans has also been slightly underestimated.
Study author Joshua Akey, a geneticist at Princeton University, was initially incredulous. ‘Well that can’t be right,’ he recalls thinking at the time. But after a year and a half more of rigorous testing, he and his colleagues are convinced of the find. Some 17 million base pairs of African genomes are Neanderthal, the study reveals, which likely come from, in part, the ancestors of modern Europeans traveling back into Africa and carrying bits of Neanderthal DNA in their genomes.
When thinking about these early migrations, Akey says, ‘there’s this idea that people left Africa, and never went back.’ But these new results, along with past studies, underscore that’s not the case. ‘Clearly there’s no one-way bridge there.’
‘It’s a really nice new piece of the puzzle,’ says Janet Kelso, a computational biologist at Germany’s Max Planck Institute for Evolutionary Anthropology, who was not part of the study team. The new model corrects for previous assumptions about Neanderthal mixing, she notes, revealing how much information is likely still lurking within our genes.
‘The emerging picture is that it’s really complicated—no single gene flow, no single migration, lots of contact,’ Kelso says. While exciting, she adds, it also presents an analytical challenge.
Read the full article in National Geographic.
Loren B Landau & Roni Amit,
Africa is a Country, 21 February 2020
Echoing global moves to restrict access to asylum, the South African government has recently limited the right of asylum seekers to work, has narrowed the grounds for claiming asylum, and most controversially, has all but silenced refugees’ political voice. Already prevented from voting in South African elections, refugees will now risk losing their status if they vote or campaign for change in their countries of origin. This holds even for those claiming asylum on grounds of political activism. Should they try it, they would also be at risk for speaking up about conditions in South Africa.
This decision is about far more than the rights of South Africa’s substantial population of refugees and asylum seekers. South Africa’s Constitution promises the country to ‘all who live in it.’ Since the end of apartheid, this has meant legally recognizing the fundamental rights of everyone. Even if constitutional protections have translated poorly into reality for foreigners and South Africans, the principle was sound. Recent changes to the Refugees Act are an assault on this constitutional universalism. What were once guaranteed protections for non-nationals now become privileges.
More alarming is that the government has effectively transformed the Constitution by arbitrary order and regulation, not open parliamentary debate or amendment. Most of these changes do not appear in the amended Refugees Act itself, but in accompanying regulations not vetted through the normal legislative amendment process. If these changes are left unchallenged, the government is likely to continue reshaping the constitutional landscape in other areas.
The recent restrictions on refugees—and the limited protests against them—reflect the degree to which many South Africans see ‘xenophobia’ as a legitimate hatred. Few see it as akin to the racism and exclusion so many South Africans continue to suffer. Ironically, many claiming to ‘decolonize’ the country’s institutions often reinforce its colonial borders. Many accept opposition leader Mangosuthu Buthelezi’s proposition from all those years ago: that immigration spells the end to positive transformation. Rather than condemn the exclusion, citizens and politicians embrace it.
Read the full article in Africa is a Country.
Friends like these
Jon Baskin, The Point, 28 January 2020
‘What I want to make plain,’ wrote Lionel Trilling in a 1947 letter to a friend, ‘is my deep distaste for liberal culture.’ Coming from a purported liberal and the soon-to-be author of The Liberal Imagination, Trilling recognized such a sentiment was ‘difficult to explain.’ He found himself to be ‘in accord with most of the liberal ideas of freedom, tolerance, etc.,’ and yet
the tone in which these ideals are uttered depress[es] me endlessly. I find it wholly debased, downright sniveling, usually quite insincere. It sells everything out in human life in order to gain a few things it can understand as good. It isn’t merely that I believe that our liberal culture doesn’t produce great art and lacks imagination—it is that I think it produces horrible art and has a hideous imagination.
One could read this passage as demonstrating Trilling’s openness to the criticisms of liberalism, and therefore as exemplifying what the well-known contemporary liberal critic Adam Gopnik calls, in his latest book A Thousand Small Sanities: The Moral Adventure of Liberalism (2019), liberalism’s ‘tolerance for difference.’ But that would be to miss, from the perspective of much of what we call liberal cultural criticism today, what is most striking about it. Trilling was not just open to critics of liberalism; he was one. He did not merely tolerate the distaste others expressed for liberalism’s ‘sniveling’ imagination; he felt it himself.
On the basis of reputation alone, it would be possible to think of Gopnik as a figure who is laboring in Trilling’s drift. The author of several books on modern art and culture, Gopnik has been a polymathic fixture in the New Yorker since 1986, when he introduced himself with an essay about the similarities between his two passions: fifteenth-century Italian painting and the Montreal Expos. For Gopnik, as for Trilling, if we want to understand liberalism as a political ‘program’ we have first to understand it as ‘a temperament.’ And for Gopnik, as for Trilling, it is best to look for the inner nature of that temperament not in Hobbes or Locke but in Montaigne, whose ‘undulating and diverse being’ Trilling cited as an ideal for the liberal critic. Montaigne, Gopnik writes, ‘saw, in the late Renaissance, that we are double in ourselves,’ that ‘we condemn the thing we believe and embrace the thing we condemn.’
But while Gopnik voices appreciation for this doubleness, his writing bears little trace of it. In A Thousand Small Sanities, he does not condemn the thing he believes; he embraces it. And that thing is liberalism. On the first page he tells us liberalism has been ‘proven true by history,’ and the remainder of the book is devoted to harassing us into accepting this proof, with fusillades of superlatives when necessary. (Later in the introduction, Gopnik calls liberalism ‘one of the great moral adventures in human history’; a sentence after that, in what I assume will come as a surprise to people of faith, he calls it ‘the most singular spiritual episode in all of human history.’) This is not to say Gopnik is insensible to the perceived shortcomings of liberalism. He would not be a good liberal, he acknowledges, unless he tried, as ‘eloquently’ as he could, to grapple with the arguments against liberal ideas. Close to half of Small Sanities is taken up with his competent reproductions of the traditional lines of anti-liberal attack. But while Gopnik’s liberal commitment to openness may enjoin him to give the criticisms of liberalism a fair hearing, what never seems to occur to him is what Trilling felt viscerally: that the criticisms of liberalism could be true.
Read the full article in The Point.
Black or white, it’s the same old anti-Semitic pathology
Ralph Leonard, Quillette, 7 January 2020
Anti-Semitism among black people, as among everyone else, comes in different guises. Some of it is old-fashioned Christian anti-Semitism. Some of it is political anti-Semitism of the type rooted in the Protocols of the Elders of Zion and other venerable conspiracist hoaxes. And some of it is traceable to more modern leftist movements, which promote anti-Zionism in a way that can blur into classic anti-Semitic tropes that cast Jews as inherently malevolent.
Black nationalists, such as the Nation of Islam and Black Hebrew Israelites, sometimes will integrate the paranoid, pathological aspects of anti-Semitism into a narrative centered on black identity: ‘They’ are the ones who enslaved our ancestors; ‘they’ are the ones who profit from the appropriation of black culture; ‘they’ are the ones who conspired to get Malcolm X killed; and so on. But one could find corresponding claims in other manifestations of anti-Semitism, such as those promoted by white Christian survivalists or Arabs in the Middle East. These parochial details do not serve to define a wholly different species of anti-Semitism. They merely show how age-old conspiracist themes can be adapted into a narrative that suits a particular ethnic or political agenda.
But this boring truth—that anti-Semitism reflects the same basic ideological malignancy, no matter who is spreading it—is apparently unpalatable to some progressives. This includes University of New Hampshire scientist and activist Chanda Prescod-Weinstein, who recently wrote a controversial, widely commented-upon Twitter thread in which she declared that ‘treating violent attacks on Jewish people by Black people like they are equivalent to white antisemitism is intellectually lazy, disingenuous, anti-Black, and dangerous both to non-Jewish Black people and to Jews.’
She added that ‘antisemitism in the United States, historically, is a white Christian problem, and if any Black people have developed antisemitic views it is under the influence of white gentiles,’ and that American Jews are beneficiaries of ‘whiteness,’ which she describes as ‘a fundamentally anti-Black power structure.’ Moreover, ‘Black people become distracted by Jewishness and don’t properly lay blame on whiteness and Jews get attacked in the process. It is a win-win for white supremacy.’
Read the full article in Quillette.
Revolt! Scientists say they’re sick of quantum computing’s hype
Sophia Chen, Wired, 12 December 2019
This spring, a mysterious figure by the name of Quantum Bullshit Detector strolled onto the Twitter scene. Posting anonymously, they began to comment on purported breakthroughs in quantum computing—claims that the technology will speed up artificial intelligence algorithms, manage financial risk at banks, and break all encryption. The account preferred to express its opinions with a single word: ‘Bullshit.’
The provocations perplexed experts in the field. Because of the detector’s familiarity with jargon and the accounts it chose to follow, the person or persons behind the account seemed to be part of the quantum community. Researchers were unaccustomed to such brazen trolling from someone in their own ranks. ‘So far it looks pretty well-calibrated, but […] vigilante justice is a high-risk affair,’ physicist Scott Aaronson wrote on his blog a month after the detector’s debut. People discussed online whether to take the account’s opinions seriously.
‘There is some confusion. Quantum Bullshit Detector cannot debate you. It can only detect quantum bullshit. This is why we are Quantum Bullshit Detector!’ the account tweeted in response.
In the subsequent months, the account has called bullshit on statements in academic journals such as Nature and journalism publications such as Scientific American, Quanta, and yes, an article written by me in WIRED. Google’s so-called quantum supremacy demonstration? Bullshit. Andrew Yang’s tweet about Google’s quantum supremacy demonstration? Bullshit. Quantum computing pioneer Seth Lloyd accepting money from Jeffrey Epstein? Bullshit.
People now tag the detector, @BullshitQuantum, to request its take on specific articles, which the account obliges with an uncomplicated ‘Bullshit’ or sometimes ‘Not bullshit.’ Not everyone celebrates the detector, with one physicist calling the detector ‘ignorant’ and condemning its ‘lack of talent and bad taste’ in response to a negative verdict on his own work. But some find that the account provides a public service in an emerging industry prone to hyperbole. ‘I think it does a good job of highlighting articles that are not well-written,’ says physicist Juani Bermejo-Vega of the University of Granada in Spain.
The anonymous account is a response to growing anxiety in the quantum community, as investment accelerates and hype balloons. Governments in the US, UK, EU, and China have each promised more than $1 billion of investment in quantum computing and related technologies. Each hopes to be the first to harness the technology’s potential—to design better batteries or break an adversary’s encryption system, for example. But these ambitions will likely take decades of work, and some researchers worry that the field cannot deliver on inflated expectations—or worse, that the technology might accidentally make the world a worse place. ‘With more money comes more promises, and more pressure to fulfill those promises, which leads to more exaggerated claims,’ says Bermejo-Vega.
Read the full article in Wired.
Herat’s restored synagogues reveal Afghanistan’s Jewish past
Ruchi Kumar, Al Jazeera, 7 February 2020
The narrow road that leads to the Yu Aw synagogue in the ancient city of Herat in western Afghanistan is lined with traditional mud homes that, despite their rough exterior, are fine examples of centuries-old architectural dexterity.
Ghulam Sakhi, the caretaker of some of the heritage sites, leads the way, taking short quick steps and pausing every so often to share an anecdote from the area’s history.
‘This is why I wanted you to walk to the synagogue, so you can see the neighbourhood and how it has been changed since the war,’ he explained before starting the walk of a little over a kilometre between the old city centre – called Chahar Su, or Four Directions – and the synagogue, which was restored in 2009.
About 350 years old, the Jewish place of worship is located near what used to be known as the Iraq Gate, an area of the city from which travellers and merchants embarked on their journey to Iraq.
The shared Jewish history of the two countries goes further back, to more than 2,700 years ago, when Jewish tribes from the present territory of Palestine-Israel, exiled by Assyrian conquerors, travelled through Iraq to Afghanistan, where they settled and built thriving communities in cities like Herat and the Afghan capital, Kabul.
‘The history of Jews in this region goes back to way before the birth of the nation state of Afghanistan,’ said Afghan academic Omar Sadr, who has also authored the book Negotiating Cultural Diversity in Afghanistan.
‘There are mentions in history of Jews living in this region during the period of Cyrus the Great, the Persian emperor, and his conquest of Babylon in 538. During the Islamic era, the Tajik historian Jowzjani from the 7th century also mentions Jewish colonies under the Ghurid chief Amir Banji, who had recruited Jews as advisers. In more recent history, in 1736, the Persian King Nadir Afshar encouraged Jewish settlement in the region because the Jews had good connections in the merchant routes in the subcontinent – between Central Asia and Arabia,’ he explains.
In Herat, this history is more intimate. ‘This neighbourhood has a deep connection with the Afghan Jewish community; they lived around here, they had shops and businesses. They were butchers, they sold spices and clothes. This is their neighbourhood,’ Sakhi narrates, as he unlocks a heavy iron latch on the thick traditional wooden doors.
Read the full article in Al Jazeera.
Truth decay: when uncertainty is weaponized
Felicity Lawrence, Nature, 3 February 2020
But it would be wrong to see truth decay solely as the preserve of today’s populist politicians. Normalizing the production of alternative facts is a project long in the making. Consultancy firms that specialize in defending products from tobacco to industrial chemicals that harm the public and the environment have made a profession of undermining truth for decades. They hire mercenary scientists to fulfil a crucial role as accessories to their misrepresentations.
Michaels was among the first scientists to identify this denial machine, in his 2008 book Doubt is Their Product. His latest work combines an authoritative synthesis of research on the denial machine published since then with his own new insights gleaned from battles to control the toxic effects of a range of substances. He takes on per- and polyfluoroalkyls, widely used in non-stick coatings, textiles and firefighting foams; the harmful effects of alcohol and sugar; the disputed role of the ubiquitous glyphosate-based pesticides in cancer; and the deadly epidemic of addiction to prescribed opioid painkillers. In each case, Michaels records how the relevant industry has used a toolbox of methods to downplay the risks of its products, spreading disinformation here, hiding evidence of harm there, undermining authorities — all tactics from the tobacco industry’s playbook.
The doubt in the title of both Michaels’s books derives from a now-notorious memo written in 1969 by an unnamed executive at a subsidiary of British American Tobacco. It outlined a strategy for maintaining cigarette sales: ‘Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy.’ By creating scientific disinformation about links between tobacco and disease, this malign strategy delayed regulation by decades and protected corporate profits…
The principles of scientific inquiry involve testing a hypothesis by exploring uncertainty around it until there is a sufficient weight of evidence to reach a reasonable conclusion. Proof can be much longer in coming, and consensus still longer. The product-defence industry subverts these principles, weaponizing the uncertainty inherent in the process. Its tricks include stressing dissent where little remains, cherry-picking data, reanalysing results to reach different conclusions and hiring people prepared to rig methodologies to produce funders’ desired results.
Read the full article in Nature.
Words we live by
James Jiang, Sydney Review of Books, 27 February 2020
Despite the ingenuity and tact of Bromwich’s readings, it is the last chapter, ‘What Are We Allowed to Say?’, that is most likely to draw readers to How Words Make Things Happen. Originally published in the London Review of Books, this polemic against what Bromwich diagnoses as the censoring and self-censoring tendencies in contemporary progressive culture gains in resonance when placed alongside the preceding lectures. The kind of aestheticism that Auden proposes in his Yeats elegy – the kind that ‘espouse[s] the consoling but false hope that verbal art can be an epitome of pure play, an enactment of wonder and pleasure by which we are uniquely humanized’ – tends simultaneously to underrate and overrate the ability of words to make things happen. It underrates this ability by insisting that poetry is merely play and thus ‘makes nothing happen’; but it also overrates it by committing to a consoling fiction about the necessarily humanizing quality of art qua art.
A variant of this aestheticism seems to have stolen into the debates around Rushdie’s Satanic Verses and the Charlie Hebdo cartoons. As Bromwich notes, these works were defended, both erroneously and in bad faith, by Western commentators through ‘a claim for the moral courage and stature of the artist’ rather than through a ‘straightforward political affirmation of press liberty’ that civil liberties activists would have reached for two generations or so ago. The falsifications of this claim for the progressive tendency of art and artists are particularly apparent in the Charlie Hebdo incident, where Rushdie himself defended the cartoonists by affirming satire’s impeccability as ‘a force for liberty and against tyranny, dishonesty and stupidity’. Even the most cursory scan of a Norton Anthology would suggest otherwise; satire, Bromwich observes, ‘may come from the palace as well as the gutter’ and it is as liable to punch down as punch up.
The contradiction whereby the efficacy of words is both over- and underrated also tends to characterise attitudes towards free speech in Western liberal democracies. On the one hand, free speech is brandished at the level of global politics as a ‘banner-slogan’ of Western tolerance and respect for individual rights, though, as Bromwich wryly remarks, ‘the temptation to strut is not altogether unavoided’. On the other hand, in the identitarian brand of campus politics, he detects new and divergent norms of public discourse emerging: what he characterises as an ‘enforced equability’ and ‘a neutral style of rational euphemism’ on the part of speakers; an ‘expanded field for taking offense’ on the part of audiences. For Bromwich, these developments contribute to ‘a new keenness of censorious distrust’ that comes with ‘a built-in suspicion of the outliers in public discussion’.
Read the full article in the Sydney Review of Books.
Neither person nor cadaver
Sharon Kaufman, Aeon, 6 February 2020
What happens when there are two competing definitions of death, confounding our understanding of the end of life? On 9 December 2013, Jahi McMath, a 13-year-old African-American girl living in Oakland, California, entered the hospital for a tonsillectomy, still one of the most common surgical procedures performed on children and often recommended for sleep apnoea, a condition she had been living with.
Hours after the surgery, Jahi began bleeding, and her mother was worried about the amount of blood Jahi was spitting up. As they begged nurses to call a doctor for more aggressive intervention, neither parent felt that anyone was listening. And then, Jahi’s maternal grandmother, a nurse, noticed that the girl’s oxygen saturation had dangerously dropped. Finally, as The New Yorker reported in 2018, doctors connected Jahi to a mechanical ventilator, or breathing machine. Although the machine was breathing for her, oxygenating her tissues, Jahi continued to decline. Three days after her tonsillectomy, hospital physicians declared her brain dead: that is, with irreversible and permanent cessation of the function of the entire brain, including the brainstem. Observing that Jahi was warm and breathing, her mother didn’t accept their assessment, and the family refused to allow the hospital to withdraw the ventilator. For Jahi’s family, perceptions of racial injustice were at the heart of the tragedy. But what they could not see through their anger and shock was that this was just the most recent development in a half-century of effort to clarify the definition of death.
In the years since, Jahi McMath has become a cause célèbre, joining such tragic icons as Karen Ann Quinlan and Terri Marie Schiavo in a movement to take control of the definition and pronouncement of death. These tragic cases, steeped in family pain and political rancour, ask us to consider the true moment of death. Does it arrive, as humans have long held, when the heart stops beating? Or, as modern hospitals and medical science sometimes insist, does it occur earlier, when the brain flatlines, ceasing to produce brainwaves, even as the heart beats on?
In 1968, a committee at Harvard Medical School invented the concept of brain death, adding a second definition to the classic concept of circulatory death – the permanent cessation of breath and heartbeat, and the irreversible loss of function of the heart and lungs. Patients could now be declared dead while a ventilator kept their lungs ‘breathing’ so their tissues and organs would not also ‘die’. The new notion of death opened the door for a growing number of transplant surgeons to harvest organs still bathed in the nurturing warmth of oxygen and blood. With the chance to save the lives of so many sick patients in need of hearts, kidneys and lungs, the concept of brain death was embraced – but not quite.
The ingredients for a perfect storm had coalesced. The long-ago decision of the committee thrust the rest of us into a battle over the contested territory of death, a battle that still rages. The reason is clear: science cannot tell us how to live. Brain death, which bucks millennia of belief, is not an obvious concept; and medicine’s knowledge cannot encompass the meaning that a family imparts to the consciousness, heartbeat, personhood and life of a loved one. These understandings comprised the ethical stakes in the case of Jahi McMath, and reveal a story about how, despite the efforts of ‘science’, two definitions of death live side by side to this day.
Read the full article in Aeon.
Frank Ramsey: A genius by all tests for genius
Cheryl Misak, History News Network, 9 February 2020
It is hard to get our ordinary minds around the achievements of the great Cambridge mathematician, philosopher, and economist Frank Ramsey. He made indelible contributions to as many as seven disciplines: philosophy, economics, pure mathematics, mathematical logic, the foundations of mathematics, probability theory, and decision theory. My book, Frank Ramsey: A Sheer Excess of Powers, tells the story of this remarkable thinker. The subtitle is taken from the words of the Austrian economist Joseph Schumpeter, who described Ramsey as being like a young thoroughbred, frolicking with ideas and champing at the bit out of a sheer excess of powers. Or, as another economist, the Nobel Laureate Paul Samuelson, said: ‘Frank Ramsey was a genius by all tests for genius’.
Ramsey led an interesting life in interesting times. He began his Cambridge undergraduate degree just as the Great War was ending; he was part of the race to be psychoanalyzed in Vienna in the 1920s; he was a core member of the secret Cambridge discussion society, the Apostles, during one of its most vital periods; and he was a member of the Bloomsbury set of writers and artists and of the Guild Socialist movement. He lived his life by Bloomsbury’s open moral codes, and lived it successfully.
The economist John Maynard Keynes identified Ramsey as a major talent when he was a mathematics student at Cambridge in the early 1920s. During his undergraduate days, Ramsey demolished Keynes’ theory of probability and C.H. Douglas’s social credit theory; made a valiant attempt at repairing Bertrand Russell’s Principia Mathematica; and translated Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, writing a critique of it that still stands as one of the most challenging commentaries on that difficult and influential book.
Keynes, in an impressive show of administrative skill and sleight of hand, made the 21-year-old Ramsey a fellow of King’s College at a time when only someone who had studied there could be a fellow. (Ramsey had done his degree at Trinity).
Ramsey validated Keynes’ judgment. In 1926 he was the first to figure out how to define probability subjectively, and he invented the expected utility theory that underpins much of contemporary economics. Beginning with the idea that a belief involves a disposition to act, he devised a way of measuring belief by looking at action in betting contexts. But while Ramsey provided us with a logic of partial belief, he would have hated the direction in which it has been taken. Today the theory is often employed by those who want to understand decisions by studying mathematical models of conflict and cooperation between rational and self-interested choosers. Ramsey clearly believed it was a mistake to think that people are ideally rational and essentially selfish. He also would have loathed those who used his results to argue that the best economy is one generated by the decisions of individuals, with minimal government intrusion. He was a socialist who favored government intervention to help the disadvantaged in society.
Read the full article in History News Network.
Gershwin: The quest for an American sound
Anna Harwell Celenza, TLS, February 2020
Porgy and Bess is a world away from Blue Monday. Gershwin spent several months on the South Carolina coast, studying the music and culture of the Gullah community (formerly enslaved African Americans living on the barrier islands near Charleston). As Gershwin explained in letters home, he embraced the culture he encountered in the South, but never fully felt a part of it. Consequently, the music he composed for Porgy and Bess was not a transcription of what he heard, but rather an evocation of the cultural encounter. As he later explained to a reporter:
When I first began to work on the music I decided against the use of original folk material because I wanted the music to be all of one piece. Therefore, I wrote my own spirituals and folk songs. But they are still folk music – and therefore, being an operatic form, Porgy and Bess becomes a folk opera.
Gershwin died at the age of thirty-eight, in 1937. Yet his music continues to hold a place in the American consciousness. Ever since the premiere of Rhapsody in Blue, his works have been linked to jazz – some even call Porgy and Bess a ‘jazz opera’ because of its African American characters. Gershwin was not a jazz musician, and the works he composed were not jazz. Nonetheless, after his death, in the hands of musicians including Charlie Parker, Miles Davis and Ella Fitzgerald, songs such as ‘I Got Rhythm’ served as jumping-off points for important jazz movements like bebop.
All of this is to say that Gershwin’s compositions do not fit easily into a single musical category. They are shifting entities, whose content, orchestration, performance style and cultural significance continue to change from one generation to the next. This is especially true of the works that engage with African American culture. Gershwin’s depictions of race – be it in masterworks like Rhapsody in Blue and Porgy and Bess, or ‘blackface’ numbers like ‘Swanee’ – are relevant to our understanding of the American soundscape. Their creation and reception reveal much about the complexities behind Gershwin’s development as a composer, and the continued development of the United States as a nation.
Read the full article in TLS.
Who really killed Malcolm X?
John Leland, New York Times, 6 February 2020
For more than half a century, scholars have maintained that prosecutors convicted the wrong men in the assassination of Malcolm X.
Now, 55 years after that bloody afternoon in February 1965, the Manhattan district attorney’s office is reviewing whether to reinvestigate the murder.
Some new evidence comes from a six-part documentary called ‘Who Killed Malcolm X?,’ streaming on Netflix Feb. 7, which posits that two of the men convicted could not have been at the scene that day.
Instead it points the finger at four members of a Nation of Islam mosque in Newark, N.J., depicting their involvement as an open secret in their city. One even appeared in a 2010 campaign ad for then-Newark mayor Cory Booker.
‘What got us hooked,’ said Rachel Dretzin, a director of the documentary along with Phil Bertelsen, ‘was the notion that the likely shotgun assassin of Malcolm X was living in plain sight in Newark, and that many people knew of his involvement, and he was uninvestigated, unprosecuted, unquestioned.’
The case has long tempted scholars, who see a conspiracy hidden in unreleased government documents. A detective on the case, Anthony V. Bouza, wrote flatly a few years ago, ‘The investigation was botched.’
Yet it has never sparked the widespread obsessive interest of the Kennedy assassination or the equally brazen killing of Tupac Shakur. Attempts to reopen the case — to uncover the possible roles of the F.B.I., New York Police Department and the Nation of Islam leadership, including Louis Farrakhan — have gotten nowhere.
‘The vast majority of white opinion at that time was that this was black-on-black crime, and maybe black-extremist-on-black-extremist crime,’ said David Garrow, a Pulitzer Prize-winning civil rights historian. ‘And there was for decades a consensus in black communities that we are not going to pick up that rock to see what’s underneath it.’
Read the full article in the New York Times.
The seriousness of George Steiner
Adam Gopnik, New Yorker, 5 February 2020
The word ‘awesome’ is most easily used by adolescents these days, but the range of learning that the critic and novelist George Steiner possessed was awesome in the old-fashioned, grown-up sense: truly, genuinely awe-inspiring. Steiner, who died on Monday, at the age of ninety, knew modern languages, ancient languages, classical literature, and modern literature. He had memorized the rhymes of Racine and he could elucidate the puns in Joyce and he could tell you why both were, in his thorny but not cheaply won view, superior to the prolixities of Shakespeare. He was what many people call a human encyclopedia—not in the American sense, a blank vault of facts, but in the French Enlightenment one: a critical repository of significant knowledge. His long book reviews for this magazine, written over thirty years, from 1966 to 1997, were dotted with allusions of the kind that a naturally horizontal thinker couldn’t help but include. But they were never imposed or forced—his mind truly, on its way to Borges, passed through Sophocles and stopped for a moment to take in the view at Heidegger. Steiner was a lifelong traveller of those routes. ‘Pretentious,’ though a word journalists sometimes used to describe him, was the last thing he ever was. He was never pretending. He was a humanities faculty in himself, an academy of one.
And how many and how wide his subjects were: Lévi-Strauss, Cellini, Bernhard, Chardin, Mandelstam, Kafka, Cardinal Newman, Verdi, Gogol, Borges, Brecht, Wittgenstein, Montale, Liszt, Koestler, the linguistics of Noam Chomsky, and the connoisseurship (and craven Stalinism) of Anthony Blunt. (And that’s mostly one collection.) He was not, to be sure, a High and Low guy; he did not cheerfully follow up his essay on Lévi-Strauss’s conception of the raw and the cooked with another essay filled with recipes on how to cook the raw. But that was not the moral manner of his generation; born in 1929, he was of the High and Higher and ever Higher kind, the kind who passionately believed, however fragile the belief might seem, in the power of serious art to redeem life.
Though not, to be sure, to redeem the world. Steiner’s seriousness was significantly disrupted by the Holocaust, which he understood to be the central event of modern times. (His family had fled Vienna shortly before the worst began.) It was part of the genuine, and not merely patrician, seriousness of his view to see the war years as a fundamental rupture not just in history but in our faith in culture: educated people did those things to other educated people. It was not ignorant armies clashing by night that shivered George Steiner’s soul; it was intelligent Germans who listened to Schubert murdering educated Jews who had trusted in Goethe, and by the train load. This recognition of the limits of culture to change the world was the limiting condition on his love of literature, and it was what gave that love a darker and more tragic cast than any mere proselytizing for ‘great books’ could supply.
Read the full article in the New Yorker.
The radical lives of abolitionists
Britt Rusert, Boston Review, 20 January 2020
American Radicals establishes the truly riotous nature of nineteenth-century activism, chronicling the central role that radical social movements played in shaping U.S. life, politics, and culture. Holly Jackson’s cast of characters includes everyone from millenarian militants and agrarian anarchists to abolitionist feminists espousing Free Love. Rather than rehearsing nineteenth-century reform as a history of bourgeois abolitionists having tea and organizing anti-slavery bazaars for their friends, Jackson offers electrifying accounts of Boston freedom fighters locking down courthouses and brawling with the police. We learn of preachers concealing guns in crates of Bibles and sending them off to abolitionists battling the expansion of slavery in the Midwest. We glimpse nominally free black communities forming secret mutual aid networks and arming themselves in preparation for a coming confrontation with the state. And we find that antebellum activists were also free lovers who experimented with unconventional and queer relationships while fighting against the institution of marriage and gendered subjugation. Traversing the nineteenth-century history of countless ‘strikes, raids, rallies, boycotts, secret councils, [and] hidden weapons,’ American Radicals is a study of highly organized attempts to bring down a racist, heteropatriarchal settler state—and of winning, for a time.
Jackson illuminates how the creative and performative qualities of nineteenth-century public protest sought to interrupt the status quo. When, in 1854, 50,000 people showed up in Boston to protest the return of fugitive slave Anthony Burns to slavery, an act authorized by the passage of the Fugitive Slave Act in 1850, protestors staged an elaborate funeral for democracy: black crepe adorned the street, a huge U.S. flag was hung upside down, and a coffin labeled ‘Liberty’ was hung out of a building while the crowd below shouted ‘Shame!’ at federal troops deployed to transport Burns back to the South. Then, as now, the threatening spectacle of both police and military was marshalled as a ‘bellicose display’ intended to intimidate the massive political—and creative—energy of protestors who dared to question the nation’s daily acts of anti-democratic violence and violation of its own founding documents.
The antebellum United States was a deeply unstable formation, suffused with the symbolic and physical traces of the Revolutionary War and, government officials feared, teetering on the verge of anarchy. Reframing the nineteenth-century United States as a war society, Jackson helps us to see social movements—from abolitionism and labor to feminism and early environmental activism—as a continuation of the Revolutionary War by other means. In other words, the militancy of the American Revolution lived on in the many factions and revolts that fomented among the nation’s multitude.
Read the full article in the Boston Review.
Ruth Glass: Beyond ‘gentrification’
Divya Subramanian, NYR Daily, 20 January 2020
‘Once this process of ‘gentrification’ starts in a district,’ Glass wrote, in the first recorded use of the term, ‘it goes on until all or most of the original working-class occupiers are displaced, and the whole social character of the district is changed.’ Glass attributed these changes to increasing state support of private real estate development, as well as the relaxing of rent control laws. While the culture of postwar affluence had blurred the old social distinctions, she argued, a new divide was fracturing society.
Glass’s description of gentrification would transcend its origins, becoming shorthand for the spatialization of class struggle. Articles on gentrification inevitably invoke her name before turning to Spike Lee’s Do The Right Thing (1989), which chronicled the early stirrings of gentrification in Bedford-Stuyvesant, Brooklyn; warehouses-turned-condos; and the rise of NIMBYism. The reembourgeoisement of the city is arguably the prevailing dynamic of twenty-first century urbanism. Yet gentrification was just one aspect of Glass’s work—work that was informed by her fight for social justice.
When London: Aspects of Change was first published in 1964, Ruth Glass was fifty-two years old. The author photo that accompanied the book depicts a younger Glass, somewhere in her early thirties. She is wearing heavy enameled earrings and her dark hair is pulled back. She looks strikingly glamorous, but her expression contains the hint of a smirk. The photograph suggests a woman with absolute confidence in her abilities.
Glass was part of a wave of Jewish intellectuals that fled Nazi Germany for Britain in the 1930s. Born Ruth Lazarus in 1912, she began her career as a teenage journalist for a radical paper in Weimar Berlin, mingling with young intellectuals at the bohemian Romanisches Café and studying at the University of Berlin. She left Germany in 1932, just before the Nazis came to power, and studied in Prague and then London, where she finished her degree at the London School of Economics.
She was young, foreign, a committed Marxist, and, for a brief period between her marriages to Henry Durant and David Glass, respectively, a divorcée. She was also a pioneer in her field. Her first major publication, Watling: A Study of Social Life on a New Housing Estate (1939), published when she was twenty-seven, examined ‘associational life’ on a London County Council estate, whose residents had been relocated from the cramped living conditions of the East End. From 1948 to 1950, Glass worked for the Ministry of Town and Country Planning, conducting research on New Towns—planned communities intended to rehouse metropolitan populations in better living conditions—before returning to academia. Her later work on community life in the East London neighborhood of Bethnal Green, coauthored with the sociologist Maureen Frenkel, led to a flurry of academic interest in the area, culminating in the founding of the Institute of Community Studies in 1954. Led by the pioneering sociologist Michael Young (himself later known for popularizing a famous coinage: meritocracy), the Institute conducted studies of local life via interviews and ethnographic research, becoming the center of ‘community studies’ in urban sociology.
Read the full article in NYR Daily.
‘I’ve earned my reputation out of other people’s downfall’ – an interview with Don McCullin
Daniel Trilling, Apollo Magazine, 22 February 2020
The pictures in this room are documents of historical events, but they are also history in their own right. McCullin’s photos helped create the visual language of suffering that pervades media culture in our own century, from humanitarian appeals to modern-day refugee crises. He has talked openly in the past about the guilt that comes with taking such photos – that you are ‘stealing’ people’s stories, that you are watching as they suffer or even die in front of you – and the manipulation involved in their selection and presentation to audiences. Yet there is something about his pictures that goes beyond their immediate context. You could see this in the Tate exhibition last year; even among the crowds of tourists, these are images that stop you in your tracks, forcing you to confront the people or places they show. They are things that neither glossy magazines during the heyday of print journalism nor the accolades showered on him by the British establishment can quite contain. (McCullin has been knighted and made a CBE.) It’s telling that in 1982 he was refused permission by the government to cover the Falklands War.
McCullin is reluctant to place himself in the company of artists, partly because he never wants to feel that he’s ‘arrived’ – ‘The moment that happens, I know I’m finished’ – but also because of the nature of his material. ‘There’s a shadow that comes over my life when I think […] that I’ve earned my reputation out of other people’s downfall. I’ve photographed dead people and I’ve photographed dying people, and people looking at me who are about to be murdered in alleyways. So I carry the guilt of survival, the shame of not being able to help dying people.’ Yet the artist’s name that comes up most frequently in discussion of his work is Goya. McCullin first saw Goya’s Black Paintings on a visit to the Prado in Madrid in the early 1980s and was shocked to find a painter who ‘saw what I saw. When I look at some of his drawings, people are looking up for salvation, just before they’re being shot. I’ve seen that with my own eyes.’ He is adamant, however, that he is only a ‘trespasser’ in the art world, and would rather be known simply as a photographer. ‘I’m not an artist, even though I compose my pictures as best I can. In a split second under fire, some people wouldn’t bother, but I’ve stood up in battles and put up the exposure meter first, because I’m not going to get killed for an underexposed negative. But the great thing about landscape is that you owe nothing to fear. It’s all yours, no one can say you’re doing the wrong thing morally, there’s not a human being that can come up and say ‘Why are you taking my picture?’’
In terms of direct influences, McCullin names the early 20th-century photographer Alfred Stieglitz. His pictures were ‘mostly about beauty’, but McCullin says they taught him about ‘the dignity of photography’. Stieglitz was of German-Jewish origin, a fact that prompts McCullin to remark on how many of the photographers and picture editors he encountered early on in his career were from similar backgrounds; refugee intellectuals who brought European modernism with them to London. ‘They’re the people who gave me life, you could say.’
Read the full article in Apollo Magazine.
The neighbourhoods we will not share
Richard Rothstein, New York Times, 20 January 2020
In the mid-20th century, federal, state and local governments pursued explicit racial policies to create, enforce and sustain residential segregation. The policies were so powerful that, as a result, even today blacks and whites rarely live in the same communities and have little interracial contact or friendships outside the workplace.
This was not a peculiar Southern obsession, but consistent nationwide. In New York, for example, the State legislature amended its insurance code in 1938 to permit the Metropolitan Life Insurance Company to build large housing projects ‘for white people only’ — first Parkchester in the Bronx and then Stuyvesant Town in Manhattan. New York City granted substantial tax concessions for Stuyvesant Town, even after MetLife’s chairman testified that the project would exclude black families because ‘Negroes and whites don’t mix.’ The insurance company then built a separate Riverton project for African-Americans in Harlem.
A few years later, when William Levitt proposed 17,000 homes in Nassau County for returning war veterans, the federal government insured his bank loans on the explicit condition that African-Americans be barred. The government even required that the deed to Levittown homes prohibit resale or rental to African-Americans. Although no longer legally enforceable, the language persists in Levittown deeds to this day.
State-licensed real estate agents subscribed to a code of ethics that prohibited sales to black families in white neighborhoods. Nationwide, regulators closed their eyes to real estate boards that prohibited agents from using multiple-listing services if they dared violate this code.
In many hundreds of instances nationwide, mob violence, frequently led or encouraged by police, drove black families out of homes they had purchased or rented in previously all-white neighborhoods. Campaigns, even violent ones, to exclude African-Americans from all but a few inner-city neighborhoods were often led by churches, universities and other nonprofit groups determined to maintain their neighborhoods’ ethnic homogeneity. The Internal Revenue Service failed to lift tax exemptions from these institutions, even as they openly promoted and enforced racial exclusion.
Each of these policies and practices violated our Constitution — in the case of federal government action, the Fifth Amendment; in the case of state and local action, the 14th. Our residential racial boundaries are as much a civil rights violation as the segregation of water fountains, buses and lunch counters that we confronted six decades ago.
Read the full article in the New York Times.
An existential crisis in neuroscience
Grigori Guitchounts, Nautilus, 23 January 2020
A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain. But Lichtman is not daunted. He is determined to map whole brains, exorbitant exabyte-scale storage be damned.
Lichtman’s office is a spacious place with floor-to-ceiling windows overlooking a tree-lined walkway and an old circular building that, in the days before neuroscience even existed as a field, used to house a cyclotron. He was wearing a deeply black sweater, which contrasted with his silver hair and olive skin. When I asked if a completed connectome would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.
‘I think the word ‘understanding’ has to undergo an evolution,’ Lichtman said, as we sat around his desk. ‘Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ‘
‘But we understand specific aspects of the brain,’ I said. ‘Couldn’t we put those aspects together and get a more holistic understanding?’
‘I guess I would retreat to another beachhead, which is, ‘Can we describe the brain?’ ‘ Lichtman said. ‘There are all sorts of fundamental questions about the physical nature of the brain we don’t know. But we can learn to describe them. A lot of people think ‘description’ is a pejorative in science. But that’s what the Hubble telescope does. That’s what genomics does. They describe what’s actually there. Then from that you can generate your hypotheses.’
Read the full article in Nautilus.
The discovery of nuclear fission
Jeremy Bernstein, Inference, December 2019
In addition to collecting files, records, and other evidence, the Alsos personnel took the most important German scientists into custody. The group of scientists they captured – including Hahn, Werner Heisenberg, and Weizsäcker, among others – were flown to England and interned at Farm Hall, a manor house near Cambridge. The group were unaware that microphones had been placed throughout the house. All their conversations were recorded and transcribed, including their reactions to the first atomic bomb being dropped on Hiroshima on August 6, 1945. The group appeared genuinely shocked at the news. Hahn remarked, ‘I don’t believe it… They are 50 years further advanced than we.’
Upon further reflection, the German scientists felt compelled to produce their own account – a Lesart – of the research they had undertaken during the war. All in all, it is a deplorable document. Dated two days after the Hiroshima attack, the Lesart begins by noting the appearance in the media of ‘partly incorrect statements regarding the alleged work carried out in Germany on the atomic bomb,’ and expresses a desire to ‘set out briefly the development of the work on the uranium problem.’ It is an account with many shortcomings.
The Hahn discovery was checked by many laboratories, particularly in the United States, shortly after publication. Various research workers—Meitner and Frisch were probably the first – pointed out the enormous energies which were released by the fission of uranium. On the other hand, Meitner had left Berlin six months before the discovery and was not concerned herself in the discovery.
I wrote about this particular passage in Hitler’s Uranium Club:
This document is remarkable for what it does not say. Meitner is given lukewarm credit – ‘probably the first’ – and her departure from Berlin is made to seem like some sort of natural event. No mention is made as to why she was forced to leave. There is also no mention of the fact that Hahn wrote her frequently asking for and obtaining her advice and even proposing collaboration, and that she and Frisch interpreted Hahn’s data to mean that fission had occurred, thus contributing in an essential way to the discovery.
As it turned out, Meitner died before these documents and the transcripts from Farm Hall were declassified. Nonetheless, she had other reasons to be displeased with Hahn.
Some weeks after the Hiroshima attack, the 1944 Nobel Prize in Chemistry was awarded to Hahn, and Hahn alone, for ‘his discovery of the fission of heavy atomic nuclei.’ Meitner’s contribution was not recognized. Strassmann, despite appearing as Hahn’s coauthor on the landmark 1939 paper, is also conspicuous by his absence.
Read the full article in Inference.
Has listening become a lost art?
Kate Murphy, Literary Hub, 7 January 2020
To research You’re Not Listening, I interviewed people of all ages, races, and social strata, experts and non-experts, about listening. Among the questions I asked was: ‘Who listens to you?’ Almost without exception, what followed was a pause. Hesitation. The lucky ones could come up with one or two people, usually a spouse or maybe a parent, best friend, or sibling. But many said, if they were honest, they didn’t feel like they had anyone who truly listened to them, even those who were married or claimed a vast network of friends and colleagues. Others said they talked to therapists, life coaches, hairdressers, and even astrologers—that is, they paid to be listened to. A few said they went to their pastor or rabbi, but only in a crisis.
It was extraordinary how many people told me they considered it burdensome to ask family or friends to listen to them—not just about their problems but about anything more meaningful than the usual social niceties or jokey banter. An energies trader in Dallas told me it was ‘rude’ not to keep the conversation light; otherwise, you were demanding too much from the listener. A surgeon in Chicago said, ‘The more you’re a role model, the more you lead, the less permission you have to unload or talk about your concerns.’
When asked if they, themselves, were good listeners, many people I interviewed freely admitted that they were not. The executive director of a performing arts organization in Los Angeles told me, ‘If I really listened to the people in my life, I’d have to face the fact that I detest most of them.’ And she was, by far, not the only person who felt that way. Others said they were too busy to listen or just couldn’t be bothered. Text or email was more efficient, they said, because they could pay only as much attention as they felt the message deserved, and they could ignore the message or delete the message if it was uninteresting or awkward. Face-to-face conversations were too fraught. Someone might tell them more than they wanted to know, or they might not know how to respond. Digital communication was more controllable.
So arises the familiar scene of 21st-century life—at cafés, restaurants, coffeehouses, and family dinner tables, rather than talking to one another, people look at their phones. Or if they are talking to one another, the phone is on the table as if a part of the place setting, taken up at intervals as casually as a knife or fork, implicitly signaling that the present company is not sufficiently engaging. As a consequence, people can feel achingly lonely, without quite knowing why.
Read the full article in Literary Hub.
The images are, from top down: Photographs from Guillaume Duchenne’s ‘Mécanisme de la Physionomie Humaine‘ (1862); Reconstruction of the face of a Neanderthal Man from the Natural History Museum; Dome inside the Yu Aw Synagogue (Hikmat Noori/Al Jazeera); original cover of the piano score for ‘Summertime’ from ‘Porgy and Bess’; ‘The Road to the Somme, France’ © Don McCullin.