The latest (somewhat random) collection of recent essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.
The death of the public square
Franklin Foer, The Atlantic, 6 July 2018
At the time of Milton’s birth, in 1608, there wasn’t much of a public sphere in England. By the time he wrote Areopagitica, it was robust: coffee houses, newspapers, bookstores, theatres, and meeting places – the locales that allowed individuals to come together to form a public. These were spaces largely outside the grasp of church and state—and, in fact, many of these institutions emerged with the express purpose of liberating society from the grasp of church and state.
Nobody designed the public sphere from a dorm room or a Silicon Valley garage. It just started to organically accrete, as printed volumes began to pile up, as liberal ideas gained currency and made space for even more liberal ideas. Institutions grew, and then over the centuries acquired prestige and authority. Newspapers and journals evolved into what we call media. Book publishing emerged from the printing guilds, and eventually became taste-making, discourse-shaping enterprises. What was born in Milton’s lifetime lasted until our own.
Nothing was perfect about this public sphere. It could be jealously exclusive, intolerant of new opinion, a guild that protected the privileges of its members and blocked worthy outsiders. Its failings are legion. Still, the public sphere provided the foundation for Western democracy; it nurtured a faith in reason and the intellectual powers of the individual; it was the platform for free expression and a shield against tyranny.
Ye cannot make us now lesse capable, lesse knowing, lesse eagarly pursuing of the Truth, unlesse ye first make yourselves that made us so, lesse the lovers, lesse the founders of our true Liberty. We can grow ignorant again, brutish, formall, and slavish as ye found us, but you then must first become that which ye cannot be, oppressive, arbitrary, and tyrannous as they were from whom ye have free’d us. [Milton’s Areopagitica]
It took centuries for the public sphere to develop – and the technology companies have eviscerated it in a flash. By radically remaking the advertising business and commandeering news distribution, Google and Facebook have damaged the economics of journalism. Amazon has thrashed the bookselling business in the U.S. They have shredded old ideas about intellectual property—which had provided the economic and philosophical basis for authorship.
The old, enfeebled institutions of the public sphere have grown dependent on the big technology companies for financial survival. And with this dependence, the values of big tech have become the values of the public sphere. Big tech has made a fetish of efficiency, of data, of the wisdom of the market. These are the underlying principles that explain why Google returns such terrible responses to the God query. Google is merely giving us what’s popular, what’s most clicked upon, not what’s worthy. You can hurl every insult at the old public sphere, but it never exhibited such frank indifference to the content it disseminated.
This assault on the public sphere is an assault on free expression. In the West, free expression is a transcendent right only in theory – in practice its survival is contingent and tenuous. We’re witnessing the way in which public conversation is subverted by name-calling and harassment. We can convince ourselves that these are fringe characteristics of social media, but social media has implanted such tendencies at the core of the culture. They are in fact practiced by mainstream journalists, mobs of the well meaning, and the president of the United States. The toxicity of the environment shreds the quality of conversation and deters meaningful participation in it. In such an environment, it becomes harder and harder to cling to the idea of the rational individual, formulating opinions on the basis of conscience. And as we lose faith in that principle, the public will lose faith in the necessity of preserving the protections of free speech.
Read the full article in the Atlantic
How to survive America’s Kill List
Matt Taibbi, Rolling Stone, 19 July 2018
A few weeks later, he survived another explosion, he says, outside a Syrian artillery college that had recently fallen into rebel hands.
Kareem now had no doubt he was on America’s infamous ‘Kill List.’ Most Americans don’t even know we have such a thing. We do. Officially, it goes by the ghoulish bureaucratic euphemism ‘Disposition Matrix.’
Seemingly conceived in the Obama years, the lethal list – about which little is known outside a few leaks and court pleadings – appears to sort people into targeting for capture, interrogation, or assassination by drone. It was run by a star-chamber of two-dozen security officials and the president. According to a 2012 New York Times report, they met once a week to decide which targets around the world lived or died.
These meetings became known as ‘Terror Tuesdays.’
As Obama was preparing to leave office, candidate Donald Trump was promising to jack up the number of bombings in the Middle East. ‘You have to take out their families’, he said.
It’s one of the few promises he’s fulfilled. Reports vary, but some estimate that Trump has upped the pace of drone attacks to about four or five times the Obama rate, which itself was 10 times the Bush rate.
We kill suspects whose names we know, and whose names we don’t; we kill the guilty and the not guilty; we kill men, but also women and children; we kill by day and by night; we fire missiles at confirmed visual targets, but also at cellphone numbers we hope belong to targets.
When he first heard he was on this list, Kareem was aghast. This was no situation like the siege of Aleppo, where a quick joke might turn the crowd. How could anyone reverse the decision of a deadly bureaucracy so secret and inaccessible that even if it had an off switch, few in the civilian world would know where to find it? How could he talk his way out of this one?
Read the full article in Rolling Stone.
Whatever happened to moral rigor?
Lee Siegel, New York Times, 25 July 2018
In his 1963 book The Fire Next Time, James Baldwin describes meeting Elijah Muhammad, the controversial leader of the Nation of Islam. Baldwin felt alienated by Muhammad’s black separatism and by his universal hatred of white people; at the same time, he admired Muhammad’s acute understanding of the countless ways in which institutionalized white power continued to injure and suppress African-Americans.
‘I felt very close to him’, he writes about Muhammad, ‘and really wished to be able to love and honor him as a witness, an ally and a father.’ Yet, reflecting on the moment when the two men said goodbye, Baldwin writes, ‘we would always be strangers, and possibly, one day, enemies.’
Baldwin was as committed as any writer has ever been. But the stuff of his commitment was a moral clarity steeped in intellectual difficulties and ethical complications — a labyrinthine clarity that he refused to sacrifice to prescribed attitudes.
Today we still revere Baldwin, but by and large we no longer follow his lead as a thinker. There is little patience now for such a rigorous yet receptive moral and intellectual style; these days we prefer ringing moral indictment, the hallmarks of which are absolute certainty, predetermined ideas and conformity to collective sentiments.
In the process of abandoning the type of complex moral clarity that Baldwin practiced, we have made behavior that is unacceptable the equivalent of behavior that is criminal. An equal amount of fury is directed toward actions as morally — and legally — distinct from each other as rape, harassment, rudeness, boorishness and incivility. The outrage over a police shooting of an unarmed black teen unfolds at the same level of intensity as the outrage over what might or might not be a case of racial profiling by a salesperson in a small Brooklyn boutique.
This is intentional: The general feeling seems to be that distinguishing between degrees of morally repugnant conduct will lead to some sort of blanket pardon of all such conduct; that to understand is always to forgive. Such concern is understandable, but misplaced — it flattens and obfuscates, rather than clarifies.
Read the full article in the New York Times.
Malaria’s ticking time bomb
Amy Maxmen, Nature, 26 July 2018
After years in decline, malaria infection rates seem to be on the rise in northeastern Cambodia, where people are moving deeper into lush, mosquito-ridden territories in search of timber and seasonal goods such as samrong (Scaphium affine). Their movements provide opportunities for P. falciparum — which requires both human and insect hosts — to thrive. There are other contributors as well, such as treatment delays that allow the parasites to linger and spread, and an alarming decline in the potency of gold-standard malaria drugs called artemisinin-based combination therapies (ACTs).
What happens next here matters for the entire world; malaria remains one of the biggest killers in low-income countries. Estimates of the number of deaths each year range from 450,000 to 720,000 — and ACT pills keep that toll from being much higher. And although southeast Asia accounts for just 7% of malaria cases worldwide, it has a notorious history as the breeding ground for strains of malaria parasites that resist every drug thrown at them and then spread to other regions.
In 2015, reports of drug resistance prompted the governments of five countries in the Greater Mekong Subregion — Cambodia, Thailand, Vietnam, Laos and Myanmar — to pledge to banish P. falciparum from the region by 2025. Together with the World Health Organization (WHO), the countries drew up plans and budgets. This year, the nations’ governments have committed US$41 million towards the effort; the Global Fund to Fight AIDS, Tuberculosis and Malaria also backed elimination efforts in the region, with a 3-year, $243-million grant. Donors such as the Bill & Melinda Gates Foundation and the Asian Development Bank will add more than $20 million to the fight this year.
But the rise of cases in northeastern Cambodia shows how difficult getting to zero will be — and how crucial. As long as P. falciparum exists, it can resurge. And the last parasites remaining are the hardest to find. They reside in the hinterlands, borderlands and war zones — and in people who show no signs of the disease. ‘Malaria is very clever — it hides out where you don’t know and comes back when you aren’t ready,’ says Ladda Kajeechiwa, a malaria-programme manager at a branch of the Mahidol Oxford Tropical Medicine Research Unit (MORU) in Mae Sot, Thailand.
Read the full article in Nature.
Changing the concept of ‘woman’ will cause unintended harms
Kathleen Stock, The Economist, 6 July 2018
In public discourse, there’s a lot of focus on whether trans women should be counted as women. Whatever the ultimate answer, that’s obviously a reasonable question, despite trans activists’ attempts to count it as ‘transphobic’. But I think we should also ask whether self-declaration alone could reasonably be the only criterion of being trans. There’s little precedent elsewhere. In a superficially comparable case, such as coming out as gay, there is still another underlying factor, sexual orientation, that secures your membership. It’s not just a matter of saying that you are gay. And though, as in the notorious case of Rachel Dolezal, a person might ‘self-declare’ that she is ‘trans racial’, it has seemed clear to nearly everybody responding to this case that such a declaration would be not only false, but also offensive to genuinely oppressed members of the race in question. There is no such thing as being ‘trans racial’; there is only thinking falsely that you are.
If it’s not self-declaration, but some other factor, that makes you trans, what’s that factor? Not all trans people seek surgery or take hormones; not all consistently dress or self-adorn in a stereotypical feminine or masculine way; not all have gender dysphoria; and neither trans women, as a group, nor trans men as a group, have any common sexual orientation. Equally, some people have gender dysphoria and yet resist calling themselves trans; some surgically add feminised features without being trans. And so on. All we seem left with is self-declaration.
If we agree, and also accept that trans women should be categorised as women, then ultimately this leaves us with: anyone who isn’t a natal woman, and who sincerely self-declares as a woman, should be counted as a woman. Both Theresa May, Britain’s prime minister, and Jeremy Corbyn, the leader of the opposition Labour party, have apparently enthusiastically taken up this conclusion. They want to change the law to allow gender self-identification via an administrative process of self-certification as the only criterion for legally changing the sex recorded on one’s birth certificate. However, I’ll now suggest that such a move is not cost-free. In particular, certain harms to original members of the category ‘woman’ should be weighed against any gains.
One problem is that, since ‘woman’ and ‘female’ are interchangeable in many people’s minds, we’re likely to lose a secure understanding of the related concept ‘female’. (Indeed, some activists advocate stretching this concept to include trans women, too.) Yet the existing concept is in good order. It designates a person with XX chromosomes, and for whom ovaries, womb, vagina and so on are a statistical norm, even if some females are born without some of these, or lose one or more later. That intersex people exist doesn’t seriously threaten this category, since most categories have statistical outliers.
Read the full article in the Economist.
Physics needs philosophy / philosophy needs physics
Carlo Rovelli, Scientific American, 18 July 2018
And here we come back to Aristotle: Philosophy provides guidance on how research should be done. Not because philosophy can offer a final word about the right methodology of science (contrary to the philosophical stance of Weinberg and Hawking). But because the scientists who deny the role of philosophy in the advancement of science are those who think they have already found the final methodology, and have already exhausted and answered all methodological questions. They are consequently less open to the conceptual flexibility needed to go ahead. They are the ones trapped in the ideology of their time.
One reason for the relative sterility of theoretical physics over the last few decades may well be precisely that the wrong philosophy of science is held dear today by many physicists. Popper and Kuhn, popular among theoretical physicists, have shed light on important aspects of the way good science works, but their picture of science is incomplete and I suspect that, taken prescriptively and uncritically, their insights have ended up misleading research.
Kuhn’s emphasis on discontinuity and incommensurability has misled many theoretical and experimental physicists into disvaluing the formidable cumulative aspects of scientific knowledge. Popper’s emphasis on falsifiability, originally a demarcation criterion, has been flatly misinterpreted as an evaluation criterion. The combination of the two has given rise to disastrous methodological confusion: the idea that past knowledge is irrelevant when searching for new theories, that all unproven ideas are equally interesting and all unmeasured effects are equally likely to occur, and that the work of a theoretician consists in pulling arbitrary possibilities out of the blue and developing them, since anything that has not yet been falsified might in fact be right.
This is the current ‘why not?’ ideology: any new idea deserves to be studied, just because it has not yet been falsified; any idea is equally probable, because a step further ahead on the knowledge trail there may be a Kuhnian discontinuity that was not predictable on the basis of past knowledge; any experiment is equally interesting, provided it tests something as yet untested.
I think that this methodological philosophy has given rise to much useless theoretical work in physics and many useless experimental investments. Arbitrary jumps in the unbounded space of possibilities have never been an effective way to do science. The reason is twofold: first, there are too many possibilities, and the probability of stumbling on a good one by pure chance is negligible; more importantly, nature always surprises us and we, limited critters, are far less creative and imaginative than we may think. When we proudly consider ourselves to be ‘speculating widely,’ we are mostly playing out rearrangements of old tunes: true novelty that works is not something we can just find by guesswork.
Read the full article in Scientific American.
Self-invasions and the invaded self
Rochelle Gurstein, The Baffler, No 40, July 2018
No doubt the most famous nineteenth-century article on the subject is ‘The Right to Privacy’ by future Supreme Court Justice Louis Brandeis and his law partner at the time, Samuel Warren, published in The Harvard Law Review in 1890. By this time, the violation of ‘the sacred precincts of private and domestic life’ by ‘the instantaneous photographs and newspaper enterprise’ had gone so far, and the press had so radically ‘overstepp[ed] in every direction the obvious bounds of propriety and decency’ that Brandeis and Warren believed the law would have to intervene.
To this end, they invented a legal right to privacy dedicated to protecting the ‘spiritual precincts’ of ‘inviolate personality.’ Godkin, too, recoiled at the ‘vulgarity, indecency, and reckless sensationalism’ of this new journalism. And he spoke of the significance of privacy in much the same way as Brandeis and Warren in an influential essay, ‘The Rights of the Citizen—to His Own Reputation’ published the same year in the more popular Scribner’s Magazine. The legal recognition of Sir Edward Coke’s famous dictum, ‘A man’s house is his castle,’ according to Godkin, was ‘but the outward and visible sign of the law’s respect for his personality as an individual, for that kingdom of the mind, that inner world of personal thought and feeling in which every man passes some time.’ Here Godkin was drawing on the liberal understanding of privacy as articulated by J S Mill in On Liberty (1859). ‘The appropriate region of human liberty,’ Mill declared, is ‘the inward domain of consciousness.’ For Mill, privacy was essential for the development of one’s individuality and autonomy.
The defenders of reticence also held that respect for privacy was essential for the cultivation and preservation of ‘personal dignity’. This was meant on the most basic, human level of preserving a secure realm in which one could conduct one’s daily life unobserved. Critics were revolted by the very fact of exposure, noting, for example, the way journalists subjected the figure of the American president (in this case, Grover Cleveland) to constant, near-forensic public scrutiny: ‘the President is photographed and described in all possible and impossible places and positions, dignified and otherwise’, complained a critic for The Arena in 1897, ‘and his family are pictured in detail, mostly from imagination.’ Repeatedly, reticence-minded critics wondered aloud about what would happen to ‘decency, modesty, sanctity – conceptions which, after many painful centuries, the more civilized minority of the human race has begun to venerate’ in the new society of ‘presumptuous’ and incessant publicity.
In addition to losing the protected space where we think and feel – the very foundation of our ‘inviolate personality’ and ‘personal dignity’ – the party of reticence spoke with alarm about another loss. It does not concern the person whose privacy has been violated but rather all of us who willingly look on. One of the most memorable expressions of this apprehension appears in a lesser-known novel by Henry James, The Reverberator (1888). Reflecting on what a steady diet of sensational, gossipy newspapers has done to her and her family, the novel’s heroine, Francie Dosson, worries that she and they have become ‘coarse and callous’, that they have ‘lost their delicacy, the sense of certain differences and decencies.’
Moral coarsening – the wearing away of the capacity to recognize what one has become – was both the deepest anxiety and the deepest insight of the party of reticence. If it is our very capacity for sensitivity, our feeling for ‘certain differences and decencies’ – what used to be regarded as a sense of shame – that we lose as a consequence of inhabiting a world where no one is guaranteed the refuge of privacy and no subject is afforded the protection of silence, then this goes a long way toward explaining why more than a century later – after the invention and proliferation of the radio, television, cell phones, twenty-four-hour news cycles, and the internet – so many of us today have such a hard time recognizing what we lose when we lose our privacy. It turns out that the very atmosphere in which we move and breathe deprives us of the perception we need to recognize our predicament.
Read the full article in the Baffler.
If the Supreme Court is nakedly political, can it be just?
Lee Epstein & Eric Posner, New York Times, 9 July 2018
The court has recently entered a new era of partisan division. If you look at close cases — 5 to 4 or 5 to 3 — going back to the 1950s to illustrate this division, you will see that the percentage of votes cast in the liberal direction by justices who were appointed by Democratic presidents has skyrocketed. And the same trajectory applies on the other side: The percentage of votes cast in the conservative direction by justices who were appointed by Republican presidents has also shot up.
The trend is extreme — and alarming. In the 1950s and 1960s, the ideological biases of Republican appointees and Democratic appointees were relatively modest. The gap between them has steadily grown, but even as late as the early 1990s, it was possible for justices to vote in ideologically unpredictable ways. In the closely divided cases in the 1991 term, for example, the single Democratic appointee on the court, Byron White, voted more conservatively than all but two of the Republican appointees, Antonin Scalia and William Rehnquist. This was a time when many Republican appointees — like Sandra Day O’Connor, Harry Blackmun, John Paul Stevens and David Souter — frequently cast liberal votes.
In the past 10 years, however, justices have hardly ever voted against the ideology of the president who appointed them. Only Justice Kennedy, named to the court by Ronald Reagan, did so with any regularity. That is why, with his replacement on the court by an ideologically committed Republican justice, it will become impossible to regard the court as anything but a partisan institution.
It is hard to think of any historical precursors. The most famous period of ideological division on the court was in the 1930s, when it repeatedly struck down liberal legislation. But what is remarkable is that the division was not strongly partisan. Among the ‘four horsemen’ — the die-hard opponents of the New Deal — one was appointed by a Democratic president, and another was a Democrat appointed by a Republican president. Among the three justices who typically voted to uphold New Deal programs, two were appointed by Republican presidents.
Read the full article in the New York Times.
Why we can’t ignore the working-class identity crisis
James Bloodworth, Unherd, 9 July 2018
In Rugeley, as in many other working-class towns, identity – particularly male identity – was at one time something that was forged by work, something that was shared. However, the organisations through which a person could collectively organise have withered on the vine. Trade unions are bereft of members, while social clubs are shells of the institutions that once provided an opportunity for workers to self-educate and self-organise.
It can be difficult to grasp the scale of the change this represents unless you speak to the people that have lived through it. A culture has effectively disappeared. The new jobs lack the solidarity and social networks of the past. Yet the memory of a more cooperative past lingers, compounding the sense of present loss.
When I travelled to South Wales in 2017, I was told almost upon arrival that the old factories which once stood in towns like Ebbw Vale and Brynmawr had been like ‘extensions of your family’. These work-related networks were once modest outlets of democratic expression, allowing working people to act on the world – as opposed to simply being acted upon. By this I mean that there existed forums of working class democracy through which the individual and the group could interpret wider events – and more importantly, could exert a pressure of their own on their situation. The owner of a pit couldn’t simply cut your wages or lay you off arbitrarily, or else members of the union would walk out on strike. This feels a long way from companies such as Amazon and Sports Direct.
The disappearance of much of this – and the consequent atomisation of social and economic life – has produced a climate propitious to populism. The more chaotic and tumultuous that life appears to be, the easier it is for demagogic politicians to channel the resultant anger toward their own obsessions. The demagogue succeeds by pointing at the processes of globalisation – the forces that close the local factory, or which produce jobs that only Romanian migrants are willing to do – and proposes simple solutions. These are very often ugly, cynical and violent. It seems clear that we are some way down this path already.
Read the full article in Unherd.
The country with no left
Max Lane, Jacobin, 15 July 2018
More than anything, though, the elections once again threw into sharp relief the barren terrain of Indonesian politics — the vicious result of the 1965 massacres, which saw hundreds of thousands of leftists slaughtered at the hands of the Indonesian military. The Communist Party, then one of the largest in the world, was wiped out in one fell swoop. Leftist ideas became verboten. Popular mass-based organizations were exterminated. Since the massacres, a tight control on historical knowledge has eliminated almost all memory of past popular struggles or left-wing thought. Marxism remains legally banned, with significant penalties (including imprisonment) for ‘spreading widely’ such ideas.
This isn’t to say that there’s a shortage of parties in present-day Indonesia. Fourteen different formations participated in the recent elections, deploying a range of symbols and rhetorical styles to differentiate themselves. Some adopted a semi-secular, moderate nationalist rhetoric; others a religious, mainly Islamic rhetoric; still others tried to merge the two, proclaiming themselves ‘national religious.’
But the differences, for the most part, are not deep. There is no formation articulating a left politics — no social-democratic or labor party, no party of class disaffection. The union movement, while much more active than a few decades ago, is still small and divided, with the biggest unions coopted by one or another of the registered elite-owned parties. There is no peasant movement, despite a substantial village and rural population and frequent peasant protests over land confiscation. Outspoken social-democratic or socialist public figures are nonexistent on the national political stage, even at the margins.
Read the full article in Jacobin.
Virtue signaling
BD McClay, Hedgehog Review, Summer 2018
When, around 2015, ‘virtue’ began to be appended to ‘signaling,’ its main function was to make the unspoken aim of the signaling in question explicit. Whereas before you might have been signaling that you were smart, now you were signaling that you were a good person. But whatever you’re doing, it is, and will always be, about what people think about you, either to the exclusion of any other reason or before any other reason. (The degree to which this diagnosis can also apply to rationalists and neo-reactionaries remains unclear.)
To what extent is ‘virtue signaling’ a useful, or at least meaningful, phrase? That the desire to be thought of a certain way can preclude the desire to be a certain way, or at least complicates the latter, is certainly true. That sometimes people say and do things just to be seen saying and doing them is also true.
Take rich white parents who profess to believe in the importance of desegregation of schools but who send their own children to segregated-in-all-but-name schools. Both of these actions (the professed antiracism, the choice of school) involve signaling of a kind, since the name of the school you send your children to can sometimes carry more heft than the substance of their education. At the same time, choosing to send your children to an integrated school could also be understood to be a virtue signal – that you’re so obsessed with appearing right-minded that you will make decisions that might penalize your children.
People being – for millennia, as the saying goes – social creatures, things begin to get muddy right around here. Anything can function as a signal, and to some extent does: the clothes you wear, your taste in books, the car you drive, the food you eat, the religion you practice, the organizations to which you give time and money, and so on. Unless you take great pains to make sure nobody ever sees you doing any of these things – which some people have been known to do – they’re all information by which other people judge you.
In other words, maybe you like the novels of Leo Tolstoy because they’re good; or maybe you like them because you’ve been told that’s what smart people like, and you want to be thought of as smart. But most likely, your reaction is an inscrutable mixture of the two, because your taste doesn’t exist in a vacuum but also is probably not purely developed for cynical reasons. And if every action you can take in a given situation can be a virtue signal – whether in accord with your principles or against them – then as a diagnostic tool, ‘virtue signaling’ isn’t very useful.
Read the full article in the Hedgehog Review.
‘Who I am’ and the problem of offence
Piers Benn, Areo, 4 July 2018
Offensiveness is a multi-faceted concept. To begin with, there is offense to the senses, such as foul smells and tastes, putrescent slime and rotting flesh. These things produce disgust, which appears to be an evolutionary adaptation that protects us from disease. The revulsion has little conceptual content. It is hard to say what it is about a foul smell that we find disgusting, apart from its disgustingness – we might not even believe that the foul-smelling thing poses a danger to health.
Then there are more complex cases. Take pornography. Although society is increasingly tolerant of it, some people find its brazen display of genitalia and sexual performances offensive. When pressed to say why, they tend to change the subject, talking instead about how porn encourages misogyny and sexual crime, or causes male sexual dysfunction. These ‘dialectical displacement tactics’, to coin a phrase, lead to an abundance of inconclusive social scientific debates about the harm caused, or not caused, by porn. These do not address people’s visceral revulsion towards porn, because it is so difficult to articulate to those who do not share it.
A further example is blasphemy. Although most people in the West do not recognize the concept, for many religious believers it is the ultimate offense – the mockery of God, or at least the desecration of what is holy. As with porn, it is implausible to describe the wrong mainly in terms of harm – God cannot be harmed, and I suspect people’s religious conviction is as likely to be strengthened as weakened by the encounter with it.
All this suggests to me that offensiveness, as opposed to harm, needs more analysis in the free speech debate. The offensiveness of Nazi displays consists in the unpleasant reminder of what the Nazis stood for, and especially – for those whom the Nazis hated – in being made to imagine themselves the way the Nazis viewed them. But these are hazy thoughts and the truth may be subtler.
To address the question of whether offensiveness should be tolerated, we should look at more everyday instances of what people find offensive and notice the trend for seeing things in a deeply personal way. In the case of Nazism, this makes sense – it is quite coherent for Jews, Slavs and homosexuals to take Nazi attitudes personally. But many cases discussed today are far less clear cut. For example, should you be allowed to stand on a busy street with placards bearing biblical passages condemning homosexuality? Or to mock, publicly, the fundamental beliefs of a major world religion? Or to say that transgender women are not women? There are mixed reactions. Some people want to stop the activities of the street fundamentalist with his verses from Leviticus, thinking him harmful or at least profoundly offensive to gay people. There are also those, including some non-Muslims, who perceive mockery or even straightforward criticism of Islam as ‘Islamophobic.’ And some of the most bitterly fought battles, at present, are about the transgender question.
Read the full article in Areo.
A new type of museum for an age of migration
Jason Farago, New York Times, 11 July 2018
Here’s an example: Two pieces of blue-and-white pottery are on display — a vase ringed with Persian script and a porcelain dish decorated with Chinese characters. They both date from around the late 16th century. But it turns out that the ‘Persian’ one was made in China, while the ‘Chinese’ one comes from Iran, and on both of them the characters are nonsense. Their meaning lies not in the gobbledygook written on their surfaces, but in the trade routes they map and the relationships they signify.
‘Mobile Worlds’ is filled with objects like these — some lovely, some disturbing — displayed together. All of them replace the fiction of cultural authenticity — and, by implication, the oversimplified idea of ‘cultural appropriation’ — with a far broader constellation of terms: translation, simulation, exchange, conquest, recombination, hybridity.
Museums, especially ones born in the 19th century like the Museum für Kunst und Gewerbe and the Victoria & Albert, do more than just store objects from the past. They classify them and, implicitly, rank them, too often with European works at the top. The vast majority of objects made and sold and cherished in this world, however, defy the taxonomies of the museum. They migrate. They get copied, modified, remixed. They flirt with each other, and intermarry. These museum misfits, not clearly representative of ‘a culture’ or ‘a people,’ are the objects that make up ‘Mobile Worlds’: Meissen porcelain with Indian motifs, kimonos with Nazi insignia, Congolese ivory statuettes of figures in Western dress.
Among the most fascinating objects in ‘Mobile Worlds’ is one of the newest, and least precious: a box of what look like Ladurée macarons, made of joss paper and meant to be burned as offerings for the dead. These ‘French’ luxuries, made with sugar from the Caribbean and almonds from the Middle East, have been transmuted into paper imitations — and now appear to German museumgoers as something ‘Chinese.’
There is a hazard, here. Mr. Buergel in places seems so eager to jettison the logic of the imperial museum that he risks recreating an earlier model of artistic display: the cabinet of curiosities, in which 17th-century princes and potentates showed off small, surprising objects from a range of arts and sciences. Most of the objects in ‘Mobile Worlds’ appear in old-style display cases rescued from the museum’s storage, and the juxtapositions can get too precious for their own good. To put Kurdish handicrafts with the latest fashions from Comme des Garçons may be a step too far.
But by and large, ‘Mobile Worlds’ delivers on its contention that European museums need to do much more than just restitute plundered objects in their collections, important as that is. A 21st-century universal museum has to unsettle the very labels that the age of imperialism bequeathed to us: nations and races, East and West, art and craft. It’s not enough just to call for ‘decolonization,’ a recent watchword in European museum studies; the whole fiction of cultural purity has to go, too. Any serious museum can only be a museum of our entangled past and present. The game is not to tear down the walls, but to narrate those entanglements so that a new, global audience recognizes itself within them.
Read the full article in the New York Times.
Homage to Lanzmann
Paul Berman, Tablet, 12 July 2018
Claude Lanzmann’s special genius was a spectacular brusqueness, which allowed him to reveal, as if with no effort at all, the patterns of thought that protect enormities under a cloak of niceties. Sometimes he was faintly droll and mordant in how he went about doing this. Everyone who has seen the 9 1/2 hours of Shoah will remember the scene in which an old SS Unterscharführer at Treblinka named Franz Suchomel, who does not know that he is on camera, agrees to recount his history at the camp and says, ‘But don’t use my name.’ Lanzmann replies, ‘No, I promised. All right, you’ve arrived at Treblinka. …’ The Unterscharführer begins to speak – and, in subtitles on the screen, his name and identity appear.
It is a little shocking to see the subtitles. You wonder for a flicker of an instant if you aren’t watching a crime take place, which is Lanzmann’s baldfaced lie to the old Nazi. But then, in that same flicker of an instant, you do recognize, if you have half a brain, that the crime in this particular case belongs to the Nazi, and not to the man interviewing the Nazi. You might even find yourself shocked to have been shocked – shocked to have been confused even for a micromoment about the rights and wrongs of manipulating an old SS man into revealing the scale of his criminality. Does the micromomentary confusion overshadow what the Nazi recounts? Maybe it does, for its own micromoment. I notice that right now I am recounting the amusing story of Lanzmann’s deception, instead of the realities of Treblinka – where, I should add, my grandfather’s innumerable cousins were murdered, quite probably at the direction of the Unterscharführer whose face is identified on screen. But there is something to be gained from having undergone an instant of confusion, so long as you give it some thought. You have had a collaborator’s experience, in a specific version – the experience of believing that researching a mass extermination is fine and good, but other considerations ought to take precedence.
In his memoir, The Patagonian Hare, Lanzmann recounts a slapstick version of the same deception. He and a colleague persuaded an old SS man with a much higher rank, Obersturmführer Heinz Schubert, from the family of the 19th-century composer, to speak to them about the war years, and they brought along a secret camera, concealed in a bag, into Schubert’s villa. The camera was an elaborate device that transmitted images and sounds to a larger machine, which itself was concealed in a minivan down the block, manned by a couple of additional members of Lanzmann’s team. Only, the volume on the machine was tuned too high. The neighbors overheard the interview as it proceeded, and they and Schubert’s son figured out what was going on and burst into the villa, enraged. Lanzmann and his colleagues had to make a run for it, and they had to throw away the expensive equipment, too. It is a funny story. Lanzmann was a good fellow. But the interview was botched. It was a victory for the Schubert family and the indignant neighbors—a victory for the people who observed the proprieties of neighborhood solidarity and respect for a family’s privacy. The Schubert family pressed legal charges against Lanzmann, too, in token of the fact that neighborliness and law stand united.
Read the full article in Tablet.
WEB Du Bois and his politics:
A complicated and controversial legacy
Justin F Jackson, The Berkshire Edge, 8 July 2018
Du Bois made extraordinary and lasting contributions to academic knowledge across several disciplines, including history and sociology. He fostered African-American arts and culture, wrote prolifically as a journalist, made forays into literature and poetry and, above all, acted as a leader in struggles for racial equality and civil rights.
In recent months, all this has finally been acknowledged in the hometown he loved so much. Banners bearing his image and noting the causes of civil rights, education and economic justice that he championed adorned Main Street’s lampposts. Stirring musical performances and speeches filled halls and churches. Scholarly lectures enlightened the curious. The Mason Library hosted an exhibit of artifacts from his life.
The Commonwealth even embraced the festive mood. In February, the state Legislature passed a resolution, co-signed by Rep. William ‘Smitty’ Pignatelli, D-Lenox, marking the sesquicentennial of Du Bois’ birth on Church Street in 1868, only three years after the end of the Civil War and the abolition of slavery. It seemed like Du Bois had, in a sense, finally come home to the welcoming New England community from which his family — and often paternalist but genuinely encouraging white neighbors, teachers and churchgoers — had ushered him onto the world stage.
Yet like Banquo’s ghost, the controversy surrounding Du Bois — and particularly his political ideology and affiliation at the end of his life — will not go down. Like a specter whose troubled spirit aches for eternal rest, the legacy of his proverbially last-minute decision in 1961 to apply for membership in the American Communist Party at age 93 is revivified yet again in Great Barrington. Proving that the Cold War continues to haunt American life and politics well into the 21st century long after the rivalries, passions and violence it generated have dissipated, some local residents are protesting the vote of the town library trustees to welcome on the Mason Library front lawn a statue of Du Bois, the funds for which are being raised by private citizens.
What is their grievance? They believe that accepting such a statue and placing it on town property somehow implicates the town in the endorsement of Du Bois’ Communist affiliation. By extension, goes their logic, this would defame the patriotic service of American veterans who combated communism during the Cold War, and perhaps even endorse the crimes of communist regimes.
Read the full article in the Berkshire Edge.
Classless utopia versus class compromise
Michael Lind, American Affairs, Summer 2018
For its part, the neoliberal center-left, like the libertarian center-right, treats modern Western societies as near-meritocracies. The critique of inequality on the left tends to take two forms. One is what the journalist Mickey Kaus calls ‘money liberalism’. For some on the center-left, the problem is not persistent inherited family privilege in all its forms, but mere differences in income among individuals, which can be remedied simply, if expensively, by more redistribution of income funded by higher taxes.
Others on the center-left focus on inequality among individuals of different races and genders, rather than among families in different classes. In this narrative, diversity means that categories defined by race or gender should be represented in elite institutions (corporate management, prestigious universities, etc.) in rough proportion to their numbers in the population. The ideology of managerial-elite diversity holds that working-class African Americans, Latinos, and women can be adequately ‘represented’ in elite institutions by affluent African Americans, affluent Latinos, and affluent women. In reality, however, there may be vast social distance between, and inequality of life chances among, the designated category representatives and others who share their ancestries or genders.
Even when it comes to intergenerational inequalities of wealth, the quasi-official intelligentsia of the academy, the media, and the nonprofit sector focuses on crude statistical differences among racial categories as a whole, ignoring differences among classes within racial groups. But averages based on race which ignore class are misleading. As Matt Bruenig has pointed out, the top 10 percent of white families in the United States owns 65.1 percent of national wealth, while the bottom half of white families owns only 2 percent, and the bottom 32.1 percent of white families has a net worth of zero. These numbers contradict the popular center-left argument that white working-class Republicans enjoy significant benefits in wealth and income from white privilege.
In reality, the truly privileged white economic elite has been shifting toward the Democrats, even as the white working class has become more Republican. The disproportionately Democratic media spins this as a shift of ‘educated’ voters toward Democrats. The more honest interpretation is perhaps too discomfiting: ‘A look at affluent suburban returns on a district and town level suggests that some combination of income, education, culture, and geography – in a word, ‘class’ – drove Clinton’s most dramatic gains’, wrote Matt Karp of the Democratic Socialists of America.
Thus both the mainstream Right and the mainstream Left in America are deeply invested in the claim that a classless society can be achieved in the near future, if only the government would get out of the way (the Right), or redistribute more income (money liberalism), or remove residual racial and gender barriers to individual achievement (identity-politics progressivism). The gap between these perceptions and the reality of class mobility in Western societies is enormous.
Read the full article in American Affairs.
To remember, the brain must actively forget
Dalmeet Singh Chawla, Quanta Magazine, 24 July 2018
Decades of research have focused on how the brain acquires information, resulting in theories that suggest short-term memories are encoded in the brain as patterns of activity among neurons, while long-term memories reflect a change in the connections between neurons.
What hasn’t received nearly as much attention from memory researchers is how the brain forgets. ‘The vast majority of the things that are happening to me in my life — the conscious experience I’m having right now — I’m most likely not going to remember when I’m 80’, said Michael Anderson, a memory researcher at the University of Cambridge, who has been studying forgetting since the 1990s. ‘How is it that the field of neurobiology has actually never taken forgetting seriously?’
‘Without forgetting, we would have no memory at all’, said Oliver Hardt, who studies memory and forgetting at McGill University in Montreal. If we remembered everything, he said, we would be completely inefficient because our brains would always be swamped with superfluous memories. ‘I believe that the brain acts as a promiscuous encoding device,’ he said, noting that at night many people can recall even the most mundane events of their day in detail, but then they forget them in the following days or weeks.
The reason, he thinks, is that the brain doesn’t know straight away what is important and what isn’t, so it tries to remember as much as possible at first, but gradually forgets most things. ‘Forgetting serves as a filter’, Hardt said. ‘It filters out the stuff that the brain deems unimportant.’
Experiments in the past few years are finally beginning to make the nature of that filter clearer.
Read the full article in Quanta Magazine.
Why identity politics benefits
the right more than the left
Sheri Berman, Guardian, 14 July 2018
Rather than being directly translated into behavior, psychologists tell us beliefs can remain latent until ‘triggered’. In a fascinating study, Karen Stenner shows in The Authoritarian Dynamic that while some individuals have ‘predispositions’ towards intolerance, these predispositions require an external stimulus to be transformed into actions. Or, as another scholar puts it: ‘It’s as though some people have a button on their foreheads, and when the button is pushed, they suddenly become intensely focused on defending their in-group … But when they perceive no such threat, their behavior is not unusually intolerant. So the key is to understand what pushes that button.’
What pushes that button, Stenner and others find, is group-based threats. In experiments researchers easily shift individuals from indifference, even modest tolerance, to aggressive defenses of their own group by exposing them to such threats. Maureen Craig and Jennifer Richeson, for example, found that simply making white Americans aware that they would soon be a minority increased their propensity to favor their own group and become wary of those outside it. (Similar effects were found among Canadians. Indeed, although this tendency is most dangerous among whites since they are the most powerful group in western societies, researchers have consistently found such propensities in all groups.)
Building on such research, Diana Mutz recently argued that Trump’s stress on themes like growing immigration, the power of minorities and the rise of China highlighted status threats and fears particularly among whites without a college education, prompting a ‘defensive reaction’ that was the most important factor in his election. This ‘defensive reaction’ also explains why Trump’s post-election racist, xenophobic and sexist statements and reversal of traditional Republican positions on trade and other issues have helped him – they keep threats to whites front and center, provoking anger, fear and a strong desire to protect their own group.
Understanding why Trump found it easy to trigger these reactions requires examining broader changes in American society. In an excellent new book, Uncivil Agreement, Lilliana Mason analyzes perhaps the most important of these: a decades-long process of ‘social sorting’. Mason notes that although racial and religious animosity has been present throughout American history, only recently has it lined up neatly along partisan lines. In the past, the Republican and Democratic parties attracted supporters with different racial, religious, ideological and regional identities, but gradually Republicans became the party of white, evangelical, conservative and rural voters, while the Democrats became associated with non-whites, non-evangelical, liberal and metropolitan voters.
This lining up of identities dramatically changes electoral stakes: previously if your party lost, other parts of your identity were not threatened, but today losing is also a blow to your racial, religious, regional and ideological identity. (Mason cites a study showing that in the week following Obama’s 2012 election, Republicans felt sadder than American parents after the Newtown school shooting or Bostonians after the Boston Marathon bombing.) This social sorting has led partisans of both parties to engage in negative stereotyping and even demonization. (One study found less support for ‘out-group’ marriage among partisan Republicans and Democrats than for interracial marriage among Americans overall.)
Read the full article in the Guardian.
Tools from China are oldest hint
of human lineage outside Africa
Colin Barras, Nature, 11 July 2018
Hominins reached Asia at least 2.1 million years ago, researchers assert in an 11 July Nature paper. Stone tools they found in central China represent the earliest known evidence of humans or their ancient relatives living outside Africa.
Other scientists are convinced that the tools were made by hominins and are confident that they are as old as claimed. And although the tools’ makers are unknown, the discovery could force researchers to reconsider which hominin species first left Africa — and when. ‘This is a whole new palaeo ball game’, says William Jungers, a palaeoanthropologist at Stony Brook University, New York.
Most researchers say that hominins — the evolutionary line that includes humans — first left their African homeland around 1.85 million years ago. This is the age of the oldest hominin fossils discovered beyond Africa — from Dmanisi, Georgia, in the Caucasus region of Eurasia. The oldest hominin remains from East Asia, two incisors from southwest China, are around 1.7 million years old (see ‘Travelling Hominins’).
Archaeological finds made between 2004 and 2017 at a site called Shangchen in central China now challenge that orthodoxy. By studying and dating a sequence of ancient soils and deposits of wind-blown dust, a team of Chinese and British geologists and archaeologists led by Zhaoyu Zhu at the Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, has uncovered dozens of relatively simple stone tools. The youngest tools are 1.26 million years old, and the oldest date back to 2.12 million years.
The 2.12-million-year-old geological layers might not even represent the earliest hominin occupation of the region. John Kappelman, an anthropologist and geologist at the University of Texas at Austin and one of the paper’s referees, points out that the deepest — and so oldest — layers at the site are currently inaccessible because the region is actively farmed. Investigating them should be a priority, he says.
Read the full article in Nature.
By turning down 23andMe, immigration activists
are actually being responsible about genetic privacy
Dan Robitzski, Futurism, 25 June 2018
House Representative Jackie Speier (D-CA) — who referred to the practice of separating migrant children under the age of two as ‘a war crime’ — reached out to her friend, 23andMe CEO Anne Wojcicki, to see if the company could help. On the surface, it seemed like a simple way to use existing technology to bring people back together, especially since some migrant children have been separated from their parents at ages too young to know their or their parents’ names.
23andMe jumped at the idea, almost immediately offering to provide genetic testing kits to families and separated children. MyHeritage, another direct-to-consumer genetics company, quickly followed suit. Because the government failed to keep accurate records of separated children to later reunify families, the genetic tests could fill in the gaps to match separated families.
RAICES Texas, one of the biggest nonprofit organizations that has been raising money and providing legal support to these refugee and immigrant families, decided Monday to turn 23andMe and MyHeritage away, according to KQED.
There’s no question that Speier, 23andMe, and MyHeritage had good intentions (though they surely also saw the chance at the public relations move of a lifetime). But, as publications like The Verge and USA Today noted, the project came with far greater, probably unintended, dangers: it would mean compiling a genetic database of asylum seekers and other immigrants who were separated from their families upon entering the U.S. and are desperate to find their families again.
Collecting this genetic information would give these companies — and the government, if the records were subpoenaed — the ability to trace these families for purposes far beyond reuniting parents with children. It would create a store of private information about migrants that could be devastating if leaked or sold.
Read the full article in Futurism.
Was a renowned literary theorist also a spy?
Richard Wolin, Chronicle of Higher Education, 20 June 2018
Throughout the recent, highly public exchange of claims and counterclaims, accusations and counteraccusations, Kristeva has expressed concern that her reputation as an engaged intellectual would be unfairly and permanently harmed. In truth, however, the damage had been done long ago, the cumulative effect of her uncritical support, as a member of the Tel Quel circle, for left-wing dictatorships during the 1960s and 1970s: the Soviet Union, from 1968 to 1971; and Cultural Revolutionary China, from 1971 to 1976. What makes Kristeva’s partisanship for those regimes and their draconian practices so hypocritical and so objectionable is that Kristeva, as a young woman, had experienced firsthand the ultra-repressive nature of Soviet-style, bureaucratic socialism in her native Bulgaria, where, from 1954 to 1989, First Secretary Todor Zhivkov of the Bulgarian Communist Party ruled with an iron fist.
In 1968 the Tel Quel ensemble — which, in 1967, had myopically allied itself with the political fortunes of the French Communist Party (PCF) — endorsed the Soviet Union’s military invasion of Czechoslovakia: an act of tyranny that succeeded in crushing the last vestiges of hope for ‘socialism with a human face’ embodied by the Prague Spring. Aping the ideological rationalizations of the Stalin era, the Telquelians argued that those who criticized Soviet ruthlessness were merely providing aid and comfort to the bourgeoisie.
As it turned out, the Warsaw Pact’s brutal Prague incursion proved to be the final nail in the coffin of the French Communist Party. Thereafter, it was substantially discredited in the eyes of French youth and intellectuals. Whereas the 1960s had been an effervescent celebration of ‘youth culture’ and ‘youth revolt’, the bureaucratic Communism of the Brezhnev era was overseen by a clique of senescent septuagenarians.
At the time of the invasion, Tel Quel’s editorial director, Philippe Sollers, expressed a ‘secret enthusiasm’ for the Bulgarian tanks that had participated in the lopsided Warsaw Pact assault. Identifying with the Bulgarian strike force was, he explained, a way of honoring his love for Kristeva.
The decision to endorse Soviet brutality in Prague was merely one among many political blunders that the Tel Quel group committed during this period. During May 1968, Sollers, Kristeva, and company sided with the PCF’s condemnation of the French student uprising. Echoing the official party line, the Telquelians dismissed the student revolt due to its ‘insufficiently proletarian character.’ In mid-May, when a group of prominent French literati — Jean-Paul Sartre, Simone de Beauvoir, Jean Genet, Nathalie Sarraute, and Marguerite Duras — sought to form a new writers’ union in support of the student strike, the Tel Quel ‘salon Bolsheviks’ expressed their disapproval by demonstratively walking out of the assembly. As Sollers, who, as it turns out, hailed from a family of wealthy Bordeaux industrialists, pontificated at the time: ‘All revolution can only be Marxist-Leninist!’
Read the full article in the Chronicle of Higher Education.
Following a new trail of crumbs to agriculture’s origins
Tobias Richter & Amaia Arranz-Otaegui, Sapiens, 16 July 2018
Bread is the most common staple food in most parts of the world, apart from some areas of Asia where rice is king. It is also one of the most diverse food products: Each region makes its own distinct varieties using doughs made from water mixed with wheat, rye, corn, or other common plant-derived ingredients. Bread also has significant cultural, even national, connotations: What would France be without its baguette and croissant? Denmark without its rugbrød (rye bread) and smørrebrød (open-faced sandwiches)? The Arab world without pita? Every culture has developed its own types of bread that have, in many cases, become a culinary expression of identity. Baguettes, rye breads, tortillas, bagels, pitas, chapatis, focaccia, malooga (a flatbread found in Yemen), or nan—if you live in Europe, the Americas, Africa, or most parts of Asia, there’s a good chance that you’ll eat bread on any given day.
Yet the origins of bread, and why and how it became such a popular and versatile staple food, have been shrouded in something of a mystery.
Although labor intensive, making simple bread is relatively straightforward: One only needs water, flour, and a suitable place to bake—something as simple as a hot, flat stone will work. Archaeologists have detected traces of starch on grinding tools dating to the Upper Paleolithic (roughly 50,000–11,500 years ago) at a number of archaeological sites in modern-day Russia, Italy, the Czech Republic, and Israel. Although some archaeologists think this means that humans started producing flour quite early on, these tools could have been used to squash or pound starchy plants for other reasons, such as to make gruel or porridge. Grinding tools are also quite rare in the Upper Paleolithic, so even if these tools were used only for making flour, it doesn’t appear to have been a very common activity.
Most archaeologists put the beginning of bread making in the Neolithic era, which first began around 11,500 years ago in southwest Asia. In the Fertile Crescent—an arch-shaped region stretching from the Nile Valley through the Levant into Anatolia and Mesopotamia—charred plant remains excavated from archaeological sites, along with abundant grinding tools, flint sickle blades, and storage facilities, suggest that people had begun to cultivate wild wheat, rye, and barley, as well as legumes. But what they produced from these plants has also long been debated. Some archaeologists assume it was bread, but other suggestions include porridge or beer.
Conclusive evidence has been hard to come by – until now.
Read the full article in Sapiens.
Why politics needs hope (but no longer inspires it)
Titus Stahl, Aeon, 17 July 2018
Even so, the question remains whether political hope is really a good thing. If one of the tasks of government is to realise social justice, would it not be better for political movements to promote justified expectations rather than mere hope? Is the rhetoric of hope not a tacit admission that the movements in question lack strategies for inspiring confidence?
The sphere of politics has particular features, unique to it, that impose limitations on what we can rationally expect. One such limitation is what the American moral philosopher John Rawls in 1993 described as the insurmountable pluralism of ‘comprehensive doctrines’. In modern societies, people disagree about what is ultimately valuable, and these disagreements often cannot be resolved by reasonable argument. Such pluralism makes it unreasonable to expect that we will ever arrive at a final consensus on these matters. To the extent that governments should not pursue ends that cannot be justified to all citizens, the most we can rationally expect from politics is the pursuit of those principles of justice on which all reasonable people can agree, such as basic human rights, non-discrimination, and democratic decision-making. Thus, we cannot rationally expect governments that respect our plurality to pursue more demanding ideals of justice – for example, via ambitious redistributive policies that are not justifiable relative to all, even the most individualistic, conceptions of the good.
This limitation stands in tension with another of Rawls’s claims. He also argued, in 1971, that the most important social good is self-respect. In a liberal society, the citizens’ self-respect is based on the knowledge that there is a public commitment to justice – on the understanding that other citizens view them as deserving fair treatment. However, if we can expect agreement on only a narrow set of ideals, that expectation will make a relatively small contribution to our self-respect. Compared with possible consensus on more demanding ideals of justice, this expectation will do relatively little to make us view other citizens as being deeply committed to justice.
Fortunately, we need not limit ourselves to what we can expect. Even though we are not justified in expecting more than limited agreement on justice, we can still collectively hope that, in the future, consensus on more demanding ideals of justice will emerge. When citizens collectively entertain this hope, this expresses a shared understanding that each member of society deserves to be included in an ambitious project of justice, even if we disagree about what that project should be. This knowledge can contribute to self-respect and is thus a desirable social good in its own right. In the absence of consensus, political hope is a necessary part of social justice itself.
So it is rational, perhaps even necessary, to recruit the notion of hope for the purposes of justice. And this is why it matters that the rhetoric of hope has all but disappeared. We can seriously employ the rhetoric of hope only when we believe that citizens can be brought to develop a shared commitment to exploring ambitious projects of social justice, even when they disagree about their content. This belief has become increasingly implausible in light of recent developments that reveal how divided Western democracies really are. A sizeable minority in Europe and the US has made it clear, in response to the rhetoric of hope, that it disagrees not only about the meaning of justice but also with the very idea that our current vocabulary of social justice ought to be extended. One can, of course, still individually hope that those who hold this view will be convinced to change it. As things stand, however, this is not a hope that they are able to share.
Read the full article in Aeon.
‘It’s the book that gave me freedom’: Michael Ondaatje on The English Patient
Aida Edemariam, Guardian, 9 July 2018
What an extraordinary afterlife the book has had. ‘Well, it already had a second afterlife with the film, right? And that was a bolt of lightning that I wasn’t expecting. And then this – suddenly redoing the whole thing again. Another horse race, you know?’ He laughs. Though both were a fillip, really, on top of what the first prize gave him, which was the most precious thing: ‘Freedom. I had been teaching for many, many years up to that point.’ Teaching full-time, in fact, and ‘trying to write a complicated novel’, and that had become too much to manage. ‘I thought I was going to lose it – and I had quit my job. I just needed to finish the book. It was a bet.’ Which could not have come off more handsomely.
Penelope Lively, in her speech earlier in the evening, had mused about how different a person she was, at 85, from the one who in her mid-50s had written Moon Tiger. What did Ondaatje think of the self who wrote The English Patient (which he has not reread since it was published)? ‘Well, I still like him.’ More interesting, he thinks, is the way in which each book he’s written is like ‘a time capsule’.
When he was writing The English Patient, between about 1985 and 1992, there was an argument going on in Canada about nationalism and integration. ‘They didn’t want Sikhs to wear turbans if they were policemen and stuff like that. That was in the air.’ The striking thing is how contemporary his concerns – how to release oneself from the imposition of nationalism; how to rediscover one’s essential individuality or true, often artistic allegiances – now feel. Contemporary, and somehow, in a harsher time, impossibly idealistic.
Ondaatje is fond of a quotation from John Berger: ‘Never again will a story be told as if it were the only one.’ It is a moral imperative, isn’t it? Especially in current western politics, which seems so determined to cancel the multiplicity of viewpoints from all over the world, or at least to pretend that they don’t exist. ‘Oh, absolutely. The Berger quote is very interesting because it’s a political statement, but it’s also an artistic statement.’
So wasn’t the ending of The English Patient, in which the Sikh Kip (whose relationship with the Canadian nurse, Hana, Ondaatje describes as being like ‘continents meeting’) drops everything and returns home when he hears of the bombs falling on Hiroshima and Nagasaki, a failure of nerve? A reimposition of the nationalisms dissolved through the rest of the novel, where, as Kamila Shamsie put it: ‘Ondaatje’s imagination acknowledges no borders’?
Read the full article in the Guardian.
The images are, from top down: Portrait de la Femme by Pablo Picasso; Detail from Diego Rivera’s Detroit Industry mural (photo mine); Aiko Tezuka, ‘The Warp and Weft’; ‘Wired up’ by Inga Popesco, winner of the ‘Best Representation of the Human Connectome’ in the 2014 Brain Art competition; Julia Kristen (photographer unknown).