web 67

The latest (somewhat random) collection of essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.


We’re banning facial recognition.
We’re missing the point.

Bruce Schneier, New York Times, 20 January 2020

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to ‘follow’ us as we move throughout our day. It can be purchasing data, internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are — using surveillance data collected by all sorts of companies and then sold without our knowledge or consent…

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Read the full article in the New York Times.


Uncivil disobedience in Hong Kong
Candice Delmas, Boston Review, 13 January 2020

Initially sparked by the government’s introduction in February 2019 of an extradition bill that would have allowed criminal suspects to stand trial in mainland China, the movement in Hong Kong quickly turned into a rally for democracy. Its slogan: ‘Liberate Hong Kong! Revolution of our times!’ Participants in the movement insist they have learned their lesson from the failure of the last pro-democracy protests, Hong Kong’s 2014 Occupy Central (formally, Occupy Central with Love and Peace), also known as the Umbrella Movement. Like a number of the worldwide Occupy movements that erupted after the financial crisis of 2008, these protests followed the civil disobedience textbook. One of the movement’s leaders, law professor Benny Tai, even argued that Occupy Central sought to emulate the classical, liberal conception of civil disobedience found in John Rawls’s A Theory of Justice (1971). Participants sought to be peaceful, rational, and nonviolent, or woh-leih-fei in the Cantonese shorthand.

Protesters occupied the city’s main thoroughfares for seventy-nine days against proposed reforms to the electoral system. They peacefully marched and sat-in at the government headquarters. They remained public, nonviolent, and respectful throughout—in a word: civil. Nevertheless, the government refused to negotiate. It arrested and aggressively prosecuted the movement’s leaders. Joshua Wong and Nathan Law were convicted of unlawful assembly and incitement to unlawful assembly, and Benny Tai of conspiracy to cause public nuisance. Participants willingly accepted arrest and punishment. Popular support, initially very high, faltered with the prolonged occupation engulfing the city.

Participants in today’s protests, such as the visual artist Kacey Wong, contrast the civil disobedience of the Umbrella Movement with the current movement’s uncivil disobedience. It does not abide the strictures of civility—publicity, nonviolence, non-evasion of law enforcement, and decorum—nor does it seek to. ‘It was you who taught me that peaceful marches are useless,’ one protester scrawled on the wall of the Legislative Council Complex, which was stormed on July 1, 2019.

Read the full article in the Boston Review.


Why historical analogy matters
Peter E. Gordon, NYR Daily, 7 January 2020

On June 24, 2019, the United States Holocaust Memorial Museum issued a formal statement that it ‘unequivocally rejects the efforts to create analogies between the Holocaust and other events, whether historical or contemporary.’ The statement came in response to a video posted by Alexandria Ocasio-Cortez, the Democratic congresswoman from New York, in which she had referred to detention centers for migrants on the US southern border as ‘concentration camps.’ If the historical allusion wasn’t already apparent, she added a phrase typically used in reference to the genocide of European Jewry: ‘Never Again.’ Always a favorite target of right-wing politicians, Ocasio-Cortez drew a scolding retort from Liz Cheney, the Republican Congresswoman from Wyoming, who wrote on Twitter: ‘Please @AOC do us all a favor and spend just a few minutes learning some actual history. 6 million Jews were exterminated in the Holocaust. You demean their memory and disgrace yourself with comments like this.’ In the ensuing social media storm, the statement by the Holocaust Memorial Museum against historical analogies gave the unfortunate appearance of partisanship, as though its directors meant to suggest that Cheney was right and Ocasio-Cortez was wrong.

Much of this might have been a tempest in the tweet-pot were it not for the fact that, on July 1, 2019, an international group of scholars published an open letter on The New York Review of Books website expressing their dismay at the Holocaust Memorial Museum’s statement and urging its director to issue a retraction. ‘The Museum’s decision to completely reject drawing any possible analogies to the Holocaust, or to the events leading up to it, is fundamentally ahistorical,’ they wrote. ‘Scholars in the humanities and social sciences rely on careful and responsible analysis, contextualization, comparison, and argumentation to answer questions about the past and the present.’ Signed by nearly 600 scholars, many working in fields related to Jewish studies, the letter was restrained but forthright. ‘The very core of Holocaust education,’ it said, ‘is to alert the public to dangerous developments that facilitate human rights violations and pain and suffering.’ The museum’s categorical dismissal of the legitimacy of analogies to other events was not only ahistorical, it also inhibited the public at large from considering the moral relevance of what had occurred in the past. Granting the possibility of historical analogies and ‘pointing to similarities across time and space,’ they warned, ‘is essential for this task.’

I was neither an author of this letter nor an original signatory, but like many others, I later added my name, as I felt the issues it raised were of great importance. The debate involves an enormous tangle of philosophical and ethical questions that are not easily resolved: What does it mean when scholars entertain analogies between different events? How is it even possible to compare events that occurred in widely different circumstances? The signatories of the open letter to the USHMM were entirely right to say that analogical reasoning is indispensable to the human sciences. But it’s worth turning over the deeper, more philosophical question of how analogies guide us in social inquiry, and why they cannot be dismissed even when some comparisons may strike critics as politically motivated and illegitimate.

Read the full article in the NYR Daily.


The enemies of writing
George Packer, The Atlantic, 23 January 2020

What are the enemies of writing today?

First, there’s belonging. I know it sounds perverse to count belonging as an enemy of writing. After all, it’s a famously lonely life—the work only gets done in comfortless isolation, face-to-face with yourself—and the life is made tolerable and meaningful by a sense of connection with other people. And it can be immensely helpful to have models and mentors, especially for a young person who sets out from a place where being a writer might be unthinkable. But this solidarity isn’t what I mean by belonging. I mean that writers are now expected to identify with a community and to write as its representatives. In a way, this is the opposite of writing to reach other people. When we open a book or click on an article, the first thing we want to know is which group the writer belongs to. The group might be a political faction, an ethnicity or a sexuality, a literary clique. The answer makes reading a lot simpler. It tells us what to expect from the writer’s work, and even what to think of it. Groups save us a lot of trouble by doing our thinking for us.

Politicians and activists are representatives. Writers are individuals whose job is to find language that can cross the unfathomable gap separating us from one another. They don’t write as anyone beyond themselves. But today, writers have every incentive to do their work as easily identifiable, fully paid-up members of a community. Belonging is numerically codified by social media, with its likes, retweets, friends, and followers. Writers learn to avoid expressing thoughts or associating with undesirables that might be controversial with the group and hurt their numbers. In the most successful cases, the cultivation of followers becomes an end in itself and takes the place of actual writing.

As for the notion of standing on your own, it’s no longer considered honorable or desirable. It makes you suspect, if not ridiculous. If you haven’t got a community behind you, vouching for you, cheering you on, mobbing your adversaries and slaying them, then who are you? A mere detached sliver of a writing self, always vulnerable to being punished for your independence by one group or another, or, even worse, ignored…

Among the enemies of writing, belonging is closely related to fear. It’s strange to say this, but a kind of fear pervades the literary and journalistic worlds I’m familiar with. I don’t mean that editors and writers live in terror of being sent to prison. It’s true that the president calls journalists ‘enemies of the American people,’ and it’s not an easy time to be one, but we’re still free to investigate him. Michael Moore and Robert De Niro can fantasize aloud about punching Donald Trump in the face or hitting him with a bag of excrement, and the only consequence is an online fuss. Nor are Islamist jihadists or white nationalists sticking knives in the backs of poets and philosophers on American city streets. The fear is more subtle and, in a way, more crippling. It’s the fear of moral judgment, public shaming, social ridicule, and ostracism. It’s the fear of landing on the wrong side of whatever group matters to you. An orthodoxy enforced by social pressure can be more powerful than official ideology, because popular outrage has more weight than the party line.

Read the full article in The Atlantic.


How Helene, a Jewish Holocaust survivor, became Leila,
the matriarch of a Palestinian Muslim clan

Steve Hendrix & Ruth Eglash,
Washington Post, 22 January 2020

Leila Jabarin looked every inch the matriarch of the Muslim family that surrounded her on a recent morning, encircled by some of her 36 grandchildren in a living room rich with Arabic chatter and the scent of cardamom-flavored coffee.

But Jabarin, her hair covered with a brown headscarf, was talking to visitors in Hebrew, not Arabic, and telling a story that not even her seven children knew until they were grown. She was born not Leila Jabarin, but Helene Berschatzky, not a Muslim but a Jew. Her history began not in this Arab community where she has made her life with the Palestinian man she fell in love with six decades ago, but in a Nazi concentration camp where her Jewish parents had to hide their newborn from the Nazis.

As world leaders — including U.S. Vice President Pence — gather in Jerusalem this week to mark the 75th anniversary of the liberation of Auschwitz, Jabarin was sharing a survivor’s memory unlike any other, a history of love and hate that exposes not just the power of transformation, but also the blindness of prejudice.

‘First I was persecuted because I was a Jew, and now I am persecuted because I am a Muslim,’ said Jabarin, who has watched the recent rise of both anti-Semitism and Islamophobia with alarm.

Jabarin took note of the massacre of 11 worshipers at a Pittsburgh synagogue in 2018 and another 51 last year at mosques in Christchurch, New Zealand. She attributed to both killers the same motivation, a hatred of the other, and is telling her story to show that love for the other is possible as well.

‘When I was in school, they taught us that Arabs had tails,’ she said, looking around at her Arab husband and her Arab family, as the Muslim call to prayer sounded across the neighborhood outside. ‘Everyone should know what happened to the Jews because it could happen to the Arabs.’

Among those listening in her living room was Erez Kaganovitz, a Tel Aviv photographer who is crisscrossing Israel to document as many such stories and images as he can from the rapidly dwindling number of living Holocaust survivors. Through histories like Jabarin’s, he hopes to keep the knowledge of those horrors from disappearing with those who endured them.

‘Ten years from now, what will be the memory of the Holocaust when the last survivor is no longer with us?’ asked Kaganovitz. ‘If we tell the human stories, not just what happened in the camps but how they lived after, they appeal to humans in the way that numbers cannot. Six million Jews killed; it’s too big.’

Read the full article in the Washington Post.


The far-right is going global
Eviane Leidig, Foreign Policy, 21 January 2020

In October 2019, 23 members of the European Parliament (MEPs) visited Kashmir, just two months after the Indian government removed the region’s special autonomous status. The trip sparked controversy when it was revealed that most of the MEPs belonged to far-right political parties, including France’s National Rally (formerly National Front) and Germany’s Alternative für Deutschland (AfD). It wasn’t just the affiliations of these visitors that drew attention: The MEPs had been granted access to Kashmir even as foreign journalists and domestic politicians were barred from the region, and the Indian government had imposed an internet shutdown since August.

This visit was the latest example of the growing ties between the far-right in India and Europe, a connection that is rooted primarily in a shared hostility toward immigrants and Muslims, and couched in similar overarching nationalistic visions. Today, with the populist radical right ascendant in India and in several European democracies, the far-right agenda has been increasingly normalized and made a part of mainstream political discourse.

The link between far-right ideologies in these regions long predates the relatively recent rise of right-wing populist leaders. In the 1930s, Hindu nationalists collaborated with key figures in Fascist Italy and Nazi Germany in order to help advance their extreme right-wing projects. One of the pioneers of Hindu nationalism, V.D. Savarkar, once wrote that India should model its approach to its ‘Muslim problem’ on that used by the Nazis to deal with their ‘Jewish problem.’

Similarly, European ideologues like Savitri Devi (born in France as Maximiani Portas) described Hitler as an incarnation of the Hindu god Vishnu. Nearly four decades after she died, her ideology remains popular among American white nationalists. The manifesto of Anders Behring Breivik, the Norwegian terrorist who killed 77 people in 2011, also expressed an affinity for the Hindu nationalist approach to Islam that highlights many contemporary European attitudes toward Muslim immigrant populations.

‘The only positive thing about the Hindu right wing is that they dominate the streets. They do not tolerate the current injustice and often riot and attack Muslims when things get out of control, usually after the Muslims disrespect and degrade Hinduism too much,’ Breivik wrote before bombing a government building in Oslo and killing dozens of children at a summer camp. ‘India will continue to wither and die unless the Indian nationalists consolidate properly and strike to win. It is essential that the European and Indian resistance movements learn from each other and cooperate as much as possible. Our goals are more or less identical.’

Read the full article in Foreign Policy.


The equality conundrum
Joshua Rothman, New Yorker, 6 January 2020

According to the Declaration of Independence, it is ‘self-evident’ that all men are created equal. But, from a certain perspective, it’s our inequality that’s self-evident. A decade ago, the writer Deborah Solomon asked Donald Trump what he thought of the idea that ‘all men are created equal.’ ‘It’s not true,’ Trump reportedly said. ‘Some people are born very smart. Some people are born not so smart. Some people are born very beautiful, and some people are not, so you can’t say they’re all created equal.’ Trump acknowledged that everyone is entitled to equal treatment under the law but concluded that ‘All men are created equal’ is ‘a very confusing phrase to a lot of people.’ More than twenty per cent of Americans, according to a 2015 poll, agree: they believe that the statement ‘All men are created equal’ is false.

In Waldron’s view, though, it’s not a binary choice; it’s possible to see people as equal and unequal simultaneously. A society can sort its members into various categories – lawful and criminal, brilliant and not – while also allowing some principle of basic equality to circumscribe its judgments and, in some contexts, override them. Egalitarians like Dworkin and Waldron call this principle ‘deep equality.’ It’s because of deep equality that even those people who acquire additional, justified worth through their actions – heroes, senators, pop stars – can still be considered fundamentally no better than anyone else. By the same token, Waldron says, deep equality insures that even the most heinous murderer can be seen as a member of the human race, ‘with all the worth and status that this implies.’ Deep equality – among other principles – ought to tell us that it’s wrong to sequester the small children of migrants in squalid prisons, whatever their legal status. Waldron wants to find its source.

In the course of his search, he explores centuries of intellectual history. Many thinkers, from Cicero to Locke, have argued that our ability to reason is what makes us equals. (But isn’t this ability itself unequally distributed?) Other thinkers, including Immanuel Kant, have cited our moral sense. (But doesn’t this restrict equality to the virtuous?) Some philosophers, such as Jeremy Bentham, have suggested that it’s our capacity to suffer that equalizes us. (But then, many animals suffer, too.) Others have nominated our capacity to love. (But what about selfish, hard-hearted people?) It would be helpful, on a practical level, if there were a well-defined basis for our deep equality. Such a basis might guide our thinking. If deep equality turned out to be based on our ability to suffer, for example, then Michael and Angela might feel better about giving their daughter Alexis, who risks blindness, more money than her siblings. But Waldron finds none of these arguments totally persuasive.

Read the full article in the New Yorker.


Technology can’t fix algorithmic injustice
Annette Zimmermann, Elena Di Rosa & Hochan Kim
Boston Review, 9 January 2020

A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of ‘strong’ (or ‘general’) AI – and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI ‘could spell the end of the human race.’

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that ‘weak’ (or ‘narrow’) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.

Read the full article in the Boston Review.


There is no mental illness epidemic
Polly MacKenzie, UnHerd, 20 January 2020

My father is a psychoanalytic psychotherapist, so I grew up with the opposite world view: that we’re all a bit damaged, dysfunctional and deluded. On more than one occasion, when I expressed some teenage angst, he chuckled and — with great affection in his voice — said I was a fascinating bundle of psychopathology. When I accidentally stabbed myself in the hand with a pencil while revising for my A Levels, he described this as a ‘particularly primitive piece of acting out.’ He was probably right.

Slowly, the world has come round to my father’s point of view. We have new shibboleths proclaimed by mental health advocates: mental health exists on a spectrum, just like physical health, they say. Sometimes we’re up, and sometimes we’re down. One in four of us is experiencing a diagnosable mental health problem at any one time: two thirds of us will experience mental illness in our lifetimes.

All this is technically true. But as a public narrative it’s creating its own disasters. If everyone is a little bit crazy, and everyone should ask for help, it creates a wall of demand that medicalised mental health services will never be able to meet, and that is compromising our ability to help those most in need. We are not in the middle of a mental illness epidemic. We are in the middle of a treatment crisis, where there is not enough help to go around.

Yes: about ‘one in four’ of us has a diagnosable mental health problem. But it was about one in four back in 1993 when we first asked this question in a survey of what’s called ‘Adult Psychiatric Morbidity’. It was one in four in 2000, in 2007 and in 2014. Of course, it’s absurd that this survey is only done once every seven years: we won’t have the next survey results until 2023 at the earliest, so maybe things have changed. But long term, the pattern is clear. Numbers of people with poor mental health are pretty stable.

So what’s changed? Why all the noise about a crisis? We’ve done something wonderful by stripping the stigma from mental health problems. We’ve empowered people to come forward and ask for help, instead of suffering in silence. Between 2007 and 2014 the number of people in treatment rose from 25% of those with symptoms of a common mental disorder to 40%. That’s an extra 1.5 million UK adults who felt able to ask for help, and got some.

So why am I not celebrating? Because providing treatment for those ‘common mental disorders’ too often comes at the expense of help for those with less common, but more serious, conditions.

Read the full article in UnHerd.


The achievement gap in education:
Racial segregation versus segregation by poverty
Dick Startz, Brookings, 20 January 2020

While racial achievement gaps have been reduced since the days of King’s campaigns, the remaining gaps are still large. There are many reasons for persistence in the achievement gap, including that the legacy of separate and unequal may cause segregation in the past to have continued effects today. Looking directly at today’s situation—how important is today’s segregation per se? Black students mostly go to school with other Black students. Black students also mostly go to school with low-income students. Do either of these forms of segregation contribute to the racial achievement gap? The new work by Reardon, et al., ‘Is Separate Still Unequal? New Evidence on School Segregation and Racial Academic Achievement Gaps,’ suggests that it is primarily poverty segregation rather than race segregation that accounts for segregation’s effect on the achievement gap.

The authors use a massive dataset that covers achievement in grades 3-8 in about half the school districts in the United States; notably, these districts include 96% of all Black public school students. The outcome variable studied is the Black/white achievement gap on test scores in math and English language achievement. (The results discussed below are for third graders.) The central question studied is the extent to which the achievement gap is explained by four differences in the average experience of Black versus white children measured by exposure to: minority schoolmates; poor schoolmates; minority neighbors; and poor neighbors.

The authors control for the usual measures that help explain the achievement gaps, and then focus on segregation per se. So the authors are telling us how much segregation matters over and above other differences that explain achievement.

The authors’ first result is that differences in exposure to minority schoolmates appear to matter a lot—if taken alone. According to my calculations using the authors’ reported outcomes, a one standard deviation increase in this measure accounts for about 9% of the total achievement gap. If exposure to minority neighbors is accounted for, the effect is even a bit larger, with the effect of schoolmates being about twice as large as the effect of neighbors. In other words, the apparent effects of racial segregation are pretty much what you’d expect.

But the authors point out that schools are also highly segregated by income level, specifically by the fraction of students living in poverty. And measures of racial segregation and ‘poverty segregation’ are highly correlated. (Depending on the exact measures used, the reported correlation coefficients are between 0.82 and 0.93.) Is the apparent effect of race really an effect of poverty? According to the authors’ work, yes.

Read the full article in Brookings.



Why Silicon Valley fell in love with
an ancient philosophy of austerity
Jacob Rosenberg, Mother Jones, Jan/Feb 2020

Stoicism begins with a wayward merchant. Legend has it that in 312 B.C. a dye salesman named Zeno was shipwrecked in the Aegean Sea. He washed ashore in Athens, read a book of Socratic dialogues, and found philosophical purpose. His brush with misfortune guided him to the idea of self-control in an uncontrollable world. ‘Man conquers the world by conquering himself,’ Zeno is said to have said. He developed a following.

Soon he and his acolytes were convening in the Athenian marketplace on a porch that would become their namesake, the Stoa Poikile, where they were surrounded by others who wished to draw a crowd—fire eaters, sword swallowers, and the like. Zeno’s doctrines of personal mastery would trickle down to Seneca, a sort of philosopher laureate of the Roman imperial period, and Marcus Aurelius, the Roman emperor—and eventually, two millennia later, to an opportunistic writer named Ryan Holiday, himself a kind of wayward merchant.

‘At the core of it,’ Holiday, a former marketer for American Apparel, is telling me, ‘I think there’s something in Stoicism that connects to the philosopher mindset’—he catches himself—’or, sorry, the entrepreneurial mindset.’ Zeno is ‘an entrepreneur,’ Holiday explains, ‘a person trying to make their way in the real world.’ Then he begins to rattle off the new mantras of the philosopher-entrepreneur: ‘You’re responsible for yourself. No one’s coming to rescue you. Make the most of this. Don’t get caught sleeping. Put in the work.’

Holiday is, most recently, the author of Stillness Is the Key—not to be confused with The Obstacle Is the Way or Ego Is the Enemy, his two other this-is-the-that Stoicism titles. His book sales number in the millions; last January, Holiday’s 2016 book of meditations on Stoicism sold nearly 33,000 copies, he tells me. He also runs a popular website called Daily Stoic (plus a video channel), and he claims his daily email of Stoic advice reaches hundreds of thousands of people, including senators, billionaires, and ‘just a lot of ordinary people.’ That’s to say nothing of the speaking engagements with NFL players and coaches, Navy SEALs, and government agencies. There’s Stoic merch, too: a memento mori medallion for $26, a ring for $245. Holiday’s project is straightforward: to boil down the work of ancient thinkers into pithy advice—philosophy as lifehack. ‘Have Your Best Week Ever With These 8 Lessons of Stoicism,’ proclaims one of his video titles.

Along with tech lifestyle guru Tim Ferriss, Holiday is the principal evangelist of modern Stoicism, which is enjoying a vogue these days, especially among the Silicon Valley set. Gwyneth Paltrow’s personal book curator says, ‘Stoic philosophers are having a moment now.’ A mini–book industry has sprung up around Stoicism that includes Holiday, Donald Robertson, and William B. Irvine. Author Elif Batuman took a Stoic turn while living alone in Istanbul. Susan Fowler, the software engineer who blew the whistle on Uber’s sexist culture, said she’d found the courage to come forward from Stoic teachings. Elizabeth Holmes held close a copy of Meditations as her blood-testing company crumbled. Twitter CEO Jack Dorsey is regularly taking cold baths. And there are the hundreds of thousands of people on Reddit and Twitter and Y Combinator’s Hacker News message board, ingesting bite-sized Stoic quotes about indifference and control and holding themselves up as a lonesome but formidable bulwark against ‘modern-day Oprah culture.’

Read the full article in Mother Jones.


The way we write history has changed
Alexis C. Madrigal, The Atlantic, 21 January 2020

History, as a discipline, comes out of the archive. The archive is not the library, but something else entirely. Libraries spread knowledge that’s been compressed into books and other media. Archives are where collections of papers are stored, usually within a library’s inner sanctum: Nathaniel Hawthorne’s papers, say, at the New York Public Library. Or Record Group 31 at the National Archives—a set of Federal Housing Administration documents from the 1930s to the ’70s. Usually, an archive contains materials from the people and institutions near it. So, the Silicon Valley Archives at Stanford contains everything from Atari’s business plans to HP co-founder William Hewlett’s correspondence.

While libraries have become central actors in the digitization of knowledge, archives have generally resisted this trend. They are still almost overwhelmingly paper. Traditionally, you’d go to a place like this and sit there, day after day, ‘turning every page,’ as the master biographer Robert Caro put it. You might spend weeks, months, or, like Caro, years working through all the boxes, taking extensive notes and making some (relatively expensive) photocopies. Fewer and fewer people have the time, money, or patience to do that. (If they ever did.)

Enter the smartphone, and cheap digital photography. Instead of reading papers during an archival visit, historians can snap pictures of the documents and then look at them later. Ian Milligan, a historian at the University of Waterloo, noticed the trend among his colleagues and surveyed 250 historians, about half of them tenured or tenure-track, and half in other positions, about their work in the archives. The results quantified the new normal. While a subset of researchers (about 23 percent) took few (fewer than 200) photos, the plurality (about 40 percent) took more than 2,000 photographs for their ‘last substantive project.’

The driving force here is simple enough. Digital photos drive down the cost of archival research, allowing an individual to capture far more documents per hour. So an archival visit becomes a process of standing over documents, snapping pictures as quickly as possible. Some researchers organize their photos by swiping on an iPhone, or with an open-source tool named Tropy; some, like Alex Wellerstein, a historian at Stevens Institute of Technology, have special digital-camera setups and a standardized method. In my own work, I used Dropbox’s photo tools to output PDFs, which I then dropped into Scrivener, my preferred writing software.

Read the full article in The Atlantic.


Why the drugs don’t work
Tom Chivers, UnHerd, 21 January 2020

Imagine that you’re trying to decide which school you want to send your child to. Of course, your little darling is the most gifted and brilliant child in the world — anyone can see that! That time he set the headteacher’s hair on fire was only because he wasn’t feeling sufficiently challenged. Anyway, it’s time to find somewhere that will really push him. So you’re looking at the exam results of the various schools in your area.

Most of the schools report that 80% or so of their children achieve A-to-C grades in all their exams. But one school reports 100%. They all appear to be demographically similar, so you assume, reasonably enough, that the teaching is much better in that one school, and so you send little Mephiston there.

But a year later, his grades have not improved, and he is once again in trouble for dissecting a live cat in biology class. You dig a little deeper into the exam results, and someone tells you that the school has a trick. When a child doesn’t get a result between A and C, the school simply doesn’t tell anyone! In their reports, they only mention the children who get good grades. And that makes the results look much better.

Presumably, you would not feel that this is a reasonable thing to do.

It is, however, exactly what goes on a lot in actual science. Imagine you do a study into the efficacy of some drug, say a new antidepressant. Studies are naturally uncertain — there are lots of reasons that someone might get better or not get better from complex conditions like depression, so even in big, well-conducted trials the results will not perfectly align with reality. The study may find that the drug is slightly more effective than it really is, or slightly less; it may even say that an effective drug doesn’t work, or that an ineffective one does. It’s just the luck of the draw to some degree.

That’s why — as I’ve discussed before — you can’t rely on any single study. Instead, the real gold standard of science is the meta-analysis: you take all the best relevant studies on a subject, combine their data, and see what the average finding is. Some studies will overestimate an effect, some will underestimate it, but if the studies are all fair and all reported accurately, then their findings should cluster around the true figure. It’s like when you get people to guess the number of jelly beans in a jar: some people will guess high, some low, but unless there’s some reason that people are systematically guessing high or low, it should average out.

But what if there is such a reason? What if — analogous to the school example above — the studies that didn’t find a result just weren’t ever mentioned? Then the meta-analyses would, of course, systematically find that drugs were more effective than they are.
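
Chivers’s jelly-bean logic can be made concrete with a small simulation (a hypothetical sketch, not from the article; the effect size, noise level, and publication threshold are illustrative assumptions). Many noisy studies of a drug with no real effect are pooled two ways: a meta-analysis of everything recovers the truth, while a meta-analysis of only the ‘positive’ studies does not.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0   # the drug actually does nothing
NOISE = 1.0         # sampling error in each individual study
N_STUDIES = 10_000

# Each study reports the true effect plus random noise.
studies = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_STUDIES)]

# Meta-analysis of all studies: over- and underestimates cancel out,
# and the pooled estimate clusters around the true figure.
all_pooled = statistics.mean(studies)

# Publication bias: only studies that 'found something' are reported,
# so the pooled estimate is systematically inflated.
published = [s for s in studies if s > 0.5]
biased_pooled = statistics.mean(published)

print(f"pooled estimate, all studies:    {all_pooled:+.3f}")
print(f"pooled estimate, published only: {biased_pooled:+.3f}")
```

Under these assumptions the full pooled estimate sits near zero while the ‘published only’ estimate is well above it, which is exactly the failure mode the school-results analogy describes.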

Read the full article in UnHerd.


Profile evidence, fairness,
and the risks of mistaken convictions
Marcello Di Bello & Collin O’Neil, Ethics, January 2020

In 1992 a package containing raw opium was delivered to an apartment rented by Neng Vue and Lee Vue, two brothers of Hmong ancestry who lived in the city of Minneapolis. The police monitored the delivery, and the brothers were arrested and brought to trial on opium trafficking charges. To bolster the case against them, the prosecution called an expert witness to the stand who testified that 95 percent of the opium smuggling cases in the Minneapolis area involved people of Hmong ancestry.  When paired with the fact that the Hmong compose only 6 percent of the population in the area, the 95 percent estimate yields the following correlation:

Ethnicity: In the Minneapolis area, someone who is of Hmong ancestry is 297 times more likely to be trafficking drugs as compared to someone who is not of Hmong ancestry.
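
The arithmetic behind that figure can be checked directly. By Bayes’s rule, the ratio of trafficking rates between the two groups depends only on the two percentages quoted, because the overall (unknown) base rate of trafficking cancels. A quick sketch, using the paper’s numbers:

```python
# How the '297 times' figure falls out of the two quoted percentages.
p_hmong_given_trafficker = 0.95   # expert testimony: share of cases involving Hmong people
p_hmong = 0.06                    # share of Hmong people in the Minneapolis-area population

# P(traffic | Hmong) / P(traffic | not Hmong)
#   = [P(Hmong | traffic) / P(Hmong)] / [P(not Hmong | traffic) / P(not Hmong)]
# (the base rate of trafficking appears in both terms and cancels).
relative_risk = (p_hmong_given_trafficker / p_hmong) / (
    (1 - p_hmong_given_trafficker) / (1 - p_hmong)
)
print(f"relative risk ≈ {relative_risk:.1f}")
```

The result is just under 298, matching the article’s figure of 297 (the paper evidently truncates rather than rounds).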

The brothers were convicted of opium smuggling, and while the correlation between ethnicity and drug trafficking would obviously not have been enough on its own to convict them, it might well have been the tipping point for the jury. The convictions were reversed on appeal, on the grounds that the jury had been improperly invited to ‘put the Vues’ racial and cultural background into the balance in determining their guilt.’ Setting aside the legal details of the court’s argument, there is surely something intuitively troubling about admitting evidence like Ethnicity at trial. Allowing such evidence to be used against a defendant would seem to wrong the defendant, even when it plays only a supplementary role.

The use of statistics about ethnicity, especially statistics about a disadvantaged and stigmatized ethnicity, may raise special concerns. But there is also intuitive resistance to admitting the following correlations as evidence against defendants:

Prior Burglary: Someone who has previously been convicted of burglary is 125 times more likely to commit burglary than someone in the general population.

Bad Environment: 20 percent of males brought up in broken homes, addicted to drugs, and unemployed go on to commit serious acts of violence, while only 0.1 percent of people in the general population commit such crimes.

Ethnicity, Prior Burglary, and Bad Environment are all examples of what we shall call ‘profile evidence.’ In its incriminating form, profile evidence expresses a positive, nonaccidental statistical correlation between bearing a certain property and committing a type of crime. When the correlation is reliable and the defendant has the property, this evidence is probative of guilt—that is, its addition to the body of evidence would increase, sometimes even substantially, the probability that the fact finders should assign to the defendant’s guilt.

Read the full article in Ethics.


Rediscovering the lost power of reading aloud
Meghan Cox Gurdon, LitHub, 22 January 2020

It is a marvelous thing: simple, profound, and very, very ancient. What Salman Rushdie calls ‘the liquid tapestry’ of storytelling is one of the great human universals. So far as we can tell, starting in Paleolithic times, in every place where there are or have been people, there has been narrative. Here is Gilgamesh, the Sumerian epic recorded on clay tablets in cuneiform script 1,500 years before Homer. Here are the Mahabharata and the Ramayana, vast Sanskrit poems dating from the 9th and 8th centuries BC. Here too is the thousand-year-old Anglo-Saxon legend Beowulf, the Icelandic Völsunga saga, the Malian epic Sundiata, the Welsh Mabinogion, the Persian, Egyptian, and Mesopotamian ferment of The Thousand and One Nights, and the 19th-century Finnish and Karelian epic the Kalevala. This list is necessarily partial.

Once upon a time, none of these stories had yet been fixed on a page (or a clay tablet), but were carried in the physical bodies of the people who committed them to memory. Long before Johannes Gutenberg and his printing press, and 1,000 years before cloistered monks and their illuminated manuscripts, the principal storage facility for history, poetry, and folktales was the human head. And the chief means of transmitting that cultural wealth, from generation to generation, was the human voice…

Silent reading of the sort we practice with our books and laptops and cellphones was once considered outlandish, a mark of eccentricity. Plutarch writes of the way that Alexander the Great perplexed his soldiers, around 330 BC, by reading without utterance a letter he had received from his mother. The men’s confusion hints at the rarity of the spectacle. Six hundred years later, Augustine of Hippo witnessed the Milanese bishop (and fellow future saint) Ambrose contemplating a manuscript in his cell. Augustine was amazed by the old man’s peculiar technique. ‘When he read,’ Augustine marveled in his Confessions, ‘his eyes scanned the page and his heart sought out the meaning, but his voice was silent and his tongue was still.’

For Augustine, as Alberto Manguel observes, ‘the spoken word was an intricate part of the text itself.’ We don’t think that way now. For us, the written word has the real weight and gravitas. We joke: ‘It must be true, I saw it on the Internet,’ in an echo of the old line about the sanctity of print.

Yet as Dante observed, speech – the words we say, the pauses between them, and our inflection – is our native language. Writing is the crystallizing of liquid thought and speech, and therefore a kind of translation. When a girl in modern times listens to her mother or father read an abridged version of The Iliad or The Odyssey, in a curious way she is hearing Homer translated at least four times over. What began as spoken Greek became written Greek, which was translated into written English, and then, in a final transformation, was freed from the page and set loose in the air as spoken English.

Read the full article in LitHub.


Ritwik Ghatak, Subarnarekha

A new look at Ritwik Ghatak’s Bengal
Ratik Asokan, NYR Daily, 25 January 2020

In February 1972, three months after the close of East Pakistan’s bloody war of secession, the Indian filmmaker Ritwik Ghatak traveled to Dhaka, capital of the new nation of Bangladesh, as a state guest. It was a kind of homecoming. Born in 1925 in the eastern part of Bengal province, then an undivided state in British-ruled India, Ghatak grew up in the region before he relocated west to Calcutta in the early 1940s. Since 1947, when the province was split in two during Partition, its western half going to India and eastern half to Pakistan, he had not returned east.

As his flight crossed over the River Padma, which runs down Bangladesh, and on whose banks Ghatak had played as a child, the director was so moved that he burst into tears. ‘I felt that Bengal in plenty and beauty as I knew her years back was still there untransformed,’ he recalled years later. Inspired, he returned to Bangladesh the same year to shoot a film, his first in a decade, A River Called Titas. But up close, he saw that the world of his childhood had all but disappeared. ‘Everything has changed out of recognition—people’s thought, mind, and soul,’ he admitted. ‘They have lost culture.’

The Partition of Bengal looms large over the cinema of Ritwik Ghatak. In eight grave, ravishing films, made in an intermittent career that spanned the first three decades of Indian independence, he returned time and again to the calamitous event, which had deeply scarred him. Sometimes, as in his wayward masterpiece Subarnarekha (1962), he reckoned with the suffering unleashed on the refugees of Partition; in other movies he approached the subject more obliquely. Always he drenched his plots in old Bengali literary and mythic allusions, as well as the region’s classical and folk music, as if expressing a nostalgia for an undivided past…

Ritwik Ghatak was born into the Bhadralok (roughly ‘gentlefolk’), a small class of educated, upper-caste Bengalis who exercise an outsize influence on the region’s politics and culture. His father, a magistrate, was a scholar of Sanskrit; his oldest brother, Manish, was a well-regarded poet. Like a good Bhadralok child, Ghatak grew up immersed in Bengali literature, music, and art – all of which he loved and drew on in his films. (‘In his heart and soul, Ritwik was a Bengali film director, a Bengali artist, much more of a Bengali than myself,’ Satyajit Ray observed of his friend and sometimes rival.) Yet, for the Bhadralok itself, Ghatak had little admiration. His films expose the arrogance of their casteism.

Ghatak came of age during the 1940s, a decade of seismic change in Bengal. It was a period bookended by a famine created by the Crown in 1943, in which more than three million villagers starved to death, and the horrific riots in 1946 that foreshadowed Partition. These events dispelled any illusions he harbored about the future of India. In all his films, and particularly the Partition Trilogy, the new nation is presented as something fractured, unstable, permeated with violence, and doomed from the outset. The contrast with Ray—see his glowing Apu Trilogy – could not be starker.

Ghatak’s pessimism is already evident in his third film, The Runaway (1959). It is the story of a child who flees his dreary village and tyrannical schoolmaster father to wander through Calcutta. The journey begins with a long, early-morning sequence – shots of the Howrah Bridge emerging against a dim sky; bustling streets seen from a low angle—that wonderfully evokes the dreamlike immediacy of childhood. But the plot soon darkens as its hero becomes a kind of street urchin, mingling with migrant workers, beggars, victims of famine, and even thugs who try to kidnap him. ‘Why is there so much pain here?’ he wonders late in the film, before returning home chastened.

Read the full article in the NYR Daily.


What people get wrong about Bertrand Russell
Julian Baggini, Prospect, 31 December 2019

In philosophical circles, there are two Bertrand Russells, only one of whom died 50 years ago. The first is the short-lived genius philosopher of 1897-1913, whose groundbreaking work on logic shaped the analytic tradition which dominated Anglo-American philosophy during the 20th century. The second is the longer-lived public intellectual and campaigner of 1914-1970, known to a wider audience for his popular books such as Why I Am Not a Christian, Marriage and Morals and A History of Western Philosophy.

The public may have preferred the second Russell but many philosophers see this iteration as a sell-out who betrayed the first. This view is best reflected in Ray Monk’s exhaustive biography. The first volume, which went up to 1921, was almost universally acclaimed, but some (unfairly) condemned the second as a hatchet-job. It was as though Monk had become exasperated by his subject.

Monk admired the logician Russell who ‘supports his views with rigorous and sophisticated arguments, and deals with objections carefully and respectfully.’ But he despaired that in the popular political writings that dominated the second half of Russell’s life, ‘these qualities are absent, replaced with empty rhetoric, blind dogmatism and a cavalier refusal to take the views of his opponents seriously.’ In Monk’s view, Russell ‘abandoned a subject of which he was one of the greatest practitioners since Aristotle in favour of one to which he had very little of any value to contribute.’

Monk’s assessment has become orthodoxy among professional philosophers. But although it is true that Russell’s political writings were often naive and simplistic, so is the neat distinction between the early philosopher and the later hack. Russell changed tack because his work in logic reached the end of the line and he thought he had a greater contribution to make as a public intellectual. History has vindicated him: much of his popular writing stands the test of time better than his academic work.

Read the full article in Prospect.


Africa’s genetic material is still being misused
Keymanthri Moodley, The Conversation, 20 December 2019

Biopiracy – the act of directly or indirectly taking undue advantage of research participants and communities in global health research – has a long and contentious history in Africa. A recent case occurred during the West African Ebola outbreak between 2014 and 2016 when thousands of biological specimens left the continent without consent. Very often there is minimal benefit sharing.

The issue has been in the news again in South Africa. Accusations have been levelled against the Wellcome Sanger Institute in the UK for allegedly attempting to commercialise data obtained from various African universities. This has reignited questions around models of consent in research, donor rights, biopiracy and genomic sovereignty.

The latest revelations show that legislation as well as academic research governance bodies have failed to adequately safeguard the rights of vulnerable participants in genomics research.

One missing piece of the puzzle is the limited empirical data on the views of people whose biosamples are taken in the name of research. This would include issues of ownership, future use, export, benefit-sharing and commercialisation.

In 2011 and 2012 we surveyed participants to better understand their views. We recruited participants who had experience with research, the consent process and use of biological samples. They were engaged in studies at academic research units attached to public hospitals and private research centres.

Our findings remain relevant today as many of the issues raised by the people we spoke to have still not been addressed.

Our study was conducted over a 10-month period, from September 2011 to June 2012. We sampled 200 participants in the Western Cape and Gauteng provinces in South Africa. Participants who had already consented to use of their blood for research were asked several questions, including how they felt about their samples being stored for future use and sent abroad, as well as about the possibility of future commercialisation.

Most participants were supportive of research. But many expressed concerns about export of their blood samples and data out of South Africa.

For their part, researchers viewed the biosamples as donations. But participants believed they had ownership rights and were keen on benefit sharing. Almost half of the participants were not in favour of broad consent delegated to a research ethics committee. Their preference was to be contacted again for consent in the future.

Read the full article in The Conversation.


Pinker’s pollyannish philosophy
and its perfidious politics
Jessica Riskin, LA Review of Books, 15 December 2019

‘Intellectuals hate reason,’ ‘Progressives hate progress,’ ‘War is peace,’ ‘Freedom is slavery.’ No, wait, those last two are from a different book, but it’s easy to get mixed up. Steven Pinker begins his latest — a manifesto inspirationally entitled Enlightenment Now — with a contrast between ‘the West,’ which he says is critical of its own traditions and values, and ‘the Islamic State,’ which ‘knows exactly what it stands for.’ Given the book’s title, one expects Pinker to be celebrating a core Enlightenment ideal: critical skepticism, which demands the questioning of established traditions and values (such as easy oppositions between ‘the West’ and ‘the bad guys’). But no, in a surprise twist, Pinker apparently wants us over here in ‘the West’ to adopt an Islamic State–level commitment to our ‘values,’ which he then equates with ‘classical liberalism’ (about which more presently). You begin to see, reader, why this review — which I promised to write last spring — took me all summer and much of the fall to finish. Just a few sentences into the book, I am tangled in a knot of Orwellian contradictions.

Enlightenment Now purports to demonstrate by way of ‘data’ that ‘the Enlightenment has worked.’ What are we to make of this? A toaster oven can work or not by toasting or failing to toast your bagel. My laser printer often works by printing what I’ve asked it to print, and sometimes doesn’t by getting the paper all jammed up inside. These machines were designed and built to do particular, well-defined jobs. There is no uncertainty, no debate, no tradition of critical reflection, no voluminous writings regarding what toaster ovens or laser printers should do, or which guiding principles or ideals should govern them.

On the other hand, uncertainty, debate, and critical reflection were the warp and woof of the Enlightenment, which was no discrete, engineered device with a well-defined purpose, but an intellectual and cultural movement spanning several countries and evolving over about a century and a half. If one could identify any single value as definitive of this long and diverse movement, it must surely be the one mentioned above, the value of critical skepticism. To say it ‘worked’ vitiates its very essence. But now the Enlightenment’s best-selling PR guy takes ‘skepticism’ as a dirty word; if that’s any indication, then I guess the Enlightenment didn’t work, or at any rate, it’s not working now. Maybe it came unplugged? Is there a paper jam?

Read the full article in the LA Review of Books.


Predatory journals: no definition, no defence
Agnes Grudniewicz et al, Nature, 11 December 2019

When ‘Jane’ turned to alternative medicine, she had already exhausted radiotherapy, chemotherapy and other standard treatments for breast cancer. Her alternative-medicine practitioner shared an article about a therapy involving vitamin infusions. To her and her practitioner, it seemed to be authentic grounds for hope. But when Jane showed the article to her son-in-law (one of the authors of this Comment), he realized it came from a predatory journal — meaning its promise was doubtful and its validity unlikely to have been vetted.

Predatory journals are a global threat. They accept articles for publication — along with authors’ fees — without performing promised quality checks for issues such as plagiarism or ethical approval. Naive readers are not the only victims. Many researchers have been duped into submitting to predatory journals, in which their work can be overlooked. One study that focused on 46,000 researchers based in Italy found that about 5% of them published in such outlets. A separate analysis suggests predatory publishers collect millions of dollars in publication fees that are ultimately paid out by funders such as the US National Institutes of Health (NIH).

One barrier to combating predatory publishing is, in our view, the lack of an agreed definition. By analogy, consider the historical criteria for deciding whether an abnormal bulge in the aorta, the largest artery in the body, could be deemed an aneurysm — a dangerous condition. One accepted definition was based on population norms, another on the size of the bulge relative to the aorta and a third on an absolute measure of aorta width. Prevalence varied fourfold depending on the definition used. This complicated efforts to assess risk and interventions, and created uncertainty about who should be offered a high-risk operation.

Everyone agrees that predatory publishers sow confusion, promote shoddy scholarship and waste resources. What is needed is consensus on a definition of predatory journals. This would provide a reference point for research into their prevalence and influence, and would help in crafting coherent interventions.

Read the full article in Nature.


James Baldwin

The radical James Baldwin
Laura Tanenbaum, Jacobin, 13 December 2019

In an interview featured in the PBS documentary The Price of the Ticket, Baldwin recalls how the country broke his father: ‘A proud man who could not feed his children,’ Baldwin noted, his father wanted power but could only find it through attempts to dominate his family or assert his religiosity. ‘He could not bend, he could only be broken.’ As the oldest child, with an unknown birth father, Baldwin would come to see his stepfather’s struggles as a stunted authoritarian masculinity to which US capitalism pushed so many.

In 1944, having graduated from high school and barely getting by, Baldwin felt little assurance he would meet a different fate than his recently deceased father, writing to his friend Tom Martin: ‘I have been in and out of the Village for three years now — from seventeen to twenty. I think the time is fast approaching when I must get out for good. There is death here. Everywhere people are sick or dying or dead.’ It was one particular death two years later that pushed him toward the first of many periods outside the United States: the 1946 suicide of his close friend Eugene Worth, who had recruited Baldwin to the Young People’s Socialist League (YPSL). The question of what Baldwin’s early engagement with socialism might have become had Worth not taken his own life, and McCarthyism not suffocated the US political landscape, hangs over Mullen’s book.

As things happened, however, it was survival, not simply the luxury of distance, that led Baldwin to finish his first novel about his Harlem childhood in Paris and Switzerland. He would complete Another Country, about his early years in the Village, in Istanbul, a detail he found significant enough to include at the novel’s end, as if assigning a dateline to a journalistic piece. Centering on a fictionalized version of Worth’s suicide, the novel portrayed bohemians who were not the dropouts from middle-class society often associated with coffeehouses, but an uneasy coalition of outcasts whose race, sexuality, or poverty set them apart from the start — exiles in their own country. (The FBI took note and considered banning the book, an impulse born of their obsession with queer and African-American literature.)

Baldwin would return to Istanbul and France throughout his life, and his final completed work, a play called The Welcome Table, depicted a dinner party populated by writers, activists, and artists from around the world, many of them exiles or stateless. One declares: ‘I hope to God never to see another flag, as long as I live. I would like to burn them all.’ This contempt for borders, states, and, especially, American violence around the world is one of the most powerful aspects of Baldwin’s legacy to emerge in the book.

Read the full article in Jacobin.


The secret history of facial recognition
Shaun Raviv, Wired, 21 January 2020

But it was another front company, called the King-Hurley Research Group, that bankrolled Woody’s most notable research at Panoramic. According to a series of lawsuits filed in the 1970s, King-Hurley was a shell company that the CIA used to purchase planes and helicopters for the agency’s secret Air Force, known as Air America. For a time King-Hurley also funded psychopharmacological research at Stanford. But in early 1963, it was the recipient of a different sort of pitch from one Woody Bledsoe: He proposed to conduct ‘a study to determine the feasibility of a simplified facial recognition machine.’ Building on his and Browning’s work with the n-tuple method, he intended to teach a computer to recognize 10 faces. That is, he wanted to give the computer a database of 10 photos of different people and see if he could get it to recognize new photos of each of them. ‘Soon one would hope to extend the number of persons to thousands,’ Woody wrote. Within a month, King-Hurley had given him the go-ahead.

Ten faces may now seem like a pretty pipsqueak goal, but in 1963 it was breathtakingly ambitious. The leap from recognizing written characters to recognizing faces was a giant one. To begin with, there was no standard method for digitizing photos and no existing database of digital images to draw from. Today’s researchers can train their algorithms on millions of freely available selfies, but Panoramic would have to build its database from scratch, photo by photo.

And there was a bigger problem: Three-dimensional faces on living human beings, unlike two-dimensional letters on a page, are not static. Images of the same person can vary in head rotation, lighting intensity, and angle; people age and hairstyles change; someone who looks carefree in one photo might appear anxious in the next. Like finding the common denominator in an outrageously complex set of fractions, the team would need to somehow correct for all this variability and normalize the images they were comparing. And it was hardly a sure bet that the computers at their disposal were up to the task. One of their main machines was a CDC 1604 with 192 KB of RAM—about 21,000 times less working memory than a basic modern smartphone.

Fully aware of these challenges from the beginning, Woody adopted a divide-and-conquer approach, breaking the research into pieces and assigning them to different Panoramic researchers. One young researcher got to work on the digitization problem: He snapped black-and-white photos of the project’s human subjects on 16-mm film stock. Then he used a scanning device, developed by Browning, to convert each picture into tens of thousands of data points, each one representing a light intensity value—ranging from 0 (totally dark) to 3 (totally light)—at a specific location in the image. That was far too many data points for the computer to handle all at once, though, so the young researcher wrote a program called NUBLOB, which chopped the image into randomly sized swatches and computed an n-tuple-like score for each one.
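
The pipeline described — four-level intensity quantization followed by randomly sized swatches, each given an n-tuple-style score — can be sketched roughly as follows. This is a loose modern reconstruction of the article’s description, not Panoramic’s actual code; the image size, swatch dimensions, tuple length, and scoring function are all illustrative assumptions.

```python
import random

random.seed(42)

# A digitized photo: a grid of light-intensity values from 0 (totally dark)
# to 3 (totally light), standing in for the output of Browning's scanner.
W, H = 64, 64
image = [[random.randint(0, 3) for _ in range(W)] for _ in range(H)]

def ntuple_score(patch, n=6, samples=32):
    """Score a patch in the spirit of the n-tuple method: sample random
    n-tuples of pixel positions and count the distinct value-patterns seen."""
    h, w = len(patch), len(patch[0])
    pixels = [(y, x) for y in range(h) for x in range(w)]
    patterns = set()
    for _ in range(samples):
        chosen = random.sample(pixels, min(n, len(pixels)))
        patterns.add(tuple(patch[y][x] for y, x in chosen))
    return len(patterns)

def chop_and_score(img):
    """NUBLOB-style pass: chop the image into randomly sized swatches
    and compute a score for each one."""
    scores = []
    y = 0
    while y < H:
        sh = random.randint(4, 16)          # random swatch height for this band
        x = 0
        while x < W:
            sw = random.randint(4, 16)      # random swatch width
            patch = [row[x:x + sw] for row in img[y:y + sh]]
            scores.append(ntuple_score(patch))
            x += sw
        y += sh
    return scores

scores = chop_and_score(image)
print(len(scores), "swatches scored")
```

The point of the sketch is the data reduction: tens of thousands of raw intensity values collapse into a handful of per-swatch scores small enough for a 1963-era machine to compare.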

Read the full article in Wired.


The long war against slavery
Casey Cep, New Yorker, 20 January 2020

‘Here’s to the next insurrection of the negroes in the West Indies,’ Samuel Johnson once toasted at an Oxford dinner party, or so James Boswell claims. The veracity of Boswell’s biography—including its representation of Johnson’s position on slavery—has long been contested. In the course of more than a thousand pages, little mention is made of Johnson’s long-term servant, Francis Barber, who came into the writer’s house as a child after being taken to London from the Jamaican sugar plantation where he was born into slavery. Some of the surviving pages of Johnson’s notes for his famous dictionary have Barber’s handwriting on the back; there are scraps on which a twelve-year-old Barber practiced his own name while learning to write. Thirty years later, Johnson died and left Barber a sizable inheritance. But Boswell repeatedly minimizes Johnson’s abiding opposition to slavery—even that startling toast is characterized as an attempt to offend Johnson’s ‘grave’ dinner companions rather than as genuine support for the enslaved. Boswell was in favor of slavery, and James Basker, a literary historian at Barnard College, has suggested that this stance tainted his depiction of Johnson’s abolitionism, especially since Boswell’s book appeared around the time that the British Parliament was voting on whether to end England’s participation in the international slave trade.

Johnson’s abolitionist views were likely influenced by Barber’s experience of enslavement. For much of the eighteenth century, Jamaica was the most profitable British colony and the largest importer of enslaved Africans, and Johnson once described it as ‘a place of great wealth and dreadful wickedness, a den of tyrants, and a dungeon of slaves.’ He wasn’t the only Englishman paying close attention to rebellion in the Caribbean: abolitionists and slavers alike read the papers anxiously for news of slave revolts, taking stock of where the rebels came from, how adroitly they planned their attacks, how quickly revolts were suppressed, and how soon they broke out again.

In a new book, the historian Vincent Brown argues that these rebellions did more to end the slave trade than any actions taken by white abolitionists like Johnson. ‘Tacky’s Revolt: The Story of an Atlantic Slave War’ (Belknap) focusses on one of the largest slave uprisings of the eighteenth century, when a thousand enslaved men and women in Jamaica, led by a man named Tacky, rebelled, causing tens of thousands of pounds of property damage, leaving sixty whites dead, and leading to the deaths of five hundred of those who had participated or were accused of having done so. Brown’s most interesting claim is that Tacky and his comrades were not undertaking a discrete act of rebellion but, rather, fighting one of many battles in a long war between slavers and the enslaved. Both the philosopher John Locke and the self-emancipated Igbo writer Olaudah Equiano defined slavery as a state of war, but Brown goes further, describing the transatlantic slave trade as ‘a borderless slave war: war to enslave, war to expand slavery, and war against slaves, answered on the side of the enslaved by war against slaveholders, and also war among slaves themselves.’

Read the full article in the New Yorker.



Comparing US vs UK academia
Richard Chappell, Philosophy et cetera, 30 December 2019

Having been back in the US for a full year now, it’s interesting to compare how academia works here with how it worked in the UK (where I was employed for the preceding 4.5 years).  I much prefer the US system, personally, but will try to offer an even-handed overview here.  Others are of course welcome to contribute their own observations in the comments (or email me if they’d prefer their comment to be posted anonymously).  I especially welcome any corrections if my observations aren’t representative in some respects.

Firstly, advantages of the UK:

* No ‘up-or-out’ tenure-track system means that junior academics are hired directly into permanent positions, which removes a major source of stress for some people.

* More research events: the number of reading groups, colloquia, works-in-progress seminars, etc. — often several such events every week during term time — contributes to a very active-feeling ‘research culture’.  I definitely appreciated that before having a kid.  (Now I need all the time I can get for doing my own work!)  Related: Junior academics are much more likely to receive research invitations in the UK than in the US, at least in my experience.

Ambivalent differences:

* Much shorter (though highly condensed) UK teaching terms may leave more dedicated time for research.  (Though I think I nonetheless prefer the more relaxed & spread-out teaching loads of the US.  I generally enjoy teaching, but not when I had to teach 10+ seminar groups in a single week.)

* Greater availability of grants that you can apply for to ‘buy out’ your teaching & admin responsibilities and get more research time.  Great for those who are good at writing successful grant applications.  Otherwise, the application process (and low chance of success) can be a depressing time-sink.  Grant funding is a major factor in determining REF scores (see below), which can lead departments to put significant pressure on academic staff to spend more time pursuing grant applications.

Disadvantages of the UK:

Far, far less workplace autonomy.  It’s really impossible to exaggerate how different the jobs feel in this respect.  Teaching is a completely different experience, as assessment types for a given module are fixed and standardized at a departmental level.  Grading must be anonymous and ‘moderated’ (or second-marked) by your colleagues, which rules out participation grades, oral presentations, or pretty much anything else besides the standard methods of essays & exams.  You can’t offer students extensions (that goes through a formal committee that will demand documentation), punish plagiarists (another formal committee, with more documentation demands) or adjust your syllabus mid-term.  Depending on your institution, you may or may not be allowed to opt out of automatic voice recordings of all your lectures (or you may need to beg permission of a colleague in a senior management role). It’s all extremely rigid.

A pervasive sense of distrust.  A major reason for all of the above is that, institutionally, individual faculty members are not trusted to do a good job voluntarily.  There must be constant monitoring and oversight.  A big part of your job is to provide that oversight by monitoring your colleagues (e.g. second-marking) and serving on committees or in managerial roles that are empowered to make the decisions that individual faculty members (qua academic) are not allowed to make on their own.

Read the full article in Philosophy et cetera.


Detritus of revolution
Christopher J Lee, Africa Is A Country, 12 December 2019

The South African writer Nthikeng Mohlele garnered wide attention last year for his novel Michael K (2018), a revisiting of J. M. Coetzee’s Life and Times of Michael K (1983), which received the Booker Prize and firmly established Coetzee’s reputation in the South African canon following his earlier achievement, Waiting for the Barbarians (1980). Mohlele’s novel is in part an homage of influence—Coetzee has supported Mohlele’s work through cover endorsements, and Mohlele has expressed his admiration for Coetzee in interviews. But Michael K is also a critique of the limitations of Coetzee’s characterization of the mute figure of K, a coloured man who is caught amid a civil war in a reimagined apartheid-era South Africa. Like The Meursault Investigation (2015), a reworking of Albert Camus’s The Stranger (1942) by the Algerian writer Kamel Daoud, Michael K gives voice to a fictional character in a way that mutually enlivens both novels through a literary dialogue that crosses identities, generations, and time periods.

Mohlele’s project of reimagination further encourages a rereading of his earlier work. His second novel Small Things (2013)—he has published six so far—in particular can be read in retrospect as another response to Coetzee, albeit in a more indirect fashion. More specifically, it can be approached as a rejoinder to Disgrace (1999), Coetzee’s allegorical account of white male privilege and its delegitimation in post-apartheid South Africa. In the latter novel, no sooner has the reader finished the title page than they encounter the main protagonist, David Lurie, consorting with a prostitute—Coetzee’s work, it should be said, possesses moments of the bleakest humor—with Lurie’s status and life unraveling from there. After engaging in a non-consensual relationship with a student and losing his academic post, he moves to the Eastern Cape to live with his adult daughter, Lucy, on her farm. This temporary retreat is disrupted by an act of violence—Lucy is raped by assailants who ultimately go unpunished—and Lurie, who once took advantage of the unspoken rules of academia, now finds himself the victim of an unwritten set of laws in rural South Africa. At the end of the book, with no home, no sense of justice, and no future, Lurie is left without the possibility of redemption, with what remains of his humanity expressed through his care of dogs destined to be euthanized. Coetzee’s novel presses the question of where the limits of sympathy lie, asking the reader to consider whether such parameters are to be determined by the present political moment, or whether they reside within an ethics located more deeply in the human condition.

Read the full article in Africa Is A Country.


Is there a crisis of truth?
By Steven Shapin, LA Review of Books, 2 December 2019

Of course, there’s a Crisis of Truth and, of course, we live in a ‘Post-Truth’ society. Evidence of that Crisis is everywhere, extensively reported in the non-Fake-News media, read by Right-Thinking people. The White House floats the idea of ‘alternative facts’ and the President’s personal attorney explains that ‘truth isn’t truth.’ Trump denies human-caused climate change. Anti-vaxxers proliferate like viruses. These are Big and Important instances of Truth Denial — a lot follows from denying the Truth of expert claims about climate change and vaccine safety. But rather less dangerous Truth-Denying is also epidemic.

Astrology and homeopathy flourish in modern Western societies, almost a majority of the American adult public doesn’t believe in evolution, and a third of young Americans think that the Earth may be flat. Meanwhile, Truth-Defenders point an accusatory finger at the perpetrators, with Trump, Heidegger, Latour, Derrida, and Quentin Tarantino improbably sharing a sinful relativist bed.

I’ve mentioned some examples that take a crisis of scientific credibility as an index of the Truth Crisis. Though I’ll stick with science for most of this piece, it’s worth noting that the credibility of many sorts of expert knowledge is also in play — that of the press, medicine, economics and finance, various layers of government, and so on. It was, in fact, Michael Gove, a Conservative British MP, once Minister in charge of the universities, who announced just before the 2016 referendum supporting Brexit that ‘the people in this country have had enough of experts,’ and, while he later tried to walk back that claim, the response to his outburst in Brexit Britain abundantly establishes that he hit a nerve.

It seems irresponsible or perverse to reject the idea that there is a Crisis of Truth. No time now for judicious reflection; what’s needed is a full-frontal attack on the Truth Deniers. But it’s good to be sure about the identity of the problem before setting out to solve it. Conceiving the problem as a Crisis of Truth, or even as a Crisis of Scientific Authority, is not, I think, the best starting point. There’s no reason for complacency, but there is reason to reassess which bits of our culture are in a critical state and, once they are securely identified, what therapies are in order.

Start with the idea of Truth. What could be more important, especially if the word is used — as it often is in academic writing — as a placeholder for Reality? But there’s a sort of luminous glow around the notion of Truth that prejudges and pre-processes the attitudes proper to entertain about it. The Truth goes marching on. God is Truth. The Truth shall set you free. Who, except the mad and the malevolent, could possibly be against Truth? It was, after all, Pontius Pilate who asked, ‘What is Truth?’ — and then went off to wash his hands.

Read the full article in the LA Review of Books.


What if the universe has no end?
Patchen Barss, BBC Future, 20 January 2020

But what if the Big Bang wasn’t actually the start of it all?

Perhaps the Big Bang was more of a ‘Big Bounce’, a turning point in an ongoing cycle of contraction and expansion. Or, it could be more like a point of reflection, with a mirror image of our universe expanding out the ‘other side’, where antimatter replaces matter, and time itself flows backwards. (There might even be a ‘mirror you’ pondering what life looks like on this side.)

Or, the Big Bang might be a transition point in a universe that has always been – and always will be – expanding. All of these theories sit outside mainstream cosmology, but all are supported by influential scientists.

The growing number of these competing theories suggests that it might now be time to let go of the idea that the Big Bang marked the beginning of space and time, and even of the idea that the Universe must one day come to an end.

Many of the competing Big Bang alternatives stem from deep dissatisfaction with the idea of cosmological inflation.

‘I have to confess, I never liked inflation from the beginning,’ says Neil Turok, the former director of the Perimeter Institute for Theoretical Physics in Waterloo, Canada. ‘The inflationary paradigm has failed,’ adds Paul Steinhardt, Albert Einstein Professor in Science at Princeton University and proponent of a ‘Big Bounce’ model.

‘I always regarded inflation as a very artificial theory,’ says Roger Penrose, emeritus Rouse Ball professor of mathematics at Oxford University. ‘The main reason that it didn’t die at birth is that it was the only thing people could think of to explain what they call the ‘scale invariance of the Cosmic Microwave Background temperature fluctuations’.’

The Cosmic Microwave Background (or ‘CMB’) has been a fundamental factor in every model of the Universe since it was first observed in 1965. It’s a faint, ambient radiation found everywhere in the observable Universe that dates back to that moment when the Universe first became transparent to radiation.

The CMB is a major source of information about what the early Universe looked like. It is also a tantalising mystery for physicists. In every direction scientists point a radio telescope, the CMB looks the same, even in regions that seemingly could never have interacted with one another at any point in the history of a 13.8-billion-year-old universe.

‘The CMB temperature is the same on opposite sides of the sky and those parts of the sky would never have been in causal contact,’ says Katie Mack, a cosmologist at North Carolina State University. ‘Something had to connect those two regions of the Universe in the past. Something had to tell that part of the sky to be the same temperature as that part of the sky.’

Without some mechanism to even out the temperature across the observable Universe, scientists would expect to see much larger variations in different regions.
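(The ‘never in causal contact’ puzzle can be put in rough numbers. In the standard textbook approximation, assuming a matter-dominated universe with the CMB released at a redshift of roughly 1100, a causally connected patch at that time subtends only about 1/sqrt(1+z) radians on today’s sky. The little calculation below is that back-of-envelope estimate, not anything from the article itself.)

```python
import math

# Back-of-envelope horizon-problem estimate (textbook approximation
# for a matter-dominated universe; my assumption, not the article's).
z_rec = 1100                            # redshift at which the CMB was released
theta_rad = 1 / math.sqrt(1 + z_rec)    # angle subtended today by a patch
                                        # that was causally connected back then
theta_deg = math.degrees(theta_rad)
print(f"causal patch spans roughly {theta_deg:.1f} degrees of today's sky")
```

That works out to under two degrees, a few Moon-widths, while the CMB is uniform across all 360 degrees, which is exactly the gap Katie Mack is pointing at.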

Read the full article on BBC Future.


The images are, from top down: An RSS image of Bharat Mata (Mother India); Sculpture of Zeno in the Farnese Collection, Naples, photo by Paolo Monti; A still from Ritwik Ghatak’s Subarnarekha, the last of his Partition trilogy; portrait of James Baldwin (photographer unknown).
