Pandaemonium

PLUCKED FROM THE WEB #41

The latest (somewhat random) collection of recent essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.


.

No, it’s not a gig economy
Doug Henwood, Jacobin, 8 June 2018

Despite the voluble testimony of pundits and bar companions, the world of work is not one of Uber drivers and temp workers. In fact, the share of US employment accounted for by contingent and ‘alternative’ arrangements is lower now than it was in 2005 and 1995.

That testimony is derived from several original sources. For example, a much-ballyhooed 2014 study commissioned by the Freelancers Union — which is not a materially disinterested party — reported that a third of workers are freelancers. The claim of a 2016 paper by Lawrence Katz and Alan Krueger that ‘all of the net employment growth in the US economy from 2005 to 2015 appears to have occurred in alternative work arrangements’ was widely quoted and quickly became folk wisdom. That paper was based on an online survey conducted by the RAND Corporation. The survey was small — fewer than 4,000 respondents — and its sample wasn’t very representative of the overall population, a flaw the authors corrected through vigorous statistical handiwork.

Data released… by the Bureau of Labor Statistics should put an end to this chatter. According to a special edition of their Current Population Survey, a monthly poll of 60,000 households conducted jointly with the Census Bureau, just 3.8% of workers were classed as contingent in May 2017, meaning they don’t expect their job to last a year. That’s down from 4.1% in 2005 and 4.9% in 1995. Tighter definitions show smaller shares, but also down from earlier years. In 2017, 96.2% of workers were non-contingent, compared with 95.1% twenty-two years earlier.

The share of workers in ‘alternative’ arrangements was 10.1%. Of those, 6.9% were independent contractors, 1.7% were on-call, and 1.5% were employed by either temp or contract firms. That means that 89.9% of the workforce has a ‘traditional’ job, down 0.2 points from 1995.

There’s less of a racial pattern to contingency than one might guess: 3.7% of white workers don’t expect their jobs to last, compared to 4.0% of black workers, 4.9% of Asian, and 5.1% of ‘Hispanic/Latino.’ All these shares are down from 1995. Nor is there a vast gender disparity: 3.9% of women vs 3.8% of men are contingent.

Same with age: workers under the age of 25 are less likely to be contingent than they were 22 years ago…

None of this is to argue that the world of work is a delight, or that young workers — or any workers except the professional/managerial elite — have a great thing going. But we should be clear about what the problems are. Precarity isn’t the major problem in the American labor market. It’s that wages are stagnant or worse, benefits are eroding, and much labor is dull, alienating, pointless, and sometimes dangerous. Many people with normal, full-time jobs have a hard time making ends meet, and most households have little or no savings to fall back on in a crisis. Emphasizing precarity only makes workers feel even more powerless than they are.

Read the full article in Jacobin.


.

Five myths about the refugee crisis
Daniel Trilling, Guardian, 5 June 2018

In recent years, ‘European values’ have been invoked both in support of refugees and migrants and to attack them. On the one hand, demagogues such as Hungary’s Viktor Orbán have positioned themselves as defenders of a Christian European civilisation, enacting anti-migrant policies to protect Europe from being overrun by Muslim hordes. On the other, humanitarians have frequently appealed to a vision of Europe like the one set out by José Manuel Barroso, president of the European commission in 2012, when the EU was awarded the Nobel peace prize. ‘As a community of nations that has overcome war and fought totalitarianism,’ Barroso said in his acceptance speech, ‘we will always stand by those who are in pursuit of peace and human dignity.’

Both visions are wrong. The first tries to erase the fact that Europe is a diverse continent, in which Christian, Muslim, Jewish and secular traditions have been present for centuries. Orbán’s vision also has a liberal companion, especially popular in western Europe, which holds that Muslim immigrants present a threat to ‘European’ traditions of tolerance, freedom and democracy: this, too, ignores the fact that where these principles do exist they have been fought for and won, usually against the violent resistance of European elites. It is no small irony, either, that many of the refugees who arrive on European shores today have been engaged in similar struggles for rights and equality in their home countries.

The second vision presents Europe as a beacon of hope to the rest of the world. Europe certainly has great power to affect the world for better or worse, and pressing our politicians to live up to such an aspiration is worthwhile. But the aspiration will remain unfulfilled if we ignore the fact that while the nations of Europe have overcome war and fought totalitarianism, many of these same nations became rich and powerful by conquering and administering huge empires, which were partially justified by the idea of European racial supremacy. And European unity, in its founding documents, was conceived of as a way of maintaining imperial power, as well as preventing future conflict in Europe.

Rather than seeing European racism as a thing of the past, we need to recognise its persistence if we are to understand the refugee crisis and some of the responses to it. Thousands of people from former European colonies, whose grandparents were treated as less than human by their European rulers, have drowned in the Mediterranean in the past two decades, yet this only became a ‘crisis’ when the scale of the disaster was impossible for Europeans to ignore.

Read the full article in the Guardian.


.

The lifespan of a lie
Ben Blum, Medium, 7 June 2018

It was late in the evening of August 16th, 1971, and twenty-two-year-old Douglas Korpi, a slim, short-statured Berkeley graduate with a mop of pale, shaggy hair, was locked in a dark closet in the basement of the Stanford psychology department, naked beneath a thin white smock bearing the number 8612, screaming his head off.

‘I mean, Jesus Christ, I’m burning up inside!’ he yelled, kicking furiously at the door. ‘Don’t you know? I want to get out! This is all fucked up inside! I can’t stand another night! I just can’t take it anymore!’

It was a defining moment in what has become perhaps the best-known psychology study of all time. Whether you learned about Philip Zimbardo’s famous ‘Stanford Prison Experiment’ in an introductory psych class or just absorbed it from the cultural ether, you’ve probably heard the basic story.

Zimbardo, a young Stanford psychology professor, built a mock jail in the basement of Jordan Hall and stocked it with nine ‘prisoners’ and nine ‘guards,’ all male, college-age respondents to a newspaper ad who were assigned their roles at random and paid a generous daily wage to participate. The senior prison ‘staff’ consisted of Zimbardo himself and a handful of his students.

The study was supposed to last for two weeks, but after Zimbardo’s girlfriend stopped by six days in and witnessed the conditions in the ‘Stanford County Jail,’ she convinced him to shut it down. Since then, the tale of guards run amok and terrified prisoners breaking down one by one has become world-famous, a cultural touchstone that’s been the subject of books, documentaries, and feature films – even an episode of Veronica Mars.

The SPE is often used to teach the lesson that our behavior is profoundly affected by the social roles and situations in which we find ourselves. But its deeper, more disturbing implication is that we all have a wellspring of potential sadism lurking within us, waiting to be tapped by circumstance. It has been invoked to explain the massacre at My Lai during the Vietnam War, the Armenian genocide, and the horrors of the Holocaust. And the ultimate symbol of the agony that man helplessly inflicts on his brother is Korpi’s famous breakdown, set off after only 36 hours by the cruelty of his peers.

There’s just one problem: Korpi’s breakdown was a sham.

Read the full article on Medium.


.

Grand theft paycheck
Philip Mattera, Good Jobs First, June 2018

Many of the largest companies operating in the United States have fattened their profits by forcing employees to work off the clock or depriving them of required overtime pay. An extensive analysis of federal and state court records shows that these corporations have been embroiled in hundreds of lawsuits over what is known as wage theft and have paid out billions of dollars to resolve the cases. The list of the most penalized employers includes the giant retailer Walmart, as well as big banks, major telecommunications and technology companies, and a leading pharmaceutical producer. More than 450 large firms have each paid out $1 million or more in wage theft settlements.

These findings result from a yearlong compilation of records of collective action lawsuits. In this little-studied form of labor standards enforcement, groups of workers take their employer to court to recover the pay they were wrongly denied.

We identified more than 1,200 successful collective actions involving large companies that have been resolved since the beginning of 2000. In these cases, employers paid total penalties of $8.8 billion.

We also compiled actions against large employers pursued by the US Department of Labor and by regulatory agencies in eight states which enforce wage theft laws and provided data (California, Illinois, Kentucky, Massachusetts, Minnesota, Missouri, Pennsylvania and Washington). Combining the lawsuits with the state and federal administrative actions, we found 4,220 cases against large employers that generated total penalties of $9.2 billion.

Among the dozen most penalized corporations, Walmart, with $1.4 billion in total settlements and fines, is the only retailer. Second is FedEx with $502 million. Five of the top dozen are banks and insurance companies, including Bank of America ($381 million); Wells Fargo ($205 million); JPMorgan Chase ($160 million); and State Farm Insurance ($140 million). The top 25 also include prominent companies in sectors not typically associated with wage theft, including telecommunications (AT&T); information technology (Microsoft and Oracle); pharmaceuticals (Novartis); and investment services (Morgan Stanley and UBS).

Read the full article in Good Jobs First.


.

Eugenics never went away
Robert A Wilson, Aeon, 5 June 2018

In 2012, the Senate of Australia launched an inquiry into contemporary, often non-consensual sterilisation of girls and women with disabilities. Unlike Canada and the United States, Australia never passed eugenic sterilisation laws. Despite that, the affinity between what was happening in Australia and the broader eugenic sterilising past got the Senate’s attention. Floating free of explicit state-sanctioned policy, the practice of sterilising women and girls with disabilities ‘for their own good’ often rested on eugenic arguments. It also sat uneasily with Australia’s formal human-rights commitments.

It was not that Australia had no eugenics pipeline in the past. It was just that it flowed through cultural rather than surgical means. Australia’s eugenics past chiefly targeted Aboriginal people through child-removal practices, and otherwise controlled the ethnicity of future populations through the immigration policy informally known as the White Australia Policy. This is cultural eugenics. Still, the revelation of eugenic sterilisation now in Australia caused much consternation, as it should have.

Australia was not alone. During the summer of 2013, across the Pacific in California, Corey Johnson of the Center for Investigative Reporting revealed that women in the state prison system had been recently sterilised under conditions of missing or dubious consent, and sometimes without their knowledge. Johnson’s reporting revealed that about 150 Latina and African-American women were sterilised between 2006 and 2010.

Many of California’s legislators were aware of the need to acknowledge the legacy of eugenics. In the early 2000s, the then governor Gray Davis’s formal apology for California’s eugenics history, together with California’s Senate Resolution No 20, had expressed ‘profound regret’ over the state’s extensive involvement in eugenics. The resolution urged ‘every citizen of the state to become familiar with the history of the eugenics movement’. The hope was ‘that a more educated and tolerant populace will reject any similar abhorrent pseudoscientific movement should it arise in the future’. In the wake of ongoing sterilisations, however, what was needed was more than acknowledgment by the citizens of California of a eugenic past. California needed to address the eugenic present, a need made vivid through the actions of its own state employees.

At the end of 2014, at least a dozen women in the central Indian state of Chhattisgarh died after undergoing sexual sterilisation as part of a paid incentive programme aimed to control poverty through population containment. These typically low-caste women died of blood poisoning or haemorrhagic shock following their sterilisation. The news spread worldwide because few outside India knew just how extensive and routine this sterilisation programme was. According to United Nations statistics compiled in 2006, as many as 37 per cent of Indian women have undergone sexual sterilisation. Many of those did so as part of incentive programmes such as that in Chhattisgarh, which offer women free sterilisation, or even pay many of them an incentive of $10-$20, amounting to more than a week’s salary.

And even these cases are far from isolated. Just before the turn of the 21st century, the government of the Peruvian president Alberto Fujimori approved use of sexual sterilisation to curtail Peru’s indigenous population. This resulted in approximately 300,000 sterilisations. There are also continuing reports of Romani women in countries from the former Eastern Bloc being sexually sterilised without consent. And in late 2015 and early 2016, Canada’s national network, the Canadian Broadcasting Corporation, issued several reports detailing cases in which First Nations women had recently been sterilised without, or with dubious, consent in Alberta’s neighbouring province of Saskatchewan.

Read the full article in Aeon.


.

Hate speech: An imaginary debate
Stephen Rohde, LA Review of Books, 17 June 2018

In Hate: Why We Should Resist It with Free Speech, Not Censorship, Nadine Strossen, a professor of constitutional law at New York Law School and president of the American Civil Liberties Union from 1991 through 2008, marshals a vast amount of legal, historical, social science, psychological, and transnational research in service of her premise that all ideas, no matter how hateful, deserve First Amendment protection. She sets out to ‘refute the argument that the United States, following the lead of many other nations, should adopt a broad concept of illegal “hate speech,” and to demonstrate why such a course would not only violate fundamental precepts of our democracy but also do more harm than good.’

At the other end of the spectrum, in Must We Defend Nazis? Richard Delgado and Jean Stefancic, professors at the University of Alabama School of Law, challenge Strossen’s premise by arguing that ‘society should take more decisive measures to marginalize and discourage hate speech of all kinds than it has been doing’ and that nothing in the Constitution ‘requires that hate speech receive protection.’

The two books feel like ships passing in the night. How eager I was to get the authors together to square off on their opposing views and determine if there was any common ground on which they could agree. In lieu of such a face-to-face, I have decided to convene an imaginary one between Professors Strossen and Delgado, with myself as moderator, using only their own words (or a fair paraphrase thereof), in the hope that direct points and counterpoints would sharpen the arguments of both sides and help readers navigate these difficult issues. Apologies in advance if I have misstated either author’s viewpoints; I have endeavored to state each proponent’s views fairly and in line with my understanding of their respective viewpoints.

Read the full article in the LA Review of Books.


.

What happens when prosecutors break the law?
Nina Morrison, New York Times, 18 June 2018

In May 2017, Glenn Kurtzrock, a homicide prosecutor in Suffolk County, N.Y., was caught red-handed concealing dozens of pages of material from Messiah Booker, a young man charged with first-degree murder who maintained he was innocent.

Mr. Booker was arrested and spent more than 18 months in jail awaiting trial before his defense lawyer discovered that Mr. Kurtzrock had altered hundreds of pages of police records to remove a wealth of exculpatory information. That included evidence pointing to another suspect he knew Mr. Booker’s lawyer had been investigating. The prosecutor had also removed the covers of two police notebooks to make it look like his altered versions of the documents were the originals.

After the defense attorney discovered the misconduct and alerted the court, the district attorney promptly fired Mr. Kurtzrock and dismissed the murder charge against Mr. Booker in mid-trial. (Mr. Booker then pleaded guilty to attempted robbery, a reduced charge, which ensured he would be released from prison before his son finishes elementary school.) As he dismissed the case, the judge called it a ‘travesty.’

It was clear that Mr. Kurtzrock had violated the Supreme Court’s 1963 ruling in Brady v. Maryland, which held that prosecutors must turn over any exculpatory evidence to defendants.

Yet there was more.

Over the last year, as the district attorney’s office reviewed all of Mr. Kurtzrock’s case files, prosecutors informed the court that four more murder convictions had been tainted by Mr. Kurtzrock’s illegal suppression of evidence. All four have now been overturned by the courts.

Most recently, in February, a man named Shaun Laurence was exonerated of murder and freed from a sentence of 75 years to life after the district attorney’s office discovered that Mr. Kurtzrock had concealed 45 different items of exculpatory evidence at trial — with the presiding judge declaring that the prosecutor’s misconduct was ‘absolutely stunning.’

So what’s happened to Mr. Kurtzrock?

Nothing.

Thirteen months after his public firing, and with five murder cases overturned because of his illegal actions, Mr. Kurtzrock hasn’t been charged with a single crime. Not fraud, not tampering with government records, not contempt of court.

And he hasn’t even been suspended from practicing law, much less disbarred. He’s now working as a defense lawyer in private practice. That’s right: he’s making a living representing people accused of crimes, in the same courthouse from which he was (supposedly) banished a year ago. His law firm website even touts his experience as a ‘former homicide prosecutor.’

Read the full article in the New York Times.


.

A Muslim among Israeli settlers
Wajahat Ali, Atlantic, June 2018

About 800 Jewish settlers live in this enclave, protected by 650 or so Israeli soldiers and surrounded by 200,000 Palestinians, who are penned in by dozens of roadblocks and checkpoints around the city. The dreams of those 800 or so Jewish settlers shape and distort the lives of all the Palestinians living there. Hebron had a Jewish community until 1929, when the Jews were killed in a riot. In 1968, settlers came back for good.

We met one of these settlers, Noam Arnon, near the entrance to a playground. He shuffled toward us in his sandals, resembling a kindly Jewish American grandfather. In 1972, at 18, Arnon decided to visit the settlement of Kiryat Arba. He kept returning, eventually becoming involved in excavations and helping restore the old synagogue. Today he is not only a spokesman of the Jewish community of Hebron but also a historian and an expert on the Tomb of the Patriarchs.

Yishai Fleisher, a radio host and frequent commentator for international media, was to lead our tour. Born in Israel, he earned a legal degree at Cardozo School of Law, in the U.S.

Fleisher leaped out of his car with a boyish energy, extending his hand and welcoming us with a giant grin. He was carrying a big, visible handgun. ‘There are only two kinds of minorities in the Middle East,’ Fleisher told me. ‘Armed and unarmed.’

Arnon gave us a quick tour around the community before we headed over to the Tomb of the Patriarchs. He took us first into the playground and pointed at a mural depicting flames emerging from a baby carriage. It was to honor a 10-month-old baby who had been killed in 2001 by a sniper bullet that had come from ‘over there,’ he said, motioning toward a nearby hill. I asked him whether it was worth staying in Hebron, especially with children, considering the danger. Yes, he said. ‘Children play here, and every one of them is a victory over terror.’

Arnon said he believes that the Jewish condition in Hebron ‘is an apartheid’: ‘The Jews are in a ghetto. The Jews are limited to 3 percent of the town.’

A ghetto? This was the first time I’d heard anyone accuse the Palestinians of imposing an apartheid regime on their neighbors.

Read the full article in the Atlantic.


.

Theory suggests that all genes affect every complex trait
Veronique Greenwood, Quanta Magazine, 20 June 2018

Starting about 15 years ago, geneticists began to collect DNA from thousands of people who shared traits, to look for clues to each trait’s cause in commonalities between their genomes, a kind of analysis called a genome-wide association study (GWAS). What they found, first, was that you need an enormous number of people to get statistically significant results — one recent GWAS seeking correlations between genetics and insomnia, for instance, included more than a million people. Second, in study after study, even the most significant genetic connections turned out to have surprisingly small effects. The conclusion, sometimes called the polygenic hypothesis, was that multiple loci, or positions in the genome, were likely to be involved in every trait, with each contributing just a small part. (A single large gene can contain several loci, each representing a distinct part of the DNA where mutations make a detectable difference.)

How many loci that ‘multiple’ description might mean was not defined precisely. One very early genetic mapping study in 1999 suggested that ‘a large number of loci (perhaps greater than 15)’ might contribute to autism risk, recalled Jonathan Pritchard, now a geneticist at Stanford University. ‘That’s a lot!’ he remembered thinking when the paper came out.

Over the years, however, what scientists might consider ‘a lot’ in this context has quietly inflated. Last June, Pritchard and his Stanford colleagues Evan Boyle and Yang Li (now at the University of Chicago) published a paper about this in Cell that immediately sparked controversy, although it also had many people nodding in cautious agreement. The authors described what they called the ‘omnigenic’ model of complex traits. Drawing on GWAS analyses of three diseases, they concluded that in the cell types that are relevant to a disease, it appears that not 15, not 100, but essentially all genes contribute to the condition. The authors suggested that for some traits, ‘multiple’ loci could mean more than 100,000.

The reaction was swift. ‘It caused a lot of discussion,’ said Barbara Franke, a geneticist at Radboud University in the Netherlands who studies attention deficit hyperactivity disorder (ADHD). ‘Everywhere you went the omnigenic paper would be discussed.’ The Journal of Psychiatry and Brain Science devoted a special issue entirely to response papers, some of them taking exception to the name, some saying that after all it was just an expansion of earlier ideas. A year on, however, the study has been cited more than 200 times, by papers whose subjects range from GWAS data to individual receptors. It seems to have encapsulated something many people in the genomics community had been turning over in their minds. But exactly what scientists should do with its insights depends on whom you talk to.

Read the full article in Quanta Magazine.


.

What should be done with stolen artworks?
Chris Hayes, Dazed, 8 May 2018

‘The memory still leaves a bitter taste in my mouth,’ said Abebe Alenayehu. He was just a teenager when he saw Mussolini’s fascist troops haul away a 24-metre-tall granite monument, the Obelisk of Axum. ‘All the adults in the town were under curfew,’ he remembered. ‘But we played with the soldiers who gave us sweets and sugar. We didn’t realise what was happening, but our parents were hiding their faces and crying.’ Then, in 2005, when Alenayehu was 81 years old, he got to see this important symbol of Ethiopian sovereignty returned. The Italian government had agreed in a 1947 UN treaty to return this 4th-century relic to the city of Axum in Ethiopia, a return that finally happened many years later due to the logistical problems of moving it.

The return of artworks and artefacts to their countries of origin is a complex issue. This obelisk was an example of artworks that were seized by twentieth-century fascists, with many more paintings, sculptures, and other cultural artefacts seized in and around World War II. There has been a significant amount of momentum around the return of these objects as a way of repairing wounds from these troubled times. But what’s to be done with objects taken during the height of Britain’s empire and other situations much further away? Last week, calls for the British Museum to return Nigeria’s Benin Bronzes heated up, spurred on by the opening of an exhibition at The Victoria and Albert Museum in London of objects taken by the British Army during the 1868 Abyssinian Expedition in Ethiopia, the launch of which prompted the Ethiopian government to repeat its demands for these objects to be returned.

‘I do think they should give them back. It should be returned on a permanent basis, as opposed to a long-term loan,’ I was told by Maaza Mengiste, author of Beneath the Lion’s Gaze, selected by the Guardian as one of the ten best contemporary African books. She was responding to the comments from the V&A director, Tristram Hunt, who said that, ‘these items have never been on a long-term loan in Ethiopia, but as we look to the future I think what we’re interested in are partnerships around conservation, interpretation, heritage management, and these need to be supported by government assistance so that institutions like the V&A can support sister institutions in Ethiopia.’

The Ethiopian government has rejected this offer. During a Skype conversation with me, Mengiste says, ‘The V&A’s position seems to be that we have these items with a complicated history, and we have a responsibility to display them, and to teach you this history. There’s an implied generosity, an implied benevolence behind those words, but actually what they are saying is we’re not going to give this back.’ It’s an issue that’s important to her, as she explains, ‘When I’m looking at this as an Ethiopian, as someone who cares very deeply about the significance of art objects, of artefacts and the history that they contain – as well as the religious importance for people – what I’m hearing is a very nicely worded no. I think it’s the responsibility of all those who care about the erasures that continue to happen in history and historical memory, to fight back against that no; no matter how well coded it is in gentle language. We have to consider this an ongoing erasure of historical memory. We have to ask what the role of a museum is if it is also taking part in this erasure of memory, and of a people’s history.’

Read the full article in Dazed.


.

Italy: The bright side of populism?
Jan-Werner Müller, NYR Daily, 8 June 2018

The Five Star Movement (FSM) in Italy and the radical-left Podemos in Spain are often described as mobilizations of angry citizens, especially the young, in the wake of the international financial crisis of 2008–2009 and the austerity measures imposed on southern countries during the euro crisis that followed. Both groups promote themselves as movements, rather than traditional parties (which Beppe Grillo, the founder of the FSM, has declared ‘evil’). Both benefit from being associated with ideals of direct democracy, in particular, a system of continuous online participation in decision-making as opposed to delegating power to professionals in parties. This story, ‘from the barricades to the blogs to the ballot box,’ is a little deceptive, though. In Spain, the great popular protests against ‘politics as usual’ took place in 2011, yet Podemos (literally, ‘We can’) was not formed until 2014. Its founders were political scientists who thought the main lesson from the protests in public squares was that the received ideas of the left no longer resonated with citizens. Instead of left-right, they held, the main political divide should be la casta—the caste of professional politicians—versus el pueblo, or simply: arriba versus abajo (above versus below; or, also, a colorful metaphor promoted by the professors: the elites as cats and the people as mice). Podemos’s instigators even concluded, ‘If you want to get it right, don’t do what the left would do’—though their actual policy ideas about housing and employment, for instance, were often close to what traditional Social Democrats would have offered.

This self-consciously post-ideological approach went hand in hand with an unabashed emphasis on strong (and mostly male) leadership. The reason, it seems, is not that southern Europeans are necessarily more prone to machismo, but that charismatic personalities help to establish a brand: early on, Podemos would just put on the ballot a headshot of its leader, Pablo Iglesias, originally a political science professor from Madrid, with his trademark ponytail. Iglesias was, in due course, accused by critics inside his party of hiperliderazgo, ‘online Leninism,’ and other epithets to describe an authoritarian leadership manipulating naïve activists. He responded, alluding to a famous passage in Marx, that one could not storm the heavens by consensus.

Read the full article in the NYR Daily.


.

The mask it wears
Pankaj Mishra, London Review of Books, 21 June 2018

In The Last Utopia, Moyn mentioned Du Bois’s attempt to internationalise the plight of African-Americans and to define institutionalised racism as a human rights violation, but he did not acknowledge the significance of Du Bois’s failure to achieve these things, or indeed the many valiant and doomed attempts in the global South to transcend racialised political and economic hierarchies. Moyn now acknowledges that his previous analysis was incomplete. In Not Enough, he more effectively provincialises an ineffectual and obsolete Western model of human rights. As he puts it, ‘local and global economic justice requires redesigning markets or at least redistributing from the rich to the rest, something that naming and shaming are never likely to achieve.’ Since the human rights movement ‘cannot reinvent itself with new ideals and tools’, he argues, it should ‘stick to what it does best: informing our concepts of citizenship and stigmatising evil, without purporting to stand for the whole of “global justice”’.

Moyn’s book is part of a renewed attention to the political and intellectual ferment of decolonialisation, and joins a sharpening interrogation of the liberal order and the institutions of global governance created by, and arguably for, Pax Americana. In A World of Struggle: How Power, Law and Expertise Shape Global Political Economy, David Kennedy blames humanitarian interventionists and international lawyers, among other globalists, for bringing forth a world that is ‘terribly unjust, subject to crisis, environmentally unwise, everywhere politically and economically captured by the few’. Martha Nussbaum recently denounced the United Nations ‘system’ as ‘grotesquely flawed and corrupt, totally lacking in democratic accountability, and therefore devoid of any procedural legitimacy when it comes to imposing law on people’. The loss of legitimacy seems more devastating in the case of the West-led human rights movement, for which severe self-reckoning and downsizing seem unavoidable today. Having turned, as David Rieff put it recently in Foreign Policy, into a ‘secular church of liberal globalism’, the human rights movement has become a casualty of the worldwide backlash against liberal globalists. A principled minority long suspicious of Western NGOs has been joined by opportunistic chieftains of majoritarian movements. Erdoğan has jailed the chair of Amnesty International Turkey. Amnesty International India had temporarily to close its offices in Bangalore in 2016 after it was assaulted by Hindu nationalists accusing the charity of ‘sedition’. Netanyahu has deported the director of Israel and Palestine Human Rights Watch. In Hungary, Orbán seems determined to expel George Soros’s Open Society. As Trump frankly admires autocrats and refuses to pay even vice’s meagre tribute to virtue, the human rights movement is facing, as Rieff writes, ‘the greatest test it has confronted since its emergence in the 1970s’.

Read the full article in the London Review of Books.


.

Fiction of dystopian times:
Ahmed Saadawi’s ‘Frankenstein in Baghdad’

Sam Metz, LA Review of Books, 5 June 2018

Living amid Baghdad’s desolation is Hadi al-Attag, who, readers quickly learn, is Saadawi’s analogue of Mary Shelley’s Dr. Victor Frankenstein. Hadi is no doctor, but a gossipy antiques dealer ‘with bulging eyes, who reeked of alcohol and whose tattered clothes were dotted with cigarette burns.’ When Hadi is not embellishing his stories at the neighborhood coffee shop, he is drinking ouzo, sleeping with prostitutes, or roaming the city searching for tchotchkes and trinkets to sell out of his crumbling ruin of a home.

How does Hadi become Dr. Frankenstein? He rescues fragments of blown-off bodies abandoned on the street; he begins collecting the fragments, left over from car bombings and other incidents of violence, like they are precious antiques, hoping to return some semblance of dignity to the deceased. In what at first looks like nothing more than a half-baked art project, he sews them together to form a full human body, which, as you may have guessed, comes to life.

Once born, this creature pursues its goal of bringing forth justice that has long been absent in Baghdad. His method: vigilante killing. The creature kills a universally despised, old Ba’athist general responsible for sending many young soldiers to die in the 1980s Iran-Iraq War, a member of al-Qaeda, as well as a security contractor, and, in the process, quickly becomes the subject of nationwide rumors and speculation. A magazine features him in a story entitled ‘Urban Legends from the Streets of Iraq,’ complete with a cover of Robert De Niro as Frankenstein’s monster…

Dystopian fiction combines components of reality specific to the time in which it’s written with science or fantasy elements that depict the nightmarish direction we are bending toward. Frankenstein in Baghdad reverses this typical formula: the dystopian elements of the novel are not rooted in its speculative, supernatural elements but rather in the very real, nightmarish violence of 2005 Baghdad.

Read the full article in the LA Review of Books.


.

Divided we fall? Australia labor unions’ slump may be one reason for low wages growth
Swati Pandey, Reuters, 17 June 2018

After a record 26 years of uninterrupted economic growth, Australian workers should be sitting pretty. They aren’t.

Their annual wage increases are, by some measures, lagging inflation, job security is an issue, and at least one survey shows their sense of overall wellbeing is at an all-time low.

Many policymakers and mainstream bank economists puzzle over the reasons for all this.

They point to Australia’s transition to more of a services economy, the impact of disruptive technologies, the lack of productivity growth, and the increase in the number of part-time and temporary jobs as among the reasons.

But some labor experts have a better explanation: a plunge in trade union membership in Australia to less than 15 percent of the workforce now from more than 40 percent in 1991, much greater than declines in other industrialized countries.

They say that has allowed employers to dictate the size of wage rises without challenge.

‘Unionization has collapsed far more violently in Australia than virtually anywhere in other developed, rich countries,’ said Josh Bornstein, Melbourne-based employment lawyer at Maurice Blackburn, who often represents workers in litigation.

‘Unions have been disempowered and that is bad for wage outcomes,’ he added.

The contrast between stellar growth – the nation’s economy expanded at a 3.1 percent annual rate last quarter to outpace the United States, Europe and Japan – and the lot of ordinary Australians is a major concern for policymakers.

It poses a big political challenge for Prime Minister Malcolm Turnbull, who has been flagging in the polls for more than two years now and who will probably hold a general election by next May.

Average annual compensation per employee crawled up by 1.6 percent last quarter, below the inflation rate of 1.9 percent, as companies took a large slice of the income pie with operating profits surging to a record. A separate measure released in May showed the wage price index, which follows price changes in a fixed basket of jobs, rose 2.1 percent last quarter.

Read the full article on Reuters.


.

Going nowhere fast
Ben Allanach, Aeon, 19 June 2018

All these challenges arise because of physics’ adherence to reductive unification. Admittedly, the method has a distinguished pedigree. During my PhD and early career in the 1990s, it was all the rage among theorists, and the fiendishly complex mathematics of string theory was its apogee. But none of our top-down efforts seem to be yielding fruit. One of the difficulties of trying to get at underlying principles is that it requires us to make a lot of theoretical presuppositions, any one of which could end up being wrong. We were hoping by this stage to have measured the mass of some superpartners, which would have given us some data on which to pin our assumptions. But we haven’t found anything to measure.

Instead, many of us have switched from the old top-down style of working to a more humble, bottom-up approach. Instead of trying to drill down to the bedrock by coming up with a grand theory and testing it, now we’re just looking for any hints in the experimental data, and working bit by bit from there. If some measurement disagrees with the Standard Model’s predictions, we add an interacting particle with the right properties to explain it. Then we look at whether it’s consistent with all the other data. Finally, we ask how the particle and its interactions can be observed in the future, and how experiments should sieve the data in order to be able to test it.

The bottom-up method is much less ambitious than the top-down kind, but it has two advantages: it makes fewer assumptions about theory, and it’s tightly tethered to data. This doesn’t mean we need to give up on the old unification paradigm; it just suggests that we shouldn’t be so arrogant as to think we can unify physics right now, in a single step. It means incrementalism is to be preferred to absolutism – and that we should use empirical data to check and steer us at each instance, rather than making grand claims that come crashing down when they’re finally confronted with experiment.

Read the full article in Aeon.


.

Philosophy is dead
Jonathan Rée, TLS, 20 June 2018

In a beautiful eulogy delivered on the occasion of Rorty’s death in 2007, Geuss recalled a conspiratorial moment when his colleague revealed a plan for an undergraduate course called ‘An alternative history of modern philosophy’. Rorty proposed to fill his lectures with supposedly minor characters such as Petrus Ramus, Paracelsus and Johann Gottlieb Fichte, to the exclusion of canonical drones such as Locke, Leibniz and Hume, and out-and-out deplorables such as Descartes (Rorty’s pet hate) or Kant (Geuss’s). The projected ‘alternative history’ came to nothing. (According to Geuss, Rorty blamed the Princeton ‘thought police’, otherwise known as the Committee on Instruction.) But Geuss’s latest book could be seen as a fulfilment of Rorty’s plan, forty years on.

Changing the Subject is a history of philosophy in twelve thinkers. There are lucid self-contained essays on Socrates, Plato, Lucretius, Augustine, Montaigne, Hobbes, Hegel, Nietzsche, Lukács, Heidegger, Wittgenstein and Adorno; but Descartes, Locke, Leibniz, Hume and Kant don’t even make it to the index. The whole performance combines polyglot philological rigour with supple intellectual sympathy, and it is all presented – as Geuss puts it – hilaritatis causa, or in a spirit of fun.

Out of his twelve philosophers, Geuss seems closest to Lucretius, who despised religion (though the word religio meant something rather different at the time), and maintained that the world has no moral purpose and is utterly indifferent to our existence. Hobbes comes almost as high in Geuss’s estimation: he invented the concept of the ‘state’ as the locus of political sovereignty, and treated it as an ‘artificial construct’ which pays no regard to such so-called principles as ‘natural rights’ or ‘the common good’. Hegel, as Geuss reads him, was a good disciple of Hobbes because he avoided trying to ‘justify’ the ways of the world, and he opened the way for Nietzsche’s furious attacks on self-serving ideas of ‘truth-telling’, ‘profundity’ and ‘authenticity’. In the wake of Lucretius, Hobbes, Hegel and Nietzsche, philosophy seems to be essentially a battle against the bewitchment of our intelligence by moralistic sentimentality.

There are two different ways of responding to this predicament. Geuss sketches one of them in a scintillating chapter on Theodor Adorno, the twentieth-century aesthete who sought to combine classical Marxism with disdain for the stupidity of the masses. Adorno, you might say, showed signs of intellectual mysophobia, or Platonistic revulsion from impurity, and Geuss – who regards Plato as an ‘intellectual bully’ – is uneasy about Adorno’s ‘relentless negativism’. He finds an amiable alternative in Michel de Montaigne who, having no desire to correct the follies of humanity, was ‘free of all these pathologies’.

Read the full article in the TLS.


.

Gene-editing mosquitoes
Jonathan Pugh, Practical Ethics, 5 June 2018

The CRISPR system opens up new opportunities for gene-editing strategies. It might be used to target other genes in other species. For example, some teams have explored the use of CRISPR to modify the Anopheles stephensi mosquito so that it becomes resistant to the plasmodium parasite that causes malaria in humans. More significantly, these researchers, and others investigating the use of CRISPR in this context, have harnessed the CRISPR system to develop gene drive systems for their genetic modifications. Such gene drive systems could potentially enable researchers to spread a chosen genetic mutation throughout an entire species, by stimulating preferential inheritance of the affected gene. As well as being deployed in the fight against vector borne diseases, gene drive systems might also be employed for environmental causes, such as suppressing the population of an invasive and destructive species.

This is not in the realm of science fiction; the technology is here, detailed in respected scientific journals. As such, it is hugely important to assess the ethical implications of how we should use this technology.

Some commentators have criticized the technology as being contrary to the principle of the sanctity of life, or by arguing that it amounts to ‘playing God’. In my research, I have suggested that these objections are problematic for a number of reasons, and that they obscure what is really at stake in this debate. One problem they both face is that both allegations could have similarly been made against our choice to eradicate the variola virus responsible for smallpox in the 1970s, through an extensive vaccination program. Yet this was one of modern medicine’s greatest triumphs.

The real ethical questions that the prospect of gene-editing mosquitoes raises are grounded in our scientific uncertainty about the technology: will the modification work? Would eradicating a species of mosquito adversely affect the ecosystem? Will the modification spread to other species? These are empirical questions about which there is significant debate, and we clearly need more data to answer them. However, these empirical questions must be the starting point for what is really the fundamental ethical question here, which is: ‘How should we make decisions about whether or not to deploy a technology, when we have only a limited understanding of its potential risks and benefits? Who should decide? Do we have the right to deploy a technology that could plausibly change the global ecosystem, if others object to its use?’

The easy answer here is to say that we should not take any risks, and that in the light of any uncertainty, we should simply maintain the status quo situation; perhaps it is better the devil you know. Of course, that is not the approach that we have taken with other sorts of novel technology, such as IVF and the internet. But more importantly, in this case the status quo is one in which many hundreds of thousands of people are dying from diseases that this new technology could potentially prevent. As such, we need to have very strong moral reasons to maintain this situation. Rather than avoid the ethical questions by simply adverting to the possibility of risk, we have an obligation to engage in moral reasoning about how to weigh the relevant risks and benefits here, grounded in the best scientific evidence available to us.

Read the full article on Practical Ethics.


.

Witnesses for the future
Emily Bernard, The New Republic, 19 June 2018

‘You have seen how a man was made a slave,’ Frederick Douglass wrote in his 1845 autobiography, the Narrative of the Life of Frederick Douglass. ‘You shall see how a slave was made a man.’ These words herald the moment when Douglass masters his master, the sadistic overseer and ‘negro-breaker,’ Edward Covey, seizing him by the throat. More remarkable than Douglass’s physical prowess was the fact that he lived to write about this at all: In addition to the beatings and other miseries, Douglass endured severe cold that left gashes in his feet pronounced enough to cradle his pen. ‘Written by himself’ is Douglass’s subtitle, a phrase that resounds throughout early African American autobiographical writing. Douglass’s books, along with photographs of the author, portrayed a man who was fully self-composed. The story was the self.

‘This is the life story of Cudjo Lewis, as told by himself.’ Zora Neale Hurston similarly begins her preface to Barracoon: The Story of the Last ‘Black Cargo.’ Barracoon is not a slave narrative in the traditional sense, and its subject, Cudjo Lewis, never mastered the written word. But the story he had to tell filled an important gap in the grand narrative of the African American experience. Born Oluale Kossola in the 1840s, Lewis was believed to be the last survivor of the transatlantic slave trade. He was, Hurston writes, ‘the only man on earth who has in his heart the memory of his African home; the horrors of a slave raid; the barracoon; the Lenten tones of slavery; and who has sixty-seven years of freedom in a foreign land behind him.’ Like Douglass, Lewis was himself a story; his survival was proof of a people’s vitality.

Trained as an anthropologist, Hurston was interested in the universal, the profound, and the ordinary. ‘How does one sleep with such memories beneath the pillow?’ she wonders at the commencement of her conversations with Lewis. ‘How does a pagan live with a Christian God? How has the Nigerian ‘heathen’ borne up under the process of civilization? I was sent to ask.’ Her drive to understand brought her to Cudjo Lewis’s gate in December 1927, and it would direct her for her entire life, as she investigated and preserved, in her works of ethnography and fiction, the complex world of black folk traditions. Barracoon is not just the story of a man, Cudjo Lewis. It is also the story of a woman on her way to becoming a preeminent collector of black folklore.

Read the full article in The New Republic.


.

Britain through Muslim eyes:
An interview with Claire Chambers

Anastasia Valassopoulos & Claire Chambers, Journal of Postcolonial Literature, 8 June 2018

AV: Yes, yours isn’t a post-9/11 literature project. Tell me, how do you source and decide on your materials?

CC: That was the hardest thing about researching Britain Through Muslim Eyes. Much of the reading came to me by word of mouth, with writers, academics and general readers giving me recommendations. Then one lead would take me to another, in a kind of snowball effect, and three archival websites – Project Gutenberg, Archive.org and Openlibrary.org – proved to be treasure troves of open access books I could download and then devour on my e-reader. One of the most useful works of criticism that led me to further material was Rasheed el-Enany’s monograph, Arab Representations of the Occident: East–West Encounters in Arab Fiction. Another influential book was Nabil Matar’s Europe through Arab Eyes, 1578–1727. I combined and reconstellated these two books’ titles in an affectionate hat tip.

Indeed, eyes, looking and the gaze were themes that kept recurring in the literature by early Muslims in the UK. Take Sajjad Zaheer’s A Night in London, for example: that novel and several of the other texts are about a young, naive South Asian or Arab man, who comes to England, gets educated and, as I suggested earlier, starts to see through new eyes. There is a loss of vision, but there are also gains. Authors often use the trope of the protagonist having been blind before, and now they can see. They also get a jolt from looking at their own culture in a newly distorted way. These themes of looking, experiencing a shock at being in Britain and another shock on return are extremely common. In Tayeb Salih’s Season of Migration to the North, to take another example, things come into the narrator’s mind and people come to his eyes. He doesn’t look, but rather passively receives images. He’s a voyeur, really. And in Yahya Hakki’s novella, ‘The Lamp of Umm Hashim’, the fiancée the narrator has left behind in Egypt to come to Britain for his educational enlightenment ends up going blind. I make a lot of eyes and various modes of looking in this literature. I’m especially interested in what is to some extent a reversal of the orientalist gaze.

Read the full article in the Journal of Postcolonial Literature.


.

New human gene tally reignites debate
Cassandra Willyard, Nature, 19 June 2018

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

The latest attempt to plug that gap uses data from hundreds of human tissue samples and was posted on the bioRxiv preprint server on 29 May. It includes almost 5,000 genes that haven’t previously been spotted — among them nearly 1,200 that carry instructions for making proteins. And the overall tally of more than 21,000 protein-coding genes is a substantial jump from previous estimates, which put the figure at around 20,000.

But many geneticists aren’t yet convinced that all the newly proposed genes will stand up to close scrutiny. Their criticisms underscore just how difficult it is to identify new genes, or even define what a gene is.

‘People have been working hard at this for 20 years, and we still don’t have the answer,’ says Steven Salzberg, a computational biologist at Johns Hopkins University in Baltimore, Maryland, whose team produced the latest count.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement.

Read the full article in Nature.


.

A natural ally
Michael Caines, TLS, 8 June 2018

The US was not a place Blake had visited (except, presumably, in his imagination), but it had cast a shadow across his artistic career nonetheless. He had been seventeen years old when the American War of Independence had erupted; with ‘all its dark horrors’, as he later called them, it would invite unavoidable comparison with the turmoil in France a few decades later. It was in that period, in 1793, that Blake wrote the long poem America, the first ‘Prophecy’ in his projected ‘Bible of Hell’: another vision, this time of strife between that force of rational and pseudo-religious domination, the ‘Guardian Prince of Albion’, and the rebellious Orc. Fire and plague duke it out amid ‘red clouds and raging fires’: ‘Albion is sick; America faints’.

The citizens of New York close their books and lock their chests;
The mariners of Boston drop their anchors and unlade;
The scribe of Pennsylvania casts his pen upon the earth;
The builder of Virginia throws his hammer down in fear.

Only then Albion’s plagues are foisted back onto England, which results in further chaotic scenes, until Urizen intervenes (‘His stored snows he poured forth’), freezing the conflict in a frieze that cannot hold. At least this is one simplified, fairly literal way to sum things up. There is, of course, more to it than that. Modern editions of Blake’s poetry should come with a warning: mystical mythopoeia with lengthy scholarly footnotes ahead.

Some read America and its like – the answering Europe, The Marriage of Heaven and Hell, The First Book of Urizen etc – and are baffled. Yet there have always been people, as Freedman shows, who have found his work speaks to them, especially in the US. This is by no means terra incognita: John Beer could write in the TLS about the ‘deep-cut disparity’ between English and North American responses to Blake back in 1969. Walt Whitman, Hart Crane, Robert Duncan and Allen Ginsberg are among the poets discussed in a volume of essays called William Blake and the Moderns, published in 1982. Yet Freedman’s book William Blake and the Myth of America: From the Abolitionists to the Counterculture, as that straight line of a subtitle suggests, aims to tell the story straight through. It begins with those for whom Blake the religious dissenter was a ‘natural ally’ in the 1830s and 40s, such as Ralph Waldo Emerson, and takes in later poets such as Whitman, Crane and Ginsberg, on its way to Bob Dylan, The Doors and Patti Smith – and eventually up to the present day.

Read the full article in the TLS.


.

The Standard Model (of physics) at 50
Yvette Cendes, Scientific American, 15 June 2018

The first thing to emphasize is that the Standard Model is well worth celebrating. As noted by Gerard ’t Hooft (Nobel laureate, 1999), no one knew in the 1960s, when he did his own seminal work in electroweak theory, that there would be something as comprehensive as the model turned out to be. But there is, and it explains all matter on all scales, from the tiniest Planck length (6.3631×10⁻³⁴ inch) to the scale of the universe. ‘It’s gorgeous!’ said David Gross (Nobel laureate, 2004, for his work on the strong force that binds atomic nuclei), beaming like a proud father at the written equation that encapsulates the model. So precise are its predictions that physicists who rely on it at the Large Hadron Collider (LHC) near Geneva, Switzerland, have to be alert to incredibly mundane effects like trains rumbling by miles away, because shaking alters the minute electrical signals at the giant accelerator’s detectors. You don’t worry about things like that unless the predictions are incredibly spot on.

And yet, despite its robust predictions, the consensus was that today’s Standard Model is not the final one. For all its success, the Standard Model does not answer the question of what the dark matter and dark energy that make up the majority of our universe actually are. It does not explain why neutrinos have mass. It does not explain how the fourth fundamental force, gravity, can be reconciled with the other three. And it does not explain why all the matter in our universe is here in the first place—the question of why there’s something rather than nothing.

‘The bottleneck in particle physics is experimental, not theoretical,’ explained Gross. The accelerators required to test the Standard Model are incredibly expensive—the LHC cost about $9 billion to build, and costs $1 billion a year to run—and finding discrepancies in experiments that could lead to a new, even more powerful theory could require even more costly experiments. Without that sort of data, however, ‘it’s easy to get lost in the fog,’ observed ’t Hooft.

Could the answers to those questions lie in extra dimensions, or string theory, or some other theory that hasn’t even been conceived yet? It’s possible, but without experimental proof, it’s easy to get carried away. ‘Remember,’ said George Smoot, who shared the 2006 Nobel Prize for his work in characterizing the cosmic microwave background (CMB) radiation left over from the big bang, ‘the steady state theory for the universe [the theory ruled out by the first detection of the CMB in 1964] is extremely beautiful, but it’s also extremely wrong.’

Read the full article in Scientific American.


.

Review of Naked: The Dark Side of Shame and Moral Life
Carissa Véliz, Notre Dame Philosophical Reviews, 23 June 2018

The book is particularly timely given how common public shaming has become in online settings. Krista K. Thomason argues that, even though shame is a negative emotion with potentially damaging consequences, its dark side is outweighed by its moral benefits insofar as shame is constitutive of desirable moral commitments. According to her, being liable to shame is constitutive of respecting other people’s points of view, acknowledging others’ moral standing, and accepting that our identities are not only set by what we think of ourselves, but also by factors outside of our control that include our personal histories and other people’s opinions of us.

Chapter 1 introduces three philosophical positions on shame: the traditional, the naturalistic, and the pessimistic views. What Thomason calls the traditional view of shame presents shame as the result of realising one has failed to live up to one’s ideals. While this view can account satisfactorily for moral instances of shame (e.g. feeling shame about having done something wrong), Thomason argues that it fails to provide a convincing account of some of the most paradigmatic, yet non-moral cases of shame (for example, a boy caught masturbating). The naturalistic view sees shame as the result of not behaving in accordance with public norms and standards. In describing shame’s role as incentivising social conformity, it does a better job than the traditional view of accounting for non-moral instances of shame. On the downside, this view lacks a normative stance: it tells us nothing about whether and when shame might be an appropriate response. Finally, the pessimistic view of shame argues that shame is a negative emotion that ought to be overcome. Thomason’s ultimate objective will be to argue that shame is morally necessary, and not something to get rid of.

In Chapter 2 Thomason examines the connection between shame and violence. When faced with feeling shame, or with the threat of feeling shame, people sometimes resort to aggression and violence. She argues that neither the naturalistic nor the traditional view can explain this phenomenon. The naturalistic view cannot explain violence in response to shame because it goes against its main argument: that shame is a way for people to show appeasement to dominance. Similarly, the traditional view cannot explain why someone would respond to shame by doing something even worse than failing to live up to an ideal. Thomason argues that a better way to interpret the connection between shame and violence is to think about violence as a reaction to a lack of control in relation to an unflattering part of our identity that we ‘do not embrace or identify with’ (pp.74-75). Thomason further elucidates why shame can turn into violence in the following chapter.

Readers interested in Thomason’s original account of shame can skip the first two chapters, which can be seen as an extensive prolegomenon. In Chapter 3, Thomason offers her account of shame, arguing that shame is an experience of tension between one’s identity (who we are, which is partly determined by features of our histories and by how others see us) and one’s self-conception (who we think we are). When we feel shame, we feel defined by some feature of our identity that overshadows our self-conception; we suddenly feel like we are nothing other than what we feel shame about…

This book is undoubtedly a valuable contribution to furthering the conversation about shame and its proper place in morality. Thomason’s account of the nature of shame is alluring and deserves serious consideration. The jury is still out on whether shame is a desirable emotion, however. Thomason’s bet is for it. Mine is against it. Philosophers seeking to better inform their own bets would do well to read this book.

Read the full article in the Notre Dame Philosophical Reviews.


.

A sneaky theory of where language came from
Ben James, Atlantic, 10 June 2018

Oren Kolodny, a biologist at Stanford University, puts the question in more scientific terms: ‘What kind of evolutionary pressures could have given rise to this really weird and surprising phenomenon that is so critical to the essence of being human?’ And he has proposed a provocative answer. In a recent paper in the journal Philosophical Transactions of the Royal Society B, Kolodny argues that early humans—while teaching their kin how to make complex tools—hijacked the capacity for language from themselves…

Kolodny’s arguments build off the groundbreaking experiments of Dietrich Stout, an anthropologist at Emory University. A flintknapper himself, Stout has taught hundreds of students how to make Acheulean-era tools, and he’s tracked their brain activity during the learning process. Stout found that his students’ white matter—or the neural connectivity in their brains—increased as they gained competence in flintknapping. His research suggests that producing complex tools spurred an increase in brain size and other aspects of hominin evolution, including—perhaps—the emergence of language.

But language couldn’t just pop out fully formed, like Athena from the head of Zeus. ‘Every evolutionary process, including the evolution of language, has to be incremental and composed of small steps, each of which independently needs to be beneficial,’ Kolodny explains.

Teaching, he says, was a crucial part of the process. When hominins like Homo ergaster and Homo erectus taught their close relatives how to make complex tools, they worked their way into an ever more specialized cultural niche, with evolutionary advantage going to those individuals who were not only adept at making and using complex tools, but who were also able—at the same time—to communicate in more and more sophisticated ways.

Kolodny points out what might seem like a contradiction in this notion: Many species of ape use tools in sequence-dependent ways and also have highly developed levels of communication. But the order in which those apes produce their utterances doesn’t make much difference to their meaning, Kolodny explains. ‘The question becomes not “How did language arise only in humans?” but “Why did it not arise in other apes as well?” And the answer is, the qualitative difference between us and other apes is they don’t have the communication system coupled to those temporal sequencing structural capabilities.’

Read the full article in the Atlantic.


.

CRISPR takes on Huntington’s disease
Michael Eisenstein, Nature, 30 May 2018

Like most other neurological disorders, Huntington’s disease has proved to be a costly and frustrating target for drug developers. But it also has distinctive features that make it a good match for treatments that target genes. It arises from a mutation in a single gene that encodes the protein huntingtin, and a disease-causing copy of the gene can be readily distinguished from a normal copy by the presence of an overlong stretch of a repeated triplet of nucleotides, CAG. Before turning to CRISPR, Davidson and her colleagues had some success in treating animal models of Huntington’s disease with RNA interference (RNAi), which uses synthetic molecules of RNA to prevent the production of mutant huntingtin — although it took them a considerable amount of time to get there. ‘We’ve focused the last 17 years on RNAi-based approaches,’ says Davidson. However, both this and a promising related treatment for Huntington’s disease that involves antisense oligonucleotides will probably require long-term, repeated administration to provide sustained benefits.
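To make that genetic signature concrete, here is a minimal, illustrative sketch (mine, not code from the article or from Davidson’s lab) of how an expanded CAG tract could be flagged by simple repeat counting. The ~36-repeat pathogenic threshold is a commonly cited clinical figure and is used here purely as an assumption.

```python
# Toy illustration (not from the article): distinguish a normal huntingtin CAG
# tract from an expanded, disease-associated one by counting consecutive repeats.

def longest_cag_run(seq: str) -> int:
    """Length, in repeats, of the longest run of consecutive 'CAG' triplets."""
    best = current = 0
    i = 0
    while i + 3 <= len(seq):
        if seq[i:i + 3] == "CAG":
            current += 1
            best = max(best, current)
            i += 3
        else:
            current = 0
            i += 1
    return best

def looks_expanded(seq: str, threshold: int = 36) -> bool:
    """Flag sequences whose CAG run meets the assumed pathogenic threshold."""
    return longest_cag_run(seq) >= threshold

# Example: a normal-length tract (20 repeats) versus an expanded one (45 repeats).
print(looks_expanded("CAG" * 20), looks_expanded("CAG" * 45))   # False True
```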

By contrast, CRISPR could achieve the same benefits through a single dose that permanently inactivates the defective gene with remarkable efficiency, as Davidson’s team demonstrated last year¹, both in cells from people with Huntington’s disease and in mouse models of the condition. ‘I was surprised how easy it was — I think that’s the beauty of the system,’ she says. In the past five years, several teams of researchers have independently shown that genome editing can reliably eliminate the gene that encodes mutant huntingtin, thereby halting the production of the toxic protein and its accumulation into clumps in experimental models.

But clearing protein clumps in mice is of questionable value when researchers often struggle to translate such findings into treatments for people — in general, potential therapies for brain disorders have a long history of failure and disappointment in clinical trials. Accordingly, the early adopters of CRISPR are trying to obtain clearer evidence of its probable clinical benefits while grappling with thorny questions about its safety, efficacy and delivery that must be answered before trials in people can take place. ‘I believe we can now seriously consider clinical strategies to edit huntingtin,’ says Nicole Déglon, a neurologist at the Lausanne University Hospital in Switzerland, ‘but I would say we are still at the very beginning of the story.’

Read the full article in Nature.


.

Why we don’t read, revisited
Caleb Crain, New Yorker, 14 June 2018

Here there’s a little bit of good news: the average American reader spent 1.39 hours reading in 2003, rising to 1.48 hours in 2016. That’s the very gradually rising blue line in the graph above. In other words, the average reading time of all Americans declined not because readers read less but because fewer people were reading at all, a proportion falling from 26.3 per cent of the population in 2003 to 19.5 per cent in 2016. You could call this a compositional effect, but it’s a rather tautological one: reading is in decline because the population is now composed of fewer readers. And the assessment would be a little unfair: we don’t know that the survey’s non-readers are in fact never-readers. All we know is that, when Americans sit down to read, they still typically read for about an hour and a half, but fewer are doing so, or are doing so less often.
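As a back-of-envelope illustration of that compositional effect, the sketch below simply multiplies the two figures quoted above for each year; the resulting population-wide averages are my own arithmetic, not numbers reported in the article.

```python
# Back-of-envelope check of the compositional effect described above, using only
# the survey figures quoted in the excerpt: the share of the population reading
# at all, and the hours per day spent by those who do read.
share_2003, hours_per_reader_2003 = 0.263, 1.39
share_2016, hours_per_reader_2016 = 0.195, 1.48

avg_2003 = share_2003 * hours_per_reader_2003   # ≈ 0.37 hours per person per day
avg_2016 = share_2016 * hours_per_reader_2016   # ≈ 0.29 hours per person per day

print(f"2003: {avg_2003:.2f} h/day; 2016: {avg_2016:.2f} h/day")
# Readers read slightly longer per sitting, yet the population-wide average still
# falls, because the share of people reading at all has shrunk.
```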

It’s beyond my statistical powers (though probably not beyond an expert’s) to figure out whether a decline in an individual’s reading tends to be correlated with a rise in any other activity measured by the American Time Use Survey. I can only offer suggestive comparisons. The activity that the survey calls ‘socializing and communicating’ seems to be shifting in more or less the same way that reading is: those who take part spend about as much time on it as they ever did, but the over-all average of hours per day spent on it is declining because fewer people are taking part.

Perhaps whatever is eating away at reading is also eating away at socializing. More and more people are taking part in ‘game playing’ and ‘computer use for leisure, excluding games,’ even as the time that devotees spend on the activities holds steady. It’s possible, too, that the numbers may be reflecting a shift in the way that people read news and essays. As best as I can tell from the survey’s coding instructions, reading an e-book and listening to an audiobook both count as ‘reading.’ With computer activity, which would seem to include the use of smartphones, the survey-taker is supposed to ‘code the activity the respondent did as the primary activity,’ which presumably means that reading a newspaper or magazine online would also be classified as ‘reading.’ But ‘browsing on the internet’ is listed in the survey’s official lexicon as an example of ‘computer use for leisure, excluding games.’ So there’s a chance that people who used to read the newspaper in print and be counted as ‘reading’ are now doing so online and being counted as Web surfers.

But, at last, we come to the rival to reading known as television, and find a footprint worthy of a Sasquatch.

Read the full article in the New Yorker.
