The latest (somewhat random) collection of recent essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.
The people, no
Thomas Frank, The Baffler, No 42, November 2018
In 1999, we thought right-wing populism was a historical mystery that needed to be unraveled and understood; in the minds of its legions of analysts today it’s no mystery at all. For them, the common people’s incomprehension of liberal values is as unremarkable and self-evident as is the superiority of Yale over Southeast Missouri State. Populism, much of our ruling punditburo now believes, is a creature of the ill-informed right by its very nature. Bring aggrieved plebes together in movements and mass rallies and of course they will start chanting the name of Trump. That’s just who ordinary Americans are. In this democracy, it’s the people themselves who are the problem.
This is the reason that so many of the prized manifestos of the left these days resemble nothing so much as ritualized scolding—or, rather, concerned letters from mom gently reminding the lowly of their precarious place in the New Economy hierarchy. Before long, chide the solons of liberalism, they’ll have to retrain and repatriate to one of the coastal warrens of tech monopoly and lifestyle liberalism—places where they don’t look too fondly on MAGA hats.
At the other end of the ideological spectrum, meanwhile, the Tea Party movement of 2009 is now in the midst of a full-on populist makeover. Never has there been a phonier, more transparent bid to mislead an angry public. Supposedly a protest against bank bailouts, it was actually launched from among the futures traders on the floor of the Chicago Mercantile Exchange—and then backed to the hilt by Beltway libertarians looking for a way to distance themselves from the badly damaged Republican brand. The movement’s Number One heroine, Ayn Rand, spent decades dreaming up ways to express her contempt for democracy and its beloved average citizen. The Tea Party movement’s great demand for solving the problem of out-of-control bankers—more bank deregulation—seems to have been developed as a kind of thought experiment to gauge the outer limits of human gullibility.
Yet in the fast-growing field of populism studies, the Tea Party movement is widely understood to be the realest deal there is. This is what populism looks like, the new Trump-addled cohort of populist diagnosticians confidently tells us; indeed, it is an almost perfect example of the species. That anti-bank catechism the founders of American populism preached in Kansas a century ago is the outlier, the mystery. That faith in ‘the people’ that built unions and fought World War II is something completely forgotten. Don’t call those things ‘populism’; don’t even think of them at all.
The story of how racist right-wing demagoguery came to be the meaning of ‘populism’ rather than a perversion of the populist impulse is a fascinating one, but what’s even more critical now is the implication of the change.
What does it tell us when liberals, faced with epic political corruption, spectacular bank misbehavior, and towering inequality, take that opportunity to declare war on populism? It tells us that they’ve lost any sense of their own movement as an expression of the vast majority. It tells us they have no idea why they believe they should be entrusted with power in the first place. And it reminds us that their particular brand of class-based self-delusion is a luxury that the rest of us can ill afford.
Read the full article in The Baffler.
The myth of the modernizing dictator
Robert Kagan, Washington Post, 21 October 2018
Sympathetic Americans saw Mohammed, or MBS, as he is known, as a transformational figure seeking to reform Saudi Arabia’s one-commodity economy and to reconcile Islam and modernity. If doing so required more not less dictatorial control, if it entailed locking up not only fellow members of the royal family but also women’s rights activists, moderate religious figures and even young economists raising questions about the dubious figures contained in his ‘Vision 2030’ program, then so be it. Only a ‘revolution from above’ held any promise of reforming that traditionalist, hidebound society. You know — omelets, eggs.
The trope isn’t new. During the 1920s and 1930s, Benito Mussolini, Joseph Stalin and even Adolf Hitler looked to many Americans like just what their countries needed to get them into shape. During the Cold War, leaders including the Philippines’ Ferdinand Marcos, Iran’s Mohammad Reza Pahlavi, South Korea’s Park Chung-hee and Chile’s Augusto Pinochet took turns as the United States’ favorite ‘modernizing’ dictators. In the post-Cold War era, the Chinese dictatorship has gained many Americans’ admiration for its smooth handling of the country’s economy.
Justifying all this sympathy for the dictator have been variations on what used to be called ‘modernization theory.’ Developing societies, the argument ran, had to move through an authoritarian stage before they could become democracies, for both economic and political reasons. Only authoritarian governments could be trusted to make the right economic decisions, unhampered by popular pressures for inflationary and deficit-raising spending.
Moreover, non-Western societies allegedly lacked many of the basic elements necessary to sustain democracy — the rule of law, stable political institutions, a middle class, a vibrant civil society. Pressing democracy on them prematurely would produce ‘illiberal democracy’ and radicalism. The role of the reforming autocrat was to prepare these societies for the eventual transition to democracy by establishing the foundations for liberalism.
During the 1960s, the political scientist Samuel P. Huntington argued that what modernizing societies need is order, not liberty. During the late 1970s, Jeane Kirkpatrick used this argument to defend supporting ‘friendly’ right-wing dictatorships — on the theory that they would eventually blossom into democracies if the United States supported them against their opponents, but would give way to radical, communist governments if the United States withdrew support.
It is remarkable how much power these kinds of arguments retain, despite their having turned out to be mostly nonsense. Kirkpatrick had it exactly backward. Communist governments were the ones that undertook reforms that led to their unraveling and a turn to democracy, however feeble. Meanwhile, authoritarianism persisted in the Middle East and elsewhere, except where the United States did withdraw support, as in the Philippines, South Korea and Chile; only at that point did they become democracies.
Read the full article in the Washington Post.
Legitimising bloodlust
Daily Times, 3 November 2018
And still Team Khan is talking of success. There, of course, is much truth to this. Just that it is the TLP that has emerged from the crisis wholly victorious. The latter cried judicial blasphemy. As it incited murder and insurrection against all state institutions. Thus to enter into dialogue with an entity whose entire mandate is built on glorifying political murder as a means of avenging ‘blasphemy’ — on equal footing, much less ink a ceasefire pact — is to legitimise bloodlust. And no good can come of it. This lesson ought to have been learned a year ago when Rizvi and his cohorts went from demanding the Law minister’s head on a stick to inciting an assassination attempt on the then man at the Interior.
All of which ought to have reminded the Centre that appeasement can never be a long-term substitute for effective strategy; such as collectively reforming the madrassa system to bring it into the mainstream. Instead of this ceasefire agreement, those at the political helm should have recognised that the time for restraint was through. Meaning Prime Minister Khan should have had the gumption to constitutionally empower all law enforcement agencies to do what was required to clear the streets of those who are at liberty to call for a bloodbath. Instead, it chose to capitulate to blackmail.
What was prematurely welcomed as a small yet significant step towards a more pluralistic and tolerant Pakistan has now been trampled underfoot and verily destroyed. Thereby placing minority communities in a more precarious position than ever before. This is to say nothing of throwing the courageous judges of the Supreme Court (SC) as well as defence counsels firmly to the wolves that no longer bother dressing in sheep’s clothing. Or, indeed, the poor who have had their businesses looted in some of the worst cases of daylight robbery at the hands of Islam’s so-called true custodians.
It appears the premier does not care that he has singlehandedly undermined the state’s writ. He should. For his government is in a perilous position; having likely lost what little parliamentary goodwill it enjoyed. And then there is the question of international confidence in the country as a safe investment destination. On both fronts, this will be incredibly hard to recover from. But the greater tragedy is that here in Naya Pakistan, the highest court in the land can acquit a Christian woman of all charges of blasphemy and this is still insufficient for her to be free.
Read the full article in the Daily Times.
The populist morass
Chris Lehmann, The Baffler, No 42, November 2018
Ever since the dismal heyday of Joseph McCarthy, liberal intellectuals have adopted populism as an all-purpose synonym for cynical, bottom-feeding demagoguery, particularly when it takes on a racist or nativist guise. McCarthy himself was undoubtedly a populist in this version of historical inquiry, as were his many spittle-flecked progeny in the postwar world, such as George Wallace, Pat Buchanan, and Ross Perot. For that matter, the preceding generation of opportunist panderers outside the political mainstream were populists as well: the FDR-baiting radio preacher Charles Coughlin and the quasi-socialist Bayou kingmaker Huey Long; the definitely socialist Upton Sinclair and a motley array of Southern Dixiecrats and Klan sympathizers—populists all, and dangerous augurs of how minority rights, civic respect, and other core liberal-democratic values can be deformed in the hands of charismatic, divisive sloganeers.
The only trouble with this brand of populist-baiting is that it’s ideologically and historically incoherent. Inconveniently for the prim, hectoring postwar sermons of populist scourges like Richard Hofstadter and J.L. Talmon, populism originated not as a readymade platform for strongman demagogues, but as an economic insurgency of dispossessed farmers and working people. America’s first (upper-case P) Populist dissenters didn’t set out to traduce and jettison democratic norms and traditions; they sought, rather, to adapt and expand them, in order to meet the unprecedented rise of a new industrial labor regime and the consolidation of monopoly capitalism in the producers’ republic they described as the ‘cooperative commonwealth.’
Far from rallying to this or that fire-breathing strongman orator, the Populists of the late nineteenth century summoned their political insurgency out of a vast network of purchasing-and-marketing cooperatives, known as the National Farmers’ Alliance and Industrial Union—a movement that would come to employ more than forty thousand lecturers nationwide and organize at the precinct level in forty-three states. Because the Farmers’ Alliance sought to promote both the economic independence and civic education of its members, it began life as an urgent campaign of political pedagogy. The pages of its widely circulated newspaper, the National Economist, outlined the history of democratic government in the West, harking back to Aristotle and Polybius. Alliance lecturers also found themselves advancing not merely political literacy, but literacy itself, since the gruesome exploitation of Southern tenantry usually involved getting farmers to sign usurious contracts they were unable to read.
In time, Populist organizers came to realize that simple economic cooperation would never, by itself, countermand the kind of economic power accruing to Gilded Age capitalists. So they began to organize a political arm, aimed at providing the sort of infrastructure that economic democracy requires. In addition to advocating the kind of procedural reforms to be taken up by a later generation of Progressive era reformers—such as direct election of senators, legislating by popular initiative, and public ownership of utilities—Alliance organizers proposed an alternate currency and banking system, known as the Subtreasury Plan. The idea behind the Subtreasury was to re-engineer America’s currency—and thereby the American economy at large—to reward the interests of laborers over those of capital. Economic reward would be directly weighted to crops harvested, metals mined, and goods manufactured, as opposed to wealth amassed and/or inherited.
Populists, in other words, took the country’s founding promise of democratic self-rule seriously as an economic proposition—and understood, as few mass political movements have done before or since, how inextricable the securing of a sustainable and independent livelihood is to the basic functioning of democratic governance. True to the incorrigibly procedural form of liberal political appropriation, however, the Subtreasury Plan would survive as a rough blueprint for the introduction of the Federal Reserve in 1914—with the significant caveat that the Fed would serve as a subtreasury network to fortify the nation’s currency for bankers, manufacturing moguls, and stock plungers, not ordinary farmers and workers.
Read the full article in The Baffler.
How vilification of George Soros
moved from the fringes to the mainstream
Kenneth P. Vogel, Scott Shane & Patrick Kingsley,
New York Times, 31 October 2018
After the end of the Cold War, with the Open Society Foundations as his main vehicle, Mr. Soros funded new work for destitute Soviet scientists in Russia, paid for free school breakfasts for Hungarian children and set up a college, the Central European University, that later drew the ire of Mr. Orban’s government.
In the United States, where Mr. Soros was granted citizenship in the 1960s, his efforts often won bipartisan applause. A professed admirer of President Ronald Reagan’s efforts to topple Communist rule in Eastern Europe, Mr. Soros, who at the time described himself as a political independent, was seen by anti-Communist Republicans as a fellow freedom fighter.
As his activities grew more prominent in Europe, and he began funding drug reform efforts in the United States, he started being cast in the 1990s as a central figure in a shadowy Jewish cabal by extremist figures such as the fascist presidential candidate Lyndon H. LaRouche Jr. and allies of repressive Eastern European leaders who were targeted by groups funded by Mr. Soros.
The theories were initially confined to the anti-Semitic fringe, though Mr. Soros is not closely associated with Jewish or Israeli causes, and in fact has been accused of being anti-Israel and was criticized by Prime Minister Benjamin Netanyahu.
Mr. Soros first became a major target for Republicans when he donated $27 million in the 2004 election cycle to an effort to defeat President George W. Bush, whose administration Mr. Soros condemned for rushing to war in Iraq and compared to Hitler’s Nazi regime.
Dennis Hastert, Republican of Illinois, suggested in 2003, when he was House speaker, that the money that Mr. Soros was spending to defeat Mr. Bush ‘could be drug money.’ And in 2010, the talk show host Glenn Beck accused Mr. Soros of ‘helping send the Jews to the death camps,’ devoting three hourlong episodes of his top-rated Fox News show to a series branding Mr. Soros a ‘puppet master’ intent on engineering a coup in the United States. The claims were repudiated by the Anti-Defamation League.
The efforts by Mr. Soros and a small band of wealthy donors to defeat Mr. Bush in 2004, while unsuccessful, later led to the creation of a network of major liberal donors that reshaped the American political left, marked Mr. Soros as a leading figure in Democratic politics and reinforced his status as a perennial election-time foil for the right.
‘Back then, it was a handful of crackpots; it was considered fringe; and it was contained,’ said David Brock, the self-described right-wing hit man who switched sides and started a fleet of liberal groups to track conservative disinformation, including from hosts like Mr. Beck.
‘But it started coming back with a vengeance during the 2016 campaign,’ said Mr. Brock, whose groups have received millions of dollars from Mr. Soros.
Read the full article in the New York Times.
Bolsonaro rising
Alex Hochuli, The Baffler, 29 October 2018
Bolsonaro’s rise has come as a considerable shock to outside observers who have long taken for granted the notion that, after its postwar run of hard-right strongman rule, Brazil had settled into democratic ‘normality.’ Presidential elections were primarily fought between PT and the PSDB, which used to be a Clintonite third-way market liberal party. This center-right/center-left alternation was a supposed mark of political maturity. PT gained the presidency in 2002 for the first time, with Lula promising to play nice with the country’s neoliberal establishment. And sure enough, Lula largely maintained orthodox macroeconomic policies alongside mild redistributive programs. Brazil grew, the poor got somewhat richer, and the really rich got really richer.
After losing its fourth presidential election in a row in 2014, PSDB decided it had had enough. With PT reeling from the ‘Car Wash’ investigations into a massive graft scandal around the oil giant Petrobras, the center-right establishment pounced, eventually settling on the dodgy impeachment of President Dilma Rousseff as the means to divest PT of executive office. The ensuing, disastrous government of PMDB’s Michel Temer (Rousseff’s VP and eventual usurper), who ruled with PSDB support, implemented a program of neoliberal counter-reforms that ratcheted the core inequalities of Brazil’s precarious political economy back up; just the sort of program that had been repeatedly rejected at the polls. The net result was what Temer’s critics dubbed a ‘soft coup.’
PSDB assumed it would sweep into office in 2018. Instead, the soft coup’s decisive break with democratic norms only accelerated the collapse of the Brazilian establishment’s legitimacy in the eyes of the public. The great losers of 2018 were not the PT, as Bolsonaro and the foreign press have assumed, but PSDB and the wider center-right. Despite the business class and the media’s best efforts to position the party as the best ‘third way’ option between PT and Bolsonaro, PSDB’s candidate finished with 5 percent in this month’s preliminary vote; the party is fragmenting. PSDB’s middle-class base abandoned it in favor of the new champion of antipetismo, Bolsonaro.
Bolsonaro’s base is the constituency identified in classic studies of fascism: reactionary small business owners and independent professionals, plus members of the state’s repressive apparatus, the police and armed forces. But it was the backing of the educated upper-middle class—you know, the sensible, cultured, rational types—that propelled him into the political mainstream. By the eve of the first round of voting, Bolsonaro had the support of 41 percent of those with a college education. The next candidate down only had 16 percent. He had around 50 percent support among those households earning over 10 times the minimum wage. On the eve of the second round, his support had soared to 65 percent (versus 27 percent for Haddad) among the Brazilian top 10 percent. Of the regularly tracked demographic groups, this is where Bolsonaro had the biggest lead. Bolsonaro’s rise was no revolt led by the lumpens; rather it was a unified push of Brazil’s socioeconomic power elite, something roughly akin to the specter of well-to-do Hillary voters and Never Trump Republicans coming out for Trump. Except, as should be clear, Bolsonaro is no mere ‘Trump of the Tropics’—he is much worse.
These are the kind of people who ‘have gay friends’ (but probably not any black friends), who will smoke the odd joint, and who say they want a more powerful judiciary to ‘fight corruption.’ Nevertheless, they voted in lopsided numbers for someone who hates gays, wants to massacre ‘drug dealers’ (i.e., any poor favela resident he and his military supporters might regard as a threat), and intends to pack the Supreme Court with twenty-one justices to ‘balance things out.’
Read the full article in The Baffler.
Politicizing immigration wears thin in Iowa
Christopher R Martin,
Working Class Perspectives, 29 October 2018
For weeks during the summer of 2018, the case of a missing University of Iowa student occupied statewide and national attention. Mollie Tibbetts, 20, who was housesitting in Brooklyn, Iowa (population 1,391), went jogging at night on July 18 and disappeared. On August 21, police identified Tibbetts’ alleged killer, who led them to her body in a cornfield. The news story may have ended there, except for one fact: the man charged with her murder was a 24-year-old immigrant from Mexico, alleged to have entered the U.S. illegally.
Those who have followed the politics of immigration could anticipate what would happen next. As POLITICO reported, ‘within hours, the tragedy emerged as a polarizing wedge issue — just in time for the fall campaign homestretch.’ Iowa’s Republican Gov. Kim Reynolds — campaigning to win a full term — tweeted hints of a new political strategy: ‘We are angry that a broken immigration system allowed a predator like this to live in our community, and we will do all we can to bring justice to Mollie’s killer.’ Iowa’s two Republican U.S. Senators, Joni Ernst and Charles Grassley, also linked the murder to the immigration system, echoing Reynolds’s position.
That evening, President Donald Trump furthered the provocation at a rally in West Virginia: ‘You heard about today with the illegal alien coming in, very sadly, from Mexico and you saw what happened to that incredible, beautiful young woman.’ The next day, the White House ramped up the message with the release of a video compilation of families victimized by violence from undocumented immigrants. A tweet accompanying the video said ‘The Tibbetts family has been permanently separated. They are not alone.’ The phrase ‘permanently separated’ contrasted this family’s story with the presumably less-permanent separations of thousands of immigrant children from their parents at the southern U.S. border.
But as the Republicans prepared to ride a red wave to the November elections on immigration scare tactics, something unexpected happened. The Tibbetts family rejected the politicization of Mollie’s death, as several family members made clear in social media. On August 21, Tibbetts’ aunt posted a message defending immigrants against a wholesale attack: ‘Our family has been blessed to be surrounded by love, friendship and support throughout this entire ordeal by friends from all different nations and races.’ Tibbetts’ second cousin pushed back at a conservative commentator on Twitter: ‘hey i’m a member of mollie’s family and we are not so fucking small-minded that we generalize a whole population based on some bad individuals.’ Another cousin wrote ‘You do not get to use her murder to inaccurately promote your ‘permanently separated’ hyperbole.’ Finally, on September 1, Rob Tibbetts, Mollie’s father, responded with a guest column in the Des Moines Register: ‘The person who is accused of taking Mollie’s life is no more a reflection of the Hispanic community as white supremacists are of all white people. To suggest otherwise is a lie.’
Read the full article in Working Class Perspectives.
What’s so bad about polarisation?
Alice Thwaite, Drugstore Culture, 26 October 2018
Let’s get back to first principles: why are we so upset about polarisation at all? It’s because we assume it will destabilise the democratic process and threaten civic cohesion. A government legislates in the interests of a specific group or groups; there will correspondingly be a significant number of people who are opposed to that action. These people may soon feel disenfranchised and might even be tempted to take violent action. Ergo: all the sinews of the body politic must be stretched to prevent polarisation.
This is not my view. If polarisation – defined as significant disagreement – is so deeply threatening, in and of itself, then the supposedly higher cause of achieving a workable consensus should logically oblige one side to concede to the other. But neither side, of course, would be willing to do that. And why should they?
The philosophical flaw in the argument is the supreme value assigned to consensus and unity. But why should we think this way? I value innovative and fresh ideas, the challenge of new convictions and propositions. In any society, however consensual, there are always people who, for whatever reason, oppose the status quo. I don’t believe that, because these minorities are too small to pose a substantial threat to stability, they should be ignored.
My point is that we are looking in the wrong place. The problem is not polarisation per se; it is our animosity towards the other side, which is not the same thing at all. The Pew Centre has shown that US Democrats are increasingly fearful of, angry with and frustrated by Republicans. The same is true of Republicans’ feelings towards Democrats. These are strong and potentially violent emotions, visible right now in the highly contentious campaigns for the mid-term elections on November 6.
If polarisation refers only to the extent of disagreement, my question is this: can we not confront that division – welcome it even, as a source of intellectual evolution – but find ways of controlling the aggressive feelings that often accompany it? How do we nurture a civic and social environment in which we feel psychologically safe, and yet can still voice seriously divergent opinions?
Read the full article in Drugstore Culture.
Peterloo shaped modern Britain
as much as any king or queen did
John Harris, Guardian, 29 October 2018
In his breathtaking book The Making of the English Working Class, EP Thompson cut to the chase: Peterloo, he wrote, ‘was without question a formative experience in British political and social history’. For AJP Taylor, the massacre ‘began the break-up of the old order in England’. Even if the response of those in power to what happened was an authoritarian crackdown that combined with an economic upswing to slow the progress of the reform movement, the massacre exposed the tangle of stupidity, patronage and corruption that passed for the country’s system of government – and began a long process of change, as an increasingly politicised working class gradually found its voice.
The foundation of this newspaper – then called the Manchester Guardian, and established in the aftermath of Peterloo to ‘warmly advocate the cause of Reform’ – is part of this story. Chartism flared to life 15 years later. Trade unions eventually began to break through in the 1850s; the expansion of the electorate was marked by the passage of legislation in 1832, 1867, 1884, 1918 and 1928.
These changes may have happened at a glacial pace, but not so slowly that we can’t look beyond their datelines and identify events that set the stage: the huge meetings that happened in 1816 at Spa Fields in north London, and teetered into riots; the ‘reform riots’ of 1831 in Bristol, Nottingham and Derby, and demonstrations around the same time in London and Birmingham; the huge Chartist gatherings at Kersal Moor, in Salford, in 1838 and 1839; the latter year’s Newport Rising.
These things form a kind of shadow history of 19th-century Britain, which still has far too little purchase on this country’s shared sense of its past.
Peterloo was pre-eminent among them, not least because it confirmed that reformers had an unanswerable moral case, while the authorities were exposed as the defenders of brute privilege. There is, moreover, another chronically overlooked aspect of the way August 1819 opened a path to the future: the fact that in the period leading up to the massacre, women’s reform societies had sprung up around the north-west, formed to support the fight for male suffrage, but also responsible for the innovation of women voting alongside men at radical gatherings. They attracted no end of abuse, but women reformers were defiantly on the platform at St Peter’s Field. In a just-published book about the massacre, the historical writer Jacqueline Riding says that what they did represented ‘the earliest example of organised female activity in British politics’.
Peterloo also shines unflattering light on some of our laziest national myths. Over the past 15 years or so, we have got used to politicians describing Britain – or England – as a country that has always exuded ‘decency, tolerance and a sense of fair play’. By contrast, Thompson saw 1819’s carnage originating in ‘the panic of class hatred’ and an ingrained belief that any working class crowd was always only a breath away from turning into the mob – something that has regularly surfaced long into the democratic age: witness Orgreave or Hillsborough. Another point, even more overlooked, is bound up with the same tendency of authority to tip into brutality: the fact that Peterloo saw the violence meted out by British colonialists unexpectedly returning home, and pointed ahead to some of the empire’s bloodiest episodes.
Read the full article in the Guardian.
Too poor to vote
Danielle Lang & Thea Sebastian,
New York Times, 1 November 2018
While many Americans would claim to believe in second chances, this country’s felony laws frequently block people from full participation in our society after they’ve served time by denying them the right to vote. Those who have completed their sentences are all too often prevented from casting ballots simply because they have unpaid court fines and fees. In seven states — Arkansas, Arizona, Alabama, Connecticut, Kentucky, Tennessee and Florida — laws explicitly prohibit people who owe court debt from voting. In other states — such as North Carolina, New Mexico and Wisconsin — in order to regain the vote, people must complete parole or probation, which often requires paying excessive fines and fees.
In all these cases, the price tag can be significant: In North Carolina, for example, people who have been incarcerated must pay $40 per month in supervision fees and $90 per month if placed on electronic monitoring. And these are often alongside the fees that they have already racked up. These include $60 to determine whether a person is too poor to afford a lawyer, $10 a day for each day that he or she is jailed pretrial because bail was unaffordable, and $600 if the prosecutor tests evidence at the state crime lab.
A national research project collecting information from 14 states found families owe on average $13,600 in court-related fees and fines. We’ve seen reporting on people who owe tens of thousands — $33,000 in one instance and $91,000 in another. And, in too many states, you cannot cast a ballot until you’ve paid every penny.
Regardless of the stated goal of this policy, the effects are clear: Wealthy people can pay these fees and vote immediately, while poor people could spend the rest of their lives in a cycle of debt that denies them the ability to cast a ballot.
You may be wondering: where do these fees come from? The answer is that the criminal system charges individuals for almost everything. In every state but Hawaii and the District of Columbia, there are charges for the ‘privilege’ of wearing an ankle monitor. In 44 states, there are charges for probation and parole supervision. In 43 states and the District of Columbia, there are charges for public defenders — even though our Constitution guarantees the right to counsel in criminal trials. In 41 states, people are charged for ‘room and board.’ And the list goes on. In Ohio, all told, there are 118 different fees and surcharges.
By the time people re-enter society, they often owe thousands in debt. Nationally, about 10 million people owe over $50 billion in debt associated with the criminal justice system. Worse, this money is generally being demanded from people who are unlikely to be able to pay it. A Brookings paper that linked data from the entire prison population to earnings records over a 16-year period showed, at best, only about half of those recently released are able to find work at all — and even when they do get a job, many earn an income well below the poverty line.
The combination of employment discrimination, license suspension, housing restrictions and other barriers to economic stability makes re-entry into society — and the ability to earn enough to pay off court debt — nearly impossible.
Read the full article in the New York Times.
Could populism actually be good for democracy?
James Miller, Guardian, 11 October 2018
Fearful of armed crowds and the possibility of mob rule, the framers had explicitly designed the American constitution to empower not ordinary citizens, but a ‘natural aristocracy’. As Benjamin Rush, a signatory of the Declaration of Independence, explained: ‘All power is derived from the people’ – but this power is not wielded by the people: ‘They possess it only on the days of their elections. After this, it is property of their rulers, nor can they exercise or resume it, unless it is abused.’
To this day, the US remains a deeply flawed democracy. It still has an electoral college designed to thwart majorities. It still has a senate that guarantees inequality of political representation. It still is the scene of pitched struggles over the right to vote.
Yet, this primordial American prejudice against democracy was almost instantly transformed by an equally passionate upwelling of American enthusiasm for democracy, in the wake of the French Revolution. Among the enthusiasts was Thomas Jefferson, who, in 1800, brought his Democratic-Republican party to power, and in this way also brought democracy – at least as a word – into the American lexicon.
A generation later, Andrew Jackson, the first great American democratic leader – or demagogue, to use the ancient Greek term of art for such a leader – became the US’s first plebiscitary president, imbued with imperial prerogatives in the eyes of his most ardent supporters. He was, after all, the only representative nominally elected by all the nation’s people (unless, of course, they were women, slaves or Native Americans – democracy in America in Jackson’s day was only the white man’s business).
Jackson tried, and failed, to eliminate the electoral college. Such enduring limits to democracy in the US were, paradoxically, one reason why it was the first country to give birth to populism, both as a word and as a phenomenon. From 1892 till 1896, a People’s Party played a major political role in some parts of the US.
At roughly the same time, Woodrow Wilson, arguably the country’s most ardent champion of democracy, criticised the populist movement and offered, instead, a new vision of the democratic system. A pioneering political scientist who would become the 28th president, Wilson pondered deeply the meaning of democracy, not just in the US, but in world history, where it marked in his view the highest stage of human evolution. In his private papers, after rejecting European conceptions of democracy as primitive, and corrupted by class conflict, he defined modern democracy ‘most briefly’ as ‘government by popular opinion’.
As Wilson concedes almost in passing, democracy in practice will always involve ‘the many led by the few: the minds of the few disciplined by persuading, and masses of men schooled and directed by being persuaded’. In other words, Wilson’s vision of self-rule is closer to Adams’s ‘natural aristocracy’ than the popular sovereignty the French revolutionaries and American populists fought for.
We find here a core ambiguity at the heart of modern liberal democracy, as Wilson understood it. All power in theory derives from the people – but in practice, the truest vessel of a people’s hopes will be its highest elected official when he enjoys the support of ‘public opinion’.
Read the full article in the Guardian.
The origins of birthright citizenship
Robert L Tsai, Boston Review, 9 November 2018
Jones’s bottom-up approach to the problem of black citizenship injects the agency of freed blacks themselves into the drama of the nation’s slavery debates and demonstrates that even in proslavery states, the matter of how they should be legally treated was messy and unevenly handled. Sometimes, this worked in the favor of free blacks. For instance, Jones proves that, despite a Maryland law that forbade blacks from testifying in court against whites, many Baltimore judges in fact admitted such evidence. Additionally, although blacks were prohibited by the state from gathering for religious purposes unless led by white clergy, Baltimore city officials construed this law to allow black churches to hold independent services simply by leave of the city or written permission from a white preacher. Permit requirements for black gun ownership were not always enforced in Baltimore, and courthouse records show that blacks regularly obtained gun licenses and filed the requisite paperwork containing the names of white men who vouched for them.
Reading Jones’s account, it is not hard to imagine that some knowledge of the lives led by freed persons must have later had an impact on legislative debates, especially when proponents of the Reconstruction Amendments denounced the horrendous local treatment of loyal, free blacks ‘born within the Republic.’ John Bingham, often described as the Father of the Fourteenth Amendment, would later oppose the admission of Oregon to the Union, calling its first constitution—which barred every ‘free negro or mulatto’ to ‘come, reside, or be within the state,’ own property, or sue—‘injustice and oppression incarnate.’ That kind of outrage only has bite if listeners had some awareness, however imperfect, of law-abiding black people living in the United States and leading upright, morally rewarding lives.
But if former slaves and their descendants did their best to survive in the interstices of oppressive laws, Jones’s study also confirms the necessity of inscribing rights formally whenever opportunities to do so arise. Free blacks insisted on exercising the rights of citizenship as much as they could, but it is not always apparent whether this was possible because local whites believed in birthright citizenship—or instead merely because it was convenient at times to permit blacks to engage with state bureaucracy as though they were citizens. In short, because the rights of free persons were not explicitly guaranteed, they remained dependent upon white comforts, white economic needs, and white sensibilities. For instance, Maryland law barred free blacks who traveled outside of the state from returning, unless a travel permit was secured beforehand. Those who left the state without permission could be deemed ‘aliens.’ Jones gives the example of Cornelius Thompson, a prominent black resident who was able to get a travel permit because he had friendly dealings with the state’s chief justice, Roger Brooke Taney. But untold black residents who lacked connections to powerful whites had to take their chances without a travel permit, returning clandestinely and at great risk. While some freed persons were able to live openly like white citizens—though always under the threat that one day everything could be taken from them at the whim of a white man—many more were unable to avail themselves of even these limited privileges afforded to their better-connected peers.
Read the full article in the Boston Review.
Why the war poets matter
James Heartfield, spiked review, 8 November 2018
Those seeking to pull down the war poets have pointed to other writers at the time who, some argue, give a more authentic, less mannered account of the Great War. Among those being rediscovered are Fred Roberts and Jack Pearson, the editors of a trench newspaper called either the Wipers’ Times (because English soldiers could not pronounce Ypres), or sometimes the Salient News. As Joe Shute puts it in the Daily Telegraph, the Wipers’ Times was ‘loved by soldiers and was far better read than the likes of Wilfred Owen and Siegfried Sassoon’.
The Wipers’ Times was very good, though the humour is a little hard to see today. Where the war poets strike a sardonic, tragic note, the Wipers’ Times is mostly a very gentle satire, of the kind that you find later in The Goon Show or Monty Python. People’s names are inverted for comic effect (Teech Bomas, Belary Hilloc) and there is a scepticism about big questions, and lots of mocking of enemies and fair-weather allies. Conscientious objector Ramsay MacDonald comes in for some mockery, too. It is all very English and, while rank-and-file soldiers might have liked it, it is unavoidably the work of a captain and a lieutenant, in a style we would now call ‘public school’ humour. Those who like the Wipers’ Times better than Owen and Sassoon, though, underestimate not the futility of war, so much as the absurdity of military bureaucracy.
What the critics of the war poets say about the difficulty some veterans of the First World War had with appreciating Owen and Sassoon carries some weight. Many soldiers preferred the patriotic poets with whom Sassoon and Owen were contrasting themselves. Poetry was, by modern standards, surprisingly popular then. People liked the patriotic poet Jessie Pope, who is today taught mostly as the bad example against Owen and Sassoon’s more intelligent insight. Jessie Pope’s ‘Who’s for the Game?’, which first appeared in the Daily Express in November 1915, is typical:
‘Who knows it won’t be a picnic – not much –
Yet eagerly shoulders a gun?
Who would much rather come back with a crutch
Than lie low and be out of the fun?’
There is much to mock in patriotic poetry (‘Play up! play up! and play the game!’, wrote Henry Newbolt), but there can be strong imagery too, like Rupert Brooke’s field which is forever England.
Ordinary soldiers told all kinds of stories about the war, or none. My great uncle Davy Poole liked to scare his nieces with his tale of serving in the camel corps with TE Lawrence in the Arab revolt. Davy swore (who knows) that at one point he was in a tent, sat at a table with a pile of piastres, giving one to each Arab fighter who presented him with the head of a Turk. ‘What happened to the heads?’, my aunt asked. ‘They were thrown into a bag in the corner of the room.’ Davy liked the story because it was horrific. It made him look scary and big to those unworldly girls. But he was, I think, expiating his own horror, and maybe guilt, too, by telling it.
The strength of the war poets, though, is not that they are all that representative of the opinions of the time. It would be foolish to think that poetry ought to be representative. Their strength really comes from the way that they reworked the words and thoughts of the time and rose above the immediacy of war fervour. They were blessed, if that is not too grotesque a word, with a deeply poetic and literary moment, where words rose up to lead men on to extraordinary deeds.
Read the full article in spiked review.
The double battle
Eric Foner, The Nation, 24 October 2018
In an age known for political oratory, Douglass was one of America’s greatest public speakers. Even as an enslaved child, Blight relates, Douglass came to grasp the power of words and secretly learned to read and write. After his escape from slavery at the age of 20, language became his weapon. Blight quotes extensively from and offers astute analyses of Douglass’s remarkable speeches, including great set pieces such as his oration on the Fourth of July and its meaning to slaves—a devastating condemnation of the hypocrisy of a nation that proclaimed its devotion to freedom but held millions in bondage—and his speech at the unveiling of a statue of Lincoln in the nation’s capital, a penetrating exploration of the extent and limits of the Great Emancipator’s policies regarding slavery and black citizenship.
Over six feet tall, with a powerful baritone voice, Douglass made an indelible impression as a lecturer. ‘He was the insurgent slave,’ wrote one listener, ‘taking hold of the right of speech, and charging on his tyrants the bondage of his race.’ Like any accomplished orator, Douglass was also a performer. He would point out that according to Southern law he was a ‘thing,’ not a man, and then, drawing himself up to his full height, proclaim: ‘Behold the thing.’
Before the Civil War, to travel as an abolitionist speaker required courage, and this was especially true for Douglass, since until British admirers arranged to purchase his freedom in 1846, he ran the risk of being apprehended and returned to slavery. More than once, mobs broke up his lectures. But Douglass did not flinch from confrontation. In one incident related by Blight, Isaiah Rynders, the leader of a New York City street gang, brought his followers to disrupt one of Douglass’s speeches. Rynders climbed onto the stage and began spewing racist remarks. Rather than fleeing, Douglass engaged him in an impromptu debate about slavery and race, until Rynders and his gang retreated from the hall.
At the outset of his career as an abolitionist, Douglass adhered to the outlook of William Lloyd Garrison, who insisted that because the Constitution protected slavery, abolitionists could not in good conscience vote, and that the Union itself should be dissolved. But in the 1850s, Douglass changed his mind, aligning himself with Gerrit Smith, who had developed an antislavery interpretation of the Constitution and favored political action against slavery. Douglass also rejected Garrison’s pacifism, advocating violent resistance to the Fugitive Slave Act. During the prewar decade, he also came within the orbit of John Brown, although he declined Brown’s invitation to join the ill-fated assault on Harpers Ferry. (Five black men did join Brown’s private army; their story is compellingly told in a new book, Five for Freedom, by Eugene L. Meyer.)
Returning from a speaking tour of the British Isles in the late 1840s, Douglass declared: ‘I have no patriotism, I have no country.’ But with the outbreak of the Civil War, he wholeheartedly embraced the Union cause. Douglass became, in Blight’s words, a ‘war propagandist,’ whose speeches whipped up hatred of the Confederacy and called for a ‘merciless’ crusade against it, while insisting that only a policy of emancipation could subdue the South. Anticipating the ‘Double V’ campaign of World War II, which called on black Americans to fight racism at home as well as fascism abroad, Douglass spoke of a ‘double battle’ against Southern slavery and against racial prejudice throughout the country. The ‘mission of the war’ (the title of his best-known wartime oration) could not be fulfilled until a new republic, based on universal freedom and civil and political equality, arose from the ashes of the old one. Douglass minced no words in condemning what he saw as Lincoln’s delay in moving toward emancipation; but after twice meeting with the president in the White House, he came to admire him, seeing this self-made son of Kentucky who had risen to prominence through powerful oratory as a kindred spirit.
Read the full article in The Nation.
Should scholars avoid citing the work of awful people?
Brian Leiter, The Chronicle of Higher Education,
25 October 2018
The issue is particularly fraught in one of my academic fields, philosophy, in which Gottlob Frege, the founder of modern logic and philosophy of language, was a disgusting anti-Semite, and Martin Heidegger, a prominent figure in 20th-century existentialism, was an actual Nazi.
What is a scholar to do?
I propose a simple answer: Insofar as you aim to contribute to scholarship in your discipline, cite work that is relevant regardless of the author’s misdeeds. Otherwise you are not doing scholarship but something else. Let me explain.
Wilhelm von Humboldt crafted the influential ideal of the modern research university in Germany some 200 years ago. In his vision, the university is a place where all, and only, Wissenschaften — ‘sciences’ — find a home. The German Wissenschaft has no connotation of natural science, unlike its English counterpart. A Wissenschaft is any systematic form of inquiry into nature, history, literature, or society marked by rigorous methods that secure the reliability or truth of its findings.
The English word ‘discipline’ captures the idea better: Universities should be home to all, and only, disciplines — each one teaching and deploying skills and techniques for acquiring knowledge about their subject matter, whether it involves the collapse of the Roman Empire, the nature of black holes, the meaning of Plato’s Republic, the evolution of language, or the role of ‘genetic hitchhiking’ in evolution.
The idea of universities as Wissenschaft is also central to the ideal of academic freedom that Humboldt championed: The freedom of scholars in research and teaching is predicated precisely on using their disciplinary expertise to produce knowledge of the truth. John Stuart Mill, who was directly influenced by Humboldt’s ideas, thought that such freedom of inquiry was to the benefit of society as a whole.
The problem with deciding not to cite certain scholars because of their personal malfeasance should now be obvious. Scholarly citation has only two purposes in a discipline:
- To acknowledge a prior contribution to knowledge on which your work depends.
- To serve as an epistemic authority for a claim relevant to your own contribution to knowledge. (By epistemic authority I mean simply another scholar’s research that is invoked to establish the reliability or truth of some other claim on which your work depends.)
In each case, citation has its purpose — ensuring the integrity of the scholarly discipline in question. Failure to cite because of a scholar’s misconduct — whether for being a Nazi or a sexual harasser — betrays the entire scholarly enterprise that justifies the existence of universities and the protection of academic freedom.
Read the full article in the Chronicle of Higher Education.
The universe is always looking
Philip Ball, The Atlantic, 20 October 2018
Schrödinger’s cat forces us to rethink the question of what distinguishes quantum from classical behavior. Why should we accept Bohr’s insistence that they’re fundamentally different things unless we can specify what that difference is?
We might then be inclined to point to features that classical objects like coffee cups have but that quantum objects don’t necessarily have: well-defined positions and velocities, say, or characteristics that are localized on the object itself and not spread out mysteriously through space. Or we might say that the classical world is defined by certainties while the quantum world is (until a classical measurement impinges on it) no more than a tapestry of probabilities, with individual measurement outcomes determined by chance. At the root of the distinction, though, lies the fact that quantum objects have a wave nature—which is to say, the equation Schrödinger devised in 1926 to quantify their behavior tells us that they should be described as if they were waves, albeit waves of a peculiar, abstract sort that are indicative only of probabilities.
It is this waviness that gives rise to distinctly quantum phenomena like interference, superposition, and entanglement. These behaviors become possible when there is a well-defined relationship between the quantum ‘waves’: in effect, when they are in step. This coordination is called ‘coherence.’
The concept comes from the science of ordinary waves. Here, too, orderly wave interference (like that from double slits) happens only if there’s coherence in the oscillations of the interfering waves. If there is not, there can be no systematic coincidence of peaks and troughs and no regular interference pattern, but just random, featureless variations in the resulting wave amplitude.
Likewise, if the quantum wave functions of two states are not coherent, they cannot interfere, nor can they maintain a superposition. A loss of coherence (decoherence) therefore destroys these fundamentally quantum properties, and the states behave more like distinct classical systems. Macroscopic, classical objects don’t display quantum interference or exist in superpositions of states because their wave functions are not coherent.
Notice how I phrased that. It remains meaningful to think of these objects as having wave functions. They are, after all, made of quantum objects and so can be expressed as a combination of the corresponding wave functions. It’s just that the wave functions of distinct states of macroscopic objects, such as a coffee cup being in this place and that place, are not coherent. Quantum coherence is essentially what permits ‘quantumness.’
There is no reason (that we yet know of) why, in principle, objects cannot remain in coherent quantum states no matter how big they are—provided that no measurement is made on them. But it seems that measurement somehow does destroy quantum coherence, forcing us to speak of the wave function as having ‘collapsed.’ If we can understand how measurement unravels coherence, then we would be able to bring measurement itself within the scope of quantum theory, rather than making it a boundary where the theory stops.
Read the full article in the Atlantic.
On ‘More Blood, More Tracks,’
familiar Bob Dylan songs cut closer to the bone
Jon Pareles, New York Times, 30 October 2018
But while LP jackets were being printed and advance vinyl pressings were sent out, Dylan decided to revisit the songs with a pickup band of local Minneapolis musicians who were hastily assembled during the last week of December 1974. He had rewritten (and improved) some lyrics, and with more musicians in the room and, perhaps, more distance on the songwriting, he delivered the songs more forcefully, facing them outward rather than inward.
When ‘Blood on the Tracks’ was released in January 1975, half of the New York City recordings were replaced with the Minneapolis sessions (although with album covers already printed, that studio band went uncredited). Meanwhile, to give the music a subliminal edge, Dylan had the tracks sped up by 2 to 3 percent, shortening the running times by a few seconds and very slightly raising the pitch. Insiders who had heard the original album mourned what they considered to be a push toward pop. A handful of songs from the New York sessions that trickled out on Dylan’s first Bootleg Series compilations suggested they had a point.
‘More Blood, More Tracks’ strips away any gloss. In the six-CD package, the takes that appeared on the original album are returned to accurate speed and mixed more austerely, with considerably less reverb around Dylan’s voice and guitar and different balances on band tracks. (The six CDs include all the takes recorded in New York; there are no surviving outtakes from the Minneapolis sessions, but the master tapes are remixed.) The pricey full package also includes a hardcover volume featuring a trove of Dylan lore: a page-by-page reproduction of a spiral notebook of lyrics, full of cross-outs and alternatives. The one-CD version is a well-chosen playlist among many that could traverse the New York sessions.
From the beginning, none of the performances on the complete set is tentative or demo-like. Dylan had clearly thought through the songs beforehand, chosen his guitar strategies and decided where the dramatic peaks were. His first performances in the studio were apparently so incandescent that the engineers didn’t pay attention to the sound of his vest buttons clacking against his guitar — the only distraction in his very first take of ‘Simple Twist of Fate,’ which rises from and falls back to a stoic near-whisper, like a startling rumor being passed along.
The New York recordings, solo or close to it, bring out the solitude in the songs: The singer endlessly wandering, bereft of the woman he loved, wondering what could have been different, coming to terms with it all. Stripped of arrangements that have been familiar for decades, Dylan’s voice comes through as more insistent, while the lyrics land more sharply. The complicated storyline of ‘Lily, Rosemary and the Jack of Hearts’ becomes more immediately comprehensible in a solo performance. And without the Minneapolis band’s organ crescendos, ‘Idiot Wind’ becomes a more private attack, as much plaint as indictment: ‘You’ll never know the hurt I suffered nor the pain I rise above/And I’ll never know the same about you, your holiness or your kind of love.’
Read the full article in the New York Times.
Ancient genomics is recasting the story
of the Americas’ first residents
Ewen Callaway, Nature, 8 November 2018
Ancient genomics is finally beginning to tell the history of the Americas — and it’s looking messy.
An analysis of genomes from dozens of ancient inhabitants of North and South America, who lived as long ago as 11,000 years — one of the largest troves of ancient DNA from the region studied so far — suggests that the populations moved fast and frequently. The findings were published on 8 November in Cell and Science.
The studies suggest that related groups populated North America widely within a few hundred years, and South America within one or two thousand years. Later migrations on and between the continents connected populations living as distantly as California and the Andes.
‘These early populations are really blasting across the continent,’ says David Meltzer, an archaeologist at Southern Methodist University in Dallas, Texas, who co-led the Science study.
The studies also suggest that the prehistory of the Americas — the last major land mass to be settled — was just as convoluted as that of other parts of the world.
‘I think this series of papers will be remembered as the first glimpse of the real complexity of these multiple peopling events,’ says Ben Potter, an archaeologist at the University of Alaska Fairbanks. ‘It’s awesome.’
For decades, the peopling of the Americas was painted in broad brushstrokes, using data from archaeological finds and DNA from modern humans.
Scientists discerned that groups crossed the Bering land bridge from Siberia into present-day Alaska and then moved steadily south as the last Ice Age ended. Humans carrying artefacts from a culture known as Clovis, including sophisticated projectile points, began to populate the interior of North America around 13,000 years ago. For decades, researchers thought that people associated with this culture were the continents’ first inhabitants.
But the discovery of ‘pre-Clovis’ settlements — including a nearly 15,000-year-old site at the southern tip of Chile — pointed to an even earlier wave of migration to the Americas, presumably also over the Bering land bridge.
Read the full article in Nature.
In cave in Borneo jungle, scientists find
oldest figurative painting in the world
Carl Zimmer, New York Times, 7 November 2018
On the wall of a cave deep in the jungles of Borneo, there is an image of a thick-bodied, spindly-legged animal, drawn in reddish ocher.
It may be a crude image. But it also is more than 40,000 years old, scientists reported on Wednesday, making this the oldest figurative art in the world.
Until now, the oldest known human-made figures were ivory sculptures found in Germany. Scientists have estimated that those figurines — of horses, birds and people — were at most 40,000 years old.
Researchers have found older man-made images, but these were abstract patterns, such as crisscrossing lines. The switch to figurative art represented an important shift in how people thought about the world around them — and possibly themselves.
The finding also demonstrates that ancient humans somehow made the creative transition at roughly the same time, in places thousands of miles apart.
‘It’s essentially happening at the same time at the opposite ends of the world,’ said Maxime Aubert, an archaeologist at Griffith University in Australia and a co-author of the report, published in the journal Nature.
Archaeologists have been discovering cave paintings and ancient sculptures for centuries, but it was only in the mid-twentieth century that it became possible to determine their age precisely.
Traces of radioactive carbon are present in some types of art, and because that carbon breaks down at a known rate, measuring how much of it remains tells scientists how old the art is.
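As a sketch of the arithmetic behind that dating (my illustration, not part of Zimmer’s article): carbon-14 decays with a half-life of roughly 5,730 years, so the fraction still present in a sample gives an age estimate.

```python
import math

HALF_LIFE_C14 = 5730.0  # years, approximate half-life of carbon-14

def radiocarbon_age(fraction_remaining: float) -> float:
    """Estimate age in years from the fraction of carbon-14 still present,
    assuming simple exponential decay (ignoring calibration corrections)."""
    return -HALF_LIFE_C14 * math.log2(fraction_remaining)

# Example: a sample retaining about 15% of its original carbon-14
print(round(radiocarbon_age(0.15)))  # -> roughly 15,700 years
```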
In the 1950s, radiocarbon dating on paintings in the Lascaux Cave in southern France showed that the images — of horses and other animals — were made 15,500 years ago.
On further investigation, the Lascaux paintings were shown to be 18,000 years old, making them the oldest artwork known at the time.
Eventually even older art came to light. Another French cave, called Chauvet, is decorated with drawings of animals that researchers estimate date back as far as 37,000 years.
In 2003, Nicholas Conard of the University of Tübingen in Germany discovered the ivory figurines, which turned out to be far older: up to 40,000 years old.
For years, those sculptures stood out as the oldest figurative artworks on the planet. ‘It was very lonely for a long time,’ said Dr. Conard.
Read the full article in the New York Times.
Deepfakes aren’t the problem, we are
Joseph Shieber, Three Quarks Daily, 22 October 2018
1. Bored, and with little to occupy their time, two cousins, Elsie, who was 16, and Frances, who was 10, decided to play around with photography. At a river near where they lived, they manipulated an image so that it looked as if they were interacting with little, magical winged creatures — fairies.
The photo was believable enough that they fooled a number of adults — including world-famous writers. The girls produced a number of other photos, using the same methods. The media was ablaze with discussions of the images and of whether they provided proof of the existence of fairies.
This all happened in 1917.
I was reminded of this case — the case of the Cottingley fairies — by the recent interest in the phenomenon of deepfakes.
Deepfakes are incredibly realistic manipulations of video and audio. Here, for example, is a video of President Obama uttering something that President Obama never said — made by swapping in the actor Jordan Peele’s mouth and voice.
If you believe the hype surrounding deepfakes, this technology threatens not only ‘the collapse of reality’, but also the falsification of our memories. While the threat is real, the problem isn’t actually with the deepfakes — it’s with us.
Actually, the discussion of deepfakes can help us to see two different problems that we face. Solving those problems, however, doesn’t really involve technological solutions.
2. Here’s the first problem: we’re not good at handling complexity. We want simple, easy-to-understand answers. ‘1+1=2’, rather than ‘P(A|B) = (P(B|A) x P(A))/P(B)’.
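To unpack the formula Shieber gestures at, here is a minimal Bayes’-rule sketch with invented numbers (the scenario and figures are mine, not his): even evidence that is rarely wrong leaves real doubt once prior plausibility and the chance of fakery are weighed in.

```python
def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), with P(B) expanded over both cases."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Made-up numbers: a claim we initially find unlikely (prior 5%),
# supported by a video that would exist 90% of the time if the claim were true,
# but could also be faked or misleading 10% of the time if it were false.
print(round(posterior(0.05, 0.90, 0.10), 2))  # -> 0.32, i.e. still far from certain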
The unstated premise in the widespread panic over the rise of deepfakes is that, prior to this technology, video evidence was supposed to be the gold standard for evidence, practically guaranteeing the truth of what it depicted.
This, however, is absurd. There is no gold standard, no single piece of evidence that can guarantee the truth of the information it conveys.
Let me be clear what I mean by that.
By saying that there is no single piece of evidence that can guarantee the truth of the information it conveys, I’m not one of those post-truth types who is trying to argue that there is no such thing as truth — nor, even, that there is just no such thing as truth anymore.
There is truth. Black holes exist. So does human-caused climate change. Vaccines don’t cause autism. John Lennon didn’t really ‘bury Paul’ McCartney.
Instead, what I’m saying is that we don’t discover the truth by means of a single, glaring piece of evidence that removes all doubt. On the contrary, we only get at any truth — or at least any truth worth arguing about — by gathering and weighing evidence.
Read the full article in 3 Quarks Daily.
Nigeria plans museum for art looted from Benin
Catherine Hickley, The Art Newspaper, 22 October 2018
Nigeria has presented plans for a new Benin Royal Museum that would permanently display historic art from the region, including bronze sculptures plundered from the Benin palace by British troops in 1897 that have since found their way into European public collections.
The Benin Dialogue Group, bringing Nigerian representatives together with museum officials from Austria, Germany, the Netherlands, Sweden and the UK, said they agreed on a three-year time-frame for a permanent display in a new museum at a meeting in the Dutch city of Leiden on 19 October.
‘I am happy that we are making progress in the effort to give our people the opportunity to once more access our heritage that was looted,’ said Prince Gregory Akenzua (Enogie of Evbobanosa) of the Benin royal family.
British troops launched a ‘punitive expedition’ to take Benin City in 1897 and plundered the royal palace, seizing an estimated 4,000 pieces, including bronze reliefs, shrines, and artefacts carved out of ivory. The Benin Dialogue Group comprises European museums that later acquired these works, including the British Museum, Berlin’s Ethnology Museum, Vienna’s Weltmuseum and the National Museum of World Cultures in Leiden. The Nigerian members represent the Edo State government, the Royal Court of Benin and the National Commission for Museums and Monuments.
The museums have all agreed to contribute artefacts to the Benin Royal Museum on a rotating basis, to provide advice as requested on building and exhibition design, and to cooperate with the Nigerian partners in developing training, funding and a legal framework for the display in the new museum.
Read the full article in the Art Newspaper.
The limits of liberal history
Nathan J Robinson, Current Affairs, 28 October 2018
I was therefore delighted when I found out that Lepore had written an entire one-volume history of the United States. There could be no better tour guide. My problem with most histories is that they focus on big sweeping economic and political changes, macro trends rather than ordinary people. Lepore is the perfect antidote: she gets past the usual narrative and focuses on scraps and snippets. She weaves together history through the voices of those who actually lived it, through newspaper headlines, memoirs, advertisements, and ephemera.
So when These Truths: A History of the United States showed up, I devoured it. But something began to feel odd. At first, I couldn’t figure out what it was. Lepore writes with her usual verve, apart from a few tortured metaphors (e.g. the ending about the ‘ship of state’: ‘On deck, conservatives had pulled up the ship’s planking to make bonfires of rage: they had courted the popular will by demolishing the idea of truth itself, smashing the ship’s very mast. It would fall to a new generation of Americans… to fathom the depths of the doom-black sea… they would need to fell the most majestic pine in a deer-haunted forest and raise a new mast that could pierce the clouded sky.’ Are the deer supposed to be… pundits?).
She pays special attention to the lives of women and African Americans, telling us about Jane Franklin as well as Benjamin, Harry Washington as well as George. She digs up marvelous quotes. Cotton Mather fumes at James Franklin: ‘The Plain Design Of Your Paper Is To Banter and Abuse The Ministers of God.’ Abigail Adams writes to her husband: ‘I desire you would Remember the Ladies, and be more generous and favourable to them than your ancestors. Do not put such unlimited power into the hands of the Husbands. Remember all Men would be tyrants if they could.’ An insufferable John writes back: ‘I cannot but laugh… we know better than to repeal our Masculine systems.’ These Truths is Lepore’s effort to create a fair and representative history, one in which voices like those of Maria Stewart and Alice Paul are given the platform they deserve.
And yet: diverse and kaleidoscopic as the book was, it seemed to be missing something critical. I got up to about 1980 (page 668) before I realized what it was: the labor movement. The history of American labor is almost completely absent from the book. Lepore’s history is full of racial and gender diversity. But it doesn’t include much about workers or their struggles.
It’s not just that labor is given a light treatment. It’s that there is a giant hole in the middle of the book where an important part of the American story should go.
Read the full article in Current Affairs.
Himmler’s antiquity
Alison C Traweek, LA Review of Books, 18 October 2018
To fully understand these abuses of antiquity, it is important to put them in the context of the development of the field. It isn’t as if the discipline of classical studies arrived complete and fully formed on the desks of early scholars like Friedrich Nietzsche, whose Dionysian versus Apollonian reading of antiquity is still visible in the fabric of the field. The discipline was shaped by mostly elite European men, and their interests determined the early scope of the field. That’s why we learn Greek and Latin but not Hebrew, which was part of the field until the 18th century; it’s why the lives of women and children were largely ignored; it’s why the novels associated with the lower classes were not taken seriously as literature. It’s why Cicero and Socrates, not to mention Jesus, were cast as white, just like the scholars studying them were.
Except, of course, Cicero and Socrates were decidedly not ‘white.’ They would have been thoroughly confused by the claim, since ancient theories of race differed greatly from modern ones, and had no category for ‘white.’ Rather than being primarily physiognomic — that is, based on visible physical features like melanin or hair type — race in antiquity was tied to climate, geography, and even political structures; one’s race might not be easily identifiable at sight, and might even change, based on the exigencies of life, and regardless of external appearances. There was, consequently, no conception of a ‘white’ race — what we see as whiteness did not signify racially.
But the European scholars of antiquity needed Cicero and Socrates to be white, so they were made white. Not consciously or explicitly, for the most part — we have, of course, no early draft of The Birth of Tragedy in which Nietzsche muses on how to support his assumption that the Greeks conceived of race as he did, and, importantly, were white in the same way he was. But the logical contradictions and bizarre dismissals of inconvenient evidence that went against the conception of the ancients as white, and especially of whiteness being seen as superior in antiquity as in their own day, lay bare the motivations, conscious or not…
Egypt was a particular problem for scholars of antiquity. It was an unchallenged premise for them that the ancient Egyptians could not have been black Africans, since they assumed black Africans could not have been responsible for so advanced a society. They clung to this belief even in the face of clear contrary evidence, like Herodotus’s mention of Egyptians’ black skin and woolly hair. At the same time, they were determined to see contemporary Egyptians as childish and savage, and argued that the formerly great race must have been degraded by contact with black Africans and Ottomans. These theories of decline through contact were also useful for explaining what they saw as the inferior natures of contemporary Greeks and southern Italians: their noble ancestors too had been sullied by generations of mixing with barbarians. Edward Gibbon, in his monumental Decline and Fall of the Roman Empire, laments the ignorance of contemporary Athenians, whom he believed to be tainted by long contact with the Ottomans. Having never been to Athens himself, he writes, ‘Athenians walk with supine indifference among the glorious ruins of antiquity; and such is the debasement of their character, that they are incapable of admiring the genius of their predecessors.’ A similar argument is still part of why the Parthenon sculptures — the so-called Elgin Marbles, after the British Lord Elgin who appropriated them — are still on display in London rather than Athens.
Read the full article in the LA Review of Books.
Border crossings: Myths and memories of tolerance
Leslie Harkema, Marginalia, 26 October 2018
The seemingly disparate events and histories concerning the legacy of medieval Iberia, modern Spanish colonialism in Morocco, the Spanish Civil War, and the Moroccan resistance to Spain’s presence in North Africa come into contact with each other in Eric Calderwood’s recent book, Colonial al-Andalus: Spain and the Making of Modern Moroccan Culture. As its title suggests, Calderwood’s book demonstrates how integral the memory of al-Andalus became to Spanish colonial expansion in Morocco in the nineteenth and early twentieth centuries as well as how this memory fueled Moroccan nationalism leading up to the country’s independence and beyond.
The history of Spanish-Moroccan relations, Calderwood reminds us, is by no means simply a story of invasion from south to north: migrants and invaders have also crossed the Strait of Gibraltar from the Iberian Peninsula into Africa. After the Nasrid kingdom in Granada fell to Ferdinand and Isabella (the ‘Catholic Kings’) in 1492, newly expelled Jews and Muslims fled across the Mediterranean to Morocco. With the expulsion of the Moriscos (Muslims who had converted to Christianity) in 1609, this North-African diaspora grew. Two and a half centuries later, in 1859, the Spanish military crossed into Morocco, inaugurating a colonial campaign known, in Spanish, as the ‘Guerra de África.’ In 1912 the Spanish protectorate in Morocco was established, and in 1921 Spanish recruits again crossed the strait to quell a colonial rebellion in the Rif region. The conflict lasted until 1926, when the French and the Spanish ultimately prevailed. Morocco would not gain independence until 1956.
The North-African Spanish enclaves of Ceuta and Melilla exist as physical reminders of this history of crossings and colonization. They embody the blurring of distinctions between Europe and Africa, Christianity and Islam, East and West that has long been the reality of the relationship between Spain and Morocco. In the imagination of many, al-Andalus has often served as a less concrete, more idealized symbol of this blending. The legacy of a medieval Iberia where Christians, Muslims, and Jews coexisted and shared social space under Muslim rule has been invoked—most recently and controversially by María Rosa Menocal in The Ornament of the World (2002)—to counter the notion that interreligious relationships in Iberia were marked by hostility and violence. Published in the wake of 9/11, Menocal’s book sought to highlight the cultural and intellectual riches produced in al-Andalus, and to show, as its subtitle proclaimed, ‘How Muslims, Jews, and Christians Created a Culture of Tolerance in Medieval Spain.’ Scholars since have criticized Menocal’s invocation of harmonious coexistence or convivencia in the Iberian Middle Ages as overly rosy and insufficiently objective. In the context of the Bush era and pronouncements about an ‘axis of evil’ in North Africa and the Middle East, it seemed to some that Menocal had distorted the history of convivencia in order to serve contemporary ideological ends.
Read the full article in Marginalia.
A neuroscientist explains the limits and possibilities
of using technology to read our thoughts
Angela Chen & Russell Poldrack, The Verge, 17 October 2018
In the book, you talk a lot about the fallacy of ‘reverse inference.’ What is that?
Reverse inference is the idea that presence of activity in some brain area tells you what the person is experiencing psychologically. For example, there’s a brain region called the ventral striatum. If you receive any kind of reward, like money or food or drugs, there will be greater activity in that part of the brain.
The question is, if we take somebody and we don’t know what they’re doing, but we see activity in that part of the brain, how strongly should we decide that the person must be experiencing reward? If reward was the only thing that caused that sort of activity, we could be pretty sure. But there’s not really any part of the brain that has that kind of one-to-one relationship with a particular psychological state. So you can’t infer from activity in a particular area what someone is actually experiencing.
You can’t say ‘we saw a blob of activity in the insula, so the person must be experiencing love.’
What would be the correct interpretation then?
The correct interpretation would be something like, ‘we did X and it’s one of the things that causes activity in the insula.’
But we also know that there are tools from statistics and machine learning that can let one quantify how well you can infer one thing from another. Using statistical analysis, you can say, ‘we can infer with 64 percent accuracy whether this person is experiencing X based on activity across the brain.’
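As a toy illustration of the kind of decoding analysis Poldrack describes (not his actual pipeline; the data below are synthetic), one can train a classifier on patterns of activity across many regions and report its cross-validated accuracy, which is where statements like ‘we can infer with 64 percent accuracy’ come from.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: 200 trials x 50 brain regions,
# where a handful of regions carry weak information about the state.
n_trials, n_regions = 200, 50
labels = rng.integers(0, 2, size=n_trials)          # 0 = no reward, 1 = reward
activity = rng.normal(size=(n_trials, n_regions))
activity[:, :5] += 0.6 * labels[:, None]            # weak signal in 5 regions

# Cross-validated decoding accuracy across the whole pattern of activity.
scores = cross_val_score(LogisticRegression(max_iter=1000), activity, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.0%}")
```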
Is reverse inference the most common fallacy when it comes to interpreting neuroscience results?
It’s by far the most common. I also think sometimes people can misinterpret what the activity means. We see pictures where it’s like, there’s one spot on the brain showing activity, but that doesn’t mean the rest of the brain is doing nothing.
Read the full article in The Verge.
Almost too sober: On the appeal of Stoicism
Robert Zaretsky, LA Review of Books, 30 October 2018
Yet there is darkness at the heart of Stoicism — a darkness that, in Yourcenar’s novel, Hadrian glimpses. While he admires the example set by Epictetus — the crippled old man, Hadrian reports, seemed to ‘enjoy a liberty which was almost divine’ — the emperor tells Marcus Aurelius that he nevertheless refuses to embrace either the man or his philosophy. Epictetus, Hadrian muses, ‘gave up too many things, and I had been quick to observe that nothing was more dangerously easy for me than mere renunciation.’ The emperor is on to something. Stoic renunciation, as Epictetus unflinchingly acknowledges, reaches much further than most of us would ever wish to go. In one of his prosaic similes, he compares the Stoic’s life to a voyage on a ship commanded by nature. Just as a voyager on a real sea voyage might disembark at a port to gather ‘a little shellfish and vegetable,’ he must be prepared to drop these things and return to the ship at a moment’s notice. So, too, on the ship of life, the Stoic, during a port of call, might gather ‘a little wife and child.’ Ah, but don’t treasure these souvenirs, for ‘if the captain calls you, run to the boat and leave all those things without even turning around.’
In the Encheiridion, Epictetus — a childless bachelor, mind you — multiplies such examples. Should I want my wife and children to live, not to mention flourish, he lectures me for being ‘silly’ because I want things to be up to me that are not up to me. Should one of my children or wife die, I must never say ‘I have lost’ them. They were never mine in the first place, which is why I should instead say that they have been returned. Classicists like Martha Nussbaum and Richard Sorabji rightly question the consequences and costs of Stoic renunciation. Not only is it good to have attachments to those we love, but it is also necessary; without these attachments, we might enjoy greater security and even serenity, but we would also experience less humanity. We would be, quite simply, less human.
There are other big questions raised by this small handbook. Does not Stoicism, which tells us that economic, political, and social issues are things indifferent, thus encourage forms of political resignation? Is there not the danger that Stoics, in the wide swath they cut with the blade of things indifferent, are in fact conspiring with forms of slavery we could and should resist? Is it really silly to wish with all your heart that your children not only survive you, but flourish as well? Once you put down Epictetus, you might wish to ask yourself whether you side with Hadrian, who accepted his own vulnerability and mourned the loss of his beloved Antinous, or ‘dear Mark,’ who sought to evince his vulnerability by instead loving mere humanity.
Read the full article in the LA Review of Books.
Always beginning
David Naimon, Poetry Foundation, 22 October 2018
With the notable exception of her novel Always Coming Home, in which she created the language, music, and poetry of the fictional Kesh people, Le Guin’s verse is not the imagined literature of alternate worlds—at least not in the science fiction and fantasy sense of ‘alternate.’ Her poems are rooted in contemplation of this world. While Le Guin didn’t write haikus, I sensed a kinship between that form and much of her poetry, as in her short, self-described elegy ‘Felled’:
Sterile gravel, stepping stones
where the great willow grew.
Only to me in empty air
a tree I must walk through.
I read to Le Guin from Robert Hass’ The Essential Haiku (1994), in which he writes about the qualities, beyond the formal ones, that define this type of poem: attention to time and space, grounding in a particular season, plain language marked by accurate, original images drawn from common life, and perhaps most importantly when regarding Le Guin’s work, a sense of humankind’s place in the world’s cyclical nature. Le Guin said writing haikus in English didn’t work for her, since she thought rhythmically rather than syllabically. She preferred as her equivalent to the haiku ‘a very old English form, with mostly iambic or trochaic rhythm and often with rhyme’—the quatrain. ‘Solstice’ is an example:
On the longest night of all the year
in the forests up the hill,
the little owl spoke soft and clear
to bid the night be longer still.
Nonetheless, Le Guin felt comfortable having her work described, in sensibility and in orientation, the way Hass describes the haiku form. She had a lifelong engagement with Buddhism and Taoism, and given the haiku’s connection to Buddhism—to Zen Buddhism in particular—I think of shoshin, the cultivation of ‘the beginner’s mind,’ of the openness and eagerness of a beginner even when working at an advanced level, as one source of Le Guin’s humility near the end of a lifetime of writing poems. Perhaps a certain decentering of self was a prerequisite to writing poems such as hers.
Read the full article in the Poetry Foundation.
What is nothing?
Sean Carroll & Jim Holt, Motherboard, 31 October 2018
‘Science and philosophy are concerned with asking how things are, and why they are the way they are. It therefore seems natural to take the next step and ask why things are at all,’ he wrote in the chapter. ‘Our experience of the world, which is confined to an extraordinarily tiny fraction of reality, doesn’t leave us well-equipped to think in appropriate ways about the question of its existence.’
True nothingness is very different from simply ‘empty space,’ even though that might be a serviceable, everyday definition, Carroll told me on a recent Skype call.
‘In quantum field theory, which we think is our best way of describing the universe that we have right now, empty space is kind of interesting,’ he explained. ‘Even if it’s as empty as it can be, there are still quantum mechanical [properties]—they’re just in a zero-energy state not doing anything. But you could probe the vacuum, as particle physics does, and discover its properties.
‘Empty space is a very interesting place in modern physics; there’s a lot going on, whereas, if it were nothing, there would be nothing going on,’ he said.
Quantum states are described by wave functions, which encode the probabilities of finding a system with particular energies and other properties. A quantum mechanical system in its lowest energy state might look a lot like nothing, even from a mathematical perspective, but it still carries a small residual energy, and quantum fluctuations persist even there.
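A standard textbook illustration of that point (my addition, not Carroll’s) is the quantum harmonic oscillator, whose lowest-energy state still carries a nonzero ‘zero-point’ energy:

```latex
% Quantum harmonic oscillator with angular frequency \omega:
\[
  E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \dots
\]
% Even the ground state (n = 0) keeps a nonzero ``zero-point'' energy:
\[
  E_0 = \tfrac{1}{2}\hbar\omega \neq 0 ,
\]
% so a system that is ``as empty as it can be'' is still not the same as nothing.
```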
Whether it’s a hole in the ground or the vast swathes of space between celestial bodies, these ‘empty’ spaces are filled with something that has physical properties. That vacuum is not nothing, at least as far as Carroll and his contemporaries are concerned.
But that’s only one way of understanding this problem. The other is even more mind-bending: the absence of space-time altogether, ‘empty’ or otherwise.
‘Just truly nothingness—not a quantum theory vacuum—just the absence of anything,’ Carroll said. ‘Given that we’re in a post-general relativity world, we know that space and time are not fixed and absolute; they are dynamical. Einstein said that space and time are warped by energy, so it’s probably better to think of nothing as the absence of even space and time, rather than space and time without anything in them. There’s a big difference between empty space and nothing.’
While it’s important to keep this definition of nothing in mind, Carroll added, it’s not really of any service to the field of physics. ‘‘Something’ is interesting; nothingness is interesting only insofar as it’s the absence of something,’ he said.
Ultimately, Carroll said, he’s not losing sleep over the question of ‘What is nothing?’ even if it’s a fascinating thing to think about.
‘I think the question of ‘Why is there something rather than nothing?’ is interesting, but the answer probably is, ‘That’s just the way it is,’’ he concluded. ‘There’s probably not anything deeper than that. It’s not like nothing is some mysterious unknown; it’s just the absence of anything. That’s all there is to say about it. There’s nothing more to learn about nothing. All there is to learn is about something.’
Read the full article in Motherboard.
The images are, from top to bottom: drawing of George Soros by Denise Nestor for Politico; ‘The Massacre at St Peter’s, or “Britons Strike Home”’ by George Cruikshank © British Library; CRW Nevinson, ‘Returning to the Trenches’ (1914); album cover for ‘Blood on the Tracks’; a Benin bronze in the British Museum (British Museum & Michel Wal, from The Art Newspaper); Betty Lee, ‘White Matter Fibre Tracts’, a visualization of fibre tract data from diffusion MR imaging and winner of the 2011 Brain Art competition.