Pandaemonium

PLUCKED FROM THE WEB #35


The latest (somewhat random) collection of recent essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.


.

A brief history of Stephen Hawking:
A legacy of paradox
Stuart Clark, New Scientist, 14 March 2018

‘I think most physicists would agree that Hawking’s greatest contribution is the prediction that black holes emit radiation,’ says Sean Carroll, a theoretical physicist at the California Institute of Technology. ‘While we still don’t have experimental confirmation that Hawking’s prediction is true, nearly every expert believes he was right.’

Experiments to test Hawking’s prediction are so difficult because the more massive a black hole is, the lower its temperature. For a large black hole – the kind astronomers can study with a telescope – the temperature of the radiation is too insignificant to measure. As Hawking himself often noted, it was for this reason that he was never awarded a Nobel Prize. Still, the prediction was enough to secure him a prime place in the annals of science, and the quantum particles that stream from the black hole’s edge would forever be known as Hawking radiation.

Some have suggested that they should more appropriately be called Bekenstein-Hawking radiation, but Bekenstein himself rejected this. ‘The entropy of a black hole is called Bekenstein-Hawking entropy, which I think is fine. I wrote it down first, Hawking found the numerical value of the constant, so together we found the formula as it is today. The radiation was really Hawking’s work. I had no idea how a black hole could radiate. Hawking brought that out very clearly. So that should be called Hawking radiation.’

The Bekenstein-Hawking entropy equation is the one Hawking asked to have engraved on his tombstone. It represents the ultimate mash-up of physical disciplines because it contains Newton’s constant, which clearly relates to gravity; Planck’s constant, which betrays quantum mechanics at play; the speed of light, the talisman of Einstein’s relativity; and the Boltzmann constant, the herald of thermodynamics.
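For reference (the excerpt names the constants but does not reproduce the formula itself), the Bekenstein-Hawking entropy is conventionally written as

\[ S_{\mathrm{BH}} = \frac{k_{\mathrm{B}}\, c^{3} A}{4\, \hbar\, G} \]

where A is the area of the black hole’s event horizon, k_B the Boltzmann constant, c the speed of light, ħ the reduced Planck constant and G Newton’s gravitational constant.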

The presence of these diverse constants hinted at a theory of everything, in which all physics is unified. Furthermore, it strongly corroborated Hawking’s original hunch that understanding black holes would be key in unlocking that deeper theory.

Hawking’s breakthrough may have solved the entropy problem, but it raised an even more difficult problem in its wake. If black holes can radiate, they will eventually evaporate and disappear. So what happens to all the information that fell in? Does it vanish too? If so, it will violate a central tenet of quantum mechanics. On the other hand, if it escapes from the black hole, it will violate Einstein’s theory of relativity. With the discovery of black hole radiation, Hawking had pitted the ultimate laws of physics against one another. The black hole information loss paradox had been born.

Hawking staked his position in another ground-breaking and even more contentious paper entitled Breakdown of predictability in gravitational collapse, published in Physical Review D in 1976. He argued that when a black hole radiates away its mass, it does take all of its information with it – despite the fact that quantum mechanics expressly forbids information loss. Soon other physicists would pick sides, for or against this idea, in a debate that continues to this day. Indeed, many feel that information loss is the most pressing obstacle in understanding quantum gravity.

‘Hawking’s 1976 argument that black holes lose information is a towering achievement, perhaps one of the most consequential discoveries on the theoretical side of physics since the subject was invented,’ says Raphael Bousso of the University of California, Berkeley.

Read the full article in the New Scientist.


.

Whose university is it anyway?
Ron Srigley, Los Angeles Review of Books, 22 February 2018

Administrators control the modern university. The faculty have ‘fallen,’ to use Benjamin Ginsberg’s term. It’s an ‘all-administrative’ institution now. Spending on administrators and administration exceeds spending on faculty, administrators outnumber faculty by a long shot, and administrative salaries and benefit packages, particularly those of presidents and other senior managers, have skyrocketed over the last 10 years. Even more telling, perhaps, students themselves increasingly resemble administrators more than professors in their ambitions and needs. Safety, comfort, security, quality services, first-class accommodations, guaranteed high grades, institutional brand, better job placements, the market value of the credential — these are the things one hears students demanding these days, not truth, justice, and intelligence. The traditional language of ‘professors’ and ‘students’ still exists, though ‘service provider’ and ‘consumer’ are making serious bids to replace them. The principles of collegial governance and joint decision-making are still on the books, but they are no longer what the institution is about or how it works.

The revolution is over and the administrators have won. But the persistence of traditional structures and language has led some to think that the fight over the institution is now just beginning. This is a mistake. As with most revolutions, open conflict occurs only after real power has already changed hands. In France, for instance, the bourgeoisie were able to seize control of the regime because in a sense they already had it. The same is true of the modern university. Administrators have been slowly taking control of the institution for decades. The recent proliferation of books, essays, and manifestoes critiquing this takeover creates the impression that the battle is now on. But that is an illusion, and most writers know it. All the voices of protest, many of them beautiful and insightful, all of them noble, are either cries of the vanquished or merely a dogged determination to take the losing case to court.

Ask about virtually any problem in the university today and the solution proposed will inevitably be administrative. Why? Because we think administrators, not professors, guarantee the quality of the product and the achievement of institutional goals. But how is that possible in an academic environment in which knowledge and understanding are the true goals? Without putting too fine a point on it, it’s because they aren’t the true goals any longer. With the exception of certain key science and technology programs in which content proficiency is paramount, administrative efficiency and administrative mindedness are the true goals of the institution. Liberal arts and science programs are quietly being transmogrified through pressure from technology and technological modes of education so that their ‘content’ is increasingly merely an occasion for the delivery of what the university truly desires — well-adjusted, administratively minded people to populate the administrative world we’ve created for them. The latent assumption in all this is that what is truly important is not what students know or how intelligent they are, but how well and how often they perform and how finely we measure it.

If you think I exaggerate, consider the deliverables universities are forever touting to students today: ‘collaboration,’ ‘communication,’ ‘critical analysis,’ ‘impact.’ All abstract nouns indicating things you can do or have, but not a word about what you know or who you are. No promise to teach you history or politics or biology or to make you wise or thoughtful or prudent. Just skills training to equip you to perform optimally in a competitive, innovative world.

Read the full article in the LA Review of Books.


.

Mark Lilla and the crisis of liberalism
Samuel Moyn, Boston Review, 27 February 2018

For an author known for much of his career as a scourge of the left, Lilla’s reliance on Karl Marx to drive his argument is curious. Over the course of the short text, he makes not one but two section-length shout-outs to Marx—and they are utterly pivotal. Lilla appeals to ‘material conditions’ to explain what politics are plausible in any particular time period and—above all—how it was that progressives drifted into an unholy alliance with the right they were supposed to be fighting. Too bad Lilla does not follow through on that insight. If he did so, we would have more of the explanation we need, and the story of what happened between 1968 and Trump would be about economics and politics, and not solely about culture.

‘If an ideology endures,’ Lilla explains, ‘this means it is capturing something important in social reality.’ And in Lilla’s story, it was no accident that the left embraced an individualism—embarking on searches for meaning and obsessed with their personal identities—that atomized the country at the same time that the right championed a parallel economic libertarianism. Identity politics is ‘Reaganism for lefties,’ Lilla says, just with self-absorption rather than self-interest as the rationale. Beaten in their initial demands for a collective justice beyond the limits of the old welfare state, refugees from the 1960s took over the English departments and taught their students not communitarian politics but wounded narcissism.

Lilla is right that material conditions strongly affect the imaginations of reformers, even if they do not determine them. Marx made that point most famously, but it is the common coin of all who believe that no one makes history under circumstances of their own choosing. And we are living in times that force a new acceptance of this truth, even if we conclude that the imagination counts alongside interests (indeed, helps define interests) in the making of social reality. Our economically neoliberal age has shaped many of the most exciting causes progressives have embraced in recent decades, helping to tilt them in an individualist and meritocratic direction. These range from an affirmative action that has tended to help the best-off African Americans (along with recent immigrants and their children who fit the terms of the programs); to a feminism that honors the shattering of glass ceilings for elites but not the stagnation of the lives of middle-class and poor women; to an LGBTQ politics that lifted centuries of opprobrium by appealing to the libertarian instincts of constitutional judges.

Indeed, Lilla’s feints toward a politics of economic interests distinguish The Once and Future Liberal from other once-famous analyses and indictments of atomistic fracture and the ‘disuniting of America,’ of identity politics and liberal racism, ranging from Tocqueville himself to Daniel Rodgers or Richard Rorty or Arthur Schlesinger, Jr. But Lilla’s overall story of the United States and the Democratic Party is still far too much about the superstructure (in the relevant terminology). It needs more attention to the base. More importantly, it is too much about the wrong reformers, focusing on the New Left in humanities departments and omitting the actual governmental and party policies that have mattered most. Lilla intuits the limits of his culturalist analysis of narcissism, but he discards his newfound acknowledgment that structural forces matter.

Read the full article in the Boston Review.


.

The PowerPoint philosophe
David A Bell, The Nation, 7 March 2018

Given Pinker’s scorn for intellectuals and disregard of social movements, it is no surprise that his politics, and his hopes for the future, can best be summed up as technocratic neoliberalism. He puts his trust in free markets and the guidance of enlightened scientists and moguls (is it really a surprise that Bill Gates calls Enlightenment Now ‘my new favorite book of all time’?). Let the rich get very, very rich, as long as everyone else’s income is rising, and don’t worry about the power they may be accumulating in the process. And when it comes to public policy, trust an expert class that proclaims its allegiance to science and progress alone and believes it is beyond politics. ‘To make public discourse more rational,’ Pinker proclaims, ‘issues should be depoliticized as much as is feasible.’

If protesters start to march and shout in the streets, calling for politicians to respect the will of the people, then what is called for is ‘effective training in critical thinking and cognitive debiasing’ so the people will respect the will of the experts. And, Pinker continues, ‘When people with die-hard opinions on Obamacare or NAFTA are challenged to explain what those policies actually are, they soon realize that they don’t know what they are talking about, and become more open to counterarguments.’ It’s a revealing sentence. Why do people with ‘die-hard opinions’ not know what they are talking about? Are the ‘experts’ always right?

Enlightenment Now is not a book that deserves a wide readership, but much like Dan Brown’s new novel, Origin, piles of it loom wherever books are sold.

Oddly, Enlightenment Now has several points in common with Origin. They both, for instance, have long, windy passages musing about the relationship of the second law of thermodynamics to the meaning of life. Brown, riffing on the work of Massachusetts Institute of Technology physicist Jeremy England, proposes that life is ‘the inevitable result of entropy. Life is not the point of the universe. Life is simply what the universe creates and reproduces in order to dissipate energy.’ Pinker, alternately, believes that the ‘ultimate purpose of life’ is ‘to deploy energy and knowledge to fight back the tide of entropy.’ The principal male characters in Origin are a wise Harvard professor and a farseeing tech mogul, and the climax is a TED Talk–like lecture in which the mogul reveals the destiny of the human race. But while Origin does little more than provide transient entertainment, Enlightenment Now may well have real influence.

In a 2004 profile, Time magazine suggested that Steven Pinker ‘crystallizes an intellectual era.’ Fourteen years later, what Pinker has actually crystallized in books like Enlightenment Now is our anti-intellectual era, one in which data and code are all too often held to trump serious critical reasoning, and the wealth of the humanistic tradition and of morally driven activism is dismissed in favor of supposedly impartial scientific and technological expertise. These attitudes in no sense stem from the great movement of thought of 18th-century Europe. They are not ‘progress,’ as the philosophes understood the term. The philosophes, in fact, would have condemned them. They are not enlightened. They are benighted.

Read the full article in the Nation.


.

Dana Schutz, Open Casket

How the Dana Schutz controversy – and a year of reckoning – have changed museums forever
Julia Halperin, Artnet, 6 March 2018

Last March, the artist Parker Bright stood in front of Dana Schutz’s painting Open Casket at the Whitney Biennial wearing a t-shirt with the words ‘Black Death Spectacle’ scrawled in Sharpie across the back. Photos of his protest went viral on social media, and it set off a chain of events that put the Whitney at the center of a scorched-earth debate over cultural appropriation, the definition of censorship, and the very role of the contemporary art institution in the Trump era. Now—almost one year later—some say museums will never be the same.

The Whitney is one of a number of institutions, including the Walker Art Center in Minneapolis and the Contemporary Art Museum, St. Louis, that have found themselves in the crossfire for presenting art that some have deemed insensitive, exploitative, or even traumatizing.

These bitter conversations about the need for change have been echoing across the culture field. Their contentions were, for instance, front and center at the 90th Academy Awards on Sunday night, while New York magazine recently dubbed 2017 ‘The Great Awokening’ in pop culture. And while museums have long faced criticism of various kinds, many of the field’s top cultural leaders say that a potent cocktail of factors—from the fraught political situation in the US to the magnifying powers of social media—has caused these recent protests to erupt with unprecedented force, leading to a deeper rethinking.

‘We’re in a time when these issues are real, these controversies are part of public space and public discourse, and museums are going to become places where these issues get played out,’ Glenn Lowry, the director of the Museum of Modern Art in New York, told artnet News.

For some institutions—particularly those engulfed by controversy—this period of tumult has resulted in concrete changes to policy and governance. For those museums watching from the sidelines, the desire to avoid becoming the next target of social-media outrage has been a powerful motivator to diversify their curatorial staffs and to rethink the way they message their core values.

Read the full article in Artnet.


.

Captive orangutans are curious (but wild ones are not)
Ed Yong, The Atlantic, 5 March 2018

To test orangutans, one of the primatologist Carel van Schaik’s team members, Sofia Forss, built fake orangutan nests in the Sumatran canopy. She then filled them with items that the apes would never have seen before—a Swiss flag, plastic fruit, and even an orangutan doll. Footage from motion-sensitive cameras revealed that wild orangutans walked around the items for months. Only two adolescents ever actually touched the unfamiliar items. When another team member, Caroline Schuppli, repeated the same experiment in several zoos, she got completely different results. Within minutes, the orangutans had wrecked the nests.

Meanwhile, Laura Damerius did a similar experiment with 61 orangutans who lived in Indonesian rehabilitation stations, assessing their responses to unfamiliar objects like a human stranger or a lump of purple-dyed food. She found that apes who had spent more time with humans before arriving at the stations behaved more curiously—that is, they actively sought out new things, and explored them with gusto. And this, she found, influenced their mental abilities. On a battery of challenges designed to test their problem-solving skills, the curious orangutans scored higher than their incurious peers.

Scientists rarely study curiosity in other animals, and perhaps for good reason. It’s ‘a difficult concept to define, even for humans,’ says Jill Pruetz from Texas State University, ‘but it has very intriguing implications for understanding human evolution.’

For example, our ancestors had developed large brains, upright bodies, and basic tools hundreds of thousands of years before they acquired language, art, and other more sophisticated cultural innovations. ‘We’ve always wondered what unleashed that, and it may have something to do with curiosity,’ says van Schaik. Perhaps some change in our society, whether larger groups or the advent of new weapons, afforded us the safety that zoos provide to orangutans. That, in turn, could have unlocked the latent curiosity in our own minds, turning us into explorers and innovators.

Of course, ‘it’s very hard to test this,’ van Schaik says. But orangutans provide some support for the idea. Wild ones learn almost all of their skills by copying their mothers and selected role models. ‘They’re not going around like Curious George and turning everything over,’ says van Schaik. That makes sense. Curiosity, as they say, kills the orangutan. In a world of strangers and dangers, it’s more efficient and less risky to take your cues from experienced peers.

Read the full article in the Atlantic.


.

Why are we surprised when Buddhists are violent?
Dan Arnold & Alicia Turner, New York Times, 5 March 2018

While history suggests it is naïve to be surprised that Buddhists are as capable of inhuman cruelty as anyone else, such astonishment is nevertheless widespread — a fact that partly reflects the distinctive history of modern Buddhism. By ‘modern Buddhism,’ we mean not simply Buddhism as it happens to exist in the contemporary world but rather the distinctive new form of Buddhism that emerged in the 19th and 20th centuries. In this period, Buddhist religious leaders, often living under colonial rule in the historically Buddhist countries of Asia, together with Western enthusiasts who eagerly sought their teachings, collectively produced a newly ecumenical form of Buddhism — one that often indifferently drew from the various Buddhist traditions of countries like China, Sri Lanka, Tibet, Japan and Thailand.

This modern form of Buddhism is distinguished by a novel emphasis on meditation and by a corresponding disregard for rituals, relics, rebirth and all the other peculiarly ‘religious’ dimensions of history’s many Buddhist traditions. The widespread embrace of modern Buddhism is reflected in familiar statements insisting that Buddhism is not a religion at all but rather (take your pick) a ‘way of life,’ a ‘philosophy’ or (reflecting recent enthusiasm for all things cognitive-scientific) a ‘mind science.’

Buddhism, in such a view, is not exemplified by practices like Japanese funerary rites, Thai amulet-worship or Tibetan oracular rituals but by the blandly nonreligious mindfulness meditation now becoming more ubiquitous even than yoga. To the extent that such deracinated expressions of Buddhist ideas are accepted as defining what Buddhism is, it can indeed be surprising to learn that the world’s Buddhists have, both in past and present, engaged in violence and destruction.

There is, however, no shortage of historical examples of violence in Buddhist societies. Sri Lanka’s long and tragic civil war (1983-2009), for example, involved a great deal of specifically Buddhist nationalism on the part of a Sinhalese majority resentful of the presence of Tamil Hindus in what the former took to be the last bastion of true Buddhism (the ‘island of dharma’). Political violence in modern Thailand, too, has often been inflected by Buddhist involvement, and there is a growing body of scholarly literature on the martial complicity of Buddhist institutions in World War II-era Japanese nationalism. Even the history of the Dalai Lama’s own sect of Tibetan Buddhism includes events like the razing of rival monasteries, and recent decades have seen a controversy centering on a wrathful protector deity believed by some of the Dalai Lama’s fellow religionists to heap destruction on the false teachers of rival sects.

Read the full article in the New York Times.


.

The shadow over Egypt
Orla Guerin, BBC News, 23 February 2018

Disappearances, death sentences, and torture have become daily news, she says. ‘Even people who are silent or are trying to stay away from engaging with the political scene are still being thrown in jail or getting randomly detained.’

She says Egyptians are feeling ‘numb, exhausted and scared’ and there’s no fight left on the streets. ‘It’s very understandable to be scared with a regime that is not hesitant about using killing.’

After her brother finishes his five-year sentence next March, he will be on probation for a further five years. That may amount to another form of imprisonment. He could be required to sleep at his local police station every night. That has been the fate of other activists.

Before Alaa Abdel Fattah was convicted, I met him at his mother’s apartment. It was 2014 and he was out on bail. He and Laila chatted and laughed over tea, the conversation ranging from mathematical theories to political strategies. The young revolutionary was clear-eyed about how much worse things were since the revolution.

‘When you were confronting Mubarak hope was a material thing,’ he said. ‘You could almost touch it. And so it was very easy to feel that it was worth it. And so people were taking these risks without any kind of despair. Right now it’s very bleak.’

Read the full article on BBC News.


.

Ex Machina

Why self-taught Artificial Intelligence
has trouble with the real world

Joshua Sokol, Quanta, 21 February 2018

One characteristic shared by many games, chess and Go included, is that players can see all the pieces on both sides at all times. Each player always has what’s termed ‘perfect information’ about the state of the game. However devilishly complex the game gets, all you need to do is think forward from the current situation.

Plenty of real situations aren’t like that. Imagine asking a computer to diagnose an illness or conduct a business negotiation. ‘Most real-world strategic interactions involve hidden information,’ said Noam Brown, a doctoral student in computer science at Carnegie Mellon University. ‘I feel like that’s been neglected by the majority of the AI community.’

Poker, which Brown specializes in, offers a different challenge. You can’t see your opponent’s cards. But here too, machines that learn by playing against themselves are now reaching superhuman levels. In January 2017, a program called Libratus created by Brown and his adviser, Tuomas Sandholm, outplayed four professional poker players at heads-up, no-limit Texas Hold ’em, finishing $1.7 million ahead of its competitors at the end of a 20-day competition.

An even more daunting game involving imperfect information is StarCraft II, another multiplayer online video game with a vast following. Players pick a team, build an army and wage war across a sci-fi landscape. But that landscape is shrouded in a fog of war that only lets players see areas where they have soldiers or buildings. Even the decision to scout your opponent is fraught with uncertainty.

This is one game that AI still can’t beat. Barriers to success include the sheer number of moves in a game, which often stretches into the thousands, and the speed at which they must be made. Every player — human or machine — has to worry about a vast set of possible futures with every click.

For now, going toe-to-toe with top humans in this arena is beyond the reach of AI. But it’s a target. In August 2017, DeepMind partnered with Blizzard Entertainment, the company that made StarCraft II, to release tools that they say will help open up the game to AI researchers.

Despite its challenges, StarCraft II comes down to a simply enunciated goal: Eradicate your enemy. That’s something it shares with chess, Go, poker, Dota 2 and just about every other game. In games, you can win.

From an algorithm’s perspective, problems need to have an ‘objective function,’ a goal to be sought. When AlphaZero played chess, this wasn’t so hard. A loss counted as minus one, a draw was zero, and a win was plus one. AlphaZero’s objective function was to maximize its score. The objective function of a poker bot is just as simple: Win lots of money.

Real-life situations are not so straightforward. For example, a self-driving car needs a more nuanced objective function, something akin to the kind of careful phrasing you’d use to explain a wish to a genie. For example: Promptly deliver your passenger to the correct location, obeying all laws and appropriately weighing the value of human life in dangerous and uncertain situations. How researchers craft the objective function, the computer scientist Pedro Domingos said, ‘is one of the things that distinguishes a great machine-learning researcher from an average one.’
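To make the scoring idea concrete, here is a minimal, purely illustrative Python sketch (none of these names come from AlphaZero or Libratus; it simply restates the minus-one/zero/plus-one objective described above):

def game_score(outcome):
    # The scoring described above: a loss counts as -1, a draw as 0, a win as +1.
    return {'loss': -1, 'draw': 0, 'win': 1}[outcome]

# The objective function is the expected value of this score over many games;
# training aims simply to maximize that expectation.
results = ['win', 'draw', 'loss', 'win']
average_score = sum(game_score(r) for r in results) / len(results)
print(average_score)  # 0.25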

Read the full article in Quanta.


.

Josef K in Washington
David Luban, New York Review of Books, 22 March 2018

Many of these doctrines are not well understood by people outside the legal profession. For example, as Chemerinsky observes:

If the Supreme Court were to hold that the government can give unlimited amounts of money to religious schools, the decision would make the front-page headline of every newspaper in the country. But if the Court were to hold that no one has standing to challenge the government when it gives money to parochial schools, that would get far less attention. Yet the effect is exactly the same: if no one can challenge a government action in court, the government can do what it wants.

Article 3 of the Constitution ensures that the courts only hear genuine ‘cases and controversies,’ but it was not until the 1920s that the Supreme Court began using it to limit who can sue the government. In 1992, the Court declared, in Lujan v. Defenders of Wildlife, that citizens lack standing to challenge a government action in court unless it causes ‘a concrete and particularized’ injury to the person suing.

Chemerinsky thinks this is absurd: he argues that ‘it makes no sense to have a situation where no one can sue because of a hypothetical concern over wanting to make sure that there is the best plaintiff.’ (True to his principles, Chemerinsky is one of the legal scholars who has filed a lawsuit challenging Donald Trump’s business ties as a violation of the Constitution’s emoluments clause—a challenge that some experts predict will fail for lack of standing.)

What is standing? The standing doctrine prevented Adolph Lyons, an African-American man whom Los Angeles police choked into unconsciousness during a routine traffic stop in 1976, from requesting an injunction against the LAPD’s chokehold policies. He was able to sue for his injuries and collected a nominal settlement, but the Supreme Court declared that Lyons could not challenge the police policy of using chokeholds, because he could not show that he would likely be the victim of one in the future. Obviously, nobody could show that—and so nobody has standing to challenge the policy in court. Police officers continue to use chokeholds, as the death of Eric Garner in 2014 reminds us.

Read the full article in the New York Review of Books.


.

The city that remembers everything
Geoff Manaugh, The Atlantic, 23 February 2018

In 2009, the US military revealed the use of a new surveillance tool called Gorgon Stare. It was named after creatures from Greek mythology that could turn anyone who made eye contact with them to stone. In practice, Gorgon Stare was a sphere of nine surveillance cameras mounted on an aerial drone that could stay aloft for hours, recording everything in sight. As an Air Force major general explained to The Washington Post at the time, ‘Gorgon Stare will be looking at a whole city, so there will be no way for the adversary to know what we’re looking at, and we can see everything.’

If a car bomb were to detonate in an outdoor market, for example, the accumulated footage Gorgon Stare had captured from that day could be rewound to track the vehicle to its point of origin. Perhaps it could even follow the car backward in time, over several days. This could reveal not only the vehicle’s driver but also the buildings the car might have visited in the hours or days leading up to the attack. It is instant-replay technology applied to an entire metropolis instead of a football field—a comparison made unnervingly literal by the fact that the same company that supplied instant replays for the National Football League began consulting with the Pentagon to bring their technology to the battlefield.

Unsurprisingly, militarized replay technologies such as Gorgon Stare caught the eye of domestic U.S. law enforcement who saw it as an unprecedented opportunity to identify, track, and capture even the pettiest of criminals—or, more ominously, to follow every attendee of a political rally or public demonstration.

The larger promise of Gorgon Stare, however, was a narrative one: Its capacity for total documentation implied that every event in the city could not just be reconstructed but fully and completely explained. Indeed, James Joyce and the U.S. military would seem to agree that the best way to make sense of the modern metropolis is to document even the most inconsequential details. Urban events as minor as a Dubliner out for an afternoon stroll—let alone something as catastrophic as a terrorist attack—can be rewound, studied, and rationally annotated. Gorgon Stare is Ulysses reimagined as a police operation: a complete, time-coded, searchable archive of a person’s every act. The police can go back days, weeks, or months; if they have enough server space, years. Should they wish, they could produce the most complex, novelistic explanations imaginable simply because their data pool has become so rich.

This sort of persistent surveillance no longer requires drones, however, or even dedicated cameras; instead, people have willfully embedded these technologies into their daily lives. The rise of the so-called smart city is more accurately described as the rise of a loose group of multisensory tracking technologies. Gorgon Stare, we might say, is the metropolis now.

Read the full article in the Atlantic.


.

Congo for the Congolese
Helen Epstein, NYRB Daily, 19 February 2018

Ahead of his first state visit to Africa in March, Secretary of State Rex Tillerson should rethink Washington’s longstanding support for some of Africa’s more tyrannical regimes, particularly those of Yoweri Museveni of Uganda and Paul Kagame of Rwanda, whose security forces continue to prey on their vast, mineral-rich neighbor Congo.

Beneath Congo’s soil lies an estimated (at 2011 prices) $24 trillion in natural resources, including rich supplies of oil, gold, diamonds, the coltan used in computer chips, the cobalt and nickel used in jet engines and car batteries, the copper for bathroom pipes, the uranium for bombs and power plants, the iron for nearly everything. This wealth is the source of untold suffering. Today, more Congolese are displaced from their homes than Iraqis, Yemenis, or Rohingyas. Yet their miseries are all but invisible, in part because the identities and aims of Congo’s myriad combatants are mystified by layers of rumor and misinformation, which serve the interests of those profiting from the mayhem…

In circumstances like these, informal sources such as bloggers and independent journalists sometimes provide more plausible insights than official ones, thanks to their ties to local populations, security forces, churches, and political movements in war-torn areas. One such source is Les massacres de Beni (The Massacres of Beni), a self-published monograph by independent Congolese researcher Boniface Musavuli, now living in exile in France.

As Musavuli emphasizes, Congo’s chronic instability has always been rooted in external interference. Created in the nineteenth century by Belgian imperialists, the country was ruled during the cold war by the flamboyant, leopardskin-draped dictator Mobutu Sese Seko, who, in exchange for billions of dollars in CIA cash, kept the nation’s riches under Western control. After the Berlin Wall fell, the US, alarmed by Mobutu’s warming relations with Sudan’s Islamist leadership, backed an invasion by the armies of Uganda and Rwanda that toppled Mobutu and occupied over 1,000 square miles of eastern Congo, which they proceeded to plunder for its natural resources. Most of the dirty work was done by Congolese proxy forces armed, trained, and supported by Uganda and Rwanda.

The Ugandan and Rwandan armies officially pulled out of Congo in 2003, and their proxy forces were nominally integrated into Congo’s national army. However, some of these ex-rebels continued to torment local Congolese, especially around the smuggling routes bordering Uganda and Rwanda. In 2012, a damning UN report exposed Rwanda’s and Uganda’s links to a particularly brutal rebel group known as the M23, and Western donor nations briefly imposed sanctions on Rwanda (though not on Uganda, for reasons unknown).

Musavuli maintains that since the sanctions, Rwanda-loyalists inside the Congolese army, still intent on controlling Beni, now disguise their brutal activities by recruiting members of other ragtag armed groups under false pretenses.

Read the full article in the NYRB Daily.


.

Terracotta warrior

The copy is the original
Byung-Chul Han, Aeon, 8 March 2018

In 1956, an exhibition of masterpieces of Chinese art took place in the museum of Asian art in Paris, the Musée Cernuschi. It soon emerged that these pictures were, in fact, forgeries. In this case, the sensitive issue was that the forger was none other than the most famous Chinese painter of the 20th century, Chang Dai-chien, whose works were being exhibited simultaneously at the Musée d’Art Moderne. He was considered the Pablo Picasso of China. And his meeting with Picasso that same year was celebrated as a summit between the masters of Western and Eastern art. Once it became known that the old masterpieces were his forgeries, the Western world regarded him as a mere fraud. Yet for Chang himself, they were anything but forgeries. In any case, most of these old pictures were no mere copies, but rather replicas of lost paintings that were known only from written descriptions.

In China, collectors themselves were often painters. Chang, too, was a passionate collector. He owned more than 4,000 paintings. His collection was not a dead archive but a gathering of Old Masters, a living place of communication and transformation. He was himself a shape-shifting body, an artist of metamorphosis. He slipped effortlessly into the roles of past masters and created a certain kind of original. As Shen Fu and Jan Stuart put it in Challenging the Past: The Paintings of Chang Dai-chien (1991):

Chang’s genius probably guarantees that some of his forgeries will remain undetected for a long time to come. By creating ‘ancient’ paintings that matched the verbal descriptions recorded in catalogues of lost paintings, Chang was able to paint forgeries that collectors had been yearning to ‘discover’. In some works, he would transform images in totally unexpected ways; he might recast a Ming dynasty composition as if it were a Song dynasty painting.

His paintings are originals insofar as they carry forward the ‘real trace’ of the Old Masters and also extend and change their oeuvre retrospectively. Only the idea of the unrepeatable, inviolable, unique original in the emphatic sense downgrades them to mere forgeries. This special practice of persisting creation (Fortschöpfung) is conceivable only in a culture that is not committed to revolutionary ruptures and discontinuities, but to continuities and quiet transformations, not to Being and essence, but to process and change.

In 2007, when it became known that terracotta warriors flown in from China were not 2,000-year-old artefacts, but rather copies, the Museum of Ethnology in Hamburg decided to close the exhibition completely. The museum’s director, who was apparently acting as the advocate of truth and truthfulness, said at the time: ‘We have come to the conclusion that there is no other option than to close the exhibition completely, in order to maintain the museum’s good reputation.’ The museum even offered to reimburse the entrance fees of all visitors to the exhibition.

From the start, the production of replicas of the terracotta warriors proceeded in parallel with the excavations. A replica workshop was set up on the excavation site itself. But they were not producing ‘forgeries’. Rather, we might say that the Chinese were trying to restart production, as it were – production that from the beginning was not creation but already reproduction. Indeed, the originals themselves were manufactured through serial mass-production using modules or components – a process that could easily have been continued, had the original production methods been available.

Read the full article in Aeon.


.

Historians have long thought populism
was a good thing. Were they wrong?

Joshua Zeitz, Politico, 14 January 2018

Imagine, if you will, that millions of hard-working Americans finally reached their boiling point. Roiled by an unsettling pattern of economic booms and busts; powerless before a haughty coastal elite that in recent decades had effectively arrogated the nation’s banks, means of production and distribution, and even its information highway; burdened by the toll that open borders and free trade imposed on their communities; incensed by rising economic inequality and the concentration of political power—what if these Americans registered their disgust by forging a new political movement with a distinctly backward-looking, even revanchist, outlook? What if they rose up as one and tried to make America great again?

Would you regard such a movement as worthy of support and nurture—as keeping with the democratic tradition of Thomas Jefferson and Andrew Jackson? Or would you mainly dread the ugly tone it would inevitably assume—its fear of the immigrant and the Jew, its frequent lapse into white supremacy, its slipshod grasp of political economy and its potentially destabilizing effect on longstanding institutions and norms?

To clarify: This scenario has nothing whatsoever to do with Donald Trump and the modern Republican Party. Rather, it is a question that consumed social and political historians for the better part of a century. They clashed sharply in assessing the essential character of the Populist movement of the late 1800s—a political and economic uprising that briefly drew under one tent a ragtag coalition of Southern and Western farmers (both black and white), urban workers, and utopian newspapermen and polemicists.

That debate pitted ‘progressive’ historians of the early 20th century and their latter-day successors, who viewed Populism as a fundamentally constructive political movement, against Richard Hofstadter, one of the most influential American historians then or since. Writing in 1955, Hofstadter theorized that the Populists were cranks—backward-looking losers who blamed their misfortune on a raft of conspiracy theories.

Hofstadter lost that debate: Historians generally view Populism as a grass-roots movement that fought against steep odds to correct many of the economic injustices associated with the Gilded Age. They write off the movement’s ornerier tendencies by pointing out—with some justification—that Populism was a product of its time, and inasmuch as its supporters sometimes expressed exaggerated fear of ‘the secret plot and the conspiratorial meeting,’ so did many Americans who did not share their politics.

But does that point of view hold up after 2016? The populist demons Trump has unleashed—revanchist in outlook, conspiratorial in the extreme, given to frequent expressions of white nationalism and antisemitism—bear uncanny resemblance to the Populist movement that Hofstadter described as bearing a fascination with ‘militancy and nationalism … apocalyptic forebodings … hatred of big businessmen, bankers, and trusts … fears of immigrants … even [the] occasional toying with anti-Semitic rhetoric.’

Read the full article in Politico.


.

Russia isn’t the only one meddling
in elections. We do it, too.

Scott Shane, New York Times, 17 February 2018

Most Americans are understandably shocked by what they view as an unprecedented attack on our political system. But intelligence veterans, and scholars who have studied covert operations, have a different, and quite revealing, view.

‘If you ask an intelligence officer, did the Russians break the rules or do something bizarre, the answer is no, not at all,’ said Steven L. Hall, who retired in 2015 after 30 years at the C.I.A., where he was the chief of Russian operations. The United States ‘absolutely’ has carried out such election influence operations historically, he said, ‘and I hope we keep doing it.’

Loch K. Johnson, the dean of American intelligence scholars, who began his career in the 1970s investigating the C.I.A. as a staff member of the Senate’s Church Committee, says Russia’s 2016 operation was simply the cyber-age version of standard United States practice for decades, whenever American officials were worried about a foreign vote.

‘We’ve been doing this kind of thing since the C.I.A. was created in 1947,’ said Mr. Johnson, now at the University of Georgia. ‘We’ve used posters, pamphlets, mailers, banners — you name it. We’ve planted false information in foreign newspapers. We’ve used what the British call ‘King George’s cavalry’: suitcases of cash.’

The United States’ departure from democratic ideals sometimes went much further. The C.I.A. helped overthrow elected leaders in Iran and Guatemala in the 1950s and backed violent coups in several other countries in the 1960s. It plotted assassinations and supported brutal anti-Communist governments in Latin America, Africa and Asia.

But in recent decades, both Mr. Hall and Mr. Johnson argued, Russian and American interferences in elections have not been morally equivalent. American interventions have generally been aimed at helping non-authoritarian candidates challenge dictators or otherwise promoting democracy. Russia has more often intervened to disrupt democracy or promote authoritarian rule, they said.

Read the full article in the New York Times.


.

Labor and the long Seventies
Lane Windham & Chris Brooks, Jacobin, 25 February 2018

CB: Your book describes unions as the ‘narrow door’ through which workers accessed our nation’s fullest social welfare system. What do you mean? 

LW: If you are German or French, you don’t have to join a union to have access to health care or retirement. Those are benefits provided as a matter of citizenship. In our country, employers provide those benefits to workers. How do we ensure that corporations step up to play this role? Through firm-level collective bargaining. So unions play a critical role in our social welfare system — they do the redistribution work that governments do in many other countries.

There are three ways that workers can access this social welfare system: they can form a union, they can get a job in a company that is already unionized, or they can get a job at a company that is matching union wages and benefits. In each case, someone at some time had to organize a union for this to be possible.

So organizing a union is the narrow door through which working people access our nation’s most robust social welfare benefits. My book focuses on the 1970s, a period where we see women and people of color driving a wave of union organizing after gaining new access to the job market as a result of the passage of the 1964 Civil Rights Act.

CB: You describe Title VII of the Civil Rights Act as the single biggest challenge to employers’ workplace power since the passage of the 1935 National Labor Relations Act. Why?

LW: The National Labor Relations Act, or the Wagner Act, was a huge challenge to corporations. It provided a legal process through which workers can win a union and compel companies to negotiate a contract with them. In many ways, it was the answer to the late nineteenth and early twentieth century’s big labor question: how are we going to deal with the contradiction between the promise of democracy and the realities of industrial capitalism?

The Wagner Act was a compromise that excluded many women and people of color by excluding domestic service and agricultural jobs. This was one of the major limitations of the New Deal promise, but with the passage of the Civil Rights Act, all those workers who had been relegated to the margins of industrial capitalism suddenly had access to jobs in the core.

Title VII of the Civil Rights Act prohibited discrimination on the grounds of race, sex, color, religion, or national origin. The narrow door was suddenly open to everybody, and you see this great rush of women and people of color into unions. In 1960, only 18 percent of the nation’s union members were women, but by 1984, 34 percent of union members were women. By 1973, a full 44 percent of black men in the private sector had a union.

Read the full article in Jacobin.


.

Betty Lee, White matter fibre tracts, 2011 Brain Art winner

‘My brain made me do it’ is becoming
a more common criminal defense

Dina Fine Maron, Scientific American, 5 March 2018

After Richard Hodges pleaded guilty to cocaine possession and residential burglary, he appeared somewhat dazed and kept asking questions that had nothing to do with the plea process. That’s when the judge ordered that Hodges undergo a neuropsychological examination and magnetic resonance imaging (MRI) testing. Yet no irregularities turned up.

Hodges, experts concluded, was faking it. His guilty plea would stand.

But experts looking back at the 2007 case now say Hodges was part of a burgeoning trend: Criminal defense strategies are increasingly relying on neurological evidence—psychological evaluations, behavioral tests or brain scans—to potentially mitigate punishment. Defendants may cite earlier head traumas or brain disorders as underlying reasons for their behavior, hoping this will be factored into a court’s decisions. Such defenses have been employed for decades, mostly in death penalty cases. But as science has evolved in recent years, the practice has become more common in criminal cases ranging from drug offenses to robberies.

‘The number of cases in which people try to introduce neurotechnological evidence in the trial or sentencing phase has gone up by leaps and bounds,’ says Joshua Sanes, director of the Center for Brain Science at Harvard University. But such attempts may be outpacing the scientific evidence behind the technology, he adds.

‘In 2012 alone over 250 judicial opinions—more than double the number in 2007—cited defendants arguing in some form or another that their ‘brains made them do it,’’ according to an analysis by Nita Farahany, a law professor and director of Duke University’s Initiative for Science and Society. More recently, she says, that number has climbed to around 420 each year.

Even when lawyers do not bring neuroscience into the courtroom, this shift can still affect a case: Some defendants are now using the omission of neuroscience as grounds for questioning the competency of the defenses they received. In a bid to untangle the issue, Sanes, Farahany and other members of a committee of The National Academies of Sciences, Engineering and Medicine are meeting in Washington, D.C., on Tuesday to discuss what they have dubbed ‘neuroforensics.’

Read the full article in Scientific American.


.

‘Millennial’ means nothing
John Quiggin, New York Times, 6 March 2018

It’s true that the current cohort (the demographic term for a group of people born around the same time) of young people is different in important ways from earlier cohorts. It’s more ethnically diverse, with a smaller proportion of whites and more of most other racial and ethnic groups. But diversity is a characteristic of a population, not, in most cases, of individuals. A relatively small proportion of millennials personally embody ethnic diversity in the sense of identifying with more than one race or ethnicity.

Much of the apparent distinctiveness of the millennial generation disappears when we look at individuals rather than aggregates. Black millennials, like their parents, overwhelmingly vote Democratic. By contrast, 41 percent of white millennials voted for Donald Trump in 2016. That’s lower than the 58 percent of all white voters who went for Mr. Trump, but it makes more sense to attribute the difference to individual characteristics and experiences rather than a generational attitude.

Compared to the population as a whole, a larger proportion of millennials are college-educated, and a smaller proportion live in rural areas. Like other urban and educated voters, urban and educated millennials tend to vote Democratic. Rural millennials, meanwhile, share many of the attitudes of older rural voters who voted for Mr. Trump.

Activism by high school students in response to the Parkland, Fla., shooting has inspired interest in the generation younger than millennials, known as ‘Gen Z’ or ‘iGen.’ A recent Washington Post essay declared: ‘Millennials disrupted the system. Gen Z is here to fix the mess.’ It argued that members of this cohort ‘value compromise’ as ‘a byproduct of their diversity and comfort with working with peers from different backgrounds.’

But given that public schools have been resegregating for decades, to assume that the demographic makeup of a generation would have a meaningful impact on most individual Gen Z members’ experiences with diversity seems misguided.

Read the full article in the New York Times.


.

How to change the course of human history
David Graeber & David Wengrow, Eurozine, 2 March 2018

Another bombshell: ‘civilization’ does not come as a package. The world’s first cities did not just emerge in a handful of locations, together with systems of centralised government and bureaucratic control. In China, for instance, we are now aware that by 2500 BC, settlements of 300 hectares or more existed on the lower reaches of the Yellow River, over a thousand years before the foundation of the earliest (Shang) royal dynasty. On the other side of the Pacific, and at around the same time, ceremonial centres of striking magnitude have been discovered in the valley of Peru’s Río Supe, notably at the site of Caral: enigmatic remains of sunken plazas and monumental platforms, four millennia older than the Inca Empire. Such recent discoveries indicate how little is yet truly known about the distribution and origin of the first cities, and just how much older these cities may be than the systems of authoritarian government and literate administration that were once assumed necessary for their foundation. And in the more established heartlands of urbanisation – Mesopotamia, the Indus Valley, the Basin of Mexico – there is mounting evidence that the first cities were organised on self-consciously egalitarian lines, municipal councils retaining significant autonomy from central government. In the first two cases, cities with sophisticated civic infrastructures flourished for over half a millennium with no trace of royal burials or monuments, no standing armies or other means of large-scale coercion, nor any hint of direct bureaucratic control over most citizens’ lives.

Jared Diamond notwithstanding, there is absolutely no evidence that top-down structures of rule are the necessary consequence of large-scale organization. Walter Scheidel notwithstanding, it is simply not true that ruling classes, once established, cannot be gotten rid of except by general catastrophe. To take just one well-documented example: around 200 AD, the city of Teotihuacan in the Valley of Mexico, with a population of 120,000 (one of the largest in the world at the time), appears to have undergone a profound transformation, turning its back on pyramid-temples and human sacrifice, and reconstructing itself as a vast collection of comfortable villas, all almost exactly the same size. It remained so for perhaps 400 years. Even in Cortés’ day, Central Mexico was still home to cities like Tlaxcala, run by an elected council whose members were periodically whipped by their constituents to remind them who was ultimately in charge.

The pieces are all there to create an entirely different world history. For the most part, we’re just too blinded by our prejudices to see the implications. For instance, almost everyone nowadays insists that participatory democracy, or social equality, can work in a small community or activist group, but cannot possibly ‘scale up’ to anything like a city, a region, or a nation-state. But the evidence before our eyes, if we choose to look at it, suggests the opposite. Egalitarian cities, even regional confederacies, are historically quite commonplace. Egalitarian families and households are not. Once the historical verdict is in, we will see that the most painful loss of human freedoms began at the small scale – the level of gender relations, age groups, and domestic servitude – the kind of relationships that contain at once the greatest intimacy and the deepest forms of structural violence. If we really want to understand how it first became acceptable for some to turn wealth into power, and for others to end up being told their needs and lives don’t count, it is here that we should look. Here too, we predict, is where the most difficult work of creating a free society will have to take place.

Read the full article in Eurozine.


.

In Sudan, rediscovering ancient Nubia before it’s too late
Amy Maxmen, Undark, February 2018

In 1905, British archaeologists descended on a sliver of eastern Africa, aiming to uncover and extract artifacts from 3,000-year-old temples. They left mostly with photographs, discouraged by the ever-shifting sand dunes that blanketed the land. ‘We sank up to the knees at every step,’ Wallis Budge, the British Egyptologist and philologist, wrote at the time, adding: ‘[We] made several trial diggings in other parts of the site, but we found nothing worth carrying away.’

For the next century, the region known as Nubia — home to civilizations older than the dynastic Egyptians, skirting the Nile River in what is today northern Sudan and southern Egypt — was paid relatively little attention. The land was inhospitable, and some archaeologists of the era subtly or explicitly dismissed the notion that black Africans were capable of creating art, technology, and metropolises like those from Egypt or Rome. Modern textbooks still treat ancient Nubia like a mere annex to Egypt: a few paragraphs on black pharaohs, at most.

Today, archaeologists are realizing how wrong their predecessors were — and how little time they have left to uncover and fully understand Nubia’s historical significance.

‘This is one of the great, earliest-known civilizations in the world,’ says Neal Spencer, an archaeologist with the British Museum. For the past ten years, Spencer has traveled to a site his academic predecessors photographed a century ago, called Amara West, around 100 miles south of the Egyptian border in Sudan. Armed with a device called a magnetometer, which measures the patterns of magnetism in the features hidden underground, Spencer plots thousands of readings to reveal entire neighborhoods beneath the sand, the bases of pyramids, and round burial mounds, called tumuli, over tombs where skeletons rest on funerary beds – unique to Nubia – dating from 1,300 to 800 B.C.
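For readers curious what ‘plotting thousands of readings’ involves in practice, here is a rough, hypothetical sketch of the general idea: bin point measurements onto a grid and flag cells whose field strength deviates sharply from the background, since buried walls and kilns distort the local magnetic field. This is not the British Museum team’s actual processing pipeline, and every coordinate and value below is invented for illustration.

```python
# Hypothetical sketch of gridding magnetometer readings and flagging anomalies.
# Not the survey team's actual workflow; all coordinates and values are invented.
import numpy as np

def anomaly_map(xs, ys, values, cell=0.5, n_sigma=2.0):
    """Bin point readings onto a grid and mark cells far from the background field."""
    xs, ys, values = map(np.asarray, (xs, ys, values))
    ix = ((xs - xs.min()) / cell).astype(int)
    iy = ((ys - ys.min()) / cell).astype(int)
    grid = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    grid[iy, ix] = values                                 # last reading per cell wins
    background, spread = np.nanmedian(grid), np.nanstd(grid)
    return np.abs(grid - background) > n_sigma * spread   # boolean anomaly map

# One transect of made-up readings (nanotesla deviations); the spike at x = 1.0
# is the kind of signal a buried mudbrick wall or kiln might produce.
print(anomaly_map([0.0, 0.5, 1.0, 1.5, 2.0], [0.0] * 5, [0.1, 0.2, 8.0, 0.1, 0.0]))
```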

Sites like this can be found up and down the Nile River in northern Sudan, and at each one, archaeologists are uncovering hundreds of artifacts, decorated tombs, temples, and towns. Each finding is precious, the scientists say, because it provides clues about who the ancient Nubians were, what art they made, what language they spoke, how they worshipped, and how they died — valuable puzzle pieces in the quest to understand the mosaic of human civilization writ large. And yet, everything from hydroelectric dams to desertification in northern Sudan threatens to overtake, and in some cases, erase these hallowed archaeological grounds. Now, scientists armed with an array of technologies — and a quickened sense of purpose — are scrambling to uncover and document what they can before the window of discovery closes on what remains of ancient Nubia.

‘Only now do we realize how much pristine archaeology is just waiting to be found,’ says David Edwards, an archaeologist at the University of Leicester in the U.K. ‘But just as we are becoming aware it’s there, it’s gone,’ he adds. Within the next 10 years, Edwards says, ‘most of ancient Nubia might be swept away.’

Read the full article in Undark.


.

Philip K Dick, Do Androids Dream of Electric Sheep?

Where Blade Runner began: 50 years of
Do Androids Dream of Electric Sheep?
Ananyo Bhattacharya, Nature, 7 March 2018

Many know of the book solely through the film. But Blade Runner is only nominally based on the original. Dick’s prescience in Androids lies in his portrayal of a society in which human-like robots have emerged at the same time as advances that make people more pliable and predictable, like machines. The film eschews the intricacies of plot that bring this to the fore in the book…

These days, academic discourse around the work dwells on what distinguishes humans from sophisticated robots — driven by the film. Dick’s approach was more nuanced. The name Deckard, for instance, echoes that of seventeenth-century French philosopher René Descartes, who asked whether it was possible to distinguish, without direct access to their minds, a human from an automaton. Deckard explores that ambiguity, wondering uneasily whether he himself is an android. He passes the Voight-Kampff test but, towards the end of the novel, he recognizes a kind of kinship with his quarry. ‘The electric things have their lives, too,’ he says. ‘Paltry as those lives are.’

Whether such machines should also be accorded rights is a question that researchers wrestle with today. Artificial-intelligence specialist Joanna Bryson, among others, has argued that granting autonomous robots legal personhood would be a mistake: it would render their makers unaccountable. Bryson, an admirer of the book, believes that the mass production of machines with human-like goals and ambitions should be prohibited.

But Dick’s chief preoccupation in Androids is not the almost-human robot as moral subject. His synthetic beings are inhuman in important ways. They are unable to participate in the rituals of Mercerism, for instance. And their leader, Roy, is a brute who is summarily dispatched. (The film endows him with empathy and even literary flair, saving Deckard’s life as he delivers an unforgettable swansong about C-beams that ‘glitter in the dark near the Tannhäuser Gate’.)

Rather, Androids is a meditation on how the fragile, unique human experience might be damaged by technology created to serve us.

Read the full article in Nature.


.

How close are we to a cure for Huntington’s?
Peter Forbes, Mosaic, 6 March 2018

‘Phenomenal’. ‘Ground-breaking’. A ‘game changer’. So read the headlines when the trial’s results went public on 11 December 2017. It’s usually wise to take media coverage of medical research with a pinch of salt, but in this instance the scientific community seemed just as excited – if not more so.

‘I really think this is, potentially, the biggest breakthrough in neurodegenerative disease in the past 50 years,’ said neuroscientist John Hardy of University College London (UCL) when speaking to the BBC. ‘That sounds like hyperbole – in a year I might be embarrassed by saying that – but that’s how I feel at the moment.’

Sarah Tabrizi, a principal investigator at the Huntington’s Disease Centre at UCL and the leader of the trial, is similarly excited: ‘A ray of hope emerged in 1993 with the discovery of the genetic source of the disease. Since then, we’ve been telling HD families that treatments will come. Now we have reason to believe that science has caught up’…

Researchers have been trying to target the mutant HTT gene ever since 1993. They achieved success in mice as long ago as 2000, but it’s taken a lot of further work tailoring a drug and evaluating its effect in mice and non-human primates to get to this stage. The breakthrough delivered by this research is the result of 25 years of painstaking work.

The drug in question is IONIS-HTTRx, developed by California-based Ionis Pharmaceuticals in conjunction with researchers around the world. There are many lines of promising ongoing research, but the Ionis drug – a type of antisense oligonucleotide, or ASO – is the first that’s suggested it might be possible to suppress the disease. An ASO works by sticking to the messenger RNA that carries the information for making a specific protein from a cell’s DNA to its protein-building machinery. Then the ASO destroys the RNA, preventing the protein from being made.
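To make the base-pairing idea concrete, here is a minimal Python sketch of how an antisense sequence relates to its mRNA target. It is illustrative only: the target sequence is invented, and real ASOs such as IONIS-HTTRx use chemically modified backbones and were chosen from thousands of candidates by empirical screening, not by a simple complement rule.

```python
# Illustrative sketch only: the pairing rule for a DNA antisense oligo against
# an mRNA target. Real ASOs (including IONIS-HTTRx) are chemically modified and
# selected empirically; the target sequence below is invented, not the HTT target.

RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_dna(mrna_segment: str) -> str:
    """Return the DNA oligo (5'->3') that base-pairs with an mRNA segment (5'->3')."""
    paired = [RNA_TO_DNA_COMPLEMENT[base] for base in mrna_segment.upper()]
    return "".join(reversed(paired))  # strands pair antiparallel, so reverse the order

print(antisense_dna("AUGGCGACCCUGGAAAAGCUG"))  # hypothetical 21-base mRNA stretch
```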

The Ionis team wanted a way to reduce the amount of mutant huntingtin protein that the faulty HTT gene creates, which is what wreaks havoc in the brain. A crucial issue was whether it would be necessary to target the mutant protein alone (‘the bad guy’, as Tabrizi calls it) or whether a general lowering of huntingtin – both normal and mutant – would work. Tests on mice, run in collaboration with Don Cleveland’s laboratory at the University of California, San Diego, proved the latter to be the case.

The next step was to screen thousands of ASOs to find the one most likely to lower huntingtin effectively in people while remaining safe. ‘I don’t like to use the word “gene silencing”, because ASOs don’t silence,’ says Tabrizi. ‘You’re leaving enough huntingtin to cover its functions.’

With an ASO selected, the UCL trial then confirmed the drug’s safety and its ability, when injected into the spinal cord, to lower huntingtin levels in the cerebrospinal fluid. The excitement of the result lay in its biological effect: its success in lowering the protein in humans was, Tabrizi says, ‘beyond what I’d ever hoped’.

Read the full article in Mosaic.


.

New study tracks the evolution of stone tools
Iona N Smith, Ars Technica, 6 March 2018

For at least 2.6 million years, humans and our ancestors have been making stone tools by chipping off flakes of material to produce sharp edges. We think of stone tools as very rudimentary technology, but producing a usable tool without wasting a lot of stone takes skill and knowledge. That’s why archaeologists tend to use the complexity of stone tools as a way to measure the cognitive skills of early humans and the complexity of their cultures and social interactions.

But because the same tool-making techniques didn’t show up everywhere early humans lived, it’s hard to really compare how stone tool technology developed across the whole 2.6 million-year history of stone tool-making or across the broad geographic spread of early humans. To do that, you’ve got to find a common factor.

So a team led by anthropologist Željko Režek of the Max Planck Institute for Evolutionary Anthropology decided to study whether the length of the sharp, working edge of stone flakes changed over time relative to the size of the flakes. A longer, sharp edge is more efficient and takes more control and skill to create, so Režek and his colleagues reasoned that it would be a good proxy for how well early humans understood the process of working stone and how well they shared that knowledge with each other.
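As a back-of-the-envelope illustration of that kind of proxy, one could express each flake as millimetres of sharp edge produced per gram of stone. The exact metric and data in Režek’s study differ, and the flakes and numbers below are invented purely for illustration.

```python
# Invented numbers illustrating an edge-per-mass proxy for knapping efficiency.
# The actual study's metric and measurements differ; this is only a sketch.
from dataclasses import dataclass

@dataclass
class Flake:
    label: str
    edge_length_mm: float   # total length of sharp working edge
    mass_g: float           # flake mass

def edge_per_mass(flake: Flake) -> float:
    """Sharp edge produced per gram of stone: a rough efficiency proxy."""
    return flake.edge_length_mm / flake.mass_g

assemblage = [
    Flake("earlier assemblage (hypothetical)", edge_length_mm=45.0, mass_g=30.0),
    Flake("later assemblage (hypothetical)", edge_length_mm=80.0, mass_g=25.0),
]

for f in assemblage:
    print(f"{f.label}: {edge_per_mass(f):.2f} mm of edge per gram")
```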

Read the full article in Ars Technica.


.

Inside the OED: Can the world’s biggest dictionary survive the Internet?
Andrew Dickson, Guardian, 23 February 2018

At one level, few things are simpler than a dictionary: a list of the words people use or have used, with an explanation of what those words mean, or have meant. At the level that matters, though – the level that lexicographers fret and obsess about – few things could be more complex. Who used those words, where and when? How do you know? Which words do you include, and on what basis? How do you tease apart this sense from that? And what is ‘English’ anyway?

In the case of a dictionary such as the OED – which claims to provide a ‘definitive’ record of every single word in the language from 1000 AD to the present day – the question is even larger: can a living language be comprehensively mapped, surveyed and described? Speaking to lexicographers makes one wary of using the word ‘literally’, but a definitive dictionary is, literally, impossible. No sooner have you reached the summit of the mountain than it has expanded another hundred feet. Then you realise it’s not even one mountain, but an interlocking series of ranges marching across the Earth. (In the age of ‘global English’, the metaphor seems apt.)

Even so, the quest to capture ‘the meaning of everything’ – as the writer Simon Winchester described it in his book on the history of the OED – has absorbed generations of lexicographers, from the Victorian worthies who set up a ‘Committee to collect unregistered words in English’ to the OED’s first proper editor, the indefatigable James Murray, who spent 36 years shepherding the first edition towards publication (before it killed him). The dream of the perfect dictionary goes back to the Enlightenment notion that by classifying and regulating language one could – just perhaps – distil the essence of human thought. In 1747, in his ‘Plan’ for the English dictionary that he was about to commence, Samuel Johnson declared he would create nothing less than ‘a dictionary by which the pronunciation of our language may be fixed, and its attainment facilitated; by which its purity may be preserved, its use ascertained, and its duration lengthened’. English would not be merely listed in alphabetical order; it would be saved for eternity.

Read the full article in the Guardian.


.

Inostrancevia, devouring a Pareiasaurus, by Alexei Petrovich Bystrow

New fossils are redefining what makes a dinosaur
Carolyn Gramling, Science News, 21 February 2018

‘There’s a very faint dimple here,’ Sterling Nesbitt says, holding up a palm-sized fossil to the light. The fossil, a pelvic bone, belonged to a creature called Teleocrater rhadinus. The slender, 2-meter-long reptile ran on all fours and lived 245 million years ago, about 10 million to 15 million years before scientists think dinosaurs first appeared.

Nesbitt, a paleontologist at Virginia Tech in Blacksburg, tilts the bone toward the overhead light, illuminating a small depression in the fossil. The dent, about the size of a thumbprint, marks the place where the leg bone fit into the pelvis. In a true dinosaur, there would be a complete hole there in the hip socket, not just a depression. The dimple is like a waving red flag: Nope, not a dinosaur.

The hole in the hip socket probably helped dinosaurs position their legs underneath their bodies, rather than splayed to the sides like a crocodile’s legs. Until recently, that hole was among a handful of telltale features paleontologists used to identify whether they had their hands on an actual dinosaur specimen.

Another no-fail sign was a particular depression at the top of the skull. Until Teleocrater mucked things up. The creature predated the dinosaurs, yet it had the dinosaur skull depression.

The once-lengthy list of ‘definitely a dinosaur’ features had already been dwindling over the past few decades thanks to new discoveries of close dino relatives such as Teleocrater. With an April 2017 report of Teleocrater’s skull depression (SN Online: 4/17/17), yet another feature was knocked off the list.

Today, just one feature is unique to Dinosauria, the great and diverse group of animals that inhabited Earth for about 165 million years, until some combination of cataclysmic asteroid impact and volcanic eruptions wiped out all dinosaurs except the birds.

‘I often get asked “what defines a dinosaur”,’ says Randall Irmis, a paleontologist at the Natural History Museum of Utah in Salt Lake City. Ten to 15 years ago, scientists would list perhaps half a dozen features, he says. ‘The only one to still talk about is having a complete hole in the hip socket.’

Read the full article in Science News.


.

Art and activism
Adam Kirsch, Harvard Magazine

The man who emerges from Stewart’s book was, like all the most important thinkers, complex and provocative, a figure to inspire and to argue with. At first sight, Locke’s focus on culture and the arts as a realm of African-American self-making may seem to be less than urgent. When we are still struggling as a country to accept the basic principle that Black Lives Matter, do we really need to read Locke’s reflections on painting and sculpture, music and poetry? This was the very critique he faced from many in his own time—militant activists like W.E.B. Du Bois, A.B. 1890, Ph.D. ’95, for whom Locke’s aestheticism seemed a distraction or a luxury.

But Locke strongly rejected such a division between art and activism. Working at a time when the prospects for progress in civil rights seemed remote, Locke looked to the arts as a crucial realm of black self-realization. ‘The sense of inferiority must be innerly compensated,’ he wrote; ‘self-conviction must supplant self-justification and in the dignity of this attitude a convinced minority must confront a condescending majority. Art cannot completely accomplish this,’ he acknowledged, ‘but I believe it can lead the way.’

It was because he had such high hopes for black art that Locke argued for the separation of art from propaganda. This, for him, was part of the point of the Harlem Renaissance, whose experimental aesthetics often alienated conventional taste, both black and white. ‘Most Negro artists would repudiate their own art program if it were presented as a reformer’s duty or a prophet’s mission,’ he wrote in one of his most important essays, ‘Beauty Instead of Ashes.’ ‘There is an ethics of beauty itself.’ Locke’s idea of beauty tended to be classical and traditional—he was wary of popular arts like jazz and the Broadway musical—but his faith in the aesthetic was quietly radical.

Indeed, long before terms like postmodernism and postcolonialism came on the scene, Locke emphasized the way racial identity was imagined and performed, not simply biologically given or socially imposed. He drew a comparison between the situation of African Americans and that of oppressed peoples like the Irish and the Jews, seeing in the Celtic Revival and Zionism models for a spiritual self-awakening that would have real-world results.

Read the full article in the Harvard Magazine.


.

Serious quantum computers are finally here.
What are we going to do with them?

Will Knight, MIT Technology Review, 21 February 2018

But as IBM’s researchers will tell you, quantum supremacy is an elusive concept. You would need all 50 qubits to work perfectly, when in reality quantum computers are beset by errors that need to be corrected for. It is also devilishly difficult to maintain qubits for any length of time; they tend to ‘decohere,’ or lose their delicate quantum nature, much as a smoke ring breaks up at the slightest air current. And the more qubits, the harder both challenges become.

‘If you had 50 or 100 qubits and they really worked well enough, and were fully error-corrected—you could do unfathomable calculations that can’t be replicated on any classical machine, now or ever,’ says Robert Schoelkopf, a Yale professor and founder of a company called Quantum Circuits. ‘The flip side to quantum computing is that there are exponential ways for it to go wrong.’

Another reason for caution is that it isn’t obvious how useful even a perfectly functioning quantum computer would be. It doesn’t simply speed up any task you throw at it; in fact, for many calculations, it would actually be slower than classical machines. Only a handful of algorithms have so far been devised where a quantum computer would clearly have an edge. And even for those, that edge might be short-lived. The most famous quantum algorithm, developed by Peter Shor at MIT, is for finding the prime factors of an integer. Many common cryptographic schemes rely on the fact that this is hard for a conventional computer to do. But cryptography could adapt, creating new kinds of codes that don’t rely on factorization.
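The point about factorisation is easy to see with a toy example. The sketch below is ordinary classical trial division, not Shor’s algorithm: it factors small numbers instantly, but in the worst case its work grows with the square root of the number being factored, which is what makes the several-hundred-digit moduli used in RSA-style cryptography practically untouchable on conventional machines. Shor’s algorithm, run on a large, error-corrected quantum computer, would do the same job in polynomial time.

```python
# Classical trial division: fine for toy numbers, hopeless for the huge
# semiprimes used in RSA-style cryptography. An illustration of why factoring
# is believed hard classically, not an implementation of Shor's algorithm.

def trial_division(n: int) -> list[int]:
    """Return the prime factorisation of n by testing divisors up to sqrt(n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains after the loop is prime
    return factors

print(trial_division(3 * 5))              # [3, 5] -- instant
print(trial_division(104729 * 1299709))   # work scales with the smaller prime factor
```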

This is why, even as they near the 50-qubit milestone, IBM’s own researchers are keen to dispel the hype around it. At a table in the hallway that looks out onto the lush lawn outside, I encountered Jay Gambetta, a tall, easygoing Australian who researches quantum algorithms and potential applications for IBM’s hardware. ‘We’re at this unique stage,’ he said, choosing his words with care. ‘We have this device that is more complicated than you can simulate on a classical computer, but it’s not yet controllable to the precision that you could do the algorithms you know how to do.’

Read the full article in MIT Technology Review.


.

The anti-European tradition of Europe
Andrei Plesu, Eurozine, 19 February 2018

Allow me to begin with a personal recollection. In the first half of 1998, I took part in a meeting of the Council of the European Union. At the time, the presidency of the Council was held by the United Kingdom, represented by Foreign Secretary Robin Cook. The other participants were the foreign ministers of EU countries and prospective members. With the exception of Bronislaw Geremek, everybody spoke in English during the meeting. To me, Romania’s foreign minister at the time, it seemed polite to speak the language of the meeting’s chairman, Robin Cook. But, during a break, my French counterpart, Hubert Védrine, took me aside and said, in a friendly but rather perplexed tone: ‘I thought Romania was a Francophone country. Why did you choose to speak in English?’ So, after that brief conversation, at the working lunch that followed, I delivered my contribution in French. Romania was then endeavouring to become a member of the European Union and wanted to show it was co-operating with all its potential future partners. However, after lunch, Cook approached me. ‘This morning you spoke in English. Why did you suddenly switch to French?’

This was my first bout of perplexity during my country’s arduous but fascinating path to European integration. It seemed to me that the European ‘Union’ was not the rosy ‘common home’ that everybody was talking about. It was not a pure, enlightened administration. It was a living creature, with its own humours, jealousies, pride, good and bad moods, crises, fevers, and neuroses. Let me add that, since Brexit, my perplexity has acquired a new dimension. I wonder, for example, what the Union’s new lingua franca will be once Britain has left. Will we preserve English, out of a kind of commemorative melancholy, or will we witness a battle for supremacy between French and German, with possible political irritation on the part of the Mediterranean countries?

Engaged as we East Europeans were in the process of re-entering the European community, we were unaware of, or overlooked, the old and lasting tensions of continental history, its constituent polychrome nature. Europe has a long tradition of self-segregation, of multi-dimensionality, of debates on national identity that can go as far as internal conflict. The first failure of our ‘common home’ was the fracturing of the Roman Empire into a western and an eastern segment. Rome broke away from Byzantium, Catholicism from Orthodoxy, Protestantism from Catholicism, the Empire from the Papacy, East from West, North from South, the Germanic from the Latin, communism from capitalism, Britain from the rest of the continent. The spectre of division is what the Belgian philosopher Jacques Dewitte (admiringly) called the ‘European exception’. We easily perceive the differences that make up our identity; we are able at any time to distance ourselves from ourselves. We invented both colonialism and anti-colonialism; we invented Eurocentrism and the relativisation of Europeanism. The world wars of the last century began as intra-European wars; the European West and East were for decades kept apart by a ‘cold war’. An impossible ‘conjugal’ triangle has constantly inflamed spirits: the German, the Latin and the Slavic worlds.

An increasingly acute irritation is taking hold between the European Union and Europe in the wider sense, between central administration and national sovereignty, between the Eurozone countries and those with their own currencies, between the Schengen countries and those excluded from the treaty. All seasoned with the noble rhetoric of ‘unity’, a ‘common house,’ and continental solidarity. Where were we, the newcomers, to place ourselves in a landscape that by no means erred towards monotony? In the following, I shall choose three of the front-lines that marked and still mark the family portrait of Europe’s complicated fabric: 1) the North–South division; 2) the East–West division; and 3) the centre–periphery division.

Read the full article in Eurozine.


.

In praise of negative reviews
Rafia Zakaria, The Baffler, 21 February 2018

‘Startlingly smart,’ ‘remarkable,’ ‘endlessly interesting,’ ‘delicious’. Such are the adulatory adjectives scattered through the pages of the book review section in one of America’s leading newspapers. The praise is poignant, particularly if one happens to be the author, hoping for the kind of testimonial that will drive sales. Waiting for the critic’s verdict used to be a moment of high anxiety, but there’s not so much to worry about anymore. The general tone and tenor of the contemporary book review is an advertisement-style frippery. And, if a rave isn’t in order, the reviewer will give a stylized summary of sorts, bookended with non-conclusions as to the book’s content. Absent in either is any critical engagement, let alone any excavation of the book’s umbilical connection to the world in which it is born. Only the longest-serving critics, if they are lucky enough to be ensconced in the handful of newspapers that still have them, paw at the possibility of a negative review. And even they, embarking on that journey of a polemical book review, temper their taunts and defang their dissection. In essence they bow to the premise that every book is a gem, and every reviewer a professional gift-wrapper who appears during the holidays.

It is a pitiable present, this one that celebrates the enfeebling of literary criticism, but we were warned of it. Elizabeth Hardwick, that Cassandra of criticism, predicted it five decades ago, when she penned ‘The Decline of Book Reviewing’ for Harper’s magazine. It is indeed some small mercy to her that she did not live to see its actual and dismal death. Hardwick would have winced at it and wept at the reincarnation of the form as an extended marketing operation coaxed out by fawning, persistent publicists. In Hardwick’s world reviewers and critics were feared as ‘persons of dangerous acerbity’ who were ‘cruel to youth’ and (often out of jealousy) blind to the freshness and importance of new work. Hardwick thought this an unfair estimation, but she would have found what exists now more repugnant. The reviewers at work now are rather the opposite, copywriters whose task it is to arrange the book in a bouquet of Wikipedia-blooming literary references.

This list of complaints is meant to argue that literary criticism should be critical just for the sake of it, giving an unflinching excoriation of a book’s content or a cold-eyed assessment of what it lacks. Hardwick herself underscored this when she pointed a finger at the ‘torpor,’ the ‘faint dissension’ and ‘minimal style’ that had infected the book review in her time. What’s new is that this faint style has developed a politics or an ethics that gives non-judgment in the book review a high-minded justification. Per its pronouncements, all reviewers (and readers) must check their biases and privilege prior to engaging with a text.

It is a lovely sounding idea, particularly in its attempt to ground the extinction of the negative review in a commitment to fairness and equality. Kristina Marie Darling lays out the rest in her recent essay for the Los Angeles Review of Books titled ‘Readerly Privilege and Textual Violence: An Ethics of Engagement.’ Darling, who is white, and was once a ‘younger female contingent laborer who more than likely qualified for food stamps,’ says textual violence ‘takes many forms,’ the most egregious occurring when ‘the reader makes inferences that extend beyond the work as it appears on the page.’ In the example she offers, a reviewer writing for The Rumpus about a book of autobiographical essays dares to wonder whether the author’s excessively picky eating (showcased in the book) may point to an eating disorder. There it is, then: that sin of considering the content in relation to one’s own views. It is a no-can-do for Darling, who, after going through several similar iterations, concludes with an admonition: ‘reviewers are not arbiters of taste,’ she scolds, but rather ‘ushers in a room full of empty chairs.’

Read the full article in the Baffler.


.

The images are, from top down: Dana Schutz’s ‘Open Casket’; still from Ex Machina (2014); a terracotta warrior; Betty Lee, White Matter Fiber Tracts: Visualizations of fiber track data from diffusion MR imaging; runner up, Best Representation of the Human Connectome in the 2011 Brain Art competition (Photo courtesy Laboratory of Neuro Imaging; UCLA); book cover of Philip K Dick’s ‘Do Androids Dream of Electric Sheep?’; ‘Inostrancevia, devouring a Pareiasaurus’ by Alexei Petrovich Bystrow, from ‘Paleoart: Visions of the Prehistoric Past’
