Given all the suffering, pain and destruction produced by humanity, Émile Torres, a non-binary philosopher specialising in existential threats, thinks that it would not be a bad thing if humanity ceased to exist. “The pro-extinctionist view,” they say, “immediately conjures up for a lot of people the image of a homicidal, ghoulish, sadistic maniac, but actually most pro-extinctionists would say that most ways of going extinct would be absolutely unacceptable. But what if everybody decided not to have children? I don’t see anything wrong with that.”

Torres has just written a book called Human Extinction: A History of the Science and Ethics of Annihilation. It runs to more than 500 pages and is an impressive study of a neglected subject. Their basic thesis is that while human extinction is an ancient concern, the rise of Christianity removed it from public discourse. Despite its preoccupation with end times, Armageddon and apocalypse, Christianity emphasised the inevitable salvation and survival of humanity.

Until now, Torres has been best known as a thorn in the side of the longtermist movement, through trenchant pieces for magazines such as Salon, Aeon and Foreign Policy. The new book is a more academic rendering of the arguments they have rehearsed in those publications.

Longtermism is a relatively new branch of moral philosophy that has proved particularly popular in Silicon Valley – Elon Musk, for example, has said that it is a close match to his philosophy. Its proponents include Nick Bostrom, who set up the Future of Humanity Institute (FHI) at Oxford, Toby Ord, author of The Precipice, and William MacAskill, author of What We Owe the Future, the latter two of whom have connections to the FHI. These philosophers argue that we should focus our attention on the deep future and act according to its needs.
The idea is that the potential size of humanity in the millions of years to come is almost infinitely larger than the world’s current population, and that we ought to prioritise those trillions of unborn humans over the more short-term needs of the billions actually alive today.

It’s a strand of thought that grew out of the effective altruism movement, which aims to maximise the good done to other people, encouraging followers to seek well-paid jobs in fields such as finance and then to donate a large chunk of their earnings to charitable causes. Sam Bankman-Fried, the founder of the cryptocurrency exchange FTX, who has been charged with fraud and money laundering, was mentored by MacAskill and was a staunch advocate and funder of longtermist organisations.

Torres was once a longtermist. Six years ago, under the name Phil Torres, they wrote a book called Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks, which essentially reiterated longtermist arguments. But in the years since, they have not just turned away from longtermism but become its most vehement critic. So what drew them to the cause initially, and why the change of heart?

“I really was a believer in the vision, because I think there is a seductive attraction to grand visions of the future and big numbers,” they say. Torres came from an evangelical Christian background in Maryland and, having lost their faith, had a “religion-shaped hole”. They first discovered transhumanism, a scientific and philosophical movement that advocates the use of emerging technologies such as genetic engineering, cryonics and artificial intelligence to improve human capabilities. It checked many of the same boxes as Christianity had for Torres: “The promise of immortality, the promise of a future in which suffering is abolished.” Many transhumanists are also longtermists, and Torres “slid from one to the other”. “I just hadn’t thought really deeply about the longtermist views,” they say.
“Nor had I studied the history of utopian movements that became violent. And once I did that, I realised that longtermism is similar in all of the most important respects to many of these utopian movements that became violent. Because of that I became very worried about what this longtermist ideology could justify in the minds of true believers in the future.”

Earlier this year, the AI theorist Eliezer Yudkowsky, who is associated with longtermist thinking, wrote an op-ed for Time magazine in which he argued that the world should not just institute a moratorium on artificial intelligence development but also be prepared to use nuclear arms to shut down large rogue computer farms that flouted the moratorium. “That sets a precedent,” says Torres. “And you’ve also got people who are less well known in the longtermist community who nonetheless could be dangerous if they take this techno-utopian vision seriously.”

The reason Yudkowsky advocates the theoretical use of nuclear arms against AI is that he believes AI, on its current course, will lead to the complete extinction of humanity and all other biological life. It’s an article of faith among longtermists that an event that wiped out 99% of humanity would be vastly preferable to one that killed off 100%. The difference between the two, on this view, is not 1% but the difference between a future filled with human flourishing and no future at all. That’s what’s known as the “opportunity cost”, and almost any action can be considered acceptable to avoid paying it.

It’s this kind of unapologetic utilitarianism that Torres believes is the most dangerous aspect of the longtermist perspective. They cite the tendency of longtermists to underplay the significance of the climate emergency because it is not seen as likely to cause total extinction.
In Human Extinction, Torres quotes the philosopher Peter Singer, arguably the godfather of effective altruism, who has also been sympathetic to longtermism but guards against its most radical interpretations. “Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth,” Singer has written.

One example of radical longtermism, Torres offers, is what’s known as the “repugnant conclusion”, a phrase coined by the late British philosopher Derek Parfit. It refers to the way wellbeing can be evaluated by quantity over quality. Torres sets out the relevant thought experiment in their book. If we imagine a population of 1 billion people, each with a wellbeing value of 100 (ie supremely well and happy), then it yields a total wellbeing of 100bn units. But if there is a population of 1 trillion, each with a wellbeing value of only 1 (a life scarcely worth living), then it yields a total wellbeing value of 1tn units. As 1tn is larger than 100bn, a totalist utilitarian would consider it the better outcome, although 1 trillion people would be living in misery instead of 100 billion in bliss.

Many longtermists, including MacAskill, reject the repugnant conclusion, but Torres argues that any projection deep into the future will tend towards seeing people “not as ends but as means of maximising value”, an understanding, they note, that gets things exactly backwards: “happiness should matter for the sake of people, not people for the sake of happiness”.

For Torres, the happiness of future generations is pure abstraction. As they do not exist, there is no loss if they never do exist. As they write: “I am, tentatively, inclined to agree with Schopenhauer’s sentiment that Being Never Existent would have been best.
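For readers who want to check the sums, the totalist calculation behind Parfit’s thought experiment can be sketched in a few lines of code. This is an illustrative sketch only: the function name is ours, and the figures are simply the ones used in the example above.

```python
# Totalist utilitarianism scores a world by summing wellbeing across everyone in it.
def total_wellbeing(population: int, wellbeing_per_person: int) -> int:
    return population * wellbeing_per_person

# 1 billion people, each supremely happy (wellbeing 100).
blissful_world = total_wellbeing(1_000_000_000, 100)       # 100bn units

# 1 trillion people, each with a life scarcely worth living (wellbeing 1).
miserable_world = total_wellbeing(1_000_000_000_000, 1)    # 1tn units

# The repugnant conclusion: on a purely totalist count,
# the vast, barely-happy world comes out "better".
print(miserable_world > blissful_world)  # True
```

On this arithmetic, only the total matters; how thinly the wellbeing is spread across individuals never enters the calculation, which is exactly the feature critics find repugnant.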
Those who disagree with this find themselves in the uncomfortable position of arguing that all the good things that have happened throughout human history can somehow compensate for, or counterbalance, all the bad things that have happened – a claim that, I believe, most people would find difficult or impossible to justify after a few minutes of reflecting on the most horrific crimes and atrocities of our past.”

If longtermism can be characterised as dangerously utopian, then it’s also easy to see how Torres’s position could be viewed as worryingly nihilistic. They reject that label, pointing out that their concern is the avoidance of human suffering, including any suffering that would occur through humanity’s extinction. While they can appreciate the theoretical benefits of not existing, in practice it’s not an end they wish to see or promote.

In a sense, then, we’re back to old-fashioned philosophising, in which people of differing opinions robustly test the logic of their positions without vicious enmity. Except that has not been the case in this debate. For Torres, the effective altruists and longtermists are part of a sinister cult with an elitist agenda hidden behind a benign and charitable front, whereas many in that community have accused Torres of factual misrepresentation and intellectual dishonesty. The dispute has descended into nasty personal attacks, as accusation and counteraccusation have crisscrossed like artillery on a frontline.

“Since I’ve started to critique them publicly, I’ve been deluged by tweets that are harassing,” they say. “I’ve gotten threats of physical violence. I got an email last week that referenced a film about suicide and murder. It said: ‘I hope that what happens in the film isn’t necessary for you to change your ways.’”

Yet online there are accounts of Torres harassing the philosopher Peter Boghossian and the British cultural theorist Helen Pluckrose.
“I mean,” they say, “Peter Boghossian and Helen Pluckrose are extreme individuals with radical far-right views [both would reject that characterisation], and it’s just not true. It’s a coordinated campaign of harassment.”

Whatever the truth of the matter, it’s a shame that two mutually hostile camps have taken shape, because there is doubtless much that could benefit both sides if there were a more productive exchange of ideas. Torres has written a book that takes the subject of human extinction very seriously, both as a cultural phenomenon and as a potential reality. It also raises a number of valid questions about the longtermist project. Equally, there are many in the community they critique who are motivated by noble instincts and genuinely wish to improve the lot of humanity.

Of course, it’s well established that the road to hell is paved with good intentions, as the story of Bankman-Fried appears to illustrate. But it’s fair to say that both sides agree that humanity faces a number of all-too-real existential threats. The problem is that, amid the surrounding arguments, that most critical of concerns has come to look like a point-scoring posture rather than the point itself.