Daniel Kahneman, 87, was awarded the Nobel prize in economics in 2002 for his work on the psychology of judgment and decision-making. His first book for a general audience, Thinking, Fast and Slow, a worldwide bestseller, set out his revolutionary ideas about human error and bias and how those traits might be recognised and mitigated. A new book, Noise: A Flaw in Human Judgment, written with Olivier Sibony and Cass R Sunstein, applies those ideas to organisations. This interview took place last week by Zoom, with Kahneman at his home in New York.

I guess the pandemic is quite a good place to start. In one way it has been the biggest ever hour-by-hour experiment in global political decision-making. Do you think it's a watershed moment in the understanding that we need to "listen to science"?

Yes and no, because clearly, not listening to science is bad. On the other hand, it took science quite a while to get its act together.

One of the key problems seems to have been the widespread inability to grasp the basic idea of exponential growth. Does that surprise you?

Exponential phenomena are almost impossible for us to grasp. We are very experienced in a more or less linear world. And if things are accelerating, they're usually accelerating within reason. Exponential change [as with the spread of the virus] is really something else. We're not equipped for it. It takes a long time to educate intuition.

Do you think the cacophony of opinion on social media exacerbates that?

I know too little about social media; there's just too large a generational gap. But clearly the potential for misinformation to spread has grown. It's a new kind of media that has essentially no responsibility for accuracy, and not even reputational controls.

Could you define what you mean by "noise" in the book, in layman's terms – how does it differ from things like subjectivity or error?

Our main subject is really system noise. System noise is not a phenomenon within the individual; it's a phenomenon within an organisation or within a system that is supposed to come to decisions that are uniform. It's really a very different thing from subjectivity or bias. You have to look statistically at a great number of cases, and then you see noise.

Some of the examples you describe are shocking – the extraordinary variance in sentencing for the same crimes (influenced even by such external matters as the weather, or the weekend football results), say, or the massive discrepancies in insurance underwriting, medical diagnosis or job interviews based on the same baseline information. The driver of that noise often seems to lie with the protected status of the "experts" doing the choosing. No judge, I imagine, wants to acknowledge that an algorithm would be fairer at delivering justice?

The judicial system, I think, is special in a way, because it's some "wise" person who is deciding. You have a lot of noise in medicine, but in medicine there is an objective criterion of truth.

Have you been on a jury yourself – or spent much time in courtrooms?

I haven't. But I have had many conversations with judges about the possibility of doing research on how noise affects their judgment. But, you know, it's not in the interest of the judicial community to investigate themselves.

I suppose people are instinctively or emotionally still more inclined to trust human systems than more abstract processes?
That is certainly the case. We see that, for example, in attitudes to vaccination. People are willing to take far, far fewer risks when they face vaccination than when they face the disease. So this gap between the natural and the artificial is found everywhere. In part that is because when artificial intelligence makes a mistake, that mistake looks completely foolish to humans, or almost evil.

You don't talk about driverless cars in your analysis. But that, I guess, is becoming a key arena of this argument, isn't it? However much safer automated cars might be statistically, every time they cause an accident it will be excessively magnified?

Being a lot safer than people is not going to be enough. The factor by which they have to be safer than humans is really very high.

It's 50 years since you and the late Amos Tversky first started researching these questions. Do you feel that your conclusions about measurable human bias and fallibility should have been more widely understood by now?

You know, we didn't have any particular expectations of changing the world when we did our research. And my own experience of how little this knowledge has changed the quality of my own judgment can be sobering. Avoiding noise in judgment is not really something individuals are going to be very good at. I really put my faith, if there is any faith to be placed, in organisations.

I wonder if you see your work in almost a satirical tradition, highlighting human folly?

Not really. I see myself as quite an objective psychologist. Obviously, humans are limited, but they're also pretty marvellous. In Thinking, Fast and Slow, I really was trying to talk about the marvels of intuitive thinking and not only about its flaws – but flaws are more amusing, so more attention gets paid to them.

One of the things that struck me reading the book is that however much individuals and organisations profess the desire to be efficient and rational, there's a fundamental part of us that is bored by predictability and just wants to roll the dice. Do you think you take enough account of that?

There are many domains where you really want diversity and creativity. But there is also a need for uniformity in well-defined tasks. If the effort to achieve uniformity leaves people unmotivated, or if it becomes excessively bureaucratic, that in itself can be a problem. That is something organisations are going to have to negotiate.

I was struck watching the American elections by just how often politicians on both sides appealed to God for guidance or help. You don't talk about religion in the book, but does supernatural faith add to noise?

I think there is less difference between religion and other belief systems than we think. We all like to believe we're in direct contact with truth. I will say that in some respects my belief in science is not very different from the belief other people have in religion. I mean, I believe in climate change, but I have no idea about it, really. What I believe in is the institutions and methods of the people who tell me there is climate change. We shouldn't think that not being religious makes us so much cleverer than religious people. The arrogance of scientists is something I think about a lot.

You end your book with some ideas for eliminating noise – creating checklists for decision-making, having "designated decision observers" and so on. I was reminded of those studies showing that corporate efforts to reduce unconscious racial and gender bias through compulsory training have been either ineffective or counterproductive. How do you take account of such unforeseen consequences?
There is always a risk of that. And those ideas you mention are largely untested but, we think, worth considering. Others in the book are founded on more experience and are more solid.

Do you feel that there are wider dangers in using data and AI to augment or replace human judgment?

There are going to be massive consequences of that change, and they are already beginning to happen. Some medical specialties are clearly in danger of being replaced, certainly in terms of diagnosis. And there are rather frightening scenarios when you're talking about leadership. Once it's demonstrably true that you can have an AI that has far better business judgment, say, what will that do to human leadership?

Are we already seeing a backlash against that? I guess one way of understanding the election victories of Trump and Johnson is as a reaction against an increasingly complex world of information – their appeal is that they are simple, impulsive chancers. Are we likely to see more of that populism?

I have learned never to make forecasts. Not only can I certainly not do it – I'm not sure it can be done. But one thing that looks very likely is that these huge changes are not going to happen quietly. There is going to be massive disruption. The technology is developing very rapidly, possibly exponentially. But people are linear. When linear people are faced with exponential change, they're not going to be able to adapt to it very easily. So clearly, something is coming… And clearly AI is going to win [against human intelligence]. It's not even close. How people are going to adjust to this is a fascinating problem – but one for my children and grandchildren, not me.

Your own life began in even more extreme uncertainty – as a boy in occupied France, your father was first arrested by the Nazis as a Jew, then spared, and your family escaped into hiding. How much of your lifelong interest in these questions – the need to understand human motivations – was rooted in those anxieties and fears, do you think?

When I look back, I think I was always going to be a psychologist. I had curiosity from a really early age about how the mind works. I don't think my personal history had much to do with it, though; it was always there.

Do you feel that you're fundamentally still the child that you were when you were six or seven?

Yes. There's certainly a continuity. I can still recognise something within myself.

When you embarked on this work, could you imagine you would still be hard at it at 87?

No, I imagined I would be dead! But to my surprise, I still have the same curiosity. I have been collaborating on several projects and investigations since I finished the book. One is on how the inability to solve the famous "bat and ball problem" [a bat and a ball cost $1.10 in total; the bat costs $1 more than the ball; the intuitive answer, that the ball costs 10 cents, is wrong – it costs 5 cents] correlates with belief in God and with belief that 9/11 was a conspiracy. It's all as fun to me as it ever was.

Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony and Cass R Sunstein is published by HarperCollins (£25).