
Daniel Kahneman Answers Your Questions

Two weeks ago, we solicited your questions for Princeton psychology professor and Nobel laureate Daniel Kahneman, whose new book is called Thinking, Fast and Slow. You responded by asking 45 questions. Kahneman has answered 22 of them in one of the more in-depth and wide-ranging Q&As we’ve run recently. It’s a great read. As always, thanks for your questions, and thanks to Daniel Kahneman for taking the time to answer so many of them.

Q. Now that we understand reason as being largely unconscious, motivated by emotion, embodied and constituted by many biases and heuristics, where do you see the future of cognitive science going? Are we at the beginning stages of a paradigm shift? -McNerney

A. The only way I know to predict the future of science is to look at the choices of beginning graduate students. The specialization they acquire now will probably determine their activities for the next 15-20 years. By this measure, the near-term future of cognitive science seems to be as an approach to neuroscience, which combines methods and concepts drawn from psychology and from brain research. Signs of emotional arousal are salient in the reactions to many events – and especially to decisions – so the conceptual separation between emotion and pure cognition seems likely to crumble.

Q. I’ve read a fair bit of your journal articles (particularly the ones covering intuition), and while I don’t necessarily agree with everything you say (a fair amount of it, though), I think you make very clear, concise and well-supported arguments. Recently, I’ve been steeping myself in the “Integral Theory” literature. One of the books I’ve been reading on the topic discusses the different levels of thinking at the different “stages” of thinking on the spiral [spiral dynamics]. I haven’t yet had the chance to read this book (still waiting for it to come in at the local bookstore), but I’m curious as to where you see this book as it relates to the levels of thinking on the spiral. Maybe more importantly, as a precursor, what are your thoughts/feelings on “Integral Theory” and its implications for academic disciplines (psychology, economics, etc.) other than its own? –Jeremiah Stanghini

A. You have made me curious, but I don’t know anything about Integral Theory.

Q. As you found, humans will take huge, irrational risks to avoid taking a loss. Couldn’t that explain why so many Penn State administrators took the huge risk of not disclosing a sexual assault? -Tim

A. In such a case, the loss associated with bringing the scandal into the open now is large, immediate and easy to imagine, whereas the disastrous consequences of procrastination are both vague and delayed. This is probably how many cover-up attempts begin. If people were certain that cover-ups would have very bad personal consequences (as happened in this case), we might see fewer cover-ups in the future. From that point of view, the decisive reaction of the board of the University is likely to have beneficial consequences down the road.

Q. Is Prospect Theory challenged by how investors are actually behaving in European government bonds? Investors seem to prefer the sure loss of investing in German Bunds, which carry a certain negative real yield, over the bonds of PIGS countries (e.g., Italy), which offer positive real yields and a possible-but-not-certain loss. What do you think about it? Thank you very much, Professor. -Matteo Serio

A. This question takes me out of my depth. To apply prospect theory to this case, you would need to know the reference point, as well as the alternatives and the perceived probabilities of different outcomes. I am not sure that German bonds are perceived as a sure loss, and even then prospect theory allows for the purchase of insurance unless the event of an Italian default is perceived as very likely.
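
Kahneman’s caveat about reference points and perceived probabilities can be made concrete with a back-of-the-envelope calculation. The sketch below uses the value and probability-weighting functions (and their standard parameter estimates) from Tversky and Kahneman’s 1992 cumulative prospect theory; the yields, the perceived default probability, and the size of the default loss are illustrative assumptions for the example, not market data.

```python
# Illustrative only: compares a certain small loss (Bunds at a negative real
# yield) with a gamble (Italian bonds: a modest gain, small chance of a large
# default loss) under cumulative prospect theory. Value/weighting functions and
# parameters follow Tversky & Kahneman (1992); the outcomes and the default
# probability below are assumptions made up for this example.

ALPHA = 0.88        # curvature of the value function
LAMBDA = 2.25       # loss-aversion coefficient
GAMMA_GAIN = 0.61   # probability weighting for gains
GAMMA_LOSS = 0.69   # probability weighting for losses

def value(x):
    """Value of an outcome x measured relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, gamma):
    """Inverse-S decision weight: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Outcomes per 100 invested, relative to an assumed "hold cash" reference point.
bund = value(-1.0)                                        # certain -1% real return
p_default = 0.05                                          # assumed perceived default probability
italy = (weight(1 - p_default, GAMMA_GAIN) * value(3.0)   # +3% if no default
         + weight(p_default, GAMMA_LOSS) * value(-40.0))  # -40% haircut if default

print(f"Sure loss (Bund):   {bund:.2f}")
print(f"Risky bond (Italy): {italy:.2f}")
```

With these made-up numbers, the overweighted 5% chance of a large loss makes the risky bond look worse than the certain small loss, so the behavior the question describes need not contradict prospect theory; change the reference point or the perceived default probability and the ranking can flip.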

Q. I have been thinking about what I call “consumption traps” since I was a kid in the early 1990s. I think a huge number of people in the U.S. over-consume optional goods to the point where it has drastically negative impacts on their life prospects. Why would they do this? My theory is that they fall (or marketers push them) into “consumption traps.” In college, I discovered that behavioral economics was ferreting out a lot of these inefficiencies in “homo economicus” in the 1980s. One I have long speculated about is something like “sticky expectations.” It seems that some modern consumer goods (particularly electronics) are improving at a rate that drastically outstrips people’s ability to psychologically adjust. Today’s iPod would have been worth $10,000 or more as a consumer good 10 years ago. I am still partially psychologically identical with the person who lived in that time. So when I go out and drop $150 on an iPod I cannot really afford, I feel a lot better about the decision than I normally would because in my mind some of that residual $10,000 valuation remains. So it activates some of the “this is a tremendous deal, act now” circuitry in my brain even though it is not, at present, actually a deal. On the other hand, perhaps people have always been this prone to over-consumption, and thus “this increasing rate of improvement in goods effect” is a theory explaining a phenomenon that doesn’t exist. I haven’t yet encountered anything regarding this. Are you familiar with any studies/research on the topic? -Joshua Northey

A. The idea is new to me, and it seems quite compelling. You propose that the new standard model is evaluated according to an earlier reference point, relative to which it is a luxury. Another possibility is that the current model very quickly becomes the reference point even for people who don’t own the good, so that failing to upgrade is coded as accepting a loss. The two ideas are not incompatible.

Q. Problems in healthcare quality may be getting worse before they get better, and there are countless difficult decisions that will have to be made to ensure long-term system improvement. But on a daily basis, doctors and nurses and patients are each making a variety of decisions that shape healthcare on a smaller but more tangible level. How can the essence of Thinking, Fast and Slow be extracted and applied to the individual decisions that patients and providers make so that the quality of healthcare is optimized? -Brian S. McGowan

A. I don’t believe that you can expect the choices of patients and providers to change without changing the situation in which they operate. The incentives of fee-for-service are powerful, and so is the social norm that health is priceless (especially when paid for by a third party). Where the psychology of behavior change and the nudges of behavioral economics come into play is in planning for a transition to a better system. The question that must be asked is, “How can we make it easy for physicians and patients to change in the desired direction?”, which is closely related to, “Why don’t they already want the change?” Quite often, when you raise this question, you may discover that some inexpensive tweaks in the context will substantially change behavior. (For example, we know that people are more likely to pay their taxes if they believe that other people pay their taxes.)

Q. With the launch of Siri and Apple’s stated aim of using the data collected to improve the performance of its AI, should we expect these types of quasi-intelligences to develop the same behavioral foibles that we exhibit, or should we expect something completely different? And if something different, would that something be more likely to reflect the old “rational” assumptions of behavior, or some totally other emergent set of biases and quirks based on its own underlying architecture? My money’s on emergent weirdness, but then, I don’t have a Nobel Prize. -Peter Bennett

A. Emergent weirdness is a good bet. Only deduction is certain. Whenever an inductive short-cut is applied, you can search for cases in which it will fail. It is always useful to ask “What relevant factors are not considered?” and “What irrelevant factors affect the conclusions?” By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.

Q. So of course there’s been a whole slew of research showing that we are quite irrational and prone to errors in our thinking. Has there been research to help us be more rational? -T

A. Yes, of course, many have tried. I don’t believe that self-help is likely to succeed, though it is a pretty good idea to slow down when the stakes are high. (And even the value of that advice has been questioned.) Improving decision-making is more likely to work in organizations (together with Olivier Sibony and Dan Lovallo, I published an attempt in that direction in the Harvard Business Review in June 2011.)

Q. Out of curiosity, why did you think that Freakonomics would change the world for the worse? In analyzing where our intuitions may lead us astray, it seems to be part of an intellectual movement you and Tversky in many ways began. Also, in my life, I have found reason and logic to be the best tools in making my life better, but I supplement with intuition when it comes to trusting people and love. Are there examples of decisions where we should trust our intuition more than our reasoning mind? -vimspot

A. It was a joking comment on the discussion of technological solutions to the global warming problem in Superfreakonomics. I thought that the favorable presentation of some solutions could suggest to readers that there is not much to worry about if the problem is easily solved. Not a serious disagreement. As to your other questions, Tim Wilson (who recently published Redirect) and some colleagues showed that when people choose between two posters, they are more likely to be satisfied with their choice some time later if they made it intuitively rather than by careful deliberation.

Q. I would like to know your opinion on the relation between pleasure, utility and happiness. Is it possible that the maximization of expected utility (estimated from the recall of past utility) could lead to a different outcome than the maximization of happiness? What about the maximization of pleasure? -martin tetaz

A. I discuss that in the final chapters of the book. Yes, being happy (on average) in the moment and being satisfied retrospectively are not the same thing. People are most likely to be happy if they spend a lot of time with people they love, and most likely to be satisfied if they achieve conventional goals, such as high income and a stable marriage.

Q. Have you applied the “focusing illusion” concept to voting? Might it be an explanation for What’s the Matter With Kansas?, i.e. that voters may ratify a political platform that goes against their interests because of abortion legislation, etc. -frankenduf

A. I wanted to say “you are right,” but the more accurate statement is that I agree with you. A lot of political talk is designed to focus people’s attention on very specific issues about which they feel strongly.

Q. Suppose I’m the Chief Academic Officer at a university and find the number of first-year students who fail academically to be higher than I would expect (after all, their applications and credentials suggested they could do college-level work when they were admitted). How can the ideas in the book be applied to improving student learning in college? -Gary

A. Your problem is one of many to which my book provides no solution… One useful idea is to begin by diagnosing the problem, using a specific question: “Why aren’t the students studying more than they do?” When you make a detailed list of answers, you will commonly find some that can be modified, and thereby improve the situation.

Q. It would be interesting to hear your perspective on the rationality of sovereigns as it relates to the people who live in the sovereign. Specifically, in the Euro case, are governments just as irrational as the people they govern? There would seem to be a parallel here, especially of late with the crisis. -Andy C

A. This touches on a very interesting question, to which we do not have a complete answer: do people (and institutions) act more reasonably when the stakes are very high? There are good reasons to expect a negative answer. High-stakes problems are likely to have important unique features, which reduce the relevance of previous experiences. The intuitive judgments of a few people (Angela Merkel, Nicolas Sarkozy, etc.) play a crucial role. There is little reason to believe that they will come up with optimal solutions.

Q. What part of Superfreakonomics did you not like? I’m betting it was the climate change section. Why did you not like it? Is it another case of your bias in finding hubris in others? -kevin

A. You win the bet.

Q. I’m a graduate student in the humanities (but with an undergraduate training in the sciences), and I often employ research from the science of the mind (including your work) in my papers and theses. I find there is a lot of resistance to the idea of heuristics and biases that function on such an automatic or biological level, but not because my professors and colleagues believe man is rational — in fact, they buy fully into the idea that man is irrational, but only seem comfortable with cultural explanations. Personally, I try to take both culture and biology into account, as well as recognize that both feed into each other. From your experience, what is a good, yet polite, way to help intelligent people who are resistant to scientific ideas recognize the value of such insights into the mind’s automatic workings? -Paul

A. It is useful to distinguish the content of thoughts from the mechanisms of thinking. Some biases (e.g., preconceived notions, unscientific beliefs, specific stereotypes) are biases of content and are likely to be culture-bound. Other biases (e.g., the neglect of statistics, the neglect of ambiguity, the general fact that we are prone to stereotyping) are inevitable side effects of the operation of general-purpose psychological mechanisms.

Q. How can I identify my incentives and values so I can create a personal program of behavioral conditioning that associates incentives with the behavior likely to achieve long-term goals? Basically, I want to build my own Skinner box with retirement savings and weight loss as outcomes, but I have to overcome the incentives of buying toys and eating lasagna. -Basil White

A. The best sources I know for answers to the question are included in the book Nudge by Richard Thaler and Cass Sunstein, in the book Mindless Eating by Brian Wansink, and in the books of the psychologist Robert Cialdini.

Q. Let me begin by saying I am a huge fan of your work. We read several of your articles in my graduate program at UofC (Public Policy), and I have become increasingly interested in the field of decision science. As someone interested in pursuing additional study in the field, where do you see it going? What are some of the most exciting problems you wish you could have addressed but were unable to? -Andrew

A. Some of my best and smartest friends disagree, but I think neuroeconomics has considerable promise. I have no regrets about my choices of research topics, but I sometimes wish I were 20 years younger – I would have switched to brain research.

Q. While in the middle of reading your book, I found myself thinking about the effort required to work in an environment dominated by the opposite gender. I’ve been engaging in a lot of discussion on G+ about the lack of women in STEM. Taking my career in engineering as a single case of anecdotal evidence, I would propose that it requires quite a bit of System 2 effort to interpret communication and intent of co-workers, as well as predict behavior patterns, when you are not the dominant gender. With experience, over time, this becomes a System 1 task. For the males, far more is System 1 from the very start, so the stress is less and they can concentrate on work tasks. This hurdle may be enough to deter women from these challenging disciplines, since their cognitive load is higher to accomplish the same goals. They might assume that this handicap remains throughout their career, rather than decreasing over time. Does this seem likely? If so, what are the implications? Any suggestions about how to ameliorate the situation? My apologies if the answers to my questions are later in the book. –Mary Robinson

A. Being self-conscious takes up mental capacity and is certainly not good for performance. Furthermore, the more self-conscious you are, the more likely you are to interpret (and sometimes misinterpret) the attitudes of others as gender-based, which is bound to make things worse. However, there is hope: self-consciousness is likely to diminish when you are in a stable environment, interacting with people you know well. The trend appears to be favorable: attitudes among men are improving and the representation of women in many male-dominated occupations is rising, so the future is likely to be better than the past.

Q. You were deeply involved in the mapping of a large number of cognitive biases. What do you think the most promising current directions are in the area of debiasing? -Geoff

A. Debiasing is more likely to succeed in organizations than at the individual level.

Q. Do you have a perspective on how human thinking, intuition, decision-making process and biases have changed over time? I am curious how we would compare, for instance, people who had learned almost everything orally centuries ago, people who had grown up with television but not internet, and the new generation that is growing up with internet and much more interactivity than before. -Joao G

A. In the terminology I use in the book, two things have to happen for a bias to be manifest in judgment: (1) System 1 comes up quickly with a predictably incorrect response; (2) System 2 endorses it. My guess is that the basic machinery of perception and memory has changed little over the last few centuries, but the content of our world knowledge has changed, and the knowledge available to System 2 has changed. I don’t know enough even to guess what the internet is doing to young minds.

Q. I will be starting a psychology PhD next year. If you were starting out in psychology right now, what would you choose as your PhD topic? -Ivana

A. Imagining myself as a beginning graduate student is hard, but I can tell you that if I had been 20 years younger, I would probably have tried to retool myself to study the neuroscience of social cognition and decision-making.

Q. You recommend the use of checklists in business decision-making to counteract a number of the most common biases – confirmation bias, anchoring, etc. I would like to know what you think of the argument that framing bias (and the associated narrow targets) has an even greater impact – for example, framing strategy as being about beating competitors (with potentially misleading analogies from chess, football, judo, etc. used as a guide); framing the focus of marketing as being on branding (thereby encouraging an inside-out perspective) rather than customers (which would encourage a more outside-in approach); or even the focus on shareholder value (rather than broader stakeholder value), which has been linked to short-termism that ultimately isn’t in shareholders’ best interests? -Jack Springman

A. You are absolutely right – or at least I agree with you! Appropriate framing of the problem that is to be solved is essential to everything that follows. And it is certainly the case that a bad frame of the problem will lead to bad decisions. We may not have emphasized this point sufficiently in the checklist we proposed in the Harvard Business Review earlier this year.
