Prospect Theory – Empirically Indeterminate and Conceptually Self-defeating?


My favourite philosopher of neuroeconomics, Don Ross, has just published a paper together with his friend and collaborator, the econometrician Glenn Harrison, in which they present what I think is a conclusive critique of prospect theory. I sketch their argument and add some thoughts of my own that strengthen the case further.

The first point is that prospect theory is empirically indeterminate, at least compared with standard expected utility theory. Apart from having more degrees of freedom, which turns the econometric odds against it, there are the difficulties of handling the reference point. In particular, in empirical tests the problem arises of how subjects frame losses and gains. If there is a sequence of rounds, how can the experimenter make sure that subjects perceive only the loss in a specific round, and not the aggregate loss over the entire experimental session? Since nobody should be objectively harmed by participating in an experimental session, this is a serious issue. Even if you add a ‘pre-game’ in which subjects obtain certain assets that are then conceived as their starting capital for the main test, one can never be sure that this fixes the reference point.
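To see why the reference point matters so much, here is a minimal sketch of the prospect-theory value function, using the median parameter estimates from Tversky and Kahneman’s 1992 paper (α = β = 0.88, λ = 2.25); the specific reference points are my own illustrative choices:

```python
# Prospect-theory value function v(x) over gains/losses relative to a
# reference point r (Tversky & Kahneman 1992 median parameter estimates).
ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion coefficient

def value(outcome, reference):
    x = outcome - reference  # outcomes are coded as gains/losses relative to r
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# The same final wealth of 90 is a gain or a loss depending on which
# reference point the subject actually adopts:
print(value(90, 80))    # framed as a gain of 10  -> about +7.6
print(value(90, 100))   # framed as a loss of 10 -> about -17.1
```

Because the sign and the magnitude of the valuation flip with the reference point, an experimenter who cannot pin down which reference point subjects actually use cannot identify the model’s predictions.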

The second point is that prospect theory insufficiently distinguishes whether losses result from decision weights or from utilities directly. This matters because in the former case prospect theory can be reformulated as a rank-dependent utility model in which subjects assign decision weights to objective probabilities. Deviations from the rationality norms defined by expected utility theory would then be mere cognitive errors, so that affect-based loss aversion would not be needed to explain the observations.
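This indeterminacy can be illustrated with a toy calculation (the gamble is hypothetical; the parameter values are again the Tversky–Kahneman 1992 estimates): a 50:50 gamble between +100 and −100 is rejected both by a model with loss aversion and objective probabilities, and by a rank-dependent model with linear utility and a distorted weighting function, so choice data alone cannot separate the two.

```python
# Two rival explanations of rejecting a 50:50 gamble over +100 / -100.
GAMMA = 0.61   # probability-weighting curvature (Tversky & Kahneman 1992)
LAMBDA = 2.25  # loss-aversion coefficient

def w(p):
    """Inverse-S probability weighting function."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Model A: loss aversion in the utility, objective probabilities.
v_a = 0.5 * 100 + 0.5 * (-LAMBDA * 100)          # -62.5

# Model B: rank-dependent weighting, linear utility, no loss aversion
# (the better outcome gets weight w(0.5), the worse one the residual).
v_b = w(0.5) * 100 + (1 - w(0.5)) * (-100)       # about -15.9

print(v_a, v_b)  # both negative: both models predict rejection
```

Both models predict the same choice; telling them apart requires richer data and econometric structure, which is exactly Harrison and Ross’s point.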

The two critiques have serious consequences for the normative argument Kahneman deduces, because they show that it is conceptually self-defeating. Liberal economists have always agreed that a paternalistic intervention is justified when people suffer from misinformation and cognitive errors. As Harrison and Ross argue, this is very likely if we consider that decision weights may result from ‘living in a small world’, whereas objective probabilities belong to the ‘large world’. This even applies in evolutionary terms: evolution optimizes locally, hence over sequences of small worlds. Even if the objective probability of a violent death in a conflict is very small, this very low-probability event would still count for differential reproduction if “shit happens”. In other words, individually people might be well advised to use decision weights that differ from objective probabilities because, after all, they live in small worlds, and dead is dead, even at extremely low probability. But that might nevertheless introduce cognitive errors and dysfunctions into the larger system in which the objective probabilities are defined.

By contrast, if people have strong feelings about a particular outcome of a choice, this does not justify intervention, because intervening would be tantamount to tinkering with their utility function directly. Now, this argument is devastating for Kahneman’s normative position. There are two possibilities.

  • Either loss aversion is a cognitive error resulting from decision weights: this would make the construct of loss aversion redundant, and the misjudgement of objective probabilities could be accommodated in a more general expected utility framework.
  • Or loss aversion is affect-based (which is what Kahneman indeed assumes): then we need to respect it, and no intervention can be ethically or politically justified.

In more general terms, the problem is that Kahneman invokes emotions (the intuitive System 1) as major causes of deviations from rationality, but this implies that interventions cannot be ethically justified merely by arguing that rationality as defined by the scientist is violated. If the deviations are cognitive errors instead, that makes prospect theory redundant but justifies interventions.

I think the problem goes even deeper. Coming back to the empirical issue, the fact is that the reference point can only be fixed within a narrative that constructs it. An experimenter who runs a series of tests on a subject tries to impose a certain narrative, namely the description of the experiment. The subject may live in a different narrative, one that puts the experiment in the context of their everyday life. Thus the concept of narrative would become fundamental for the conceptual grounding of prospect theory. But as we know, in Kahneman’s thinking narratives are conceived as another major cause of deviations from rationality.

To sum up, prospect theory seems to suffer from internal conceptual contradictions, which renders it ineffective as a basis for designing paternalistic policies. Harrison and Ross suggest another solution: the rational interventionist might base his recommendations on society’s value-based appreciation of approaching higher levels of efficiency.

I would suggest a different interpretation. The question is who designs the ‘large worlds’ in which we live. A financial market is a large world of purely human design, which generates certain probability distributions of events. This raises serious ethical issues about whether we should take such design action at all, even though financial markets evolved spontaneously over human history (by human action, not human design, as Hayek famously put it). I think the case for intervention is very strong here, because we do not need to refer to emotional aspects in identifying potential damages. It suffices to acknowledge that humans have created a ‘large world’ that may stand in tension with our ‘small worlds’ and the everyday decisions that we take in them. Thus, in the context of financial markets, the assumption is fully warranted that damages result from cognitive errors. Our responsibility, then, lies not primarily in correcting individual errors, but in the design of this artificial ‘large world’.


Harrison, Glenn W., and Don Ross. “The Empirical Adequacy of Cumulative Prospect Theory and Its Implications for Normative Assessment.” Journal of Economic Methodology 24, no. 2 (April 3, 2017): 150–65. doi:10.1080/1350178X.2017.1309753.

