Explaining norm-divergent behaviour: practical and epistemic contexts

25.06.2018

Decision-theoretic models prescribe how an agent should behave in order to act rationally. But we know, both from everyday experience and from the experimental study of behaviour, that human agents diverge from decision-theoretic norms across a wide range of contexts. Divergence from these norms can be more or less severe, and certain cases are very subtle. For instance, Tversky and Kahneman’s (1974) discovery of biases in statistical reasoning (e.g., neglecting base rates) illustrates divergence from doxastic norms, but a divergence that is not easily seen in our everyday encounters with other people. We are interested in more severe departures from rationality, like those seen in drug addiction (divergence from norms concerning the best way to act in the long run, all things considered), or in adherence to conspiracy theories or minority views on matters of scientific consensus (divergence from norms concerning belief or expertise). In these cases, we may find the behaviour of the other person so difficult to understand that we have trouble seeing it as rationally structured at all, and we may fall back on causal reasoning.
A tempting (but, we think, incorrect) way to conceptualise the mix of rational and causal reasoning that one falls back on in such cases is in terms of an otherwise rational agent who is compromised by an external compulsion that causes the divergent behaviours. This kind of reasoning is particularly tempting in the case of drug addiction. One can understand all kinds of actions that the addicted individual performs, but they all seem to be structured towards a single irrational and self-destructive end, an end adopted by the addicted individual in virtue of an allegedly compulsive desire for the substance of addiction.
One empirical context where this sort of reasoning comes into play is the debate around the possibility of informed consent in research in which the recruited participants, i.e., drug users, are prescribed the very drug they use. There has been an ongoing debate about heroin-addicted persons who enrol in clinical trials in which heroin is prescribed. Some authors (e.g., Charland 2002) have argued that a heroin-addicted person cannot voluntarily give informed consent in this context, because their addiction, construed as a compulsive desire for heroin, prevents them from having a meaningful ability to choose whether or not to participate in the trial.
To think about the addicted individual’s ability to give informed consent in this way is counter-productive, as it reduces the individual’s agency to a mere platform of (compulsive) desires (Uusitalo & Broers 2015). This is problematic on many levels, not least because the view questions the very capacity that needs to be assumed in the treatment of addicted persons: rational agency. One of the strongest indicators of successful recovery from addiction is motivation to change (see, for instance, Kelly & Greene 2014). It is hardly plausible to construe this motivation simply as the generation and cultivation of a desire even stronger than the addictive one. Rather, it is likely to involve deliberation and the evaluation of all kinds of reasons, as well as of one’s values.
But what is the alternative to the agent driven by their desire for drugs? In order to understand the behaviour of the addicted person without voiding her status as a rational agent, one needs to understand a whole range of contextual factors that enter into consideration on the part of the agent. In the case of informed consent, for instance, one needs to consider that although the addicted person may be more vulnerable to consuming the substance of abuse at the expense of other choices, they also have a motivation to be cured of their condition. In fact, in most studies on heroin-assisted treatment, one of the inclusion criteria was that the heroin users had a history of several failed attempts at rehabilitation. They were considered the worst-off population, and the research on the novel treatment was seen as a last resort. Instead of taking the failures as support for the irresistibility of heroin (at the cost of the user’s agency), they can (and should) be seen as support for the fact that the users persisted in seeking help. This is why it is plausible to think that they themselves are voluntarily consenting to participate in the trial. (Moreover, the heroin users were aware that the trial was not the quickest way of accessing heroin, which makes this alternative explanation still more plausible and further undermines the narrow view that heroin users are dictated merely by heroin.)
Now, we suspect that a similar temptation to explain norm-divergence in terms of compulsion applies in non-practical contexts, that is to say, in cases of theoretical reasoning that departs from norms of rationality. A timely case is the public consumption of policy-relevant science. Certain scientific issues are highly politicised: these include the assessment of risks associated with climate change, fracking, and (in the U.S. context) private ownership of firearms. Kahan (2017), in his work on “identity-protective cognition,” has shown that for these issues risk is evaluated in a way that splits down partisan lines. For example, people who are very left-wing tend to over-estimate the risks associated with fracking, and people who are very right-wing tend to under-estimate those same risks. That is not very surprising. But his studies also show that this over- or under-estimation increases as a function of ordinary science intelligence, as measured by the Ordinary Science Intelligence scale (OSI_2.0). This means that a person who scores better on a set of tasks measuring general scientific knowledge, statistical reasoning, and numeracy is more likely to give an assessment of risk that is skewed along partisan lines. That seems to indicate that the group identity of the agent plays a larger role in her reasoning as her ability to reason scientifically increases. This is counter-intuitive, because common sense suggests that ordinary science intelligence should make a person better at evaluating risk on the basis of scientific knowledge.
It would be tempting to explain this epistemic phenomenon in terms analogous to the distortion in the addicted individual’s practical reasoning. We might, for example, understand the situation in terms of a causal force external to the person’s agency which distorts her reasoning in the direction of her political allegiances: a person can be so reliant on a particular aspect of her identity (e.g., political allegiance) that information threatening that aspect triggers biases which compel her to discount or ignore the information, at the expense of her epistemic agency. However, Kahan’s results point us in a direction analogous to the case of addiction, because divergence from the norms of rationality increases with the very ability the person has to follow those norms! That strongly suggests that it is a mistake to conceptualise the divergent reasoning in this way.
One alternative explanation may be that people with higher ordinary science intelligence have less confidence in their own risk assessments, and so make greater use of social information when prompted to produce those assessments. The quantity and content of the available social information is also likely to vary with political identity: someone who strongly leans Democrat is more likely to spend much of their time in an environment where information on politicised issues (e.g., fracking) consistently emphasises (or even over-emphasises) their risks. Such a person will encounter this information more often, and it will inform their risk assessments; a person who is less politically committed will encounter less information of this kind. Furthermore, a person with lower ordinary science intelligence may not draw upon this information at all, because they are less cognisant of the possibility of error and have higher confidence in their own intuitive risk assessments.
Likewise, we suggest, conceptualising the divergent reasoning in terms of a compulsive force may undermine attempts to rectify the effect. On the compulsion model, for example, it may make sense to implement a “nudge”-style policy which frames scientific information in a way that avoids triggering the factors that putatively compel the agent to reason badly. But this would fail to be responsive to the agent’s own role in the overall phenomenon. Only with a fuller view of that role will we be able to understand how to make the accurate treatment of certain sorts of information valuable for those agents, and to improve discussion of currently hyper-politicised issues in the public sphere.
More generally, in understanding behaviour that diverges from norms, the temptation to explain in terms of compulsion has its origins in a view of human agency which excessively dichotomises rational and irrational modes of cognition. On such a view, cognition is constituted by adherence to norms of (practical or epistemic) rationality. That means that irrational or imperfect cognition appears to us not as a mode of cognition at all but rather as a disturbance of cognition owing to purely causal or compulsive factors. Softening the role of rational norms in their putatively constitutive guise may allow us to dispense with this explanatory fallacy.

References:

Charland, L. C. (2002). Cynthia’s dilemma: Consenting to heroin prescription. American Journal of Bioethics, 2(2), 37–47.
Kahan, D. (2017). Misconceptions, misinformation, and the logic of identity-protective cognition. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2973067
Kelly, J. F., & Greene, M. C. (2014). Where there’s a will there’s a way: A longitudinal investigation of the interplay between recovery motivation and self-efficacy in predicting treatment outcome. Psychology of Addictive Behaviors, 28(3), 928–934. https://doi.org/10.1037/a0034727
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Uusitalo, S., & Broers, B. (2015). Rethinking informed consent in research on heroin-assisted treatment. Bioethics, 29, 462–469.
