Can we learn from neuroscience about ethics?


One of the celebrated results of philosophical thinking about ethics is that we should not succumb to the ‘naturalistic fallacy’, i.e. infer ‘ought’ from ‘is’. Recently, it has become fashionable to ground ethics in neuroscience. That move is mostly justified by the wish to avoid another fallacy: imposing ‘ought’ on actions that one cannot perform because of binding natural constraints. However, the naturalistic fallacy looms large here, too.

A recent issue of the ‘Journal of Economic Literature’ contains two review essays that deal with neuroscientific research on human sociality and ethics. This shows the strong interest that economists take in these issues. I will reflect upon these reviews in this and my next blog post. The first is Aldo Rustichini’s review of Joshua Greene’s book ‘Moral Tribes’. Greene’s approach is systematically grounded in the dual systems approach, which I have discussed in a series of earlier posts.

In this comment, I want to single out one sentence: “(…) how do we know that activation in dorsolateral prefrontal cortex is just evidence of the adding and subtracting that are involved in the utilitarian calculation, and not as well the consideration of the wider implications of an action? That is, why should we think that activity is only Bentham’s (…) and not Kant’s?”

If we connect this with the standard dual systems approach, the problem seems to be that even spontaneous action may result from very different moral commitments, which need not be tied to the ‘reflective’ part of the dual systems. In other words, even if there is a mechanism that spontaneously generates action, this mechanism may in fact reflect values and reflective actions that have become ‘automatic’ via embeddedness in a specific social context. Let me give an example.

Traditional societies, but also parts of modern societies such as the South of the United States, often have strong cultures of honour. Emotions of honour often trigger violent actions, such as direct revenge and retaliation against offences. In this sense, honour may seem to be linked to the spontaneous system. But many societies, including the American North (I apologize for the clichés, which I use as a rhetorical shortcut), have moved away from honour as a central ethical notion. This is also reflected in declining levels of violence, such as homicide. This long-run development was classically analysed by Norbert Elias in his theory of the civilizing process. Elias’s point was that the emotional structure of individuals is endogenous to social change, and that this involves not only ideational ‘reflective’ phenomena, but also mundane practices such as how to use a knife and fork at the table.

Elias’s theory has been strongly endorsed by the celebrity Harvard psychologist Steven Pinker, again in his most recent book ‘Enlightenment Now’. I think that this argument on violence and honour is also important in our context, as it shows that one should never conclude that current neuroscientific research on the brain reveals anything about ‘human behaviour’ in the generic sense. Neuroscientists always study brains as embedded in specific cultural and social contexts. This is increasingly recognized in the field of ‘cultural neuroscience’, which has already accumulated much evidence on the plasticity and flexibility of neurophysiological structures relative to concrete cultural settings.

For the neuroscientific study of ethics, this implies that even if we stick to the dual systems view, we should not think that the ‘fast system’ is just ‘human’ in the generic sense, while the ‘slow system’ brings context in via reflection. The decline of violence is not a matter of individuals constantly reflecting on and constraining their violent urges; they simply no longer have them! Their brains have changed via the evolution of social institutions, norms and values.

Rustichini comments that one problem with applications of neuroscience to ethics is that the wide universe of human ‘moral sentiments’ is not covered, with most research centring on altruism versus egoism (‘social preferences’). The same may apply to the study of risk. I think that the example of violence bears many resemblances here. Why should we assume that there are generic human properties regarding risk? There is a psychohistory of risk, just as there is a psychohistory of violence, and there is social contextualization of risk, for example across different professions.

I think that this explodes any attempt to ground ethics on the dual systems view. Moral sentiments are as ‘artificial’ as philosophical reflections about ethics: after all, Kantian ‘duty’, though rationally justified, is a question of ‘Haltung’ (‘composure’), i.e. a behavioural disposition that is expressed as fast as any other urge to action. As I will show in the next blog post, my views are strongly supported by Alós-Ferrer’s second review essay in the JEL.

Han, Shihui, Georg Northoff, Kai Vogeley, Bruce E. Wexler, Shinobu Kitayama, and Michael E. W. Varnum. 2013. “A Cultural Neuroscience Approach to the Biosocial Nature of the Human Brain.” Annual Review of Psychology, 64 (1): 335–59.

Rustichini, Aldo. 2018. “Morality, Policy, and the Brain.” Journal of Economic Literature, 56 (1): 217–33. DOI: 10.1257/jel.20161260
