Hi everyone,
Today I thought I’d talk about some papers I like that are in tension with each other: “Learning from samples of one or fewer” and “Moral understanding and moral illusions”.1 You probably haven’t read either of them, but that’s fine.
Against weird thought experiments
Consider the paper The Problem of Abortion and the Doctrine of the Double Effect, by Philippa Foot, which is the paper that originally introduced the trolley problem.2
It’s full of fun thought experiments like:
we may consider the story, well known to philosophers, of the fat man stuck in the mouth of the cave. A party of potholers have imprudently allowed the fat man to lead them as they make their way out of the cave, and he gets stuck, trapping the others behind him. Obviously the right thing to do is to sit down and wait until the fat man grows thin; but philosophers have arranged that floodwaters should be rising within the cave. Luckily (or unluckily?) the trapped party have with them a stick of dynamite with which they can blast the fat man out of the mouth of the cave. Either they use the dynamite or they drown. In one version the fat man, whose head is in the cave, will drown with them; in the other he will be rescued in due course. Problem: may they use the dynamite or not?
Do you feel like this is a representative example of the sorts of ethical decision making you need to get better at in your day to day life, or does it strike you as perhaps a bit contrived?
“Moral understanding and moral illusions”, by Daniel Wilkenfeld, argues that the use of these “recherché” (roughly, “weird”) thought experiments in ethics harms one’s ethical understanding.
The argument starts from the idea that understanding consists of having good compact mental representations of the problem domain, a view he describes as “understanding as compression”. If something looks irreducibly complicated to you, you don’t understand it. If you can see it as simple, you do understand it, because it’s easier for you to work with the simple representation.
The inclusion of weird examples into what you’re trying to understand makes it harder to come up with these compact representations. This is in some sense obvious: the simplest description that covers 90% of the cases is typically going to be simpler than the simplest description that covers 100% of the cases. It can’t be more complex (anything that works as an explanation for 100% of cases also works as an explanation for 90% of cases), and it can often be made simpler by ignoring unlikely complicating factors and complex boundary conditions.
Further, in a practical sense, we don’t generally need to cover anything like 90% of cases to get good ethical behaviour, because it depends on which cases we’re counting. We need to cover far more than 90% of the cases that crop up in our everyday lives, but it’s very easy to come up with weird exotic cases that never come up in practice. Our own day-to-day life probably doesn’t include anything approaching 1% of the cases of ethical decision making that real people encounter day to day, let alone of the cases that an adversarial philosopher can come up with.
This broadly tracks the research on how experts look at their areas of expertise, at least from my relatively limited understanding of it (mostly based on Gary Klein’s “Sources of Power” and Anders Ericsson's “Peak”). Experts have good mental representations of the sorts of problems they encounter, and these representations are very specifically designed around typical examples rather than arbitrary ones.
The classic example demonstrating this is that grandmaster chess players are very good at memorising chess boards that arise from real play, but no better than average at memorising chess boards where the pieces are randomly arranged. In the former, they have good representations in terms of how things got there and what’s happening - this looks like it came from this known opening, that bishop is threatening that rook, etc. - while in the latter it’s essentially arbitrary information.
For information-theoretic reasons this is more or less essential: you cannot create mental representations of an arbitrary position on a chess board that are any better than brute-force memorising the position of every piece (which is doable but hard), but the space of typical chessboards is much smaller and so admits some shortcuts. As a result, if you try to apply a general skill of memorising arbitrary chessboards, you will tend to be outperformed on typical chessboards by a grandmaster.
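The compression analogy can be made literal with a toy experiment (my own sketch, not from either paper; plain Python using the standard-library `zlib`): a general-purpose compressor finds big shortcuts in structured data, but can do essentially nothing with uniformly random data of the same length.

```python
import random
import zlib

random.seed(0)

# "Typical" data: highly structured, like a position arising from real play.
structured = b"e4 e5 Nf3 Nc6 Bb5 a6 Ba4 Nf6 O-O Be7 " * 100

# "Arbitrary" data: uniformly random bytes of the same length.
arbitrary = bytes(random.randrange(256) for _ in range(len(structured)))

print(len(structured))                 # original size of each input
print(len(zlib.compress(structured)))  # far smaller: structure admits shortcuts
print(len(zlib.compress(arbitrary)))   # about the original size: no shortcuts exist
```

The compressor plays the role of the grandmaster’s mental representation: it exploits regularities that exist in the typical case and simply aren’t there in the arbitrary one.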
Although memorising a chessboard is not a typical example of a chess skill, it does usefully correspond to skill, because ease of working with a representation and ease of memorising it come to the same thing: both are about having efficient, easily understood, small lists of relevant things to work with. This tends to support the view of understanding that the paper is using.
I do think the paper fails to make its desired point, for reasons I will get into in the final section, but the basic point is valid: Our ethical expertise needs to be based on typical ethical encounters more than arbitrary ones, because all expertise is, and weird thought experiments do not count as typical ethical encounters.
Learning richly from history
“Learning from samples of one or fewer” is an account of how we can learn to be good at things that don’t, and in many cases ideally wouldn’t, happen often enough for us to get adequate experience of them.
For example:
Ideally we’d have zero plane crashes, and we would like to learn how to have fewer plane crashes without having to do the expensive trial and error process of waiting for a plane to crash.
A military that has not fought a defensive war in the lifetime of its current members would still like to know what to do in the event of an invasion. Ideally they would learn this without getting invaded.
Bad news for fans of the previous paper: The answer is, at least in part, thought experiments.
Specifically they talk about a practice of what they call “experiencing history richly” - learning more out of history than you can get by simply treating it as a small number of discrete events.
One aspect of this is the use of thought experiments about how things could have gone instead:
If a basketball game is decided by one point, one team wins and the other team loses, with consequences that may be vital for a championship. But the outcome will normally be interpreted by experts as a draw from some probability distribution over possible outcomes rather than simply as a ‘‘win’’ by one team and a ‘‘loss’’ by the other. In general, if a relatively small change in some conditions would have transformed one outcome into another, the former will be experienced to some degree as having been the latter. In such a spirit, the National Research Council (1980) has defined a safety ‘‘incident’’ as an event that, under slightly different circumstances, could have been an accident.
In this way, by starting from a real history, and generating lots of “thought experiments” which are basically simulated variants of what actually happened, you can learn nearly as much from a near-miss as from a catastrophe.
They also talk about hypothetical histories - things that are not particularly near to what actually happened, but are still fundamentally rooted in your understanding of history itself. For example:
A pervasive contemporary version of hypothetical histories is found in the use of spread sheets to explore the implications of alternative assumptions or shifts in variables in a system of equations that portrays organizational relations. More generally, many modern techniques of planning in organizations involve the simulation of hypothetical future scenarios, which in the present terms are indistinguishable from hypothetical histories (Hax and Majluf, 1984). The logic is simple: small pieces of experience are used to construct a theory of history from which a variety of unrealized, but possible, additional scenarios are generated. In this way, ideas about historical processes drawn from detailed case studies are used to develop distributions of possible futures.
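That logic can be sketched in a few lines of code (my own toy illustration, with invented numbers; the paper itself describes the technique only in prose): fit a crude “theory of history” to a handful of observed outcomes, then simulate many unrealized-but-possible histories from it and read distributions of possible futures off the simulation.

```python
import random
import statistics

random.seed(1)

# A small piece of real experience: five observed annual outcomes
# (invented numbers, e.g. safety incidents per year).
observed = [3, 5, 4, 6, 2]

# A deliberately simple "theory of history": outcomes are draws from a
# normal distribution estimated from the observations.
mu = statistics.mean(observed)
sigma = statistics.stdev(observed)

# Generate many hypothetical histories from that theory.
simulated = [random.gauss(mu, sigma) for _ in range(10_000)]

# Questions five real observations can't answer directly, but the
# distribution of hypothetical histories can (approximately):
worst_case = max(simulated)
p_bad_year = sum(x > 8 for x in simulated) / len(simulated)
print(worst_case, p_bad_year)
```

The model here is laughably crude, which is rather the point: even a crude theory of history lets you ask about tail events that your actual history is too small to contain.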
From the “understanding as compression” point of view, near histories are more useful for understanding than hypothetical histories, in that they are much closer to the distribution of real events and thus correspond to the sorts of representations we actually want, but given care we can try to construct hypothetical histories that are as realistic as possible.
This is, I think, a crucial thing for us to try to do with our ethics. In Being disappointed in people, I talked about how we can use disappointment in our past actions to drive ethical development, but ideally we would be able to improve our ethical development without ever doing anything too disappointing. Additionally, when we have transgressed ethically, we can look at near histories where we did better, and engage in the sort of “experiencing history richly” that this paper talks about.
As I said at the start, these two papers aren’t really opposed so much as in tension. The synthesis is very straightforward: we need to ground our thought experiments in plausibility. We can learn from both near histories and hypothetical histories, but they most usefully guide our ethical expertise when we don’t get too weird with them.
The proper use of weird thought experiments
One of the things that I was surprised by when I finally read Philippa Foot’s paper introducing the trolley problem was how much I liked it. It’s actually a fairly reasonable paper doing fairly reasonable things; it just doesn’t look that way when you consider it as a series of weird examples.
The reason is that she isn’t actually trying to use these examples as action guiding in the same way that you would use real examples. She’s not claiming that you should learn from these thought experiments as you would from a real scenario; instead she’s using them for two things: testing an ethical argument that someone else is making, and trying to figure out some of the ethically salient features of the scenarios. These are, I think, perfectly reasonable use cases for even weird thought experiments, with some caveats.
The first case is done in response to someone’s claimed ethical arguments, specifically the Catholic argument against abortion from the doctrine of double effect3. She uses thought experiments to test the boundaries of whether this ethical principle can really be as universal as claimed. Speaking as a mathematician and also someone who writes software to do this sort of thing, coming up with weird counterexamples is precisely what you do when someone makes a wrong universal claim.
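For a flavour of what this looks like in software (my own sketch, not anything from Foot’s paper): take a plausible-sounding universal claim and search deliberately weird inputs for a counterexample. Here the claim is that the naive midpoint of two floats always lies between them, which fails once the addition overflows.

```python
import random

random.seed(0)

# A plausible universal claim: for all floats a <= b, a <= (a + b) / 2 <= b.
def midpoint_stays_in_range(a, b):
    m = (a + b) / 2
    return a <= m <= b

# The claim holds on everyday inputs...
assert all(midpoint_stays_in_range(float(x), x + 1.0) for x in range(100))

# ...but a random search over weird (very large) inputs finds a
# counterexample: near the top of the float range, a + b overflows to
# infinity, so the "midpoint" is infinite and outside [a, b].
counterexample = None
for _ in range(100_000):
    a = random.uniform(0.0, 1.7e308)
    b = random.uniform(a, 1.7e308)
    if not midpoint_stays_in_range(a, b):
        counterexample = (a, b)
        break

print(counterexample)  # a pair of floats refuting the claim
```

As with Foot’s counterexamples, the weird inputs aren’t cases you expect to meet in practice; their job is purely to show that the universal claim, as stated, is false.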
An important feature of the use of ethical thought experiments here is that you don’t actually have to know what to do in those scenarios. In this paper Foot doesn’t actually come down on the side of saying that you definitely should or shouldn’t pull the lever4 - the example just serves to illustrate that the doctrine of double effect doesn’t seem quite as clear cut in this case as we’d expect.
The second is, I think, more interesting. Throughout the paper she is less concerned with what the right behaviour in the scenarios she describes would be, or with coming up with a consistent set of rules governing them, than with asking what the ethically salient features of the scenarios are. She asks whether the doctrine of double effect is really pointing them out, and also suggests that we can look at these examples in terms of positive and negative duties.
Importantly, she comes to no firm conclusions about these experiments, and does not suggest that we should treat them as action guiding in the way we would a real event. Instead she uses a series of principles and examples to guide us to look at the problem in a more informed way. This is, in fact, fairly in line with my understanding of how expertise works, and suggests a useful role for weird thought experiments in developing ethical expertise: we cannot ground our expertise in them the way we would in a more realistic set of hypotheticals, but we can perhaps still use them to develop the ways we think about ethics, as long as we don’t take them too seriously.
Postscript
The image for this post is a stolen meme, and I don’t know who it’s originally by, but nobody cares about meme theft, especially memes about ethics.
Spread the news!
If you liked this issue, you are welcome and encouraged to forward it to your friends, or share it on your favourite social media site.
If you’re one of those friends it’s been forwarded to and are reading this, I hope you enjoyed it, and encourage you to subscribe to get more of it!
Community
If you’d like to hang out with the sort of people who read this sort of piece, you can join us in the Overthinking Everything discord by clicking this invitation link. You can also read more about it in our community guide first if you like.
This latter is behind a paywall and as such you absolutely should not use this link to bypass that paywall using sci-hub, as you would be doing a grave disservice to a parasitic academic publishing industry that contributes nothing of value and locks up a huge amount of important knowledge for its own profits, even though this would be in keeping with a longstanding academic tradition of stealing ethics books.
I’m actually going to argue later that she’s doing something reasonable in this paper, but hold on to that thought while I make fun of it.
An ethical principle that after reading the paper I must admit I still don’t 100% understand, but seems to be something about how when your actions have multiple effects it’s only the effects you intend that matter ethically? This seems obviously wrong but I think that may be a failure of my understanding and the actual principle might be slightly more reasonable.
There’s no actual lever in her version, she’s reasoning from the viewpoint of the tram driver. Also it’s a tram because she’s not from the USA so doesn’t use “trolley” wrong.
The story of the fat man in the cave seems clearly inspired by chapter 2 of Winnie-the-Pooh, providing further evidence (if any were needed) for John Tyerman Williams' observation that all Western philosophy is either a preface to or a commentary on Winnie-the-Pooh and The House At Pooh Corner.