Back in Learning and teaching practical wisdom, I talked about how David Chapman and I are to some degree working on the same project from different angles, that of practical wisdom, which I defined as follows:

Practical wisdom is the quality of knowing what to do in a particular practical context, and of being able to articulate why a particular action is the thing to do. The action is more important than the articulation of why it’s right, and is sufficient to count as practical wisdom in itself, but the ability to articulate its rightness is an important part as well, and shows greater wisdom than just the action.
I think one interesting thing when comparing my work with David Chapman’s is that we’ve often taken up the same topics and problems and ended up with opposite approaches and perspectives. This isn’t so much a disagreement about the actual facts on the ground, but I do think we have genuine disagreements about what’s important and how to conceptualise it.
This post is about one particular place where I think we come to a different emphasis. It’s not really me setting out to disagree with David so much as realising that something I’ve been thinking a lot about recently overlaps with his work, but that I frame it sufficiently differently that it’s interesting to contrast the two.
Can you count clouds?
Specifically, this post is about what David calls nebulosity:
“Nebulosity” means “cloud-like-ness.”
(…)
From a distance, clouds can look solid; close-up they are mere fog, which can even be so thin it becomes invisible when you enter it.
Clouds often have vague boundaries and no particular shape.
It can be impossible to say where one cloud ends and another begins; whether two bits of cloud are connected or not; or to count the number of clouds in a section of the sky.
If you watch a cloud for a few minutes, it may change shape and size, or evaporate into nothing. But it is impossible to find an exact moment at which it ceases to exist.
It can be impossible to say even whether there is a cloud in a particular place, or not.
Meanings behave in these ways, too.
David doesn’t provide a precise definition of nebulosity:
I will not give a precise definition of “nebulosity” here. Instead, I present analogies. I apologize if its meaning seems frustratingly nebulous at this point. Better understanding of the term should emerge gradually as we go along in the book.
My stated position remains that people should define things, so I’m going to try to define things. I’m not going to try to precisely define “nebulosity” because I think David means several different overlapping and interacting things by it, but I do want to pick out one feature of it that I think is important and define it.
Clouds are continuous not discrete
The key feature of nebulosity that I want to point out is ontological fuzziness.1 Ontological fuzziness is the property that the boundary between some thing or concept and the region outside of it is a subject of genuine plausible disagreement, even between two reasonable people who have a shared practical handle on the concept where they can usefully talk about it together. Even, in fact, to the point where a single reasonable person will find it increasingly difficult to pinpoint the exact boundary when pressed. A philosopher would call this vagueness but I try not to adopt terms of art that are single words with a colloquial meaning different from their technical one.2
Ontological fuzziness is a major part of what Chapman means by “nebulosity”3. It doesn’t capture the whole of it - e.g. one thing that he points out is that clouds do not look like clouds from the inside, and that is a different feature of nebulosity than ontological fuzziness - but it is an important one and is what I want to focus on here.
Clouds are still a good starting example of this. You will, usually, be able to point at a cloud and say “look, there’s a cloud there”. But that cloud is ontologically fuzzy - you won’t be able to define the precise point at which it stops. As a result, you might not be able to answer questions like “Is that one cloud or two?”, let alone “How many clouds are there in the sky?”
But there is, I think, an important character to the question “How many clouds are there in the sky?” that I’d like you to reflect on: It’s a dumb question.
I don’t mean it’s a meaningless question. Its meaning is under-defined and would need to be pinned down to answer it (that’s the ontological fuzziness at work), but that’s true of a lot of important and good questions too. What I mean is that there’s nothing important downstream of it that depends on the answer in a way the ontological fuzziness influences. Your decisions may often depend on the answer to the much better question “How cloudy is it?”
The interesting thing about the question of “How cloudy is it?” in contrast to “How many clouds are there?” is that it is still dependent on the same ontological fuzziness, but it’s no longer a problem because you’re no longer asking a discrete question.
Let’s take a yet more precise version of this question: What’s the current cloud cover? Here you’re asking something like “What percentage of the sky is covered by cloud?”.4
Note that this question is still a bit under-defined: What does it mean to be “covered” by cloud?5
The interesting thing about this question in contrast to the “How many clouds are there?” question though is that you can start to answer it before you’ve pinned down the ontological fuzziness much.6
If you look at the sky, you can pretty rapidly estimate the percentage cloud cover as follows:
1. Pick a point at random in the sky, N times (where N is a couple of hundred, say).
2. Count up the number of times the point you picked was obviously a cloud; call that C.
3. Count up the number of times the point you picked was obviously not a cloud; call that B.
4. The cloud cover is somewhere between C/N and (N - B) / N.
You can use (N - B + C) / 2N as your estimate if you’d like, or depending on which error rate matters you might just want to pick one of the endpoints (e.g. pick the larger number if you’re trying to decide whether to bring an umbrella).
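Here’s a minimal sketch of that procedure in Python, assuming a hypothetical `classify` function (not from any real library - you’d have to supply it) that maps a sky point to “cloud”, “clear”, or “unsure”:

```python
import random

def estimate_cloud_cover(classify, n=300):
    """Monte Carlo estimate of cloud cover from n random sky points.

    `classify` is assumed to map a point to "cloud", "clear", or "unsure".
    Returns (low, high, midpoint) estimates of the fraction of sky covered.
    """
    c = 0  # points that were obviously cloud
    b = 0  # points that were obviously not cloud
    for _ in range(n):
        # Sample a random direction in the sky (azimuth, altitude in degrees).
        point = (random.uniform(0, 360), random.uniform(0, 90))
        verdict = classify(point)
        if verdict == "cloud":
            c += 1
        elif verdict == "clear":
            b += 1
        # "unsure" points fall in the fuzzy boundary and just widen the range.
    low, high = c / n, (n - b) / n
    return low, high, (low + high) / 2
```

Note that the width of the interval, high - low, is exactly the fraction of sample points that landed in the nebulous boundary.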
Most of the time what you will find is that this process gives a pretty tight range in (4), because the nebulous boundary is actually a pretty small fraction of the sky, especially if your point sampling is fine-grained enough. This is, I think, a recurring theme of ontological fuzziness: Most of the time the cases you care about are not actually in the fuzzy boundary. Edge cases exist, undoubtedly, but they mostly shouldn’t affect your day-to-day operations.7
Here’s another measuring process:
Take a picture with your phone camera.
Run some program on it that calculates the average cloudiness of each pixel in that image.8
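As a toy illustration of what such a program might do - assuming, and this is a heuristic I’m inventing for the sketch rather than a real meteorological method, that cloud pixels are bright and colourless while clear sky is saturated blue:

```python
import numpy as np
from PIL import Image

def average_cloudiness(path):
    """Crude per-pixel cloudiness score, averaged over a sky photo."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    brightness = rgb.mean(axis=2)               # 0 (dark) .. 1 (bright)
    chroma = rgb.max(axis=2) - rgb.min(axis=2)  # grey/white pixels ~ 0
    # Bright, colourless pixels score high; saturated blue sky scores low.
    cloudiness = np.clip(brightness - chroma, 0.0, 1.0)
    return float(cloudiness.mean())
```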
Here’s another: Look at the sky and go “I reckon the cloud cover is about…”
These will not produce exactly the same results. The second one won’t even reliably produce results in the range of the first one (both because of the randomness and because of the slightly different per-point judgements), and the third will (especially with some practice) be pretty well correlated with the other two but show significant interpersonal variation. But they’re all pretty reasonable instantiations of the question, and you could create a calibrated scale for each where you could reasonably well (though imperfectly) predict each from the other.
What’s happened here is that the ontological fuzziness has turned into a measurement problem, because we’ve acknowledged that the real problem was that we were pretending that there were discrete categories where in fact there was a continuous property.
In fact it’s slightly worse than that, because there isn’t a continuous property per se. We’ve defined “cloud cover” as the fraction of the sky covered by clouds, but this isn’t exactly a knowable property of the system; it’s a function of measurement.
I don’t want to get even further off into the weeds than I already am here, but if you want to read more about how properties emerge from measurement, I would extremely strongly recommend the book “Inventing Temperature” by Hasok Chang, which is about the process of inventing reliable thermometers. Importantly, although we’ve got a reasonably rigorous measurement-independent definition of temperature now, thermometers preceded that, and we had to invent temperature from human experience and from calibrating a wide variety of incompatible measurement devices against each other.
The point I want to emphasise here though is that we’ve gone from a position where our approach (carving the sky up into distinct clouds) was fragile in a way that could cause genuine disagreements (is this one cloud or two?) and replaced it with something where there aren’t so much disagreements as mutual approximations. It’s not that we’ve eliminated all the uncertainty and decisions in what we’re measuring, it’s that in moving to the continuous quantity we are no longer gated on resolving the ontological uncertainty in order to move forward because we can pretty much pick anything reasonable and expect to get the same results most of the time.
This is what happens when you move from incorrectly regarding something as discrete to regarding it as continuous, and I think this is the main source of ontological fuzziness: treating a quantity as a category.
Another common example of this is the Sorites paradox: How many grains of sand does it take to make a heap? The Sorites paradox argues that you can always take away a grain of sand from a heap and it’s still a heap, therefore one grain of sand is a heap9. This is a problem if “heap” is a category and not a big deal if you consider “heapiness” to be a quantity.
This leads to one easy response to many cases of ontological fuzziness: Stop pretending the thing is a discrete category, and respond to it continuously. e.g. whether some sand is a heap or not doesn’t matter compared to how heavy it is, or how many shovel loads it will take to move it from point A to point B.
You will still sometimes need to make discrete decisions around these things - do I need to enlist a second person to move this heap, should I bring an umbrella in case it rains, etc. - but a key difference here is that as you approach the boundary the importance of the decision goes down, not up. Trying to decide whether there is one cloud or two can take up an unbounded amount of work. Knowing whether the cloud cover is 33% or 34% barely affects your decision of whether to bring an umbrella, so if it’s not obvious you might as well just toss a coin or something.10
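To make that concrete, here’s a sketch of the umbrella decision under some made-up assumptions (the threshold and the width of the “don’t care” zone are both invented for illustration):

```python
import random

def bring_umbrella(cloud_cover, threshold=0.5, dont_care_width=0.02):
    """Decide from the continuous quantity, not from counting clouds.

    Near the threshold the decision matters less, not more, so rather
    than spending effort resolving 33% vs 34% we just toss a coin.
    """
    if abs(cloud_cover - threshold) <= dont_care_width:
        return random.random() < 0.5  # boundary case: barely matters
    return cloud_cover > threshold
```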
Ontological fuzziness is ontologically fuzzy
OK so clouds are continuous not discrete, and should be regarded as such. What about chairs?
Chapman quotes Feynman:
What is an object? Philosophers are always saying, “Well, just take a chair for example.” The moment they say that, you know that they do not know what they are talking about any more. The atoms are evaporating from it from time to time—not many atoms, but a few—dirt falls on it and gets dissolved in the paint; so to define a chair precisely, to say exactly which atoms are chair, and which atoms are air, or which atoms are dirt, or which atoms are paint that belongs to the chair is impossible. So the mass of a chair can be defined only approximately.
Well, OK. Chairs are ontologically fuzzy. Does this mean that, like clouds, you can’t count chairs?
As Feynman points out, there is indeed a very narrow boundary region to a chair, like the wispy edges of a cloud, where things are ambiguously both chair and not-chair. If you put two chairs close enough together, they overlap in this region of ambiguous chair-ness, where it’s not totally clear whether the intermediate region belongs to one chair or the other. Therefore there is no clear point at which one chair ends and the other begins. So how many chairs are there now?
There are two chairs. This isn’t a trick question, obviously there are still two chairs. Yes there is, if you look closely, a certain ambiguity of form, but functionally speaking there are still two chairs and nobody seriously asking how many chairs there are would ever reasonably expect an answer other than two here.
This would be true even if you bolted the two chairs together in some way so that they were now “a single object”. That single object would be two chairs bolted together, because when you’re asking about how many chairs there are, you’re asking about a functional property related to the ability to sit on it.11
This isn’t, by the way, something that I’m disagreeing with Chapman on. I think this is very close to the point he is making (or at least highly overlapping with it) in slightly different words.
But the thing I’m pointing at is this: Chairs are ontologically fuzzy, but they’re not very ontologically fuzzy, are they? “That chair” is a pretty precise delineation of what you mean in a way that “That cloud” is not.
Some things are in between. For example a river. A river has a bank where the boundary between “river” and “not river” is a little ambiguous. This is both true at any given moment and also rapidly changing over time. There’s also a degree to which you need to decide whether streams and tributaries count. So, the river is somewhat ontologically fuzzy. Certainly to a much larger degree than the chair, but also to a much smaller degree than the cloud.
Or, take a pot of water. Where’s the boundary? How fuzzy is it? Depends on the water temperature of course, and the humidity in the air. If the pot is boiling, or if there’s a low mist condensing over it from the hot humid air hitting the cool water… There’s never a lot of ambiguity in the boundary there (although what about, say, the boundary between a slurry of ice and the water it’s on top of…?) but there is always some, and it varies up and down continuously, so that how fuzzy it is is itself fuzzy…
Anyway, I’m drawing this out more than it deserves. The point is in the heading: Ontological fuzziness is, itself, a quantity (or perhaps a quality) - it’s not something that things are, or are not, but a matter of degree.
There is ontological fuzziness in everything we do but, and this is key, if you are regularly encountering it in a practical context then you are doing something wrong. This doesn’t mean that the category is bad or useless, it just means that when you are operating at its boundary it’s probably not the right tool for what you’re trying to do.
Ontologically fuzzy squirrels
The following anecdote comes from William James’s collection, “Pragmatism”.
Some years ago, being with a camping party in the mountains, I returned from a solitary ramble to find every one engaged in a ferocious metaphysical dispute. The corpus of the dispute was a squirrel - a live squirrel supposed to be clinging to one side of a tree-trunk; while over against the tree’s opposite side a human being was imagined to stand. This human witness tries to get sight of the squirrel by moving rapidly round the tree, but no matter how fast he goes, the squirrel moves as fast in the opposite direction, and always keeps the tree between himself and the man, so that never a glimpse of him is caught. The resultant metaphysical problem now is this: Does the man go round the squirrel or not? He goes round the tree, sure enough, and the squirrel is on the tree; but does he go round the squirrel?
In the unlimited leisure of the wilderness, discussion had been worn threadbare. Everyone had taken sides, and was obstinate; and the numbers on both sides were even. Each side, when I appeared, therefore appealed to me to make it a majority. Mindful of the scholastic adage that whenever you meet a contradiction you must make a distinction, I immediately sought and found one, as follows: ‘‘Which party is right,” I said, “depends on what you practically mean by ‘going round’ the squirrel. If you mean passing from the north of him to the east, then to the south, then to the west, and then to the north of him again, obviously the man does go round him, for he occupies these successive positions. But if on the contrary you mean being first in front of him, then on the right of him, then behind him, then on his left, and finally in front again, it is quite as obvious that the man fails to go round him, for by the compensating movements the squirrel makes, he keeps his belly turned towards the man all the time, and his back turned away. Make the distinction, and there is no occasion for any farther dispute. You are both right and both wrong according as you conceive the verb ‘to go round’ in one practical fashion or the other.”
Although one or two of the hotter disputants called my speech a shuffling evasion, saying they wanted no quibbling or scholastic hair-splitting, but meant just plain honest English ‘round,’ the majority seemed to think that the distinction had assuaged the dispute.
This is an example of the ontological fuzziness of the category of actions “to go round”.12 The man and the squirrel are in a genuinely ambiguous case on the border between “going round the squirrel” and “not going round the squirrel”.
As James points out though, there’s no real ambiguity about what’s actually going on here. The facts of the matter are clear to all, but in one reasonable interpretation of the question, those facts mean that the answer is yes, in another, they mean that the answer is no. Define your terms more precisely, and the problem goes away.
Hasok Chang in his “Realism for Realistic People” makes the good point that this is not a purely semantic distinction. There are reasons that may inform how you define your terms:
one might wonder if what James advocates isn’t just a matter of defining one’s terms carefully. But I think that the sort of disambiguation offered by James is tied closely to potential practical ends. If my objective is to make a fence to enclose the squirrel, then I have gone around the squirrel in the relevant sense; if my objective is to check whether the wound on his back has healed, then I have failed to go around the squirrel in the relevant sense. It is the pragmatic purpose that tells us which sense of ‘going round’ we ought to mean. Semantics should be a tool for effective action.
Language is being used as a way to communicate about action, and the relevant meaning is the one that supports the relevant action. You can argue about category membership all you want, but without understanding how your categories support action, it’s the equivalent of arguing about whether a hot dog is a sandwich - a fun internet game,13 but fundamentally unserious.
This is why I labelled the question “How many clouds are there?” a dumb question - it’s not usefully action supporting in the way that “How cloudy is it?” is.
I think most direct encounters with ontological fuzziness are like this: The fuzziness goes away once you narrow in on what’s actually important for your action. Ontological fuzziness is a sign that you’re asking the wrong question, and if you’re regularly hitting a piece of fuzziness then you should probably be asking better questions.
Who decides the right questions?
A couple of years back I got very into a subject that I thought of as “the legibility war”. I still think in these terms sometimes; I just stopped talking about it so much. The basic idea is that we are all constantly in conflict14 over the question of how to interpret the world. We want the world to fit into tidy, easy-to-understand boxes; the problem is that “easy to understand” is not a universal property. What’s easy to understand to me is not easy to understand to you, and vice versa.
Specifically what we want to do is lower the cost of our interactions. It’s much easier if you can treat someone as a member of some identity label and respond generically to them like you would any other member of that label.
This might seem bad, in that it’s stereotyping and we’re not supposed to do that, but actually it’s often good. For example, “can you use the disabled parking spot?” is quickly decided by “do you have a disabled badge on your car?”, because there’s a very visible rule associated with a visible label. It’s unfortunate that you might be (e.g. temporarily) disabled and not have such a badge and this will cause problems, but the fact that a disabled person doesn’t have to put in a lot of effort to demonstrate that they have the right to use it is a good thing for them.
A less uncontroversially good example of this sort of categorisation-based decision is “Do I have ADHD?”.
For some people this is purely a question of self-understanding, in which case I think its major benefit is that it generates more practical questions like “Do I struggle with remembering tasks I have to do?” and “Can I handle boredom?” and “What sorts of strategies have helped other people like me?”. It doesn’t purely reduce to these practical questions, but unlike the more specific practical questions you’ll probably never get a definitive answer to it because the boundary is just genuinely unclear.
For others though, the reason they want to know if they have ADHD is that they need medication, and suddenly the question of where the boundary is becomes extremely non-academic and not like a fun game about hot dogs at all.
Because ADHD is incredibly ontologically fuzzy. It’s not even clear whether it’s one thing or several different things, or even if the question of whether it’s one thing or several different things is well defined, but either way every one of the things it comprises is a continuous spectrum of variation, and we’ve picked some arbitrary line and called that the level at which you have ADHD. There’s not that much difference between someone right above the line and right below the line, except that the one right above the line gets medication and the other doesn’t.15 When you find yourself at thresholds like this you suddenly care about other people’s ontologically fuzzy categories quite a lot, and the fact that they’re asking the wrong question (The right one is “Would medication help this person in the sorts of ways we want to help people?”) isn’t very comforting.
This is an entry point into one of the rare categories of almost completely unfuzzy ontologies: Entirely human created and declared categories. “ADHD” is an ontologically fuzzy category. “Has an ADHD diagnosis” has almost no fuzziness to it whatsoever, because it’s an official label that has been applied to you (regardless of whether it’s correct). A lot of human activity is about converting the former into the latter, and there is a lot of suffering around the thresholds.
Many categories in mental health work like this. There’s a very good recent piece on this in the context of BPD, about the tensions between it as a label that can help you understand yourself and one with actual diagnostic consequences and a great deal of stigma.

One of the core texts for understanding these sorts of categories applied to humans is “Sorting things out: Classification and its consequences”16, which is about the nature of classification systems and the politics of their definition. It’s quite dry in places, but I highly recommend it.
In general, the thing I am pointing at with “legibility war” is this: The world is full of categories. Many of them were designed to be applied to you, not by you. Be careful of that.
Building your own descriptions
In truth, this is all groundwork for the things that started me thinking about all of this. I think it’s important groundwork, but I also think you and I have both run out of steam at this point, so I’m going to wrap up, and maybe I’ll write about the intended topic when the muse drives me to. This section is just a sneak preview of that.
Most of what I want to leave you with is this:
The most important thing to understand from all of this discussion of ontology is that labels aren’t real. There isn’t generally a definitive answer to the question “Is this X or Y?”. There isn’t a true fact about how many clouds there are, or whether the man is going around the squirrel, or whether I have ADHD.17
The second most important thing to understand from it is that labels are definitely real. They describe real things, we use them to communicate as accurately as we can about those things, and they inform our decisions.
Labels are tools, things we create in order to engage with the world, and you can judge them by how well they let us do that. Sometimes shop-bought labels are fine, but sometimes you’ve got your own bespoke problem and you need to invent new words and terms for it.
If you want to act better in the world, it helps to understand it. You can do that by describing it; how well you describe it depends on the language available to you, and it helps to improve that language.
What David highlights as nebulosity, and what I’m calling ontological fuzziness, is an important feature of the world, and it is something you have to confront from time to time as you realise you’re operating with labels that don’t map to the actions you want to take, but equally often you want to take the opposite approach and throw away the fuzzy boundaries and carve up the clouds in a direction that’s useful to you rather than the one that’s resulting in fuzziness. It’s much like the distinction between walking through walls and grip fighting - sometimes you have to break out of the categories, sometimes you have to invent new ones.
So I suppose the topic I’m driving at is something like… personal ontology. How you describe your experience of the world, and how to use this to live better. It’s something I’ve talked about a bit before in Intellectual DIY and what to do when the labels don’t fit is a longstanding topic.
This is ultimately a practical question, but it is also a deeply emotional one, and you can tell this because of the questions driving it: What should I do? How?