Looking for dragons drives progress
Bell Labs’ problem
First, let’s talk about Bell Labs. Bell Labs was the R&D department at AT&T and invented an alarmingly large fraction of the computing infrastructure we use today. I didn’t really appreciate quite how much until I read “The Idea Factory” by Jon Gertner (I had just thought of Bell Labs as the Unix and C place), but examples include transistors, information theory, mobile phones, and a whole host of other things.
People talk a lot about what it was in the environment of Bell Labs that made it so productive - whether it was the people involved, the organisational structure, or something else - but I think such discussions often miss the basic point: The thing that made Bell Labs so productive was that it was their job to make a better telephone network.
From “The Idea Factory”, chapter 3:
IN THE FIRST DECADE of the twentieth century, the transcontinental phone line had been one example of how the challenges of expanding the phone system led to inventions like the repeater tube. But it was only one example. Following the rapid development of the telephone business in the early twentieth century, everything that eventually came to be associated with telephone use had been assembled from scratch. The scientists and engineers at Bell Labs inhabited what one researcher there would aptly describe, much later, as “a problem-rich environment.” There were no telephone ringers at the very start; callers would get the attention of those they were calling by yelling loudly [This seems unlikely to be true. Bell Labs started in 1925 and the telephone ringer was invented in 1878 - DRMacIver] (often, “ahoy!”) into the receiver until someone on the other end noticed. There were no hang-up hooks, no pay phones, no phone booths, no operator headsets. The batteries that powered the phones worked poorly. Proper cables didn’t exist, and neither did switchboards, dials, or buttons. Dial tones and busy signals had to be invented. Lines strung between poles often didn’t work, or worked poorly; lines that were put underground, a necessity in urban centers, had even more perplexing transmission problems. Once telephone engineers realized they could also broadcast messages via radio waves, they encountered a host of other problems (such as atmospheric interference) they had never before contemplated. But slowly they solved these problems, and the result was something that soon came to be known, simply and plainly, as the system.
The system’s problems and needs were so vast that it was hard to know where to begin explaining them. The system required that teams of chemists spend their entire lives trying to invent new, cheaper sheathing so that phone cables would not be permeated by rain and ice; the system required that other teams of chemists spend their lives working to improve the insulation that lay between the sheathing and the phone wires themselves. Engineers schooled in electronics, meanwhile, studied echoes, delays, distortion, feedback, and a host of other problems in the hope of inventing strategies, or new circuits, to somehow circumvent them. Measurement devices that could assess things like loudness, signal strength, and channel capacity didn’t exist, so they, too, had to be created—for it was impossible to study and improve something unless it could be measured. Likewise, the system had to keep track of the millions of daily calls within it, which in turn required a vast, novel infrastructure for billing. The system had to account for it all.
“There is always a larger volume of work that is worth doing than can be done currently,” Kelly said, which was a way of acknowledging that work on the system, by definition, could have no end. It simply kept expanding at too great a clip; its problems meanwhile kept proliferating.
I love the term “problem-rich environment” and use it often, but I think it slightly misrepresents the nature of the environment. In fact there was only one problem at Bell Labs: make the telephone network more profitable. The environment was problem rich, because that one problem is fractally complicated, but it wasn’t full of a mess of unrelated problems - the problems had a fundamental unity to them stemming from a common goal.
Problems that invent the future
I don’t want to downplay the specifics of Bell Labs - it employed some amazing people who did amazing things - but I do think the problem Bell Labs was solving mattered more than the specifics of the lab’s structure or people. You could certainly have screwed up badly enough not to solve the problem, but I don’t think there was any route to solving it that didn’t lead to something like the embarrassment of riches that Bell Labs gave us.
The problem looks to me like it has the following essential characteristics:
It starts from something you already have (a phone network) and has a goal of making it substantially better in some definable way.
Success is genuinely difficult (in the sense from Difficult problems and hard work - there’s a genuine risk of failure; you can’t just put in routine work and reliably get results until you’re done).
Success is conceivable - you know roughly what the final solution looks like, even if you’re not clear on the details.
Success is complicated - getting there has many subproblems which are related but can be worked on more or less independently of each other.
Success stacks - you can build on prior successes to unlock new possibilities and problems that weren’t open to you before.
Success can be approached incrementally - small improvements to the status quo are valuable immediately, rather than requiring a big all or nothing result at the end.
I think any competent and sufficiently well-resourced attempt to solve such a problem is likely to produce valuable results, especially if it’s ultimately successful.
The telephone network had all of these properties:
The system already existed as a collection of (initially separate) phone networks that were already making AT&T money.
Covering the entirety of the US, let alone getting transatlantic phone calls working reliably, required dealing with interference over long distances that they had no idea how to solve initially. Similarly, scaling up to many more people than they had required new switching technologies.
But fundamentally there were no problems where they couldn’t conceive of what the solution looked like - the leaps were big, but they knew roughly what they had to do; they just didn’t know how to do it.
There were so many different issues - from usability, to durability of wires, to interference, to routing, etc.
Every time you expand the network you get new and interesting problems, and many of the technologies are general purpose ones which have a wide variety of applications (e.g. the transistor was originally invented for creating more robust amplifiers, but it turned out to have uh one or two other applications).
And every time they solved one of these issues, AT&T made more money.
Let’s call these problems “dragon problems”, because they take you into the part of the map that says “Here be dragons” on it - you know where you’re going, but you’re going to have to learn a lot about what’s actually there on the way.1
Another good example of a dragon problem is the space race, and going to the moon in particular, which indeed had many successful spin-off technologies. Additionally, the military is very good at continually generating dragon problems (because it’s in a constant arms race to do better than its enemies), and is also responsible for many important technological developments.
Why do dragon problems drive progress?
Another book I’m a big fan of is Steven Johnson’s “Where Good Ideas Come From”, which he summarises well in this YouTube video. One of the interesting things he describes in it (which he doesn’t actually mention in the video) is how innovation comes from exploring the adjacent possible.
The adjacent possible is an idea that comes from Stuart Kauffman and describes how evolutionary systems grow - at any given point you’ve got the set of things that already exist, and the adjacent possible is the set of things that could exist as the next generation from the current possibilities.
Biological systems evolve through repeatedly generating candidates from the adjacent possible (through reproduction and mutation), and the ones that do well go on to be entrenched in the next set of possibilities and go on to be part of the core possibilities that give rise to the next adjacent possible. Many generations of this can give rise to spectacularly large changes, but they always proceed through these steps of exploring the adjacent possible.
Innovation, both technological and otherwise, is another evolutionary system - innovations build on and alter things that came before - and works by exploring the adjacent possible in the same way.
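The generate-select-entrench loop described above can be sketched as a toy simulation. This is purely illustrative - the “artefacts” (sets of primitive components), the combination rule, and the random stand-in for selection are all invented for this example, not anything from Kauffman’s actual models:

```python
import itertools
import random

def adjacent_possible(existing):
    """Everything reachable in one step: the union of any two
    existing artefacts that hasn't itself been built yet."""
    candidates = set()
    for a, b in itertools.combinations(existing, 2):
        combo = a | b
        if combo not in existing:
            candidates.add(combo)
    return candidates

def evolve(primitives, generations, keep=3, seed=0):
    """Repeatedly sample candidates from the adjacent possible and
    entrench a few of them, expanding what exists step by step."""
    rng = random.Random(seed)
    # Start from the primitives alone - the initial "possible".
    existing = {frozenset([p]) for p in primitives}
    for _ in range(generations):
        frontier = adjacent_possible(existing)
        if not frontier:
            break  # nothing new is reachable; growth has saturated
        # 'Selection': a random sample stands in for fitness here.
        winners = rng.sample(sorted(frontier, key=sorted),
                             min(keep, len(frontier)))
        existing.update(winners)  # winners become part of the possible
    return existing

result = evolve("abcd", generations=5)
```

Even in this toy, each generation’s frontier depends on what was entrenched before it, so large composite artefacts only become reachable through a chain of intermediate steps - which is the point of the adjacent possible.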
With innovation in particular, exploring the adjacent possible can be a slow process, because often you need to combine very disparate ideas and have to wait for them to diffuse through the world - imagine that the thing you needed to solve your current problem at work was some obscure result in pure mathematics worked on by some random researcher in a small university in Poland2. Just because your problem is solvable by the current state of the art doesn’t mean you have access to that solution. Something isn’t part of the adjacent possible just because it could be made by combining what already exists - the things you need have to be able to meet and combine.
Additionally, often you run into problems where the thing you need isn’t actually in the current adjacent possible at all but is relatively close - e.g. it’s pretty common in computer science to discover that the thing people were doing twenty years ago was actually a really good idea, it’s just that computers weren’t fast enough yet or they hadn’t figured out this one algorithm that they needed to make the idea useful.
When you get a group of people together to slay a dragon problem, what you see is a rapid localised expansion of the possible. Normally the possible expands slowly and somewhat undirectedly. You can imagine it as a big sphere slowly agglomerating stuff onto it as more things become possible. In contrast, setting out to slay a dragon drives a great big spike out into the unknown, as each problem you solve opens up new regions of adjacent possible to you that have not yet become possible to the broader world.
Additionally, when you create a Bell Labs-like environment (which I mean in the loosest possible sense - any group of people working on these problems and freely intermingling and drawing on each other as resources counts), you solve the discovery problem for the adjacent possible pretty well: you’re often so far off the beaten path of the rest of the world that you know full well where to find the things you need - it’s one of the other hundred-ish people in the building with you (above that scale you probably need help navigating, but it’s still viable).
The complexity of the dragon problem is a key part of what drives this - having many different problems (Gertner’s “problem-rich environment”) means you generate many different ideas, while the ultimate unity of those problems (you’ve still got to slay the dragon) means they have a good chance of interacting well and generating a rich adjacent possible, as they draw on many of the same skills and disciplines.
When you don’t know where the dragons are
One of the things I emphasised about dragon problems is that they start from something that looks more or less like the desired thing. For example, going to the moon is a dragon problem in the mid 20th century because you’re starting from an established base of knowing about rocketry and planes. It’s not in the 18th century because at that point you’ll be thinking in terms of how to build a better balloon, and that won’t get you where you need to go.
Part of why this is important is that it often breaks the other assumptions if you don’t have it: If you cannot currently conceive of the sort of solution you need, there’s a very good chance that you’ll end up making lots of incremental progress right up until the point where you hit a complete wall because what you need is something entirely outside your ken. Ed Boyden uses the following example:3
As a thought experiment, try to imagine landing on the moon 500 years ago before we understood gravity, calculus and all the you know the properties of aerodynamics like rocketry in support. You might find people tying kites to chairs or trying to fly hot air balloons into outer space. If you think about it, all the financial resources on the planet 500 years ago would not have got you anywhere near the moon.
I think you’d probably learn lots of interesting things about balloons in the upper stratosphere, and trying to fly a balloon as high as it can go might teach you a lot about physics and gravity, but yeah, you’re going to hit a wall if you try to do something like this.
It’s easy to create this sort of failure by over-reaching, and it’s hard to know in advance if you have. As the line4 goes, “If we knew what we were doing it wouldn’t be called research”. As a result it’s sometimes hard to figure out if you’ve got this sort of problem except in retrospect.
I was going to suggest fusion as a current example of a failure to be a dragon problem, but some googling found this brochure from ITER suggesting that fusion has actually produced plenty of spin-offs, which is interesting if true.
One example that has come up: I’ve previously asked whether you can throw money at the Collatz conjecture. Various people have quoted Erdős at me, to the effect that mathematics is not yet ready for such problems.
Another example is that trying to solve death is probably too hard right now - various people are trying to prove me wrong, and I very much hope they do, but my suspicion is that human immortality is currently too far outside of the region of what’s understood to drive progress in the same way. It contains many subproblems that might be dragon problems (e.g. curing cancer, curing various aging related problems, etc) but there isn’t a unified whole that works in quite the same way as the more well defined dragon problems.
I think one of the core reasons these problems don’t work is that they don’t give you the sense of direction that dragon problems do. You might call them “grail problems” - they’re more like going on a quest for the holy grail than going to a specific “here be dragons” region. You know what you’re looking for, but you have no idea where it is, and so you wander the earth looking for it without ever really getting a sense of where to go. A dragon problem, in contrast, gives you a clear direction.
Another advantage of starting from something existing, whether it’s a body of knowledge as in the space program or an actually working object like the phone system (and I tend to think the latter is preferable for this), is that you’ve got an actual body of expertise to draw on. You’ve got rocket scientists, telephone engineers, etc.
Without this body of expertise, it doesn’t really matter if the map says where the dragons are if you end up haring off in entirely the wrong direction.
Many not quite examples
There are a number of things that almost-but-not-quite fit the conditions of dragon problems, or are dragon problems but haven’t produced the wealth of benefits that others have.
Much of pure mathematics has the qualities of a dragon problem, and indeed pure mathematics produces many useful spinoff technologies, but it feels like most of the really big dragon-like problems in pure mathematics end up being solved by some random guy who decided to lock himself in his room for a decade and just solve it. The vast majority of mathematics is not like this, but the vast majority of mathematics isn’t solving dragon problems and is instead more incremental exploration.
I think the two main things that tend to fail in pure mathematics are that often we have no idea what the final proof will look like (it fails the conceivability test), and also the big ambitious results are often hard to approach incrementally. Additionally, pure mathematics doesn’t tend to be resourced like the dragon problems that do produce great results.
Finally, I think one problem pure mathematics might have is that there’s often less interest in solving the problem than there is in being the specific person to solve the problem. Status games where you’re all aiming to be the dragon slayer don’t produce the sort of concerted communal effort in the way that something like Bell Labs or the space race do.
I think this is a lot of why the dynamic is some random guy going off on his own and solving hard problems - the resources aren’t really there to do it collectively, and the people involved really want to be the ones to solve the problem.
When I’ve talked about this in the past, people have told me that this is just intrinsic to the nature of mathematics and there’s nothing you can do about it. I’m not convinced, but certainly it does seem intrinsic to the culture of mathematics at the moment.
Another example is big cost-cutting projects. In response to an earlier draft of this I was asked why efforts to cut costs at the NHS (so as to be able to see more patients on the same budget) don’t seem to behave like dragon problems, or at least don’t generate huge amounts of innovation in the same way. It’s a good question and I’m not sure what the good answer to it is.
Two possibilities that come to mind:
Cost-cutting tends to really be a collection of many different smaller problems. As a result it doesn’t tend to stack in the same way as something like a phone network does - solving one problem doesn’t unlock new problems, it causes you to move onto another unrelated area to look for more low-hanging fruit.
Squishy human problems like this tend to fight back. People don’t like it when you cut the cost that they’re profiting from, people don’t like it when you offer them a worse service for cheaper (especially if, as in the NHS, they’re not the ones footing the bill except in waiting time).
It’s also not totally clear to me that these sorts of projects haven’t resulted in a lot of knock-on benefits. Certainly things which become cheaper over time often unlock the next revolution (e.g. there’s a lot we can do today that we couldn’t 30 years ago, not because it was impossible then but because computers were too expensive to make it worth it).
Dragons and where to find them
So, if it’s true that dragon problems are a fruitful source of innovation, what do we do with this?
Partly I want to go “Ah, that’s a policy question. I don’t do policy” - this is the sort of scale of question that you need a rather large budget to engage with, and I don’t have a rather large budget.
I do think policy-makers should be looking for more dragon problems, especially where they currently instead have grail problems. e.g. instead of something like “solve climate change” (grail), projects specifically oriented around ambitious energy storage goals, carbon sequestration, etc. are more likely to succeed. Many of these are already ongoing, but I don’t know if they’re structured in a way that makes them properly draconic.
At a smaller scale, I think looking for the structure of a dragon problem in your life or business is probably a good way to discover interesting things with useful knock-on effects. This doesn’t usually require radically changing what you do - instead it’s more like taking something you do and making it more so, even if it doesn’t necessarily make a lot of sense to do so. Siracusa’s piece, “The Case for a True Mac Pro Successor”, is a good example arguing for this sort of project - concept cars and high-end computers have too small a market to make them really worth it in their own right, but they push the envelope of what you can do in a way that causes you to develop new capabilities.
There are probably some good examples of this in your personal life too. e.g. I think setting an ambitious writing goal works pretty well as a dragon problem.
Looking for these smaller dragon problems probably won’t achieve breakthroughs in what’s possible for humanity overall, but the notion of the adjacent possible works just as well for capabilities at a smaller scale than humanity overall, and if you set out to look where the dragons are in your own region, you’re bound to find something interesting.
Want to go find some dragons at work? Want some help with figuring out how? Or with anything else really? Well, I offer consulting services and would be delighted to help. My expertise is primarily with software companies, but I’m happy to talk to people in other industries too. You can find out more on my consulting site, or just book a free intro call to chat about your company.
In addition, if you’re a software developer I offer open-enrolment group coaching sessions every Friday morning UK time. This is an opportunity to talk about the challenges you face at work with me and up to three other developers. It’s a mix of me providing coaching and moderating discussion between you all, allowing you to get a wide variety of perspectives and a more affordable version of my one-on-one coaching practice. You can sign up for these group coaching sessions here.
The cover image is a picture of the Hunt-Lenox globe saying Hic Sunt Dracones (here be dragons). This is apparently the only actual example of “Here be dragons” on an old map, and the whole thing is an urban myth, but I’m not going to let that stop a good metaphor.
If you’d like to hang out with the sort of people who read this sort of piece, you can join us in the Overthinking Everything discord by clicking this invitation link. You can also read more about it in our community guide first if you like.
Thanks to cian, ksotala, and hyperpape for discussions on said discord that improved this issue.
Also because it makes me think of big ambitious projects as setting out to slay a dragon, which is a cool image. Though NB do not slay dragons in canons where they’re sentient creatures. That’s murder that is.
Those of you who are researchers in pure mathematics working in a small university in Poland, assume what you need is a piece of software maintained by some random software developer in Alabama.
I confess I haven’t actually watched this video. Thanks to Neil Hacker for pointing me towards the example.