How to understand groups of people
Hi everyone,
I was talking on discord earlier about a model I use for understanding complex systems involving people, and realised that I’d never written it up anywhere. So, here is me writing it up somewhere!
This model is based on a mix of economic models of behaviour and other tools for reasoning about the design of complex systems, and as such it tends to have me disagreeing with both people who embrace and people who reject economics as a tool for thinking about society.
Obviously, I think my approach is the best, and all these people are wrong (or at least missing something). Maybe by the end of this you will too.
User personas
But first, let’s talk about user personas, because they’ll be crucial to what comes next.
This is the most important section of this entire piece if you don’t already know what a user persona is, and if you take away nothing else but this section, I’ll consider my time well spent. If, on the other hand, you already know what a user persona is, this section is mostly going to be a recap, and you can probably skim or skip it.
User personas are a concept I picked up from software development, although we may well have stolen them from somewhere else. They are an incredibly helpful conceptual tool for designing any system that people will interact with in some way.
A user persona is a fictional person who stands in for some broader class of people. You create them in order to have a concrete example of someone to have in mind when thinking about how they will use your system.
For example you might have:
Bob the business user. Bob wants to use your software to solve some very expensive business problems, so he is very motivated. He is also very experienced with your sort of software, which means that he can be relied on to get the basics immediately but also has a lot of preconceptions about how it should work, and can easily find alternatives if yours isn’t good enough for him. Additionally, he is very tight for time - time is money, don’t you know?
Sally the student. Sally is young and inexperienced but very bright. She has a lot of time, but not a lot of budget. She wants to use your software to help her do better in her degree, so she is quite motivated, but that motivation doesn’t necessarily translate into being able to afford much money. It does mean she’s likely to experiment and find a way to make your software work for her use case, no matter how much creativity that requires.
Gordon the grump. Gordon is using this software because his boss told him to use this software. He has absolutely no motivation to use it, and will do the bare minimum of work and complain loudly to the person who controls the actual budget at the slightest opportunity.
Doris the disabled user. Doris is any one of the above users (e.g. Bob), but with the addition of some disability. For example, she might be partially sighted, or not have the use of both hands. Really you should have a whole suite of such user personas, I’m just using one for convenience of explanation.
You get the idea. They’re people who you can imagine using the system and whose needs and preferences it’s worth taking into account to make sure the system works as you want it to.
When designing a feature you can then ask “How would Bob use this? Is it too slow?” or “How can we make sure that this has enough signposting that Sally can quickly figure it out?”.
User personas can span a wide range of levels of detail. You can go into more detail than this if you like, giving each of them more concrete contexts and personalities (what’s Sally’s degree in? What’s Bob’s business use case? Why does Gordon resent using the software, and what’s his relationship with the boss who’s forcing him into it?), or you can abstract them further (e.g. having a newbie persona and an experienced-user persona without fleshing them out much more than that). Sometimes you’ll keep a stable set of user personas around to test all your designs against; sometimes you’ll invent a new user persona on the fly to test a new feature.
Whatever level of detail you go for, the most important feature of a user persona is that you can answer the following questions about them:
What do they want and how much do they want it? (Bob wants to make money, Sally wants a degree, Gordon wants to keep his job while doing the bare minimum of work)
What resources do they have? (Bob and Sally have smarts, Bob has experience and money, Sally has time. Gordon on the other hand has very few resources because he doesn’t care to commit them)
What constraints are they operating under? (e.g. Gordon can’t actually stop using the site because he’s not allowed, Doris needs to be able to use the site on a screen reader).
User personas are not just for software. For example, when designing a building, or a transport network, or a community, it’s worth developing user personas that represent different people who will participate in it, for exactly the same reason that you need to do this with software.
Additionally, I often recommend people have user personas in mind when writing (though I frequently call them “model readers” or something like that). You might have noticed that I used such personas at the beginning of this section, when I signalled two different ways through the piece depending on whether you were already familiar with user personas.
My core model readers for this piece are on the abstracted end of the spectrum, but are representatives of the following three groups:
Someone who has some level of economic fluency, but who lacks the user persona perspective on the problem and has some missing pieces in their understanding as a result.
Someone who thinks economics is useless because it deals with stupid unrealistic toy models of humanity that have no bearing on human behaviour. They may or may not already understand the user persona perspective.
An amalgamation of friends I’ve had this conversation with, who already basically understand the thing I’m pointing at here but could use a better conceptual toolkit for reasoning about it. Usually they don’t understand the user persona perspective, but they may also just not have put the pieces together yet.
Hopefully I’ve managed to produce something useful for all such readers.
The homo economicus user personas
There’s a term that sits somewhere between serious concept and running joke: “homo economicus” - economic man. The fictional species that economic models are made for, consisting of roughly human-shaped entities with clear, well-defined goals that they rationally pursue.
The key feature of homo economicus is that whatever incentives your system creates, they follow. This isn’t necessarily completely sociopathic, in that their goals may not just be something like “make money” - a perfectly rational effective altruist is just as much homo economicus as the more classically imagined perfectly rational hedge fund trader - but the point is that they will take advantage of whatever your system offers them that they consider a reward worth the cost, while avoiding behaviours that either don’t lead to their goals or that you actively punish.
I think most systems you design should have homo economicus user personas in mind.
First, note that there’s nothing stopping you from having such user personas. User personas are fictional. You can have a Barry Trotter, boy wizard, user persona who interacts with your system entirely through his wand and is triggered by pictures of cupboards; nobody is going to stop you. How useful Barry is as a user persona is somewhat questionable, but you can design a system with him in mind, and it probably won’t cost you anything more than effort, as long as you also consider other user personas.
This sidesteps the objection that homo economicus doesn’t exist, because neither do any of your other personas. The question is not whether a user persona exists, but whether they usefully represent people who actually exist, and I think homo economicus is a very useful representation of a class of real user behaviours.
Homo economicus is a stand-in for participants in the system who have relatively straightforward goals (e.g. wanting to make money, wanting job security, wanting social status) and the willingness, ability, and resources to strategize to achieve those goals.
As such, almost nobody starts out as a homo economicus persona, but if your system has a strong set of incentives the amount of homo economicus type behaviour tends to go up over time as people learn how to use the system, observe other people’s strategies, etc.
One of my favourite little examples of this is the Vickrey auction. Vickrey auctions are very simple - everyone submits a sealed bid, and the person with the highest bid wins, but pays the value of the second-highest bid. This creates a fair system in which the correct strategy is to bid your “true value” for the thing. Unfortunately, nobody actually understands this when they first encounter the Vickrey auction. However, you can get people to start following this strategy in one of two ways:
Just run a lot of Vickrey auctions and let people learn that when they overbid or underbid they’re usually disappointed.
Walk them through the details of the mechanism and why this is the optimal strategy.
After you do either of these things, people suddenly turn into homo economicus (more or less) and follow the incentives[1], but you need to either wait for them to learn the strategy or help them learn it faster before that happens.
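If you want to see why truthful bidding is the right strategy without sitting through a lot of auctions, here is a minimal Python sketch (my own illustration, not from the original post). It checks the classic dominant-strategy property directly: against any fixed set of rival bids, bidding your true value never does worse than bidding anything else.

```python
import random

def vickrey_winner(bids):
    """Sealed-bid second-price auction: the highest bidder wins,
    but pays the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def utility(true_value, my_bid, other_bids):
    """Bidder 0's payoff: value minus price paid if they win, else zero."""
    winner, price = vickrey_winner([my_bid] + list(other_bids))
    return true_value - price if winner == 0 else 0.0

# Bidding your true value weakly dominates any other bid: whatever
# the rival bids turn out to be, honesty never does worse.
random.seed(0)
true_value = 100.0
for _ in range(1000):
    others = [random.uniform(0, 200) for _ in range(4)]
    honest = utility(true_value, true_value, others)
    other = utility(true_value, random.uniform(0, 200), others)
    assert honest >= other - 1e-9
```

The trick is that the price you pay is set by someone else’s bid, which decouples “how much do I bid” from “how much do I pay” and is where the truth-telling incentive comes from.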
You can see this at play even in long-running systems as they respond to massive changes. For example, it took a while for markets to adjust to COVID, because people had to learn new strategies and catch up with the changed state of the world, even though most participants in the market pre-2020 were already fairly homo economicus in their behaviour (disclaimer: I don’t follow the markets very closely, so this is probably an oversimplification of what happened).
This tendency to become more homo economicus over time is one of the biggest reasons it’s important to think about homo economicus personas - if you don’t, things will seem fine at first, but they will get gradually worse over time as people’s behaviour adjusts to the incentives you’ve created.
One particularly classic failure mode of systems that aren’t designed with homo economicus in mind is Goodhart’s law: any measure that becomes a target ceases to be useful as a measure. If you reward people for shipping features, maintenance will suffer. If you reward people for the number of widgets produced, widget quality will suffer. If you reward people for the number of papers published, they will slice all their work into many small papers and milk every idea for as many publications as it’s worth. A particularly bad example of this is Wells Fargo employees creating fake accounts for customers without their knowledge in order to hit their performance targets. As well as being obviously unethical, this completely failed to achieve the underlying goal of making Wells Fargo money.
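The paper-slicing version of Goodhart’s law can be made concrete with a toy sketch (the numbers, and the assumption that total insight is fixed no matter how thinly it’s sliced, are obviously my simplifications):

```python
def proxy_reward(papers):
    """The measure that became a target: count of papers published."""
    return len(papers)

def actual_value(papers):
    """The thing you actually wanted: total insight produced.
    In this toy model, slicing one idea across many papers
    doesn't create any more of it."""
    return sum(papers)

one_big_paper = [10.0]      # one idea, published once
salami_sliced = [1.0] * 10  # the same idea, milked ten times

# Homo economicus maximises the proxy...
assert proxy_reward(salami_sliced) > proxy_reward(one_big_paper)
# ...while the actual value stays exactly the same.
assert actual_value(salami_sliced) == actual_value(one_big_paper)
```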
Goodhart-related problems are very easily detected by asking yourself “What would homo economicus do?”.
Another reason the relative fraction of homo economicus behaviour goes up is that people who aren’t willing to follow the incentives drop out of the system - if you’re more of a maintenance type, you leave the company that only rewards new features; if you want to publish one amazing paper per year rather than ten mediocre ones, you leave the university where the latter is what gets you promoted; if you won’t falsify records, you get fired because your manager sees how much better your colleagues are at hitting their targets; and so on.
The latter particularly happens when there are already a number of homo economicus-like people in the system. If everyone is equally “irrational”, a system can persist in its irrationality for a very long time, but as soon as someone starts gaming it, everyone else gets upset and gradually starts to leave.
As such, considering a homo economicus persona is very important because it focuses you on the central question for all groups of people: What sort of person do you want to be able to thrive here?
Personas with different goals
The big failure mode of economists - the one I think everyone is pointing to when they complain that economists are modelling a version of humanity that doesn’t exist - is that not everyone is going to behave like homo economicus. It’s not even that people are irrational (sometimes they are, sometimes they aren’t), but that they have different goals, different levels of knowledge, and different levels of investment in the system - as per the above, nobody starts out as homo economicus.
But some people you really shouldn’t model as homo economicus, because they’re playing an entirely different game.
I first arrived at the “homo economicus as user persona” model when I read the book Radical Markets. Radical Markets is a very interesting book, full of neat mechanism designs that elegantly solve many important problems. Implementing them would be a disaster, and would result in a beautifully elegant utopian society perfectly designed as an engine of human misery.
Let me give you an example. One of the questions they’re interested in is property tax. How do you know how much to tax a property? It depends on the valuation of the property, but how do you ensure that the valuation is up to date?
Easy, they say! Everyone is responsible for setting their own valuation. You declare a price for your house, your house is taxed based on that price, and if anyone offers you that price you are required to sell it to them.
This perfectly and elegantly aligns incentives - you don’t undervalue your house because then you might sell it for too little, you don’t overvalue your house because then you will pay too much tax. Such a neat design! It solves so many social problems!
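Here’s a toy sketch of the tradeoff the mechanism creates. The tax rate, prices, and single-buyer model are my illustrative assumptions, not numbers from the book:

```python
def annual_cost(declared, true_value, tax_rate, best_offer):
    """Toy model of self-assessed property tax: you pay tax on your
    declared price, and if anyone offers at least that price you must
    sell, losing any value the house had to you beyond what you were paid."""
    tax = tax_rate * declared
    forced_loss = max(true_value - declared, 0) if best_offer >= declared else 0
    return tax + forced_loss

V = 500_000  # what the house is really worth to you
# Undervalue: cheap tax, but a 450k offer takes your 500k house.
lowball = annual_cost(400_000, V, tax_rate=0.02, best_offer=450_000)
# Overvalue: nobody buys, but you pay tax on 700k instead of 500k.
highball = annual_cost(700_000, V, tax_rate=0.02, best_offer=450_000)
# Truthful: you pay tax on the real value, and any sale at that
# price is one you are (by assumption) indifferent to.
honest = annual_cost(500_000, V, tax_rate=0.02, best_offer=450_000)
assert honest < lowball and honest < highball
```

In this simple setup, honest declaration is the cheapest option - which is exactly the alignment the book’s authors are after.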
Also, I hope your abuser isn’t substantially richer than you. If they are, then boy do you have a problem, because they can just keep buying your house out from under you.[2]
One of the user persona types I didn’t mention above is the abuser and their victim - people who are using your system with the intent to harm someone else, or who are themselves at risk of being harmed.[3] Abuser behaviour tends to be “rational” in the sense that they have a simple goal that they are willing to strategically pursue, but that goal is not their own wealth but someone else’s suffering.
You can see similar but less bad behaviours whenever someone enters a system that they’re mostly immune to the incentives of. I once crashed the strawberry economy of an online game because I thought it would be funny and I wanted to see if it would work. This was absolutely a dick move, but I wasn’t actively out to hurt anyone in particular, I just didn’t care much if I did (it was, after all, just a game), and my incentives were deeply misaligned with people who wanted to make money out of strawberries.
The game in question is called Economies of Scale, and is a game about producing, buying, and selling commodities. You make things to turn into other things, and have simulated factories and markets and such. A lot of the basic things you produce are agricultural, because they have the fewest dependencies. Of these, strawberries formed the bedrock of the market and were valued quite highly.
Too highly, I noted, and once I had plenty of cash coming in I thought it would be funny to see what would happen if I just started selling strawberries massively below cost.
The outcome was of course fairly predictable - the market crashed, gutting a significant chunk of newbies’ potential income, because people could no longer sell strawberries at a profit.
Eventually I stopped doing it, and the market corrected to selling strawberries basically at cost instead of their previous somewhat inflated price. The market would have corrected without me eventually, but the problem was that it wasn’t very responsive, because Economies of Scale is not a terribly widely played game, so it was very sensitive to what one mid-sized player felt like doing for the lulz. For all I know, it might have stayed that way forever - people tended to graduate out of the strawberry economy, so the incentives to lower the price were fairly low.
Nevertheless, if I’d felt particularly invested in keeping the price of strawberries unsustainable, I could absolutely have done that. It was a tiny entry in my profit and loss.
You can also see similar behaviours whenever someone has enough power that their individual decisions have a major effect on a market, especially when they switch to playing a different maximisation game. For example, it’s comparatively hard to reason about markets that Elon Musk touches, because everything is so sensitive to what he feels like tweeting today, or how he currently feels about space.
As such, even if they’re merely chaotic rather than malicious, if your system contains people with outsized power you need personas for them too.
Prey personas
Another important kind of persona is what I think of as prey personas (these are different from the targets of abusers) - people whose main role in the system is to “feed” other users. Ideally you want to protect these people from being exploited, although sadly many real-life system designs instead try to figure out how to exploit them.
Classic examples of such personas are gambling addicts (or just naïve gamblers) and day traders in a market. They’re not very good at what they’re doing, and the rational thing for them to do is either to get good or get out, but often they won’t do that and will persist in their foolishness without becoming wise.
From the point of view of their homo economicus predators, however, this is great, because they’re the only viable source of profit in the system.
Gambling is a particularly good example of this model (e.g. betting on horses, although bookies complicate matters here; less so games of skill like poker, where it’s even more complicated), because if everyone is maximally good at gambling and has well-calibrated probabilities, nobody makes money: nobody takes a bet they expect to lose. However, if you have a sufficient supply of suckers around, you can make money by betting accurately against people who don’t.
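The “no money without suckers” point can be put in expected-value terms. A stylised sketch (real betting adds odds, margins, and bookmakers; this pricing model is my simplification):

```python
def ev_of_taking_bet(true_prob, counterparty_prob):
    """Expected profit from selling a $1 binary bet priced at the
    counterparty's believed probability, when the event's true
    probability is true_prob."""
    stake_received = counterparty_prob  # what they'll pay for the bet
    expected_payout = true_prob         # what you expect to pay out
    return stake_received - expected_payout

# Against a well-calibrated opponent there's no money to be made:
assert ev_of_taking_bet(0.5, 0.5) == 0.0
# Against a sucker who thinks a coin flip is 70% likely, your edge
# is exactly the size of their miscalibration:
assert ev_of_taking_bet(0.5, 0.7) > 0
```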
Which leads to why I think of these as prey personas: it ties in to predator/prey population models.
Consider a simple two-species model with wolves and deer. There is a stable equilibrium where wolves eat deer at roughly the rate at which deer are born, but you can also get oscillation: If the deer population explodes, wolves have a glut of food, so more wolves are born, so deer get eaten faster, at some point the wolf population overshoots and the deer start dying off, so now wolves start starving and die off, which means that the deer population can climb back up...
The most important part of this though is the rather banal observation that you can’t have a purely wolf population. It might be better to be a wolf (you don’t get eaten), but the wolves cannot sustain themselves without deer to eat. As such the stable population of wolves is some fraction of the stable population of deer, and an attempt to introduce more wolves will probably lead to wolf die off.
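The oscillation described above is the classic Lotka-Volterra predator/prey dynamic. A minimal sketch, with arbitrary illustrative coefficients of my own choosing:

```python
def step(deer, wolves, dt=0.01):
    """One Euler step of the Lotka-Volterra predator/prey equations.
    Coefficients are arbitrary illustrative values."""
    births = 1.0 * deer                 # deer reproduce on their own
    predation = 0.02 * deer * wolves    # wolves eat deer
    wolf_growth = 0.01 * deer * wolves  # more food, more wolves
    wolf_deaths = 1.0 * wolves          # wolves starve without deer
    return (deer + dt * (births - predation),
            wolves + dt * (wolf_growth - wolf_deaths))

# Away from the equilibrium (here deer=100, wolves=50), the two
# populations chase each other around in cycles.
deer, wolves = 150.0, 40.0
for _ in range(2000):
    deer, wolves = step(deer, wolves)

# The banal-but-important case: with no deer at all, the wolf
# population can only decay. No prey, no predators.
d, w = 0.0, 40.0
for _ in range(100):
    d, w = step(d, w)
assert d == 0.0 and w < 40.0
```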
Gambling is like this too, only with somewhat less death. You have savvy gamblers who make money off the sucker gamblers. It’s much better to be a savvy gambler, but as the ratio of savvy gamblers goes up, the pool of suckers they can profit off is shared between more people, the suckers have a worse time, and so first the sucker population crashes, then the savvy population mostly gets out of the game because it’s no longer worth it, giving the sucker population time to rebound…
The result is a dynamic that partially falsifies my earlier claim that homo economicus tends to become more prevalent over time. In many systems, homo economicus relies on a certain degree of irrationality in the system for participation to be worth it, so there is no long-run, fully economically rational stable state - you need to consider a hybrid system consisting of both homo economicus and their prey.
Taking heterogeneity into account
If there’s a simple take-home message to all of this, it’s that people are complicated, and you can’t design systems or understand group behaviour without taking that into account. In particular, people are complicated in highly heterogeneous ways - I’m complicated, you’re complicated, and some of those complexities are shared and some are very much not.
User personas provide a great, and often neglected, tool for trying to get a handle on some of that complexity. Incentives, as analysed by economists, are another great tool. These two tools work much better together, and you should be using them that way.
Postscript
As well as writing interesting things on the internet, I also do consulting and coaching for software companies who want to make better decisions, get better at developing software as a team, and generally help people develop their soft skills and build better human systems. If that’s you, you can check out my consulting site at consulting.drmaciver.com, or drop me an email to chat about it at david@drmaciver.com.
The cover image for this post is a lightly modified version of Spot the Cow from Keenan Crane’s 3D model repository.
This post started as a conversation on the Overthinking Everything discord server. If you’d like to hang out with a bunch of complicated people who talk about this sort of thing, you can join us there by clicking this invitation link. You can also read more about it in our community guide first if you like.
We’re currently experimenting with the structure a bit and when you join you’ll only have access to the public half of the discord while we figure out onboarding (the influx of new users from Hacker News was a bit painful and we had to take measures).
Finally, if you liked this issue, you are welcome and encouraged to forward it to your friends. If you’re one of those friends it’s been forwarded to and are reading this, I hope you enjoyed it, and encourage you to subscribe to get more of it!
[1] Although how quickly they do this is somewhat questionable. What I found when offering a Vickrey auction for my services was that I was reliably earning less per hour than I could find people willing to pay. I suspect the rate would have crept up over time as more people participated, but I didn’t want to wait, so I just went back to charging an hourly rate!
[2] It’s possible to argue that this isn’t as much of a problem as I think it is, because it results in a substantial transfer of wealth from your abuser to you, and can only be sustained for so long. Additionally, I’m sure that one can patch this out of the system by making modifications. My argument isn’t that the radical markets proposal can’t be rescued, it’s that this is the sort of thing you have to at least consider, and they very much haven’t shown any evidence that they have.
[3] I recommend this post and talk by my friend Alex Chan as a good intro to this sort of design thinking.