
The art of system thinking

For a long time, it felt difficult to properly pin down what exactly I was doing in my PhD. It was too far from the maths I studied, and it was neither computer science nor economics, but had elements of both. The answer might have been hiding in plain sight. My department’s name, Engineering Systems & Design, is really the most apt description.

What does that look like? In a previous post, I argued that Design should be understood in the sense of Economics, implying that our department is a hybrid of Engineering and Economics: engineering as the optimising machine, economics as the oracle that tells you exactly what to optimise.

This is still a bit abstract for a twenty-second pitch at the family Christmas dinner. I can point out that some people here work on water resources, others on the transportation of people and goods, still others on energy markets and allocations… Systems engineering is multidisciplinary by nature, but enumerating different examples always seems to muddle the point rather than shed light on it. Confusion abounds!

This post intends to provide a bit of backstory for system thinking. It is a curated story: rather than a textbook explanation, I will use a wide range of examples to explore how system thinking can be applied, where it fails and how to conceptualise it. Footnotes are at the bottom of the page, but you can also read through without them.

System thinking in practice

Let us start with this assumption, central to system thinking:

Bad systems promote bad outcomes.

Speaking generally, systems are either put in place or emerge in order to allocate resources given a certain input. Road signs and road laws allocate lanes and space to different users, stock markets allocate funds to companies, reservoirs allocate water to utilities, democracies allocate laws, and so on. When the rules of the allocation are not well designed, bad outcomes ensue.

Take a simple example from one of my favourite systems: mobility in cities. Asking cyclists, personal mobility devices (PMDs) and pedestrians to coexist on sidewalks that were never designed for this interaction, as is currently the case in Singapore, promotes bad outcomes. This is a situation I have discussed at length before, and plenty of metrics confirm the point: the number of accidents involving individuals of one group colliding with another is clearly increasing, pedestrians are understandably frustrated by the congestion around them, and cyclists lament the time wasted on the sidewalk when they could zip by if an appropriate protected, dedicated lane were available. Bad outcomes here, i.e. colliding with pedestrians, obstructing the way or moving slower than expected, are not so much a moral choice (“I want to go and annoy pedestrians”) as the result of an improperly designed system (“It is unsafe for me to ride on the road, thus I will cycle on the sidewalk”).

In a bad system, there are also opportunities for agents to “game the system”, i.e. exploit its flaws for their own profit. A hardcore system thinker will say: “You are not gaming the system if it is badly designed, but simply taking it to its logical conclusion of unfair or inefficient allocation.” This argument has been heard in multiple places throughout the relatively short history of blockchain, perhaps the most exciting system to have appeared in recent memory. For instance, when funds were drained from the contracts implementing the DAO on Ethereum using a vulnerability in their code, the debate essentially boiled down to deciding whether this should be treated as a design flaw, with the consequence of hard-forking, or as the natural conclusion of the system’s functioning, in which case there should be no fork (spoiler: there was).

This distinction applies neatly when a system is put in place, or engineered. But it is more often the case that systems somehow emerge from groups of individuals simply going about their business or daily interactions. I will later make some of these ideas more formal, but we have plenty of references to start with.

The Wire, for instance, may be the greatest representation of system thinking applied to social structures. In the words of its creator David Simon, it is “about the American city, and about how we live together. It’s about how institutions have an effect on individuals. Whether one is a cop, a longshoreman, a drug dealer, a politician, a judge or a lawyer, all are ultimately compromised and must contend with whatever institution to which they are committed.” “Institution” can readily be understood as “system” here, as both are self-enforcing and provide a set of rules, whether explicitly (via laws) or implicitly (via a code of honour, traditions or otherwise).

The show turns on its head the idea that some people simply act badly, stressing instead how a stable set of rules naturally promotes bad outcomes. The war on drugs depicted there is a constant backdrop (since “it’s not a war if it never ends”), underscoring the stability of the social structures and institutions at play, which emerged out of centuries of government or decades of abandonment and urban decay.[1]

System thinking vs. individual responsibility

The drug dealer or the cop in The Wire must be seen as consequences of the institutions over which they have limited control, as swimming against the current is typically not rewarded by the system in place — look no further than the tragic ends of Wallace or Frank Sobotka, or the constant hurdles faced by Daniels’ detail.

However, offloading individual responsibility onto the overarching systems and social structures is not always a satisfying answer. First, it raises the question of what to make of our own individuality and agency, and is certainly repulsive to any self-respecting humanist. Second, in the face of abhorrent moral behaviour, how much can we simply attribute to following the system in place? This question was most critically raised during the post-war trials of Nazi officials, from Nuremberg to Jerusalem: faced with what she called the “banality of evil” at the Eichmann trial, Arendt attempted to analyse the moral and legal implications of agents following and implementing policy.[2]

An additional criticism levelled at system thinking concerns how to even define the agency of users within the system. It is often necessary to model users as self-interested players (in the game-theoretic sense of the word), interacting with each other and endowed with a certain degree of rationality. As a cyclist, I am interested in minimising my own travel time and maximising my safety; and though it is possible to model the discomfort I impose on other users as part of my decision factors, doing so leads to further thorny questions down the road and is sometimes best left out of a model for the sake of tractability and interpretability.[3]

The question of rationality is even less settled, and much work in behavioural economics disputes its most critical assumptions. System thinking starts breaking down if the rules of the system no longer govern the behaviour of the agents interacting within it: what if we were all terrible navigators and had no idea how to pick the fastest route? Or, for a more realistic example, what if we had biased perceptions of our chances of winning at the stock market? (spoiler: we do) A silver lining: it is possible to design systems that do not assume the best of us (or the worst, when you think about it). And if rationality cannot be assumed because agents do not understand their action space, e.g. if they cannot find the best route for themselves, it is a strong hint that a better system design is out there.

Conceptual nuggets for system thinking

We have strayed far from the starting point of our conversation, which was, after all, what to make of systems engineering. But hopefully the detours have allowed us to adopt a critical view of system thinking, which is imperative if we want to formalise it properly. Here I will go through an example adapted from the classical Pigou network, which has motivated much of my work so far. The example illustrates some of the principles discussed above in a visual metaphor.

Let’s look at the following picture:

[Image: a desire line worn across a lawn, with a yellow arrow marking the paved path and a green arrow marking the shortcut through the grass. Caption: “Price of Anarchy. Photoshop skillz > 100”]

This is the physical embodiment of the forces at play in systems engineering. In urban design terms, this situation is called a desire line: an unplanned path worn across a patch of vegetation, used as a shorter route than the planned section of sidewalk or road. As more people take the shorter route, their feet stomping on the grass leave a clear trail.

From a user perspective, it is a success: the user has found a shortcut to reach her destination on the other side of the lawn. This situation is an equilibrium: none of the users walking on the grass would prefer to deviate to the path with the yellow arrow. But each user also creates negative externalities — here, stomping on grass.[4]

Assume that the system designer wants to minimise both the time people take to reach their destination and the amount of trampled lawn. Assume also that if only half of the users took the longer paved path instead of the shortcut, the reduction in negative externalities would compensate for the longer travel times of those users, perhaps because the grass would grow back. Given these assumptions, the system designer would prefer to route some users via the yellow path and some via the green path, even though it would still be faster for every user of the yellow path to cut through the lawn.

We then have a clear difference between the realised equilibrium (everyone on the grass) and the system optimum (a.k.a. social optimum: spreading people between the pavement and the grass); the gap between the two is measured by the Price of Anarchy.[5] Implementing a system such that equilibrium and social optimum coincide is a tricky problem in general, and is part of mechanism design.
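To make this concrete, here is a minimal sketch in Python, under illustrative assumptions of my own: the lawn as a Pigou-style network where the grass shortcut has latency l(x) = x (a fraction x of users slows everyone on it down) and the paved path has constant latency 1. None of these numbers come from the picture; they are the textbook Pigou example.

```python
# Pigou-style model of the lawn: a unit mass of users splits between
# - the grass shortcut, with latency l(x) = x, where x is the fraction of
#   users taking it (crowding and trampling slow everyone down), and
# - the paved path, with constant latency 1.

def social_cost(x: float) -> float:
    """Average travel time when a fraction x of users takes the shortcut."""
    return x * x + (1 - x) * 1.0

# Equilibrium: whenever x < 1, the shortcut is strictly faster than the
# pavement, so users keep switching to it; the only stable point is x = 1.
equilibrium_cost = social_cost(1.0)  # everyone on the grass -> cost 1.0

# Social optimum: minimising x^2 + (1 - x) over [0, 1] gives x = 1/2.
optimal_cost = social_cost(0.5)      # half on each path -> cost 0.75

print(f"equilibrium cost: {equilibrium_cost}")                 # 1.0
print(f"optimal cost:     {optimal_cost}")                     # 0.75
print(f"price of anarchy: {equilibrium_cost / optimal_cost}")  # 1.333... = 4/3
```

The equilibrium here is 4/3 times as costly as the optimum, which happens to be the worst possible ratio for linear latencies.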

Our desire line was a toy example, but the exact same logic transposes to much larger systems, including traffic and congestion. It is generally not the case that the individual routing decisions of a road network’s users add up to the system optimum, i.e. the lowest amount of externalities in the system. Here I impose a negative externality on all the commuters choosing the same road as I do, because I create one extra unit of congestion.

Would it be better for the population as a whole to spread between two routes A and B? Yes. But would you individually decide to do so if you were trying to minimise your own time and route A were always faster? Probably not, unless the system had extra incentives or penalties for you to do so: tolls are usually the answer given by mechanism design here.[6] More generally, the problem of incentivising users to make private decisions that add up to the public interest is much of what motivates systems engineering.
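To sketch how a toll does this, staying with the hypothetical Pigou-style model above: marginal-cost pricing, the textbook answer, charges each shortcut user the delay they impose on everyone else. This is a standard construction, not something specific to this post.

```python
# Marginal-cost toll on the same model: each shortcut user pays for the
# delay they impose on the others, toll(x) = x * l'(x) = x when l(x) = x.
# A shortcut user then perceives cost l(x) + toll(x) = 2x, versus the
# constant cost 1 of the paved path.

def perceived_shortcut_cost(x: float) -> float:
    """Travel time plus toll for a shortcut user, given a fraction x on it."""
    return 2 * x

# Selfish users equalise perceived costs across paths: 2x = 1, so x = 1/2,
# exactly the split that minimises the social cost x^2 + (1 - x).
x_eq = 0.5
assert perceived_shortcut_cost(x_eq) == 1.0  # indifferent between the paths
print(f"tolled equilibrium split: {x_eq}")               # 0.5
print(f"average travel time: {x_eq ** 2 + (1 - x_eq)}")  # 0.75, the optimum
```

Note that the toll changes no physical travel time; it only changes what users perceive, realigning private incentives with the social optimum.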

Putting it all together

System thinking’s power comes from looking at the interactions of agents involved in a choice or a distribution of resources. In some places, it allows for a neutral, goal-driven interpretation of the world (“I will cut through the grass because it is faster for me to do so”), useful for operational insights. But keeping its limits in mind is in fact key to designing better systems.

More importantly, system thinking forces us to be critical of the assumptions and implementations of the systems around us. In popular opinion, politics and the media, there is a plethora of rhetoric aimed at pushing the blame onto users or agents within a dysfunctional system, while the impact of system design on daily behaviour and decisions is more rarely considered. The reasons are multiple, of course, and it is not always hidden interests in safeguarding a bad system that prevail. A better understanding of system thinking can help shift the focus away from blaming bad system outcomes and towards improving bad system designs. Mechanism design is a hot topic at the moment with the nascent blockchain and cryptoeconomics explorations, and it has the potential to provide such a shift.

[1] As an aside, there is a point to be made that emergent systems always seem harder to topple than those willingly put in place. This might be the result of an “evolution”-like argument of survival of the fittest, whereby a system is refined through repeated interactions.

[2] There also seems to be such a split between the structuralists and their post-structuralist counterparts, the latter rejecting the rigidity of analysing human behaviour and actions purely through the prism of systems and structures. Not knowing much more about it, I will leave this as a sidenote and hope that better informed people can tell me more about it.

[3] Even models featuring some altruistic factors in the decision are based on selfish agents, meaning that they maximise their own utility, which happens to include the welfare of other users.

[4] Please note that I enjoy walking on grass and believe it is a heresy to try and route people away from it 😉 but for the sake of example let’s assume our benevolent social planner does not like that at all.

[5] More precisely, the Price of Anarchy is the ratio of the cost of the worst equilibrium (if many exist) to the cost of the socially optimal allocation.
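In symbols, writing $C$ for the social cost of an allocation (notation introduced here for clarity):

```latex
\mathrm{PoA} = \frac{\max_{e \,\in\, \text{equilibria}} C(e)}{\min_{s \,\in\, \text{allocations}} C(s)} \geq 1
```

For the desire line above, this gives $1 / 0.75 = 4/3$.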

[6] At least until we all board autonomous vehicles centrally routed by an omniscient platform…
