Scale: The only way to change the world

Complex systems—such as organizations, governments, ecosystems, planets, or galaxies—tend to exhibit different properties and, consequently, to behave differently depending on their current relative size. Put simply, things that happen at a smaller scale may happen very differently—or not at all—at a larger scale.

If scale didn’t matter, then all relationships would be linear. More of one thing would mean more of another—infinitely. For instance, if rain is good, we should want infinite rain. If donuts are good, we should eat infinite donuts. Such reasoning is clearly flawed.

In reality, having more (or less) of something is not always better (or worse); rather, which way you should go depends on where you already are.1 We want more rain during a drought, but not during a flood. The first donut is delicious; the twelfth might cause problems. This is called nonlinear reasoning—a critical tool for any individual hoping to be less wrong.

Understanding the dynamics of nonlinear effects at scale can help us build more successful businesses, programs, and policies—and avoid catastrophe.

The nonlinearities of scale

Infinitely linear (“constant”) returns to scale are rare. More often, growth involves increasing returns or diminishing returns—or likely both, given time.

In business, key examples of increasing returns include economies of scale, whereby the average cost of all units declines as production volume increases (e.g., Costco, Tesla), as well as network economies, where each additional user makes the whole network more valuable to all other users (e.g., TikTok, Gmail).
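The intuition behind network economies can be made concrete with a Metcalfe-style count of possible user-to-user connections, which grows roughly quadratically with users (my illustration of the general principle, not a claim about any specific platform's value):

```python
# Count of possible pairwise connections in a network of n users.
# Each new user adds a link to every existing user, so value-relevant
# connections grow ~quadratically while users grow linearly.

def possible_connections(users):
    """Number of distinct user pairs: n * (n - 1) / 2."""
    return users * (users - 1) // 2

print(possible_connections(10))     # 45
print(possible_connections(100))    # 4950
print(possible_connections(1000))   # 499500
```

Going from 10 to 1,000 users (100x) multiplies the possible connections by more than 10,000x, which is why each additional user can make the network more valuable to everyone already on it.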

Critically, most real-world results—from donuts to rainfall to business—eventually suffer from diminishing returns to scale. With economies of scale, for instance, a growing company cannot decrease its unit costs forever. Beyond a certain size, diminishing returns tip into “diseconomies of scale,” in which a firm’s unit costs increase. Diseconomies might emerge due to managerial limitations, swelling bureaucracy, or increasing scarcity (and thus higher costs) of key inputs, such as raw materials or specialized talent.
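A toy cost model makes the U-shaped pattern visible (the constants here are invented for illustration, not drawn from any real firm): average cost falls as fixed costs amortize over more units, then rises as congestion-style costs compound with size.

```python
# Toy model of economies and diseconomies of scale (illustrative constants).
# Average cost = amortized fixed costs + variable cost + a congestion term
# that grows with volume (managerial overhead, scarce inputs, bureaucracy).

def average_cost(units, fixed=1_000_000, variable=5.0, congestion=1e-6):
    """Average cost per unit at a given production volume."""
    return fixed / units + variable + congestion * units

print(average_cost(1_000))          # small scale: fixed costs dominate
print(average_cost(1_000_000))      # near the efficient scale: lowest unit cost
print(average_cost(1_000_000_000))  # huge scale: diseconomies dominate
```

The middle volume is cheaper per unit than either extreme: the same firm exhibits increasing returns on the way up and diminishing (then negative) returns past its efficient scale.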

So, we’ve established that things change at scale. Let’s explore how to use these principles to build our own ventures.

My idea is scalable… right?

If we care at all about the size of our impact, we need to be able to assess the potential scalability of an idea, policy, or program.

“Put simply: you can only change the world at scale.”

John List, The Voltage Effect (2022, pg. 9)

Successfully scaling requires taking an idea from a small group (of customers, units, employees, etc.) to a much larger group, in a healthy and sustainable way.

Fortunately, in his fantastic book on scaling, The Voltage Effect, economist John List proposes five scalability “vital signs” to analyze:

Vital sign #1: False positives

Early evidence may convince us that something is true, when in fact it isn’t—whether due to statistical errors or to human bias.

For example, based on some tentatively encouraging early research, the “D.A.R.E.” program, a zero-tolerance anti-drug campaign championed by the Reagan administration in the 1980s, was scaled to 43 million children over 24 years. Eager politicians jumped aboard to proclaim themselves as pro-kids and pro-cops. The problem: every major study on D.A.R.E. found that the program didn’t actually work, and in some cases even increased drug use. Unsurprisingly, the program lost federal funding in 1998.2 The false positive signal that convinced leaders to prematurely scale D.A.R.E. nationwide wasted billions of dollars and decades of time and effort by students, administrators, scientists, and legislators.

Because we humans, regardless of intelligence level, often fail to adequately critique our own ideas, we are prone to confirmation bias. This helps explain the occurrence of false positives, as we simply avoid information that challenges our preexisting or preferred beliefs.3 We cannot afford to make this error at scale. We must bring a critical perspective to our own ideas and a healthy skepticism towards whether positive early results are likely to be replicated at scale.

Vital sign #2: Knowing your audience

There is always a risk that the individuals who participate in a pilot study or survey will behave in ways particular to that group. Inevitably, when we scale something to new groups, different people will behave differently.4

If a social media platform such as Twitter, for example, launches an experimental new feature to a small subset (say, 1%) of users to assess the feature’s potential, it needs to be keenly aware of the potential for selection effects—the risk that the sampled users are not representative of the full user population (i.e., that the sample is not truly randomized). For instance, users of the new feature might disproportionately be die-hard Twitter users, or they might be younger and more tech-savvy than the broader user base. Such biases could skew the pilot results to be overly optimistic, giving an illusion of scalability.
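The arithmetic of such a selection effect is easy to sketch. In this hypothetical (the rates and mix are invented for illustration), "power users" opt into the pilot at a far higher rate than they occur in the full user base:

```python
# Illustrative selection-effect arithmetic (hypothetical rates, not real data).
# A pilot dominated by power users overstates engagement at full scale.

def expected_engagement(share_power_users, power_rate=0.9, typical_rate=0.2):
    """Weighted average engagement for a mix of power users and typical users."""
    return share_power_users * power_rate + (1 - share_power_users) * typical_rate

pilot = expected_engagement(0.80)       # die-hard users opt into the pilot
population = expected_engagement(0.10)  # the full base is mostly typical users
print(round(pilot, 2), round(population, 2))  # 0.76 0.27
```

The feature itself is identical in both cases; only the audience changed. A pilot reading of 76% engagement would collapse to 27% at scale purely because the sample was not representative.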

We must understand exactly who our idea is for in order to assess its potential for scalability. The characteristics of our early adopters might be very different from our target market at scale.

Vital sign #3: The chef or the ingredients

For an idea to scale successfully, we need to identify our true performance drivers and do everything we can to cultivate and protect them. In particular, is it the “chefs” (indispensable individual humans) or the “ingredients” (the product or idea itself)?

  1. Chefs — Simply put, humans with unique skills don’t scale. Individual-reliant ideas have a ceiling to their scalability. If key people are overloaded, an organization can atrophy. This is why elite chefs focus on quality and reputation for one restaurant (or a few) rather than expanding to many locations, where their unique talents would be hard to replicate.
  2. Ingredients — We must determine what elements we cannot live without—and whether they are scalable. For instance, if quality is a critical ingredient, quality standards cannot decline as we grow. Or if faithfulness to a particular mission is a must-have, we cannot allow drift from the original intent at scale. This “program drift” plagues governmental and philanthropic programs in particular, often due to multiple funding sources each pushing their own agendas.5

Vital sign #4: Externalities (“spillovers”)

The larger the scale, the greater the potential for unintended consequences, or “externalities.” When many individual decisions accumulate and interact, equilibrium may be disrupted and spillovers are inevitable—potentially working for or against our intended outcome.

As a negative example of spillovers, the landmark 1968 federal law requiring seatbelts to be installed in all new cars led drivers, feeling safer, to take more risks behind the wheel—offsetting much of the safety gain from wearing seatbelts.

In contrast, a positive example is achieving “herd immunity” from a disease through mass vaccinations. At scale, when a critical portion of the population receives a vaccine, positive spillovers emerge as unvaccinated people still benefit indirectly since a substantial share of the people around them are now immune.6
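The herd-immunity spillover follows a standard epidemiological rule of thumb (a textbook simplification, not taken from the book): if each infected person would otherwise infect R0 others, then roughly a 1 - 1/R0 share of the population must be immune for outbreaks to fizzle out.

```python
# Standard herd-immunity threshold: 1 - 1/R0 (a common simplification).
# Above this immune share, each case infects fewer than one other person
# on average, so outbreaks shrink instead of grow.

def herd_immunity_threshold(r0):
    """Share of the population that must be immune to halt sustained spread."""
    return 1 - 1 / r0

for r0 in (1.5, 3, 12):
    print(r0, round(herd_immunity_threshold(r0), 3))
```

The more contagious the disease (higher R0), the higher the threshold, and hence the larger the scale a vaccination program must reach before its positive spillovers kick in.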

Positive spillovers scale incredibly well. As with herd immunity, achieving spillovers may even become the objective. Equally powerful, negative side effects can be worse than the original symptoms. As best we can, we should think about what second- and third-order consequences might emerge if we scale our idea, and be prepared to deal with unintended consequences.

Vital sign #5: The cost trap

Regardless of how good an idea is, if the returns on your product don’t exceed the cost, or if the benefits delivered by your program don’t support the expenses, the idea is not scalable. As we learned above with economies and diseconomies of scale, our cost profile can change dramatically at scale—for better or for worse.7

***

The overarching lesson is that when we are dealing with complex systems, we should always establish the scale at which we are analyzing the system—in “orders of magnitude,” at least (hundreds, thousands, millions, etc.). Any insight that applies at one scale may be different or even opposite at another scale. And before we invest substantial time and resources to scaling our ideas, we should critically evaluate whether the idea has the vital signs of success.

Leverage Points: The secrets to systemic change

Leverage points are places within a complex system—such as a corporation, an economy, a city, or an ecosystem—in which a small shift in input force can produce amplified changes in output force.1 When concentrated on the most critical areas for improvement, even small shifts in our efforts can cause large, durable changes.

If we care about maximizing our impact in our companies, communities, or personal lives, leverage is one of the most valuable models to understand.

Taking the systems perspective

Unfortunately, we live in an event-oriented society, in which we focus mainly on our day-to-day experiences. Things happen, we react, then we repeat tomorrow. Reacting to events is the lowest-leverage way to instigate change. It oversimplifies the world into a series of linear, cause-effect events, while ignoring the deeper causes underlying our experiences. In reality, our world is a web of interwoven relationships and feedback loops.

Consider complex societal problems such as homelessness or income inequality. Seemingly plausible solutions to these issues can produce highly unpredictable results and unwanted side effects.

Adopting a systems perspective, on the other hand, allows us to unearth the highest-leverage places for intervention. By looking beyond individual events and studying the higher-level patterns of behavior of the system, we can identify and act on the system structures creating those patterns.2

Adapted from Introduction to Systems Thinking (Kim, 1999, pg. 4)

Tackling homelessness

Consider the problem of homelessness. In order for individuals to qualify for permanent housing support, traditional policies have required homeless people to first address certain issues that may have led to the episode of homelessness—for example, by passing drug tests or completing mental health treatment.

These contingencies often lead to affected individuals getting entangled in bureaucracy and trapped in a vicious cycle where becoming homeless actually exacerbates the issues that caused the episode in the first place, such as unemployment, substance abuse, or medical issues—making it harder and harder to overcome. Simply put, these are low-leverage solutions.

However, modern policies have begun to adopt so-called “Housing First” principles, which enact a simple rule: offer basic permanent housing as quickly as possible to homeless people, followed by additional supportive services. Early research suggests that Housing First policies have helped improve housing stability, reduce criminal re-convictions, and decrease ER visits—all while saving money.3 That is leverage.

The leverage points checklist

Leverage points are not always intuitive, and are ripe for misuse. The checklist below provides a useful aid to identifying them (ordered from least to most impactful):4,5

9. 🔢 Numbers & Parameters

Sadly, the majority of our attention goes to numbers, such as tax rates, revenue goals, subsidies, budget deficits, or minimum wage. Typically, they don’t provide much leverage on their own, because they rarely change behavior. Truly high-leverage parameters, such as interest rates, are much rarer than they seem.

8. 🏗 Physical Interventions

Physical structures that are poorly designed or outdated can be potential leverage points, but changing them tends to be difficult and slow relative to other, “intangible” interventions.

7. 🔁 Balancing (Negative) Feedback Loops

Negative feedback mechanisms can help systems adjust towards equilibrium, such as how the Federal Reserve adjusts interest rates to promote economic stability, or how companies use employee performance reviews to highlight growth areas and take corrective action if needed. Affecting the strength of these balancing forces can create leverage by reining in other forces.

6. 📈 Reinforcing (Positive) Feedback Loops

Positive feedback mechanisms can fuel virtuous growth cycles (as with compound interest on an investment portfolio), but left unchecked they will create problems (such as exploding inequality).
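Compound interest is the cleanest example of a reinforcing loop: the stock feeds its own growth. A quick sketch, assuming an illustrative 7% annual return:

```python
# Reinforcing feedback loop: compound interest (illustrative 7% annual return).
# Each year's growth is added to the stock, so next year's growth is larger.

balance = 1000.0
for year in range(30):
    balance *= 1.07  # the balance feeds its own growth

print(round(balance, 2))  # roughly 7.6x the starting amount after 30 years
```

Left alone, the loop only accelerates; nothing inside it ever pushes back. That is exactly why unchecked reinforcing loops (speculation, inequality, viral spread) eventually create problems unless some balancing force intervenes.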

Weakening reinforcing loops usually provides more leverage than strengthening balancing loops. For example, the “Housing First” policies discussed above aim to break the self-perpetuating loop of homelessness before it can take hold, simply by giving people housing immediately.

5. 📲 Information Flows

Simply by improving the efficiency with which useful information can feed back to decision makers, we can often create tons of leverage by enabling better and more timely decisions. Poor quality or slow information can undermine individuals and organizations of all kinds.

4. 📜 Rules

Whoever controls the “carrots and sticks” in a system can exert massive influence. A country’s constitution, a legal system, and a company’s employee compensation plan are all potential high-leverage rules.

For example, salespeople traditionally receive a commission for every deal they close. These schemes provide strong incentives to do big deals and as many as possible. Recognizing that these incentives can fail to promote good post-sale customer support and breed unhealthy internal competition, some executives are experimenting with “vested commissions,” which are paid out over time to sales and customer success employees alike. Such rules can help foster a culture based on relationships and long-term customer success, rather than on transactions.6

3. 👩🏽‍🔬 Experimentation

Evolutionary forces are remarkably powerful. In nature, genetic variation and selection enable species to develop incredible adaptations; similarly, companies or countries that are able to innovate and evolve will become more adaptable and resilient.

A culture of experimentation helps explain the success of Google, Amazon, and many others. Many firms fail to innovate simply due to an intolerance for failure. The vast majority of experiments will flop; the challenge is in viewing those failures as opportunities for improvement and learning instead of as wasteful side projects.7

2. 🎯 Goals

Every system has a goal, though not always an obvious one. All companies, viruses, and populations share the goal to grow—a goal which becomes dangerous if left unchecked. Change the goal, and you change potentially everything about the system.

1. 💡 Ideas

Our shared ideas (or “mental models”) provide the foundation upon which we interpret all our experiences. These ideas may be deeply embedded, but there is nothing inherently expensive or slow about changing them. For high-leverage impact, we must be willing to challenge existing ideas. Step back, take an outsider perspective, expose the hidden assumptions in our ideas, and reveal their weaknesses or contradictions.

Remember: all knowledge is tentative and conjectural. As with Einstein supplanting Newton, our best theories can be toppled at any time by newer, better ones. Since the world is always capable of surprising us, the best approach is to keep our minds open!

Private equity, lords of leverage

The business world offers great real-world applications of leverage, notably in the private equity (“PE”) industry, whose whole operating model revolves around applying principles of leverage.

PE firms seek to buy out underperforming companies, turn them around, and sell them for a large profit. They immediately identify and act on key system intervention points. For instance, they promptly change the goal of the companies they acquire: generate more cash flow. By stressing 2-3 success levers and concentrating efforts on them, they instigate reinforcing feedback loops. They use debt, which not only amplifies financial returns but also acts as a balancing loop by imposing cash-flow discipline on management. Finally, they often change shared ideas by replacing prominent executives.8

The success of this systematic approach is well-established. A 2020 study found that US buyout funds have outperformed the public stock market in nearly every year since 1991.9

***

While PE is just one example, the overarching lesson across economic, social, and political systems is that we can drastically increase our impact if we pause, adopt a systems-level view, and assess what points of intervention will most effectively push us towards our goals.

Dynamic Equilibrium: Balance, wobbling on the edge of chaos

Dynamic equilibrium describes the balancing cycle that occurs between two phases coexisting on the edge of a phase transition (in so-called “phase separation”), such that neither phase overwhelms the other. Dynamic equilibrium is the goal-state of balancing (negative) feedback loops, which counteract change in an effort to maintain stability.

Consider how a thermostat maintains the temperature of a room at a desired level, constantly rebalancing as the room conditions change.

Dynamic equilibrium exists not when the system is at rest, but rather when its inflows and outflows roughly offset each other, causing the level or size of some stock (such as the room temperature) to remain within a tolerable target value or range.1 Despite constant change, the system maintains a tentative balance.
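A minimal thermostat sketch shows a balancing loop holding a stock near a target despite a constant disturbance (the gain, heat-loss rate, and temperatures below are made-up constants for illustration):

```python
# Minimal balancing-loop sketch: a thermostat with simple proportional control.
# Each step, the heater pushes the room toward the target while heat
# constantly leaks toward the colder outside air.

def simulate(target=20.0, temp=10.0, gain=0.3, leak=0.1, outside=5.0, steps=100):
    """Return the room temperature after `steps` rounds of heating and leaking."""
    for _ in range(steps):
        temp += gain * (target - temp)  # balancing loop: counteracts the gap
        temp -= leak * (temp - outside)  # disturbance: heat loss to the outside
    return temp

print(round(simulate(), 2))  # stabilizes below the 20.0 target
```

Note that the room never sits exactly at the target: the loop settles where heating and leaking offset each other. The stability is dynamic, not static, because both flows keep running the whole time.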

This model has broad applications for systems across disciplines, including in biology, economics, physics, and business. Because most of what we do takes place in complex systems, we can improve our decision making by understanding the forces determining whether a system is in equilibrium, and how small changes in those forces could create enormous changes in system behavior.

Perfectly balanced, as all things should be

Imagine a bathtub in three different stages:

  1. “Static equilibrium” — There is some water in the tub, but the faucet is OFF and the drain is CLOSED. There is stability only because nothing happens.
  2. Not in equilibrium — If we leave the drain CLOSED but turn the faucet ON, the water level will begin to rise. We are now in a phase of growth, not equilibrium, since inflows exceed outflows. Without intervention, the tub will eventually overflow (another phase transition).
  3. “Dynamic equilibrium” — If we leave the faucet ON and OPEN the drain such that water is draining out at the same rate as it is flowing in, the water level will not change. The tub is now in a “dynamic equilibrium”; despite the inflows and outflows, the tub will neither empty nor overflow.
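The three bathtub stages can be sketched as a simple stock-and-flow model (the rates and starting level are illustrative):

```python
# Stock-and-flow sketch of the bathtub: the water level is a stock that
# changes by (inflow - outflow) each step, floored at an empty tub.

def water_level(inflow, outflow, level=50.0, steps=10):
    """Return the water level after `steps` ticks of constant flows."""
    for _ in range(steps):
        level = max(0.0, level + inflow - outflow)
    return level

print(water_level(0, 0))  # static equilibrium: nothing happens -> 50.0
print(water_level(5, 0))  # growth: inflow only -> level keeps rising
print(water_level(5, 5))  # dynamic equilibrium: flows offset -> 50.0
```

The first and third cases end at the same level, but for opposite reasons: one is frozen, the other is in constant motion with perfectly offsetting flows.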

We observe these dynamics in all types of complex systems, such as natural ecosystems, economies, businesses, or the human body—any system maintaining a general balance between its inflows and outflows.

In biology, balancing feedback manifests in all life forms through homeostatic behaviors, which counteract any change that moves us away from optimal functioning. For instance, our bodies induce specific, automatic responses to regulate our body temperature, blood sugar levels, and fluid balances.

Humans seeking to create or preserve balance have invented remarkable dynamic equilibrium-maintaining technologies, such as thermostats, autopilot, cruise control, and process control systems in manufacturing.

Delicate balance, at best

Because complex systems (companies, forests, economies, etc.) often involve several simultaneous and competing feedback loops, system behavior will be determined by the loops that dominate. In a dynamic equilibrium, these loops are equally matched. However, systems can experience disruptive and unpredictable shifting dominance if the relative strengths of these loops change (e.g., if we turn the faucet up even higher on a tub that was in equilibrium).2

Near equilibrium, systems generally respond in a more predictable, linear fashion to changes in their environment. But when systems deviate from equilibrium, small shifts in the environment can produce large, nonlinear changes in the system—changes which commonly follow a power-law distribution.3

Consider the field of economics, where periods of apparent stability can quickly transition to unstable “boom” or “bust” cycles if various reinforcing feedback loops start amplifying one another. Bubbles may emerge from a mash-up of high consumer and business confidence, greed and speculation, low interest rates, and increasing asset prices. Similarly, bubbles can “pop” if, for example, an external shock (such as a pandemic or a war) triggers fear, reducing confidence and leading to business contraction, sparking market sell-offs, leading to more fear, and so on.

It is up to a portfolio of negative feedback loops to help reestablish economic equilibrium over the long-term. The free movement of prices is a negative feedback loop that helps constantly rebalance supply and demand. The Federal Reserve possesses tools of negative feedback, such as manipulating interest rates or the money supply, in order to tame business cycles. The government could also change tax rates or implement relief packages.

The key lesson is that we cannot become complacent just because the economy, our relationships, or an organization is stable at the moment. Slight shifts in the forces at play can tip the scales toward drastic change!

A recipe for innovation

In his fantastic book on nurturing innovation, Loonshots, physicist Safi Bahcall argues that dynamic equilibrium is one of the critical factors needed to enable technological breakthroughs.

While our “artists” (who work on research and development) are obviously critical to innovation, so too are our “soldiers” (who work on franchises and help bring those breakthroughs to market). In order for organizations to nurture new bets, they must:

  1. Separate artists and soldiers to give raw ideas the breathing room they need to evolve and improve (phase separation); and,
  2. Enable seamless exchange between the two groups to bring innovations to life (dynamic equilibrium).4

Bahcall gives the incredible example of military engineer Vannevar Bush. During World War I, Bush observed that poor cooperation between the scientific community and the culturally rigid military was putting US military prowess at risk of falling behind. He proposed a new organization, the Office of Scientific Research and Development (OSRD), which allowed the military’s research and development efforts to remain separate while staying connected to the military through a seamless interchange.

The OSRD system was able to generate incredible breakthroughs with remarkable efficiency. Its achievements include radar (which helped win the war); work on penicillin, malaria, and tetanus (which helped cut deaths from infectious disease among soldiers twentyfold); plasma transfusion (which saved thousands of lives on the battlefield); and—above all—nuclear fission (which laid the groundwork for the development of nuclear weapons).5

***

Dynamic equilibrium offers an explanation for why complex systems can exist in periods of relative stability, despite being in constant flux. It also explains why these periods of balance are always vulnerable. We cannot afford to assume that stable things will remain so. Subtle oscillations between the feedback loops can cascade into major system changes—whether in our company, our marriage, the economy, or our bathtub.

Phase Transitions: Uncovering the hidden tipping points that change everything

A phase transition is the process of change between different states of a system or substance, which occurs when one or more of the system’s control parameters crosses a “critical” threshold.

We tend to take stability for granted, leading us to be caught off-guard when the ground shifts beneath our feet. By better understanding the dynamics of phase transitions, we can learn to anticipate change and manage it to our advantage.

From stability to phase change

As a simple example, consider an ice cube (a solid), whose key control parameter is temperature. If we apply heat to the ice, the temperature will rise, and the frozen water molecules being held in rigid formation by binding forces will begin to scatter as the water goes through phase transitions of melting (into liquid) and eventually vaporizing (into gas).

Between each phase, there is a range of temperatures in which the state of the system remains stable. It is only once its temperature crosses certain critical thresholds (specifically, 0°C and 100°C) that it enters a phase transition.

In fact, this logic helps explain changes in the “phases of matter” (solid, liquid, gas) for all kinds of physical substances as their temperatures fluctuate.1
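The control-parameter idea can be captured in a few lines: at standard pressure, the phase of water is a function of a single number, its temperature, with sharp thresholds rather than gradual blending.

```python
# Phase of water as a function of its control parameter: temperature in °C
# (assuming standard atmospheric pressure). Behavior changes abruptly at
# the critical thresholds, not gradually.

def phase_of_water(temp_c):
    """Return the phase of water at the given temperature (1 atm)."""
    if temp_c < 0:
        return "solid"
    if temp_c < 100:
        return "liquid"
    return "gas"

print(phase_of_water(-5), phase_of_water(20), phase_of_water(120))
```

A one-degree change from 20°C to 21°C does nothing; the same one-degree change from -0.5°C to +0.5°C transforms the substance entirely. That is the signature of a threshold-governed system.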

For us, the real value of the phase transition concept lies in its applicability not only to physical systems (changes in phases of matter), but also to social systems (changes in phases of behavior). In both types, the whole is not only more than the sum of its parts, but it is very different from its parts.

In complex systems, we can’t analyze one component and predict how the whole system will behave, whether it’s one water molecule in a boiling pot, one employee in a company, etc. In each case, we need to consider the system as a whole—its collective behaviors, including the control parameters that can tip it into unpredictable phase transitions.

Avalanches of peace

Imagine dropping grains of sand, one by one, onto a countertop. A pile will gradually form, and its slope (the key control parameter) will increase. For a time, each additional grain has minimal effect; the pile remains approximately in equilibrium.

Eventually, however, the pile’s slope will increase to an unstable “critical” threshold, beyond which the next grain may cause an avalanche (a type of phase transition).

Near the critical point, there is no way to tell whether the next grain will cause an avalanche, or how big that avalanche will be. All we know is that the probability of an avalanche is much higher beyond the threshold, and that avalanches of any size are possible, though smaller avalanches will happen much more frequently.

Through a series of avalanches, each of which widens the base of the pile, the sandpile (an inanimate complex system) “adapts” itself to maintain overall stability!

Research has observed the stabilizing role of frequent smaller “avalanches” in a variety of systems, including the extinction of species in nature, price bubbles and bursts in financial markets, traffic jams, forest fires, and earthquakes relieving pressure from grinding tectonic plates.2 These systems unconsciously “self-organize” towards a critical state, beyond which they undergo phase transitions that help preserve equilibrium, over time.
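The sandpile story comes from the Bak-Tang-Wiesenfeld model of self-organized criticality, which is simple enough to simulate directly (the grid size and drop count below are arbitrary):

```python
# A tiny Bak-Tang-Wiesenfeld sandpile. Grains land on random sites; any site
# holding 4+ grains "topples," shedding one grain to each neighbor (grains at
# the edges fall off the pile). An avalanche's size = number of topples.
import random

def drop_grains(n_drops=2000, size=11, seed=0):
    """Drop grains one by one and record the avalanche size for each drop."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(n_drops):
        r, c = rng.randrange(size), rng.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4  # the site topples
            topples += 1
            if grid[i][j] >= 4:
                unstable.append((i, j))  # may need to topple again
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = drop_grains()
small = sum(1 for s in sizes if 1 <= s <= 3)
large = sum(1 for s in sizes if s >= 20)
print(small, large)  # frequent small avalanches, rare large ones
```

Run it and two properties of the text appear on their own: the pile drives itself to a critical state where avalanches of many sizes occur, and small avalanches vastly outnumber large ones, continually relieving stress so the pile as a whole stays stable.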

Growing to death

For those of us who work in organizations, the science of phase transitions provides insight into how to nurture innovation, and how to avoid destructive breakpoints.

Just as avalanches occur with a steep enough pile, high-performing teams in every creative industry unexpectedly shift as they grow over time, often to their own detriment. Consider the fate of former industry titans who, once on top, failed to adapt to a shifting technological landscape—Pan Am, Nokia, Kodak, Blockbuster, and so on.

“Growth will break your company.”

Tony Fadell, Build (2022, pg. 242)

Team size is a key control parameter. With small teams (up to ~15 people), every member’s stake in the results of the project is very high. There’s no need for management. Communication happens naturally. Up to 40-50 people, some silos and sub-teams begin to form, but individual stakes remain high, and most interactions remain informal.

However, as teams and companies scale (particularly beyond 120-150 people), individual stakes in project outcomes decline, while the perks of rank (job titles, salary growth) increase until, when the two cross, the system “snaps” and incentives begin encouraging unwanted behavior: the rejection of risky but potentially groundbreaking ideas.3
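This crossing can be caricatured with a toy model. The functional forms and constants below are my own, chosen only so the crossover lands near the headcount threshold cited above; they are not Bahcall's actual equations.

```python
# Toy illustration (my own invented forms, not Bahcall's model) of how project
# stakes and perks of rank can cross as headcount grows.

def project_stake(team_size):
    """Illustrative: each member's share of the project payoff shrinks with size."""
    return 100_000 / team_size

def perks_of_rank(team_size):
    """Illustrative: the payoff from politics and promotion grows with org size."""
    return 5 * team_size

# First team size where rank pays more than the project itself (the "snap").
snap_point = next(n for n in range(2, 10_000)
                  if perks_of_rank(n) > project_stake(n))
print(snap_point)  # 142: near the ~120-150 person range discussed above
```

The exact numbers are arbitrary; the structural point is not. One curve falls with headcount, the other rises, so a crossing is inevitable, and on either side of it the incentive landscape looks qualitatively different.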

Moreover, layers of management form, information becomes siloed, jobs become more specialized, culture drifts, and people-related issues explode. Further growth magnifies these problems.

Managing these transitions successfully requires careful design of incentives, org structure, and roles and responsibilities. To do so, we should consider a portfolio of preemptive actions:4,5

  1. Increase “project-skill fit” — Search for and correct mismatches between employee skills and project needs. Employees who are well-matched to their assignments will take more ownership of the outcomes.
  2. Non-political promotions — In promotion decisions, emphasize independent assessment from multiple sources over politics (reliance on the manager).
  3. Get the incentives right — Motivate others with “soft equity” rewards such as peer recognition, autonomy, and visibility. External rewards such as money should be based on milestones or outcomes that individuals can actually control. Too many incentive structures rely on perverse schemes such as earnings-based compensation for junior employees, who have no direct influence on those metrics.
  4. Decentralize — Once you have multiple products, you will need to split your org into individual product groups, sort of “mini-startups” within the business that are more nimble and autonomous.
  5. Optimize “management spans” — For teams focused on innovative new projects (R&D), consider increasing the average number of direct reports per manager to encourage looser controls and more trial-and-error. For “franchise” groups that focus on growing existing businesses, consider narrowing management spans to encourage tighter controls and measurable outcomes (since failure is more costly).

***

Whether in an organization, relationship, laboratory, or highway, we can improve our decision-making by investigating the control parameters governing the stability of our system—and the thresholds beyond which chaotic phase transitions may occur.

We cannot simply extrapolate linearly from the present. Even if our company, our marriage, or our environment is stable at the moment, small changes in critical factors can create unpredictable change. We can either be surprised by the pervasive phase transitions in our lives, or we can anticipate them and harness them to our advantage.

Emergence: More is different, very different

“We can’t control systems or figure them out. But we can dance with them!”

Donella Meadows, Thinking in Systems (2008, pg. 170)

In complex systems such as human beings, companies, or ecosystems, collective behaviors can create dynamics that cannot be defined or explained by studying the parts on their own. These “emergent” behaviors result from the interactions between the lower-level components of a system and the feedback loops between them.

For example, in our brains, groups of neurons exchanging disordered electrical signals create incredible phenomena, such as senses and memory. Collections of employees swapping information and favors within a company can create factions and hidden bargains. In nature, examples of emergent collective behaviors are everywhere: the flocking of birds, the formation of hurricanes and sand dunes, the development of social networks, the evolution of life, climate change, the formation of galaxies, stars, and planets, and the development of consciousness in an infant.

In all of these examples, the system is not only more than the sum of its parts, it is also very different from those parts. We cannot analyze a single skin cell and infer someone’s personality. Nor can we analyze one employee and infer the behavior of the organization.

If we aspire to operate effectively in complex systems, we cannot afford to underestimate the possibility and power of emergent phenomena. The complex ongoing interactions between system components will cause unique and unexpected behaviors. Developing a better understanding of emergence can not only help us prepare to be unprepared, but also to create truly unique explanations and solutions.

Emergent explanations

Imagine how difficult it would be if we could only learn reductively—that is, by analyzing things into their constituent parts, such as atoms. Even the most basic, everyday events would be overwhelmingly complex. For example, if we put a pot of water over a hot stove, all the world’s supercomputers working for millions of years could not accurately compute what each individual water molecule will do.

With emergent phenomena, however, high-level simplicity “emerges” from low-level complexity. As a result, we may be able to understand systems extremely well and make useful predictions by analyzing phenomena abstractly—that is, at a higher (emergent) level. In fact, as theoretical physicist David Deutsch explains, all knowledge-creation actually depends on emergent phenomena—problems that become explicable when analyzed at a higher level of abstraction.1

Consider thermodynamics, the physical laws governing the behavior of heat and energy. These powerful and fundamental laws do not attempt to describe the world in terms of particles or atoms. They are abstract, describing physical phenomena in terms of their higher-level properties, such as temperature and pressure.

Thermodynamics can help us understand why water turns into ice at low temperatures and gas at high temperatures, without requiring us to analyze the details of each individual water molecule. This is due to the emergent phenomenon of phase transitions, sudden transformations that occur when one or more control parameters of a collective system cross a critical threshold. At certain temperatures, the water will freeze, melt, or vaporize.2

Merely analyzing individual water molecules would not enable us to simply “deduce” how these phase transitions happen—or indeed whether they would occur at all. Phase transitions are emergent phenomena.

The emergent (crypto) economy

The economy is an intricate web of behaviors and relationships, emerging from the actions of diverse agents—individuals, companies, investors, regulators—each concurrently pursuing their own goals. These agents are not merely “participants” in the economy; they co-create its dynamics. A great example is the emergence of cryptocurrency markets such as bitcoin.

Imagine a small group of techies that starts investing in bitcoin, driven by a belief that its price will rise in the future. If bitcoin’s value indeed rises, news will spread, potentially igniting a viral wave of speculation. The influx is not just about making money, but also about joining a movement. Stories of newly minted millionaires fuel additional speculation and even attract criminals and fraudsters seeking to exploit the frenzy. Concerned regulators may institute game-changing rules to quell the madness.

Here, thousands of decisions by diverse individuals converge to create a market behavior that is far different from the sum of its parts: a bubble—which is clearly not in the collective interest. Characterized by speculation and volatility, the bubble is a cycle of elation and fear, gains and losses, that transcends individual intentions.

We observe such emergent economic phenomena not only in cryptocurrencies, but also in stock market oscillations, housing bubbles, and even the rise and fall of entire economies. In these systems, predictability is an illusion and periods of stability are tentative, as economic agents constantly respond to and anticipate changes in the dynamic landscape they themselves help shape.3

Managing emergent behavior in economics requires an understanding that plausible-sounding approaches might fail—or backfire entirely.

Dancing with fire

In economics as well as nature, emergent behavior can cause rapid change, and managing it may require counterintuitive interventions.

Consider forest fires. The destructiveness of forest fires follows a power-law distribution, in which there are many small fires that are generally manageable or tolerable, and a few massive, catastrophic fires.

Until 1972, Yellowstone Park rangers were required to extinguish every small fire immediately—a policy that made sense when analyzed reductively (“We should save every tree.”). But this policy led Yellowstone to grow dense with old trees, making a mega-fire like the one in 1988 inevitable. The reductive policy ignored the emergent phenomena of phase transitions: as a forest approaches critical thresholds of tree density and tree age, the potential for a massive fire grows exponentially. Today, most forestry services utilize a “controlled-burn” policy of intentionally sparking smaller fires under careful watch, reducing the potential for a massive fire.4

We must be aware when the potential for emergent behavior exists—that is, any time we are engaged with complex systems. Our focus should not be on trying to perfectly predict the future (which invites us into traps of overconfidence and illusions of control), but rather on being adaptable, to ensure we’re prepared to respond to the unexpected.

***

Systems such as the economy, democracy, science, and forests emerged through centuries of iteration and adaptation. The most complex systems emerge and function without anyone having knowledge of the whole.5

Consequently, we should be skeptical about the potential efficacy of simplistic policies and predictions targeted at complex systems. It is impossible to perfectly anticipate the system’s behavior. What we can do is diligently attempt to observe a system’s dynamics, map out its feedback loops, identify and intervene at key leverage points, and be prepared to learn and adapt as emergent behavior unfolds.

Power Laws: The hidden forces behind all sorts of inequalities

A power law describes the relationship between two variables in which one variable varies as a power of the other. For example, if the side of a square is doubled, the area is multiplied by a factor of four.

Power laws are nonlinear relationships, in which the output changes disproportionately to the change in input. These relationships contrast sharply with linear functions (which are simpler and more intuitive), where the changes in input and output are proportional.
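The square example can be written directly as code. This is a minimal sketch; the function name and constants are invented for illustration:

```python
# A power law: one variable varies as a power of the other (y = c * x**k).
# Here k = 2, matching the square example: doubling the side quadruples the area.

def power_law(x, c=1.0, k=2.0):
    """Generic power law y = c * x**k."""
    return c * x ** k

print(power_law(3.0))                   # area of a 3x3 square: 9.0
print(power_law(6.0))                   # doubling the side: 36.0
print(power_law(6.0) / power_law(3.0))  # a factor of four: 4.0
```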

The dynamics of power laws help describe many extreme phenomena we observe in the world, from natural disasters to the success of startups. Anyone who aspires to maximize their impact can benefit from learning about the nature of power laws—and how to exploit them.

Far from normal

Power laws produce distributions very different from the nice, symmetrical, bell-shaped “normal distributions.” In normally distributed systems, our observations will have a meaningful central tendency (the average) and increasingly rare deviations from that average—such as with human height and weight. Most people aren’t that far from the average height, and the shortest 1% of people and tallest 1% differ in height by only around 14 inches.1

In contrast, the distributions produced by power laws don’t peak around a typical value; rather, the range of values is much wider—with a majority of observations of modest values on one end, and a “fat tail” of rare but extreme outcomes on the other end. While the tallest people in the world aren’t 10x taller than the shortest people, with power laws, the most extreme events can be orders of magnitude greater than the least extreme.
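The contrast can be illustrated with a quick simulation. This is a rough sketch with invented parameters (not figures from the text), comparing a normal sample to a fat-tailed Pareto sample:

```python
# A rough illustration with invented parameters: samples from a normal
# distribution cluster near the mean, while samples from a fat-tailed
# (Pareto) distribution include extremes that dwarf the typical value.
import random

random.seed(42)
normal = [random.gauss(70, 3) for _ in range(10_000)]        # e.g., heights
pareto = [random.paretovariate(1.2) for _ in range(10_000)]  # fat-tailed

def max_over_median(sample):
    """How extreme is the largest observation relative to the typical one?"""
    ordered = sorted(sample)
    return ordered[-1] / ordered[len(ordered) // 2]

print(f"normal: {max_over_median(normal):.1f}")  # close to 1
print(f"pareto: {max_over_median(pareto):.1f}")  # orders of magnitude larger
```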

The distributions of many physical, biological, and man-made phenomena approximately follow a power-law distribution. For example:

  • The frequencies of words in most languages — If we want to learn a new language, we would do well to start with the small fraction of words (as few as 135 of them) that make up the majority of usage (“Zipf’s law”).
  • The populations of cities — As of 2019, the U.S. had more than 19,000 cities, though just 37 cities housed the majority of the population. The largest one, New York City, housed over 8 million residents alone, more than double the next-closest city.2
  • The size of lunar craters — The Moon has countless small craters from billions of years of minor impacts, and a few enormous craters from exponentially larger collisions.
  • The frequencies of family names — There are millions of rare or obscure last names, but approximately 1 in 68 people on the entire Earth has the last name “Wang.”3
  • The magnitude of earthquakes — The Richter scale is logarithmic: each increase of 1 on the scale equates to a tenfold increase in measured amplitude. For instance, a magnitude 5.0 earthquake is ten times stronger than a magnitude 4.0 earthquake, but occurs roughly one-tenth as often.4

Other examples include the size of computer files, the number of views on web pages, the sales of most branded products (e.g., books, music), and individual incomes and wealth.5 In each case, there is a minority that supplies a majority of the outputs.

Sandpiles and avalanches

We should always be wary of the potential for power-law effects whenever we are dealing with systems composed of many interacting parts—in other words, with complex systems (such as economies, ecosystems, or the climate).6 The various components and the feedback loops that govern them tend to cause such systems to evolve into a very delicate state of balance, a dynamic equilibrium. When forces push the system outside its equilibrium bounds, the system may shift into a new, discrete phase.

As we shall see, when a system “tips” out of equilibrium and into a phase change, the results commonly follow a power-law distribution.

Consider a pile of sand on a countertop. You drop additional grains, one-by-one, onto the pile, steadily increasing the pile’s slope (its key control parameter). At first, each new grain does little; the pile remains roughly in equilibrium. Eventually, however, the pile’s slope will increase to an unstable “critical” threshold, beyond which the next grain may cause an avalanche, a type of phase transition.

At this critical stage, we can’t say for certain whether the next grain will cause an avalanche, or how big that avalanche will be. We do know, however, that the probability of an avalanche is much higher near the tipping point, and that avalanches of any size are possible, but smaller avalanches will happen far more often. A power law!
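The sandpile thought experiment has a well-known computational counterpart, the Bak–Tang–Wiesenfeld model. The sketch below is a simplified version (the grid size, threshold, and grain count are arbitrary choices): cells that reach a critical height topple onto their neighbors, occasionally triggering cascades.

```python
# A simplified Bak-Tang-Wiesenfeld sandpile (grid size, threshold, and grain
# count are arbitrary choices). Cells holding 4+ grains "topple," sending one
# grain to each neighbor, which can trigger cascading avalanches.
import random
from collections import Counter

random.seed(0)
N = 20                       # grid size
CRITICAL = 4                 # a cell topples at this height
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random cell; return the avalanche size (topplings)."""
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    size = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < CRITICAL:
            continue
        grid[r][c] -= 4              # topple: one grain to each of 4 neighbors
        size += 1
        if grid[r][c] >= CRITICAL:
            unstable.append((r, c))  # still unstable after toppling
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < N and 0 <= nc < N:  # grains at the edge fall off
                grid[nr][nc] += 1
                if grid[nr][nc] >= CRITICAL:
                    unstable.append((nr, nc))
    return size

sizes = [drop_grain() for _ in range(50_000)]
counts = Counter(s for s in sizes if s > 0)
print("size-1 avalanches:", counts[1])
print("size-100+ avalanches:", sum(v for s, v in counts.items() if s >= 100))
```

Counting the avalanche sizes reveals the power-law signature: tiny avalanches vastly outnumber the occasional huge one.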

We see these same power-law dynamics in all manner of complex systems, which essentially “adapt” themselves through a series of “avalanches” to maintain overall stability.7 Examples include the extinction of species in nature, price bubbles and bursts in financial markets, traffic jams, or earthquakes relieving pressure from grinding tectonic plates.

The engine of venture capital

The dynamics of power laws are central to our economy and to innovation in general. Progress occurs through the trial-and-error process of startups trying new and different ways of creating value. These experiments require financial capital from investors who can accept substantial risk, since most startups fail.

Venture capital investors seek to earn outsized investment returns not by having a large proportion of their investments do well, but by having one or two “grand slams” that generate massive returns (think Facebook or Tesla). It would not be surprising for a venture fund’s one or two big winners to return more than all their other investments combined. The most that VCs can lose is 1x their investment, but there is (theoretically) no cap to how much they can gain if an investment is successful.

Union Square Ventures, for example, invested in Coinbase in 2013 at a share price of about $0.20, and achieved a massive return when Coinbase went public in 2021, opening at $381 per share—a valuation of around $100bn and a price increase of nearly 2,000x.8

Feed the winners, starve the losers

A common corollary to power laws is the “80/20 rule” (or, the Pareto principle), which states that for many events 80% of effects (output) come from 20% of the causes (inputs). Mathematically, the rule approximates a power-law distribution. Phenomena roughly following this rule have been observed in income distribution, software coding, business results, quality control, infection transmission, and elsewhere.

While the exact numbers often vary, the principle reveals that most of the work we’re doing generates only a small share of our overall results. The key lies in identifying “the 20%”—of activities, individuals, projects, products, businesses, grievances, etc.—that drive a disproportionate share of the outcomes, and concentrating our efforts on them.
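As a quick illustration of this kind of 80/20 analysis, the sketch below ranks a set of invented customer revenues and finds how few customers account for 80% of the total:

```python
# A hypothetical 80/20 analysis: rank items by contribution and find how few
# account for 80% of the total. The revenue figures are invented.

revenues = sorted([500, 320, 90, 60, 40, 30, 25, 20, 10, 5], reverse=True)
total = sum(revenues)

cumulative = 0
for i, r in enumerate(revenues, start=1):
    cumulative += r
    if cumulative / total >= 0.80:
        break

print(f"{i} of {len(revenues)} customers ({i / len(revenues):.0%}) "
      f"drive {cumulative / total:.0%} of revenue")
# -> 3 of 10 customers (30%) drive 83% of revenue
```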

Corporate “turnaround” master Don Bibeault famously relies on the 80/20 rule to drive transformation in struggling businesses, recognizing that most businesses spend too much time satisfying customers, selling products, and preserving marginal employees that make little or no contribution to the bottom line. The key to implementing an 80/20 policy is redeploying resources away from the “marginal many” to the “critical few” that account for current results and future opportunity—a tactic Bibeault calls “feed the winners and starve the losers.”9

***

Power laws teach us that some inputs are much more important than others, and they can explain many of the extreme results we observe in the world. Concentrating our efforts towards unlocking (or avoiding) the outliers of power laws can provide substantial leverage towards helping us achieve our goals.

When seeking an effective strategy or solution, we should ask ourselves where the most “power” in the situation might be hidden!

Exponential Growth and Decay: Grow fast or die trying

Exponential growth occurs whenever a stock of some material or quantity increases or replicates itself in constant proportion to how much there already is. This is a multiplicative effect, in which each step is more extreme than the preceding one. As an example, consider a stock of 1,000 hogs that, given its rates of fertility and mortality, grows exponentially at 10% per year. In the first year, there’s an increase of 100 hogs, then 110 in the second year, then 121 in the third year, and so on—illustrating the snowballing effect.

This type of growth contrasts sharply with linear growth, in which the stock changes by a constant quantity each period (an additive effect). If the hog population grew by a fixed 100 hogs annually, the implied percentage growth rate would diminish over time—from 10% in the first year to 9.1% in the second, and so on.
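The hog arithmetic above can be checked with a few lines of code; a minimal sketch:

```python
# A sketch of the hog arithmetic above: multiplicative 10% growth versus a
# fixed addition of 100 hogs per year.

def exponential(stock, rate, years):
    for _ in range(years):
        stock *= (1 + rate)        # each year's increase builds on the last
    return stock

def linear(stock, increment, years):
    return stock + increment * years   # the same absolute increase each year

print(round(exponential(1000, 0.10, 1)))    # 1100: +100 hogs
print(round(exponential(1000, 0.10, 2)))    # 1210: +110 more
print(round(exponential(1000, 0.10, 25)))   # 10835: the snowball effect
print(linear(1000, 100, 25))                # 3500: steady but far slower
```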

Exponential patterns are ubiquitous, from biological phenomena such as population growth and disease spread, to economic phenomena such as GDP growth and compound interest, to technological trends such as network effects in communications networks and improvements in the processing power of computers (“Moore’s law”).

Despite the prevalence of exponential progressions, our human intuition often fails to appreciate their speed and chaotic potential. Instead, we gravitate towards linear thinking because it serves us well in most practical circumstances. We can commit fewer errors and better explain and predict the world once we understand the power of exponential growth, and its equally powerful ability to unravel.

Feedback loops: Accelerators and regulators

The underlying driver of exponential growth lies in reinforcing (positive) feedback loops, which exist whenever a system (such as a virus or a savings account) can self-multiply or grow as a constant fraction of itself. These amplifying forces generate exponential growth, producing either virtuous or vicious cycles.

In contrast, balancing (negative) feedback loops are stabilizing, goal-seeking functions that aim to maintain a system in a given range of acceptable parameters—in a “dynamic equilibrium.” Consider how a thermostat regulates the temperature of a home, or how our bodies induce perspiration and shivering to stabilize our body temperatures.1

In physical systems that are growing exponentially, there must be at least one positive feedback loop propelling the growth, but there must also be at least one negative feedback loop constraining the growth, because no physical system can grow forever in a finite environment.2

The inevitable decay

Consider a virus such as COVID-19, which initially spreads exponentially through the population as each infected person infects multiple others. It might seem like an uncontrollable plague.

But again, nothing grows forever. The flip side of exponential growth is exponential decay, when a quantity decreases at a rate proportional to its current value. If we withdraw 10% of the funds in our savings account every period, the account’s value will decay exponentially in a downward reinforcing feedback loop.

Let’s return to our example of a virus such as COVID-19 or smallpox. Over time, balancing feedback loops will kick in to combat the spread. In the worst case, the virus simply begins to run out of people to infect because so many have already been sick. But even highly contagious viruses eventually run out of steam: our bodies develop antibodies that increase immunity, and widespread vaccination can achieve the same effect. Governments, organizations, and individuals may also adapt their behavior to mitigate the risk and impact of the virus (wearing masks, providing aid, social distancing, etc.).

Once the average number of people that one infected person infects (the so-called “R-naught,” or R0) falls below the critical level of 1, the exponential growth turns to exponential decay, and the virus begins to die out. This is known as “herd immunity,” when there are not enough new hosts to whom the virus can continue to spread. The key insight for epidemic control is that “perfection” is not necessary; we don’t have to stop all transmission, just enough transmission to achieve herd immunity.3
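A toy calculation (invented for illustration, not a real epidemiological model) makes the R0 threshold concrete: each “generation,” every infected person infects R0 others on average.

```python
# A toy model: each "generation," every infected person infects r0 others
# on average. Above 1, cases grow exponentially; below 1, they decay to zero.

def cases_per_generation(initial_cases, r0, generations):
    cases = initial_cases
    history = [cases]
    for _ in range(generations):
        cases *= r0                      # reinforcing loop while r0 > 1
        history.append(round(cases, 1))
    return history

print(cases_per_generation(10, r0=2.0, generations=5))   # exponential growth
print(cases_per_generation(10, r0=0.5, generations=5))   # exponential decay
```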

We can find exponential decay progressions in a variety of real-world applications, including the biological half-life of chemicals or drugs, the rate of radioactive decay by which nuclear material disintegrates, the decrease in atmospheric pressure at increasing heights above sea level, and the effectiveness of advertising messages over time.

Many phenomena exhibit both exponential growth and exponential decay, at different phases in their progressions.

The ascent and fall of Clubhouse

In 2020-21, Clubhouse, a live audio chatroom app, experienced a meteoric rise, growing to tens of millions of downloads within months. Its exponential growth was fueled by network effects, a type of positive feedback loop in which each new user makes the network more valuable to all other users. Notable celebrities and influencers joined in, creating more buzz and attracting more users around the app’s aura of novelty and exclusivity.

However, after its initial surge, Clubhouse ran into numerous challenges. Its pandemic-driven novelty diminished. Many celebrities and business leaders moved on. It was also difficult for users to engage consistently with live audio due to their busy schedules and fickle attention spans; podcasts and audiobooks remained much more convenient.

Perhaps above all, existing social media giants such as Facebook and Twitter appreciated the potential of the live audio format and moved rather quickly to copy Clubhouse’s functionality. It turned out that live audio was more promising as a feature of the existing platforms—which already had hundreds of millions of users—rather than as a standalone service that would need to bootstrap network effects from the ground up. Users could simply engage with live audio content within their existing social media routines.

The app’s new downloads declined by over 90% between February and April 2021.4 As signups slowed and users either churned or migrated to a competing service, the app became less valuable to both new and existing users—exacerbating its decline. Clubhouse experienced first-hand the capricious potential for exponential growth to unravel.

***

Whether with a virus, a population, or a company, we must remember that infinite exponential growth is mostly a theoretical construct. In the physical and practical world, there are always limits. Exponential growth may only occur across a particular scale of observation—such as during the initial contagion of a virus (when few people have been exposed), or during the early period of a new product’s life (when novelty is high and competition is low). Balancing feedback loops ultimately tame exponential progressions.

We must not underestimate the speed with which exponential forces can generate explosive growth—or equally rapid decline!

Feedback Loops: When in doubt, map it out (and you should doubt)

“I’m all for fixing social problems … What I’m against is being very confident and feeling that you know, for sure, that your particular intervention will do more good than harm, given that you’re dealing with highly complex systems wherein everything is interacting with everything else.”

Charlie Munger, Poor Charlie’s Almanack (2006, pg. 253)

Complex systems are collections of simpler components—such as employees, money, or particles—that are interconnected in such a way that they “feed back” onto themselves to create their own complex patterns of behavior. Every person, organization, animal, plant, pond, country, or economy is a complex system. Their interconnections are called “feedback loops,” circular relationships through which one component affects another and is in turn affected by it.1

Feedback dynamics demonstrate how a system can generate collective (or “emergent”) behaviors that cannot be predicted by only observing the parts. We cannot study one employee (or one team, or one division) and determine everything that will happen in a large organization. Nor can we deduce the behavior of a person from a single brain cell. Nor that of an economy from a single economic policy.

The danger of ignoring feedback

In life, we naturally gravitate towards simpler explanations, those that stem from the most coherent story we can tell from the information most readily available to us. It is much easier for us to craft a narrative where everything happens in isolation, a linear form of reasoning which sees the world as a series of sequential, cause-effect events. However, our world is complex; its relationships are often nonlinear. Ignoring feedback leads us to adopt blunt and short-sighted solutions to problems of complex systems.

Unanticipated feedback processes can help explain why economic policies rarely work as intended, why pest control efforts sometimes create entirely new pests, and why well-intentioned laws can backfire. In business, it is easy for decision makers to ignore or underrate feedback processes that could potentially undermine their strategies. Competitors, employees, suppliers, customers, lawmakers, and the media—each with their own interests—may all react or retaliate in unintended ways to a new business strategy or policy. Those reactions will trigger additional reactions, which will trigger additional reactions.

How can we disentangle the complexity?

With greater knowledge of the dynamics of both types of feedback loops (positive and negative) and how to analyze them, we can better understand the complex world we live in and use their power to create outsized change.

Equilibrium or explosion

Negative (balancing) feedback loops are stabilizing, goal-seeking loops which aim to regulate the stock around a given level or range—to maintain a dynamic equilibrium. Consider how a thermostat constantly adjusts to maintain the temperature of the room at the desired setting.

In biology, balancing feedback manifests in all life forms through homeostatic behavior. For instance, to regulate body temperature, our bodies induce perspiration when they get too hot and shivering when they get too cold.

In contrast, positive (reinforcing) feedback loops are amplifying, self-multiplying loops which create either virtuous cycles of growth or vicious cycles of snowballing decline. These forces exist wherever a system element is able to grow (or shrink) as a constant fraction of itself, generating exponential growth (or exponential decay).2 Examples include a contagious virus, population growth, compound interest, economic growth, and network effects on social platforms such as TikTok.
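The two loop types can be sketched side by side; the goal, gain, and interest rate below are arbitrary illustrative values:

```python
# A minimal sketch of both loop types; the goal, gain, and interest rate
# are arbitrary illustrative values.

def thermostat(temp, goal=20.0, gain=0.5, steps=20):
    """Balancing loop: each step applies a correction proportional to the gap."""
    for _ in range(steps):
        temp += gain * (goal - temp)   # push the stock back toward the goal
    return temp

def compound(balance, rate=0.05, steps=20):
    """Reinforcing loop: each step adds a constant fraction of the stock."""
    for _ in range(steps):
        balance *= (1 + rate)
    return balance

print(round(thermostat(5.0), 2))    # converges on the 20-degree goal
print(round(compound(1000.0), 2))   # grows without bound
```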

Booms and busts

The economy provides a great example, as the “boom” and “bust” cycles that we observe occur largely due to reinforcing feedback loops running amok in the short-term, with balancing feedback loops kicking in to help reestablish long-term stability.

In the short-term, unsustainable “bubbles” can emerge from many reinforcing feedback loops fueling one another, such as high consumer and business confidence, excessive greed and speculation, low interest rates, and increasing asset prices. Similarly, bubbles can “pop” due to reinforcing feedback loops that accumulate into a self-perpetuating downward cycle. For instance, an external shock (such as a pandemic or a war) might trigger fear, leading to reduced confidence and business contraction, leading to market sell-offs, leading to more fear, and so on.

Fortunately, negative (balancing) feedback mechanisms help stabilize and regulate the economy. The free movement of prices is a negative feedback loop that helps nudge supply and demand toward equilibrium. For example, rapidly rising home prices will eventually shut out many potential buyers. The Federal Reserve wields negative-feedback tools, such as raising or cutting interest rates and increasing or reducing the money supply, designed specifically to help cool a hot economy and ignite a weak one. Likewise, governments can hike or cut tax rates and increase or decrease fiscal spending.

Given all these feedback dynamics, the difficulty of accurately predicting the economy is unsurprising. Countless institutions and individuals are entangled in a complex web of reactions and fluctuations. But we can see how a complex system such as the economy, through its interconnected feedback loops, can essentially “self-organize” into long-term stability, despite periods of short-term turbulence.3

Mappin’ systems, smokin’ doobies

We tend to operate as if we know for certain what implications our actions will have. But without a complete picture of the feedback dynamics at play, we cannot fully understand if our chosen intervention is the best one, or if it will do more harm than good.

Fortunately, we can map it out. Systems mapping is an invaluable, iterative exercise in which we create a causal loop diagram of the feedback processes at work in the system of interest, including the direction of the feedback and an indication of whether that feedback is reinforcing or balancing. This process can help uncover how feedback is generating behavior that we want to change, and how to anticipate potential side effects.4

Let’s try it out. Say we are a busy person and that high stress is a recurring problem. To ease our anxiety, we try using some cannabis. This is a balancing loop, since higher marijuana use temporarily reduces our stress level.

If we stop our analysis here, increasing our cannabis intake seems effective. But let’s put down the bong and keep going. Cannabis also increases how easily we are distracted. More distraction means lower productivity, which increases the anxiety we were trying to alleviate in the first place.

Cannabis also temporarily reduces our energy level, exacerbating our productivity loss, and might make us more prone to unhealthy eating, further reducing our energy level and harming our productivity.

This logic could go on and on, but it demonstrates how the process of systems mapping can help unearth potential unintended consequences that could exacerbate, rather than cure, our original problem. This process can empower decision makers to avoid problems before they actually emerge.
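One simple way to make such a map computable is to store each causal link with a sign and multiply the signs along a chain of effects. This sketch uses the variables from the cannabis example (the link names paraphrase the text):

```python
# The cannabis systems map, encoded as a signed graph. +1 means "more of A
# leads to more of B"; -1 means "more of A leads to less of B". Multiplying
# signs along a closed chain tells us whether that loop is balancing (-1)
# or reinforcing (+1).

links = {
    ("stress", "cannabis_use"): +1,        # stressed, so we use more
    ("cannabis_use", "stress"): -1,        # use temporarily lowers stress
    ("cannabis_use", "distraction"): +1,
    ("distraction", "productivity"): -1,
    ("productivity", "stress"): -1,        # lower productivity raises stress
    ("cannabis_use", "energy"): -1,
    ("energy", "productivity"): +1,
}

def path_polarity(path):
    """Product of link signs along a causal chain."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= links[(a, b)]
    return sign

# The intended loop is balancing: stress -> use -> (less) stress.
print(path_polarity(["stress", "cannabis_use", "stress"]))   # -1 (balancing)
# The hidden loop is reinforcing: use -> distraction -> productivity -> stress.
print(path_polarity(["stress", "cannabis_use", "distraction",
                     "productivity", "stress"]))             # 1 (reinforcing)
```

Any closed chain with an even number of negative links reinforces the original problem, which is exactly how the hidden loop through distraction undoes the relief the balancing loop provides.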

***

The key lesson is that changing one variable in a system can affect other variables and even other systems. We must consider system elements not just in and of themselves, but in relation to the system as a whole and to the greater environment.

When faced with a complex decision, good strategists should pause and map it out.5 As we work iteratively through the systems mapping process, we can:

  • Identify the critical system elements and the feedback loops between them;
  • Given those relationships, evaluate whether a given approach is likely to actually produce the intended outcome;
  • Consider how to enhance or moderate these forces to achieve better outcomes;
  • Question whether, over time, other feedback effects could kick in to challenge or even reverse any short-term successes; and,
  • Determine whether the potential for unwanted side effects outweighs the benefits.

By approaching decisions through the lens of feedback dynamics, we will have an incredible competitive advantage over others!

Complex Systems: Why simple solutions don’t work

“Complex systems—the term of art for many interacting agents, whether buyers and sellers in markets, employees and managers in companies, or the atoms and molecules of a turbulent river—have earned that term for a reason. Their most interesting questions rarely have simple answers.”

Safi Bahcall, Loonshots (2019, pg. 227)

Complex systems are collections of simpler components that are interconnected in such a way that they “feed back” onto themselves to create their own complex (or “emergent”) patterns of behavior—collective behaviors which cannot be defined or explained by studying the individual elements.

Complex systems are everywhere. Examples include humans, animals, plants, insect colonies, brains, organizations, industries, economies, ecosystems, galaxies, and the Internet. Their basic operating units are feedback loops—causal connections between the stock levels of the system and the rules or actions which control the inflows and outflows.

Complex systems are inarguably one of the most valuable mental models to understand, because most of what we do takes place in complex systems. Yet we routinely ignore and underestimate their dynamics! Adopting “systems thinking” can help us enact positive change and make drastically better decisions.

Producing complex behavior

Consider the population of a country. Let’s define the flows that control a nation’s population as the birth rate and death rate (assuming, for simplicity, that there is no migration). If the birth rate exceeds the death rate, the population will grow—a reinforcing (positive) feedback loop. But it cannot grow forever.

Eventually, the system will produce its own balancing (negative) feedback loops to attempt to constrain future population growth. For instance, excessive population levels will eventually strain the country’s resources, potentially leading to increased deaths due to lower living standards, food shortages, or degraded health care. Sensing these problems, people may choose to have fewer children in the future (or the government may mandate it). Savvy innovators may create new technologies to mitigate society’s challenges. The cumulative effect of all this negative feedback is likely to be a moderation of population growth.
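This stock-and-flow story can be sketched as a small simulation. All rates and the 10-million “capacity” below are invented for illustration:

```python
# A sketch of the population example; the rates and the 10-million "capacity"
# are invented for illustration. Births exceeding deaths form a reinforcing
# loop; resource strain raising the death rate forms a balancing loop.

def simulate(population=1_000_000, birth_rate=0.03, base_death_rate=0.01,
             capacity=10_000_000, years=300):
    for _ in range(years):
        # Balancing feedback: deaths rise as the population strains resources.
        death_rate = base_death_rate + 0.02 * (population / capacity)
        population += population * (birth_rate - death_rate)
    return population

print(round(simulate(years=10)))    # early on, near-exponential growth
print(round(simulate(years=300)))   # later, growth levels off near capacity
```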

This example illustrates how the relationships between a system’s stocks and flows can produce unique, higher-order patterns of behavior. Amidst constant change, the system attempts to “self-regulate” into a long-term equilibrium.

Why we get systems so wrong

Systems are incredible: they can adapt, respond, pursue goals, repair themselves, and preserve their own survival with lifelike behavior even though they may contain non-living things. Human society itself is a remarkable example, with networks of individuals, organizations, and governments acting individually and collectively to create the conditions for human thriving.

However, systems are also frustrating: the goals and behaviors of their subunits (individual countries, organizations, groups, leaders, citizens, etc.) may create system-level outcomes that no one wants, such as hunger, poverty, environmental degradation, economic instability, unemployment, chronic disease, drug addiction, and war.1

It is easy to fall into traps of overconfidence and illusions of control when dealing with complex systems. We prefer a world that we can explain with simple, coherent narratives and familiar patterns to one riddled with randomness and uncertainty.

The uncomfortable truth is that accurately predicting and influencing the behavior of complex systems is extremely difficult. When multiple feedback loops are jostling for dominance, small changes in the environment can produce large, nonlinear changes in the system. Consider the impossibility of perfectly forecasting stock market crashes, terrorist attacks, or even the weather.

The environment changes over time, both as we and others interact with it and as the behavior of one element reverberates into subsequent behaviors in other elements, or even in other systems. Consequently, our knowledge of complex systems must necessarily be piecemeal and imperfect.2
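This sensitivity can be demonstrated with the logistic map, a standard textbook toy model of nonlinear feedback (chosen here for illustration; it is not from the text). Two starting points that differ by one part in ten thousand soon follow completely different paths.

```python
# Sketch of sensitive dependence on initial conditions, using the
# logistic map, a classic toy model of nonlinear feedback.

def trajectory(x, r=3.9, steps=50):
    """Iterate the nonlinear feedback rule x -> r * x * (1 - x)."""
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.5000)
b = trajectory(0.5001)  # initial condition differs by only 0.0001
gap = max(abs(p - q) for p, q in zip(a, b))
# The tiny initial difference is amplified at each step until the two
# trajectories bear no resemblance to each other, which is why precise
# long-range forecasts of such systems are impossible.
```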

We cannot control complex systems, but we can seek to understand them in a general way and prepare to adapt when they inevitably surprise us.

The consequences of non-system solutions to system problems

We should be extremely cautious about pursuing rigid, short-term solutions to issues that are fundamentally systems problems, because we cannot be certain that such actions won’t cause more unintended harm than good.

Many well-intended policies seeking to address such problems may “sound” good but have only short-term benefits, or fail to achieve their goal altogether. Worse, they could exacerbate the problem or even create entirely new problems.

There are no “easy” solutions to problems in domains such as public health, economic growth, homelessness, public education, or terrorism.

Consider the following examples of non-system solutions gone wrong:

  • The U.S. experiment with the prohibition of alcohol in 1920 led to a violent spike in crime, and alcohol consumption actually increased.3
  • In the 1930s, Australia introduced the cane toad species to control the cane beetle, which was a major pest for sugar cane. With few natural predators, the cane toad itself became invasive. It not only failed to control the cane beetles, but also devastated other species with its toxic skin and created entirely new ecological problems.4
  • Several Asian governments exerted extreme state control over their economies until market-oriented reforms took hold in Korea (1960s), China (1979), and Vietnam (1990s). State control was very effective at keeping inequality down, but at an enormous cost in terms of growth, reducing living standards for everyone!5
  • China’s decades-long “One Child” policy conditioned many Chinese parents to believe that their resources were best devoted to a single child, permanently changing the country’s population dynamics. A cultural preference for boys encouraged sex-selective abortions and contributed to a lopsided gender ratio. And enforcement tactics were often brutal, including illegal forced abortions and sterilizations.6

A better approach to solving complex problems

“If you wish to influence or control the behavior of a system, you must act on the system as a whole. Tweaking it in one place in the hope that nothing will happen in another is doomed to failure.”

Dennis Sherwood, Seeing the Forest for the Trees (2002), p. 3

Before we attempt to nudge a complex system safely towards change, we must look beyond the “event” level (today’s happenings) in order to understand the system’s patterns of behavior over time (its dynamics). This requires substantial observation and study. With this knowledge, we can begin to unearth and act on the systemic structures that give rise to those behaviors.7

It is here, at the system level, where we are most likely to unearth high-leverage places for intervention and anticipate unintended consequences. We call this problem-solving approach “systems thinking”—one of the most valuable mental frameworks available to us.

For example, let’s contrast China’s blunt “One Child” policy, which aimed to curb population growth, with Sweden’s population policies of the 1930s, when the Swedish government grew concerned about a substantial decline in the birth rate. Sweden could have adopted, say, a simple “Three Child” policy to pressure parents into accelerating population growth. Instead, the Swedish government recognized that there would never be agreement about the appropriate size of the family. But there was agreement about the quality of child care: they determined that every child should be wanted and nurtured.

Under this principle, they adopted policies that provided widespread sex education, free contraceptives and abortion, free obstetrical care, simpler divorce laws, support for struggling families, and substantial investments in education and health care. Since then, Sweden’s birth rate has fluctuated up and down, but there is no crisis, because people are focused on a much more important goal than the number of humans in Sweden: the quality of life of Swedish families.8

***

The fact that we live in a world of unpredictable and interconnected complex systems should be humbling and encourage us all to embrace lifelong learning. We should experiment incrementally, monitor vigilantly, and be willing to adapt based on what we learn. Be skeptical of seemingly straightforward solutions to complex problems. Consider what second- and third-order side effects there could be, and have the courage to question popular convention.