Inertia: Things keep moving and change is hard

If an object is left alone, with no other force (such as friction) acting upon it, it will maintain its current state of motion. Objects already in motion will continue moving in a straight line at constant velocity, and stationary objects will remain at rest—unless another force intervenes.1 The more massive the object, the greater its tendency to resist changes in its motion.

The principle of inertia is a fundamental physical rule of motion, first discovered by Galileo and later codified as Isaac Newton’s first law of motion.

More broadly, we can observe inertia effects in the behavior of individuals, systems, organizations, and relationships. Whether obviously or subtly, effects of inertia are pervasive, and leveraging their power can be an effective strategic tool.

Stasis: the easier way forward

Correcting or reversing our current course is costly and effortful. The status quo is easier. Following the forces of inertia allows us to minimize the use of energy, but it can also lead us towards stagnation and decay as the environment shifts beneath our feet.

Consider how we tend to succumb to inertia by continuing to use obsolete technology standards even after new, better technologies have been introduced. For example, the original “QWERTY” keyboard layout has persisted for more than a century, primarily because people have simply become accustomed to it. QWERTY endures despite the fact that an alternative system called the “Dvorak” layout enables more than double the share of keystrokes to be done in the home row and requires about 37% less finger motion than QWERTY.2 Dvorak was too late; inertia prevails.

Recognizing inertia and either combating it or capitalizing on it can be a very productive endeavor, especially in competitive systems such as business, where adaptability and dynamism are key.

Expunging (or exploiting) inertia in business

“There is nothing more difficult… than to take the lead in the introduction of change. Because the innovator has for enemies all those who have done well under the old conditions and lukewarm defenders in those who may benefit under the new.”

Niccolo Machiavelli, The Prince (1532)

In organizations (especially larger ones), inertia ensures that change is difficult. Because people generally fear uncertainty and prefer the status quo, they tend to resist new norms or strategies that undermine their existing responsibilities and routines.

As with resisting the forces of entropy (the universal tendency towards disorder in the physical world), combating inertia requires an outside energy injection.

Let’s consider four key types of inertia in business, each of which can present a threat or a strategic opportunity.

1. Routine inertia

Inertia can live in obsolete or inefficient routines, such as excessively large meetings or complex approval processes. These behaviors are often addressable by retraining or replacing managers who have invested many years in developing and applying the obsolete processes, as well as by reorganizing business units around new patterns of information flow. The routines that worked in the past may be wildly inappropriate for future contexts.3

2. Cultural inertia

Long-established cultural behaviors can also prevent companies from promptly responding to competitive threats. Breaking cultural inertia demands simplification to eliminate the hidden inefficiencies buried beneath complex behaviors and back-door bargains between teams. Simplification may demand eliminating excessive administration functions, non-core operations, coordinating committees, or complex initiatives. It may require breaking up entire organizational units, or even reorienting the company entirely towards a redefined strategy.4

For example, by 2021, Volkswagen—then the world’s largest carmaker—had outspent all rivals in a race to beat Tesla in the development of electric vehicles. Volkswagen attributed its struggles to make an attractive electric vehicle in part to the failure of the company’s managers, lulled into complacency by years of high profitability, to recognize that electric vehicles are more about software than hardware. The culture and competencies that enabled them to produce exquisitely engineered gas vehicles did not translate into coding prowess. The company’s CEO eventually acknowledged, “VW must completely change.”5 He was right.

3. Customers’ inertia

Consumers themselves also exhibit inertia by generally following their past behaviors. For example, we tend to keep the same bank accounts and to auto-renew our insurance policies and subscriptions—generally without conscious choice.6

The big banks know this, and make massive profits as a result. One analysis estimated that U.S. savers missed out on more than $600bn in interest payments from 2014-22 by keeping their savings in the five biggest banks instead of in higher-yield money-market accounts that paid over 10x higher interest rates.7 The giants are banking on inertia: that their customers are unlikely to investigate the alternatives and migrate their accounts to other banks, so there’s no need to offer competitive interest rates.

4. Competitors’ inertia

Business strategist Hamilton Helmer coined the term “counter-positioning” to describe the strategy in which a new entrant adopts a novel business model that would be irrational for an incumbent to mimic (capitalizing on their inertia). If copying a new product or technology would mean undermining their legacy profit streams, incumbents may calculate that inertia is preferable.

For example, around 2000, Netflix pioneered the mail-order DVD business based on the key assumption that Blockbuster, the then-dominant brick-and-mortar DVD rental giant, would be slow to recognize and respond to the threat of Netflix’s new model, allowing Netflix to steal away their customers who were tired of paying $1/day late fees.8 Eventually in 2004, Blockbuster did launch its own DVD subscription service, Blockbuster Online—but it was too late. The costly launch exacerbated Blockbuster’s financial distress. In desperation, Blockbuster actually pivoted back to its brick-and-mortar business.9 By 2010, Blockbuster had filed for bankruptcy, while Netflix went on to become one of the century’s best-performing stocks.

***

In general, the principle of inertia reminds us to be intentional, and to avoid complacency. Deciding not to make a change is making a decision—to preserve the status quo. But what we did in the past may or may not have been optimal at the time, and may be entirely suboptimal in the future. We must be vigilant in reviewing the choices—conscious or otherwise—that we make to keep things as they are to ensure we are adapting appropriately as the complex systems we live in shift beneath our feet.

Evolution by Variation and Selection: The reigning explanation for life (plus: why humans actually are special)

Until around the 19th century, it was generally assumed that some supernatural force (such as the Greek gods) was required to explain the complexity and diversity of life. However, Charles Darwin’s theory of evolution in 1859 introduced the revolutionary idea that life’s breathtaking variety doesn’t require any supernatural intervention. The theory’s modern understanding, termed Neo-Darwinism, involves three main processes:

  1. Replication — All organic life emerges through the copying of genes from parent to offspring;
  2. Variation — However, the genetic copying process is imperfect, introducing random variation into each generation;
  3. Selection — Nature acts as a relentless genetic “filter,” favoring gene variants that confer advantages in survival and replication.

Evolution is one of the most powerful theories available to us today, with broad implications across domains and throughout our lives. Let’s dive deeper.

The copying competition

Genes are effectively molecular software programs: bits of chromosomal material that encode instructions for building particular proteins and other chemicals. These low-level molecular programs accumulate into complex systems of control and feedback, ultimately giving rise to the marvelous life forms that we see. Most importantly, genes are “replicators,” entities capable of copying themselves. They are programmed to cause their own replication—for example, through sexual reproduction in animals.1

During the imperfect replication process, random variants emerge. These “mutations” are non-purposeful: they occur without consideration of what problem they might solve. But this random genetic “shuffling” is far from useless: it gives potentially favorable genetic variants a chance to emerge. Certain mutations, by blind luck, will be more successful replicators than others. For instance, a male peacock with bigger, more colorful tail-feathers may have more mating success than his modestly-feathered chum.

“Natural selection” occurs over generations as the gene variants that are best able to cause their own replication survive and dominate, while less effective mutations fade away.2
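The logic of replication, variation, and selection is so mechanical that it fits in a few lines of code. Here is a minimal, purely illustrative sketch (the bit-string “genes,” the fitness function, and every parameter are invented for the example, not drawn from biology): imperfect copying introduces random variation, and variants that happen to replicate better gradually take over the population.

```python
import random

random.seed(42)

GENE_LENGTH = 20
POP_SIZE = 100
MUTATION_RATE = 0.01   # chance that any given bit is copied incorrectly

def fitness(gene):
    # Toy stand-in for "ability to cause one's own replication": the count of 1-bits.
    return sum(gene)

def replicate(gene):
    # Imperfect copying: each bit may randomly flip (variation).
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in gene]

# Start with a population of identical, low-fitness genes.
population = [[0] * GENE_LENGTH for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: genes are chosen to replicate in proportion to fitness (+1 so all have some chance).
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Replication with variation.
    population = [replicate(p) for p in parents]

print("average fitness after 200 generations:",
      sum(fitness(g) for g in population) / POP_SIZE)
```

No individual copy “knows” which bits are useful; blind variation plus selection is enough to drive the average fitness steadily upward.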

We observe evolution in all processes subject to some mechanisms of replication, variation, and selection—including in business, where evolution can help us understand the differential success of companies.

Companies that don’t die

In the business world, as in nature, the mechanisms of replication, variation, and selection play a crucial role.

Businesses replicate the strategies, processes, and business models of others. But like mutating genes, startups begin with some different properties from their predecessors, and incumbents adapt and reconfigure themselves over time. These variations might include new product features, pricing models, marketing strategies, or operational processes. Just as natural selection eliminates disadvantageous genes, the market filters out weaker businesses over time. When the environment changes, the organizations that survive are those best able to adapt their resources and strategies to the external selection environment.3

A great example is The New York Times, which has persisted for 170 years as America’s most Pulitzer-decorated news organization and as a thriving business. Digital upheaval from the Internet and smartphones decimated the legacy print newspaper readership and advertising revenue of most newsrooms. The Times, however, recognized the shifting landscape early on—and chose to adapt. The company (controversially) launched a paywalled digital subscription in 2011, aggressively hired and handsomely paid top journalists (even amidst mass layoffs by competitors), and invested heavily in digital and mobile experiences.4 Today, the company’s market-leading 10 million digital subscribers are over 4x its print-era peak.

Netflix offers another striking example. Starting as a DVD-by-mail disruptor in the Blockbuster-dominated DVD rental market, Netflix became a household name—but it did not stagnate. Recognizing shifts in how viewers consumed video content, the company successfully transformed into a pioneer of the digital video streaming market, then led a global revolution in the production of original streaming content.5 Netflix’s ability to foresee and adapt to consumer preferences and technological advancements secured its streaming dominance, while the market filtered out less adaptable competitors such as Blockbuster and Quibi.

Examples like these demonstrate how evolutionary dynamics can help us eliminate common misconceptions about business—such as the idea that success is linear and predictable, or that the best product or the genius CEO or the biggest company always wins. Adaptability and randomness loom large in dynamic environments.

Biological evolution, too, is frequently misunderstood. Exploring these misconceptions can be extremely enlightening.

The “appearance of design” misconception

For one, the remarkable degree of adaptation that we observe in nature (such as the body’s ability to heal itself) gives an “appearance of design” that leads many people to presume the existence of a supernatural designer (or designers), and to the common misconception that evolution has “goals”—that it optimizes for the species or the individual.

By definition, the designer of an adaptation must have had an intention for that design. But, again, random variation takes place without knowing what problem those mutations might solve. The variants most successful at replication will dominate. There is no goal—no design.

Evolutionary processes are capable of explaining how all life forms evolved from single-cell organisms into highly complex beings, with remarkable yet “unplanned” adaptations—such as a peacock’s extravagant tail or a giraffe’s long neck.6

If life had been “designed,” wouldn’t all biological traits be perfectly optimized for the good of the organism? As we shall see, they are not.

Evolution can result in useless features, such as the human appendix or the “tail bone” (evolutionary leftovers). Evolution can even favor completely disadvantageous features. For instance, the peacock’s large, colorful tail that attracts mates also makes it more vulnerable to predators.7 The dominant genes may in fact cause a species to go extinct when the environment changes, if they were only well-adapted to conditions that no longer exist.8

The reality is that natural selection is blind. It does not optimize for the “welfare” of anything other than the gene’s ability to replicate itself.

“Organisms are the slaves, or tools, that genes use to achieve their ‘purpose’ of spreading themselves through the population.”

David Deutsch, The Beginning of Infinity (2011, pg. 92)

The “fine-tuning” misconception

A related misconception is the idea that the world appears to be “fine-tuned” to promote life on Earth.

The “parable of the sentient puddle” provides a thought-provoking counterargument: one day, a puddle wakes up and finds itself in a hole that fits the puddle “staggeringly well.” The puddle therefore concludes that the world must have been created for it.

But just as the hole wasn’t created for the puddle, the universe wasn’t sculpted for life. In fact, the opposite is true: We are fine-tuned by evolution to a very limited range of environmental conditions. Mild changes to our environment (temperature, altitude, etc.) would promptly kill us, in the absence of any human knowledge or technology.9

The Earth, in fact, is often indifferent or even hostile to life. Perhaps 99% of all species that have ever existed on Earth are extinct. Much of the Earth’s life-support system was not bestowed upon humans but created by them, using their distinct ability to create knowledge.10

Why humans actually are special

This revelation leads to perhaps the most important application of the theory of evolution outside of biology: the creation of human knowledge.

According to the dominant theory of knowledge-creation, philosopher Karl Popper’s “critical rationalism,” science is a problem-solving process that resembles biological evolution.

Ideas, like genes, are replicators. New theories emerge from creative guesswork by humans, which introduces variation into the knowledge-creation process (similar to genetic mutation). Then, we subject our ideas to a selection process by which we test them logically and experimentally. When a theory fails to survive criticism, we abandon it. If a new theory displaces it, we tentatively deem our problem-solving process to have made progress.11

Unlike biological evolution, in which bad genes are eliminated through the death of their carrier organisms, we are free at any time to discard our old ideas or to invent and test new ideas. This is one reason why knowledge creation is exponentially faster and more efficient than biological evolution. We can, in Popper’s words, “let our theories die in our stead”!

***

With humans, in some ways evolution produced a system of trial-and-error even more powerful than evolution itself. We evolved not only to survive but to explain and reshape nature, as evidenced by our cities, technologies, agriculture, and science. Our genes are coded only to selfishly perpetuate themselves, but we can transcend this programming and create genuinely altruistic societies. Recognizing our unique evolutionary position, we can build a future that not only ensures our survival but also amplifies our thriving!

Power Laws: The hidden forces behind all sorts of inequalities

A power law describes the relationship between two variables in which one variable varies as a power of the other. For example, if the side of a square is doubled, its area is multiplied by a factor of four.

Power laws are nonlinear relationships, in which the output changes disproportionately to the change in input. These relationships contrast sharply with linear functions (which are simpler and more intuitive), where changes in input and output are proportional.
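In symbols, a power law has the form output = k × (input)^a for some exponent a; the square’s area is the case a = 2. The minimal sketch below (with arbitrary constants chosen only for illustration) contrasts that quadratic scaling with a linear relationship: doubling the input quadruples one and merely doubles the other.

```python
def power_law(x, k=1.0, a=2.0):
    # Generic power law: the output varies as a power (exponent a) of the input.
    return k * x ** a

def linear(x, k=1.0):
    # Linear relationship: the output is directly proportional to the input.
    return k * x

for side in (1, 2, 4, 8):
    print(f"side={side}:  square area={power_law(side):5.0f}   linear output={linear(side):3.0f}")
```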

The dynamics of power laws help describe many extreme phenomena we observe in the world, from natural disasters to the success of startups. Anyone who aspires to maximize their impact can benefit from learning about the nature of power laws—and how to exploit them.

Far from normal

Power laws produce distributions very different from the nice, symmetrical, bell-shaped “normal distributions.” In normally distributed systems, our observations will have a meaningful central tendency (the average) and increasingly rare deviations from that average—such as with human height and weight. Most people aren’t that far from the average height, and the shortest 1% of people and tallest 1% differ in height by only around 14 inches.1

In contrast, the distributions produced by power laws don’t peak around a typical value; rather, the range of values is much wider—with a majority of observations of modest values on one end, and a “fat tail” of rare but extreme outcomes on the other end. While the tallest people in the world aren’t 10x taller than the shortest people, with power laws, the most extreme events can be orders of magnitude greater than the least extreme.

The distributions of many physical, biological, and man-made phenomena approximately follow a power law. For example:

  • The frequencies of words in most languages — If we want to learn a new language, we would do well to start with the small fraction of words (as few as 135 of them) that make up the majority of usage (“Zipf’s law”).
  • The populations of cities — As of 2019, the U.S. had more than 19,000 cities, though just 37 cities housed the majority of the population. The largest one, New York City, housed over 8 million residents alone, more than double the next-closest city.2
  • The size of lunar craters — The Moon has countless small craters from billions of years of minor collisions with asteroids, comets, and other debris, and a few enormous craters from exponentially larger impacts.
  • The frequencies of family names — There are millions of rare or obscure last names, but approximately 1 in 68 people on the entire Earth has the last name “Wang.”3
  • The magnitude of earthquakes — The Richter scale is logarithmic: each increase of 1 on the scale corresponds to a 10x increase in measured amplitude. A magnitude 5.0 earthquake is thus ten times stronger than a magnitude 4.0 earthquake, but occurs only about one-tenth as often.4

Other examples include the size of computer files, the number of views on web pages, the sales of most branded products (e.g., books, music), and individual incomes and wealth.5 In each case, there is a minority that supplies a majority of the outputs.

Sandpiles and avalanches

We should always be wary of the potential for power-law effects whenever we are dealing with systems composed of many interacting parts—in other words, with complex systems (such as economies, ecosystems, or the climate).6 The various components and the feedback loops that govern them tend to cause such systems to evolve into a very delicate state of balance, a dynamic equilibrium. When forces push the system outside its equilibrium bounds, the system may shift into a new, discrete phase.

As we shall see, when a system “tips” out of equilibrium and into a phase change, the results commonly follow a power-law distribution.

Consider a pile of sand on a countertop. You drop additional grains, one-by-one, onto the pile, steadily increasing the pile’s slope (its key control parameter). At first, each new grain does little; the pile remains roughly in equilibrium. Eventually, however, the pile’s slope will increase to an unstable “critical” threshold, beyond which the next grain may cause an avalanche, a type of phase transition.

At this critical stage, we can’t say for certain whether the next grain will cause an avalanche, or how big that avalanche will be. We do know, however, that the probability of an avalanche is much higher near the tipping point, and that avalanches of any size are possible, but smaller avalanches will happen far more often. A power law!
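A toy version of the pile makes this concrete. The sketch below follows the classic Bak–Tang–Wiesenfeld sandpile model (the grid size and number of grains are arbitrary choices for illustration): any cell holding four or more grains topples, passing one grain to each neighbor and possibly setting off a chain reaction. Most grains trigger little or nothing; a few trigger avalanches that sweep across much of the pile.

```python
import random

random.seed(0)
N = 20                                   # the pile lives on an N x N grid
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random cell; return the avalanche size (number of topplings)."""
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    topplings = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= 4:                        # a cell holding 4+ grains topples
            grid[i][j] -= 4
            topplings += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < N and 0 <= nj < N:       # grains pushed off the edge are lost
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
    return topplings

sizes = [drop_grain() for _ in range(20_000)]
avalanches = sorted(s for s in sizes if s > 0)
print("grains that triggered an avalanche:", len(avalanches), "of 20,000")
print("median avalanche size:", avalanches[len(avalanches) // 2])
print("largest avalanche size:", avalanches[-1])
```

Running it, the median avalanche stays tiny while the largest is orders of magnitude bigger: many small slides, a rare few catastrophes.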

We see these same power-law dynamics in all manner of complex systems, which essentially “adapt” themselves through a series of “avalanches” to maintain overall stability.7 Examples include the extinction of species in nature, price bubbles and bursts in financial markets, traffic jams, or earthquakes relieving pressure from grinding tectonic plates.

The engine of venture capital

The dynamics of power laws are central to our economy and to innovation in general. Progress occurs through the trial-and-error process of startups trying new and different ways of creating value. These experiments require financial capital from investors who can accept substantial risk, since most startups fail.

Venture capital investors seek to earn outsized investment returns not by having a large proportion of their investments do well, but by having one or two “grand slams” that generate massive returns (think Facebook or Tesla). It would not be surprising for a venture fund’s one or two big winners to return more than all its other investments combined. The most that VCs can lose is 1x their investment, but there is (theoretically) no cap on how much they can gain if an investment is successful.

Union Square Ventures, for example, invested in Coinbase in 2013 at a share price of about $0.20, and achieved a massive return when Coinbase opened its initial public offering at $381 in 2021—a valuation of around $100bn and a share-price increase of nearly 2,000x.8

Feed the winners, starve the losers

A common corollary to power laws is the “80/20 rule” (or, the Pareto principle), which states that for many events 80% of effects (output) come from 20% of the causes (inputs). Mathematically, the rule approximates a power-law distribution. Phenomena roughly following this rule have been observed in income distribution, software coding, business results, quality control, infection transmission, and elsewhere.

While the exact proportions vary, the principle reveals that most of the work we do generates only a small share of our overall results. The key lies in identifying “the 20%”—of activities, individuals, projects, products, businesses, grievances, etc.—that are driving a disproportionate share of the outcomes, and concentrating efforts towards them.
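A quick way to see the “critical few” is to draw values from a heavy-tailed distribution and measure how much of the total the top fifth accounts for. This sketch uses Python’s built-in Pareto sampler with an illustrative shape parameter (roughly the one classically associated with an 80/20 split); real data will land somewhere different, but the concentration pattern is the point.

```python
import random

random.seed(1)

# random.paretovariate(alpha) samples from a Pareto (power-law) distribution.
# An alpha of about 1.16 is the shape classically associated with an 80/20 split.
revenues = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)

top_fifth = revenues[: len(revenues) // 5]
share = sum(top_fifth) / sum(revenues)

print(f"share of the total contributed by the top 20%: {share:.0%}")
```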

Corporate “turnaround” master Don Bibeault famously relies on the 80/20 rule to drive transformation in struggling businesses, recognizing that most businesses spend too much time satisfying marginal customers, selling marginal products, and retaining marginal employees who make little or no contribution to the bottom line. The key to implementing an 80/20 policy is redeploying resources away from the “marginal many” to the “critical few” that account for current results and future opportunity—a tactic Bibeault calls “feed the winners and starve the losers.”9

***

Power laws teach us that some inputs are much more important than others, and they can explain many of the extreme results we observe in the world. Concentrating our efforts towards unlocking (or avoiding) the outliers of power laws can provide substantial leverage towards helping us achieve our goals.

When seeking an effective strategy or solution, we should ask ourselves where the most “power” in the situation might be hidden!

Exponential Growth and Decay: Grow fast or die trying

Exponential growth occurs whenever a stock of some material or quantity increases or replicates itself in constant proportion to how much there already is. This is a multiplicative effect, in which each step is more extreme than the preceding one. As an example, consider a stock of 1,000 hogs that, given its rates of fertility and mortality, grows exponentially at 10% per year. In the first year, there’s an increase of 100 hogs, then 110 in the second year, then 121 in the third year, and so on—illustrating the snowballing effect.

This type of growth contrasts sharply with linear growth, in which the stock changes by a constant quantity each period (an additive effect). If the hog population grew by a fixed 100 hogs annually, the implied percentage growth rate would diminish over time—from 10% in the first year to 9.1% in the second, and so on.
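The difference is easy to verify with a few lines of arithmetic. Here is a minimal sketch reproducing the hog numbers above, compounding 10% growth versus a fixed 100 hogs per year:

```python
exponential = linear = 1_000   # starting stock of hogs in each scenario

print("year   exponential (+10%/yr)   linear (+100/yr)")
for year in range(1, 11):
    exponential = round(exponential * 1.10)   # grows in proportion to the current stock
    linear = linear + 100                     # grows by a constant amount
    print(f"{year:>4}   {exponential:>21,}   {linear:>16,}")
```

By year three the exponential herd has pulled ahead (1,331 versus 1,300), and the gap keeps widening every year thereafter.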

Exponential patterns are ubiquitous, from biological phenomena such as population growth and disease spread, to economic phenomena such as GDP growth and compound interest, to technological trends such as network effects in communications networks and improvements in the processing power of computers (“Moore’s law”).

Despite the prevalence of exponential progressions, our human intuition often fails to appreciate their speed and chaotic potential. Instead, we gravitate towards linear thinking because it serves us well in most practical circumstances. We can commit fewer errors and better explain and predict the world once we understand the power of exponential growth, and its equally powerful ability to unravel.

Feedback loops: Accelerators and regulators

The underlying driver of exponential growth lies in reinforcing (positive) feedback loops, which exist whenever a system (such as a virus or a savings account) can self-multiply or grow as a constant fraction of itself. These amplifying forces generate exponential growth, producing either virtuous or vicious cycles.

In contrast, balancing (negative) feedback loops are stabilizing, goal-seeking functions that aim to maintain a system in a given range of acceptable parameters—in a “dynamic equilibrium.” Consider how a thermostat regulates the temperature of a home, or how our bodies induce perspiration and shivering to stabilize our body temperatures.1

In physical systems that are growing exponentially, there must be at least one positive feedback loop propelling the growth, but there must also be at least one negative feedback loop constraining the growth, because no physical system can grow forever in a finite environment.2

The inevitable decay

Consider a virus such as COVID-19, which initially spreads exponentially through the population as each infected person infects multiple others. It might seem like an uncontrollable plague.

But again, nothing grows forever. The flip side of exponential growth is exponential decay, when a quantity decreases at a rate proportional to its current value. If we withdraw 10% of the funds in our savings account every period, the account’s value will decay exponentially in a downward reinforcing feedback loop.

Let’s return to our example of a virus such as COVID-19 or smallpox. Over time, balancing feedback loops will kick in to combat the spread. In the worst case, the virus could simply start running out of people to infect because so many get sick. But eventually, even highly contagious viruses run out of steam. Our bodies will start to develop antibodies to increase immunity. Widespread vaccinations can achieve the same effect. Governments, organizations, and individuals may adapt their behavior to mitigate the risk and impact of the virus (wearing masks, providing aid, social distancing, etc.).

Once the average number of people that each infected person goes on to infect (the effective reproduction number, a descendant of the famous “R-naught,” or R0) falls below the critical level of 1, the exponential growth turns to exponential decay, and the virus begins to die out. This is known as “herd immunity”: there are no longer enough new hosts to whom the virus can continue to spread. The key insight for epidemic control is that “perfection” is not necessary; we don’t have to stop all transmission, just enough transmission to achieve herd immunity.3
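A back-of-the-envelope sketch shows that tipping point. The model below is deliberately oversimplified, and its numbers (an R0 of 3, a population of one million, 100 initial cases) are invented for illustration: the effective reproduction number is roughly R0 times the fraction of people still susceptible, so growth flips to decay once immunity passes 1 - 1/R0 (about 67% here).

```python
R0 = 3.0                 # hypothetical: average infections per case in a fully susceptible population
population = 1_000_000
immune = 0.0
new_cases = 100.0

print(f"herd-immunity threshold = 1 - 1/R0 = {1 - 1 / R0:.0%}")
for generation in range(1, 16):
    susceptible_fraction = (population - immune) / population
    r_effective = R0 * susceptible_fraction                      # shrinks as immunity builds
    new_cases = min(new_cases * r_effective, population - immune)
    immune += new_cases                                          # assume infection confers immunity
    print(f"generation {generation:>2}: R_eff = {r_effective:.2f}, new cases = {int(new_cases):>9,}")
```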

We can find exponential decay progressions in a variety of real-world applications, including the biological half-life of chemicals or drugs, the rate of radioactive decay by which nuclear material disintegrates, the decrease in atmospheric pressure at increasing heights above sea level, and the effectiveness of advertising messages over time.

Many phenomena exhibit both exponential growth and exponential decay, at different phases in their progressions.

The ascent and fall of Clubhouse

In 2020-21, Clubhouse, a live audio chatroom app, experienced a meteoric rise, growing to tens of millions of downloads within months. Its exponential growth was fueled by network effects, a type of positive feedback loop in which each new user makes the network more valuable to all other users. Notable celebrities and influencers joined in, creating more buzz and attracting more users around the app’s aura of novelty and exclusivity.

However, after its initial surge, Clubhouse ran into numerous challenges. Its pandemic-driven novelty diminished. Many celebrities and business leaders moved on. It was also difficult for users to engage consistently with live audio due to their busy schedules and fickle attention spans; podcasts and audiobooks remained much more convenient.

Perhaps above all, existing social media giants such as Facebook and Twitter appreciated the potential of the live audio format and moved rather quickly to copy Clubhouse’s functionality. It turned out that live audio was more promising as a feature of the existing platforms—which already had hundreds of millions of users—rather than as a standalone service that would need to bootstrap network effects from the ground up. Users could simply engage with live audio content within their existing social media routines.

The app’s new downloads declined by over 90% between February and April 2021.4 As signups slowed and users either churned or migrated to a competing service, the app became less valuable to both new and existing users—exacerbating its decline. Clubhouse experienced first-hand the capricious potential for exponential growth to unravel.

***

Whether with a virus, a population, or a company, we must remember that infinite exponential growth is mostly a theoretical construct. In the physical and practical world, there are always limits. Exponential growth may only occur across a particular scale of observation—such as during the initial contagion of a virus (when few people have been exposed), or during the early period of a new product’s life (when novelty is high and competition is low). Balancing feedback loops ultimately tame exponential progressions.

We must not underestimate the speed with which exponential forces can generate explosive growth—or equally rapid decline!

Abstraction: Wrong models are extremely useful

“A truly wise person is not someone who knows everything, but someone who is able to make sense of things by drawing from an extended resource of interpretation schemes.”

Sönke Ahrens, How to Take Smart Notes (2017, pg. 116)

Everything we know about the world is a model, a simplified representation of something. This includes every word, language, map, math equation, statistic, document, book, database, and computer program, and our “mental models.”1 We use models constantly to simplify the world around us, to create knowledge, and to communicate. Our mental models, the collection of theories and explanations that we hold in our heads, provide the foundation upon which we interpret new information.

The process by which we create models is called “abstraction,” which helps us approximate complex problems with solvable ones that are simple enough to (hopefully) enable us to make a decision or find an answer. Our human ability to create useful abstractions (knowledge) is the reason why we are such a successful species on Earth.

However, we seldom think about how we fail at abstraction, or how to do it better. We commit abstraction errors whenever we over-simplify or over-complicate problems. We fail by trying to remember isolated facts, instead of trying to understand concepts and draw meaningful connections with other ideas. Finally, we place undue faith in our preferred models by failing to recognize that all models are—as we shall see—wrong.

Practicing good abstraction can help us generate truly creative solutions, while avoiding costly mistakes.

“Consider a spherical cow”

“Any model… must be an over-simplification. It must omit much, and it must over-emphasize much.”

Karl Popper, The Myth of the Framework (1994, pg. 172)

In order to be useful, models must ignore (abstract away) certain variables or features, in order to highlight others.

An old joke from theoretical physics involves a dairy farmer struggling with low milk production. Desperate, the farmer asks for help from the local academics. A team of theoretical physicists investigates the farm and collects vast amounts of data. Eventually, the team’s leader returns and tells the farmer, “I have the solution, but it works only in the case of spherical cows in a vacuum.”

The metaphor refers to the tendency of physicists (and other scientists) to use models and abstractions to reduce a problem to the simplest possible form to enable potentially helpful calculations, even if the simplification abstracts away some of the model’s similarity to reality.2

Abstract thinking requires a balancing act: we must be rigorous and systematic, but also make conscious tradeoffs with realism. We isolate a simple part of the problem, abstracting away all irrelevant details, and calculate the answer. Then, we put our “spherical cow” answer back into context and consider whether other factors might be important enough to overturn our conclusions.

Zooming in and out

Imagine how impossible it would be if we could only learn reductively—that is, by analyzing things into their constituent parts, such as atoms. The most basic events would be overwhelmingly complex. For example, if we put a pot of water over a hot stove, all the world’s supercomputers working for millions of years could not accurately compute what each individual water molecule will do.

But reduction is not the only way to solve problems. Many incredibly complex behaviors cannot be simply “derived” from studying the individual components. We can’t analyze one water molecule and predict the whole pot’s behavior. Good explanations, it turns out, can exist at every abstraction layer, from individual particles to whole systems and beyond! High-level simplicity can “emerge” from low-level complexity.3

Let’s revisit our boiling water pot. Predicting the exact trajectory of each molecule is an impractical goal. If we were instead concerned with the more useful goal of predicting when the water would boil, we can turn to the laws of thermodynamics, which govern the behavior of heat and energy. Thermodynamics can explain why water turns into ice at cold temperatures and gas at high ones, using an idealized abstraction of the pot that ignores most of the details. When the water temperature crosses certain critical levels, it will experience phase transitions of freezing, melting, or vaporizing.4 The laws of thermodynamics themselves are not reductive, but abstract; they explain the collective behaviors of lower-level components such as water molecules.

We can use this trick of “zooming” in and out between abstraction levels to generate creative explanations and solutions in many contexts, including in business. For instance, if we only analyze our business at the company level, we risk ignoring key factors that emerge only at higher levels (such as the industry or the economy) or at lower levels (such as a product, team, individual, or project).

There is no “correct” level of analysis. A good strategist considers various perspectives in order to understand the true nature of the problem and triangulate on the highest-leverage places for intervention.

Wrong models and doomsday prophecies

“All models are wrong, some are useful.”

George Box, statistician

By simplifying things, models help us reach a decision even when we have limited information. But because they are simplifications, models will never be perfect. Consider the following examples:

  • Résumés and interviews are (flawed) models to predict a candidate’s success in a role.
  • Maps help us navigate the world by simplifying its geography. (Imagine how useless a “perfect,” life-sized map would be.)
  • Statistical models such as the normal distribution and Bayesian inference can help us make data-driven decisions, but they require us to make a bunch of iffy assumptions and simplifications.
  • Our best economic models are potentially accurate in some contexts, and wildly inadequate in others.

Failing to recognize the limitations of our models can lead to massive errors.

In 1798, the famous population theorist Thomas Malthus created a model of population growth and agricultural growth which led him to prophesy that the 19th century would bring about mass famine and drought. The model asserted that because the population grows exponentially while sustenance only grows linearly, humanity would soon outgrow its ability to feed itself. Malthus and many others thought he had discovered an inevitable end to human progress.

In hindsight, Malthus’s population projections were pretty accurate, but due to a critical error of abstraction, he wildly underestimated agricultural growth, which actually outpaced population growth. His simplified model abstracted away a variable that turned out to be pretty important: humans’ ability to create new technology.5 Indeed, we achieved incredible agricultural advances during this time, including in plows, tractors, combine harvesters, fertilizers, herbicides, pesticides, and improved irrigation.6 Today, famine mortality rates are a fraction of what they were when Malthus made his pessimistic prophecy, even though the global population is roughly eight times larger.

Not only are models inherently imperfect, but also we can never be 100% certain that our model is the best one to use in the first place!7 We must not let our current models shut out fresh ideas or alternative explanations. The ongoing replacement of our best theories and models with new and better ones is not a bug of the knowledge creation process, it is a feature.8

For instance, Einstein’s groundbreaking theory of general relativity ended the 200-year reign of Isaac Newton’s theory of gravity. Already, we know that general relativity and quantum theory, our other deepest theory in physics, are completely incompatible with one another. One day, both will be replaced.

***

Once we accept that models are valuable yet imperfect and impermanent, we can harness them to our advantage. Good strategists can generate creative solutions by analyzing complex problems from various perspectives, perhaps reframing the issues altogether, and by carefully considering which factors are critical and which ones can be safely abstracted away. And we can use our models the right way: as potentially helpful tools in certain circumstances, not as ultimate truth.

Feedback Loops: When in doubt, map it out (and you should doubt)

“I’m all for fixing social problems … What I’m against is being very confident and feeling that you know, for sure, that your particular intervention will do more good than harm, given that you’re dealing with highly complex systems wherein everything is interacting with everything else.”

Charlie Munger, Poor Charlie’s Almanack (2006, pg. 253)

Complex systems are collections of simpler components—such as employees, money, or particles—that are interconnected in such a way that they “feed back” onto themselves to create their own complex patterns of behavior. Every person, organization, animal, plant, pond, country, or economy is a complex system. Their interconnections are called “feedback loops,” circular relationships through which one component affects another and is in turn affected by it.1

Feedback dynamics demonstrate how a system can generate collective (or “emergent”) behaviors that cannot be predicted by only observing the parts. We cannot study one employee (or one team, or one division) and determine everything that will happen in a large organization. Nor can we deduce the behavior of a person from a single brain cell. Nor that of an economy from a single economic policy.

The danger of ignoring feedback

In life, we naturally gravitate towards simpler explanations, those that stem from the most coherent story we can tell from the information most readily available to us. It is much easier for us to craft a narrative where everything happens in isolation, a linear form of reasoning which sees the world as a series of sequential, cause-effect events. However, our world is complex; its relationships are often nonlinear. Ignoring feedback leads us to adopt blunt and short-sighted solutions to problems of complex systems.

Unanticipated feedback processes can help explain why economic policies rarely work as intended, why pest control efforts sometimes create entirely new pests, and why well-intentioned laws can backfire. In business, it is easy for decision makers to ignore or underrate feedback processes that could potentially undermine their strategies. Competitors, employees, suppliers, customers, lawmakers, and the media—each with their own interests—may all react or retaliate in unintended ways to a new business strategy or policy. Those reactions will trigger additional reactions, which will trigger additional reactions.

How can we disentangle the complexity?

With greater knowledge of the dynamics of both types of feedback loops (positive and negative) and how to analyze them, we can better understand the complex world we live in and use their power to create outsized change.

Equilibrium or explosion

Negative (balancing) feedback loops are stabilizing, goal-seeking loops which aim to regulate the stock around a given level or range—to maintain a dynamic equilibrium. Consider how a thermostat constantly adjusts to maintain the temperature of the room at the desired setting.

In biology, balancing feedback manifests in all life forms through homeostatic behavior. For instance, to regulate body temperature, our bodies induce perspiration when they get too hot and shivering when they get too cold.

In contrast, positive (reinforcing) feedback loops are amplifying, self-multiplying loops which create either virtuous cycles of growth or vicious cycles of snowballing decline. These forces exist wherever a system element is able to grow (or shrink) as a constant fraction of itself, generating exponential growth (or exponential decay).2 Examples include a contagious virus, population growth, compound interest, economic growth, and network effects on social platforms such as TikTok.
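A minimal simulation (with made-up numbers) shows how differently the two loop types behave over time: a balancing loop, like a thermostat nudging room temperature toward a set point, closes in on its goal, while a reinforcing loop, like interest compounding on a balance, keeps accelerating.

```python
temperature = 10.0        # degrees C, starting far from the thermostat's set point
set_point = 21.0
balance = 1_000.0         # savings balance earning 10% per period

for step in range(1, 11):
    temperature += 0.5 * (set_point - temperature)   # balancing: the correction shrinks the gap to the goal
    balance *= 1.10                                  # reinforcing: the change is a fraction of the current state
    print(f"step {step:>2}: temperature = {temperature:5.1f}   balance = {balance:10,.2f}")
```

The temperature settles into a dynamic equilibrium around 21 degrees; the balance just keeps growing faster.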

Booms and busts

The economy provides a great example, as the “boom” and “bust” cycles that we observe occur largely due to reinforcing feedback loops running amok in the short-term, with balancing feedback loops kicking in to help reestablish long-term stability.

In the short-term, unsustainable “bubbles” can emerge from many reinforcing feedback loops fueling one another, such as high consumer and business confidence, excessive greed and speculation, low interest rates, and increasing asset prices. Similarly, bubbles can “pop” due to reinforcing feedback loops that accumulate into a self-perpetuating downward cycle. For instance, an external shock (such as a pandemic or a war) might trigger fear, leading to reduced confidence and business contraction, leading to market sell-offs, leading to more fear, and so on.

Fortunately, negative (balancing) feedback mechanisms help stabilize and regulate the economy. The free movement of prices is a negative feedback loop that helps nudge supply and demand toward equilibrium. For example, rapidly rising home prices will eventually shut out many potential buyers. The Federal Reserve wields negative-feedback tools, such as raising or cutting interest rates and increasing or reducing the money supply, designed specifically to help cool a hot economy and ignite a weak one. Likewise, governments can hike or cut tax rates and increase or decrease fiscal spending.

Given all these feedback dynamics, the difficulty of accurately predicting the economy is unsurprising. Countless institutions and individuals are entangled in a complex web of reactions and fluctuations. But we can see how a complex system such as the economy, through its interconnected feedback loops, can essentially “self-organize” into long-term stability, despite periods of short-term turbulence.3

Mappin’ systems, smokin’ doobies

We tend to operate as if we know for certain what implications our actions will have. But without a complete picture of the feedback dynamics at play, we cannot fully understand if our chosen intervention is the best one, or if it will do more harm than good.

Fortunately, we can map it out. Systems mapping is an invaluable, iterative exercise in which we create a causal loop diagram of the feedback processes at work in the system of interest, including the direction of the feedback and an indication of whether that feedback is reinforcing or balancing. This process can help uncover how feedback is generating behavior that we want to change, and how to anticipate potential side effects.4

Let’s try it out. Say we are a busy person and that high stress is a recurring problem. To ease our anxiety, we try using some cannabis. This is a balancing loop, since higher marijuana use temporarily reduces our stress level.

If we stop our analysis here, increasing our cannabis intake seems effective. But let’s put down the bong and keep going. Cannabis also increases how easily we are distracted. More distraction means lower productivity, which increases the anxiety we were trying to alleviate in the first place.

Cannabis also temporarily reduces our energy level, exacerbating our productivity loss, and might make us more prone to unhealthy eating, further reducing our energy level and harming our productivity.

This logic could go on and on, but it demonstrates how the process of systems mapping can help unearth potential unintended consequences that could exacerbate, rather than cure, our original problem. This process can empower decision makers to avoid problems before they actually emerge.
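For those who like to tinker, the same map can be written down in code. Below, the cannabis story is transcribed as a signed directed graph (a “+” link means the two variables move in the same direction, a “-” link means they move in opposite directions); a closed loop is reinforcing if it contains an even number of “-” links and balancing if the number is odd. The variable names and signs simply restate the example above; this is a sketch of the bookkeeping, not a standard library.

```python
# Signed causal links: (cause, effect) -> sign. "+" = same direction, "-" = opposite direction.
links = {
    ("stress", "cannabis_use"): "+",        # more stress -> more cannabis use
    ("cannabis_use", "stress"): "-",        # cannabis use temporarily lowers stress
    ("cannabis_use", "distraction"): "+",   # cannabis use raises distractibility
    ("distraction", "productivity"): "-",   # more distraction -> less productivity
    ("productivity", "stress"): "-",        # less productivity -> more stress
}

def loop_type(*variables):
    """Classify a closed loop: an even count of "-" links is reinforcing, an odd count is balancing."""
    edges = list(zip(variables, variables[1:] + (variables[0],)))
    negatives = sum(links[edge] == "-" for edge in edges)
    return "balancing" if negatives % 2 else "reinforcing"

print(loop_type("stress", "cannabis_use"))                                  # balancing (the intended relief)
print(loop_type("stress", "cannabis_use", "distraction", "productivity"))   # reinforcing (the side effect)
```

The short loop is the relief we were after; the longer loop quietly feeds the stress back in.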

***

The key lesson is that changing one variable in a system can affect other variables and even other systems. We must consider system elements not just in and of themselves, but in relation to the system as a whole and to the greater environment.

When faced with a complex decision, good strategists should pause and map it out.5 As we work iteratively through the systems mapping process, we can:

  • Identify the critical system elements and the feedback loops between them;
  • Given those relationships, evaluate whether a given approach is likely to actually produce the intended outcome;
  • Consider how to enhance or moderate these forces to achieve better outcomes;
  • Question whether, over time, other feedback effects could kick in to challenge or even reverse any short-term successes; and,
  • Determine whether the potential for unwanted side effects outweighs the benefits.

By approaching decisions through the lens of feedback dynamics, we will have an incredible competitive advantage over others!

Randomness: Harnessing the chaos

“Here, on the edge of what we know, in contact with the ocean of the unknown, shines the mystery and beauty of the world. And it’s breathtaking.”

Carlo Rovelli, Seven Brief Lessons on Physics (2016, pg. 81)

Despite our human programming to detect patterns and seek causes or explanations for everything we observe, many events in the world are simply chaotic and unpredictable. Failures to properly account for randomness lead us astray constantly, especially when we are operating in complex systems such as an economy, company, country, or ecosystem.

Our human tendency to craft neat, linear narratives about cause-and-effect can fool us into identifying causal connections between events where none actually exists (such as a relationship between astrological signs and personality traits). It also leads us to naively extrapolate that what has happened in the past will continue into the future. In our interpersonal interactions, we tend to over-attribute people’s behavior to inherent characteristics, versus circumstantial factors or chance. Overall, these fallacies give us false confidence that things are more predictable and explainable than they really are.1

“We are far too willing to reject the belief that much of what we see in life is random.”

Daniel Kahneman, Thinking, Fast and Slow (2011, pg. 117)

But we can, sometimes, harness the chaos. Randomness can work to our advantage, whether in business, computer science, statistics, or—indeed—in the evolution of all life forms. But first…

Why so random?

Is the world inherently random and unpredictable? Physics offers some intriguing insights.

In classical physics, which describes everyday events such as rolling billiard balls and orbiting planets, “random” behavior can emerge from phenomena that are completely orderly and predictable—at least in theory. The problem in practice is twofold. First, perfect prediction requires a flawless understanding of the laws of nature, which we may never achieve. Second, it also requires an impossibly precise knowledge of the system’s initial conditions.2 Whether we’re measuring the motion of a billiard ball or a planet, no instrument can provide infinite precision. Approximation is our best hope. With time, even small errors in these specifications can lead to huge errors in the prediction (the “butterfly effect”). For this reason, many events—the weather, stock market, highway traffic—may appear “random” simply because we can’t gather and process data quickly enough to predict them.
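The “butterfly effect” is easy to demonstrate with the logistic map, a textbook chaotic system (the parameter r = 4 and the two starting values below are standard illustrative choices, not tied to anything in this chapter). Two trajectories whose starting points differ by a billionth are indistinguishable at first, then become completely uncorrelated within a few dozen steps.

```python
r = 4.0                                  # logistic map x -> r * x * (1 - x); chaotic at r = 4
x1, x2 = 0.200000000, 0.200000001        # initial conditions differing by a billionth

for step in range(1, 51):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:>2}: x1 = {x1:.6f}   x2 = {x2:.6f}   gap = {abs(x1 - x2):.6f}")
```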

However, in quantum physics, the other pillar of modern physics, which studies microscopic phenomena, unpredictability may go deeper. The observed behaviors of the universe’s most basic particles are notoriously random. Even if we had perfect information of their initial positions and velocities, we can only make probabilistic predictions of where they will go.3 The universe, it seems, will always be full of surprises!

Taming randomness with numbers

But the value of randomness is not limited to the cautionary tale that “we can’t perfectly predict things.” Randomness is a versatile, multidisciplinary mental asset—particularly in the field of statistics.

While individual random events (particle movements, coin flips) are unpredictable, if we know the “distribution” of the underlying data, the probability of different outcomes over a large enough sample size becomes predictable. This principle lies at the heart of statistics, producing tools such as the well-known normal distribution, which can help us quantify uncertainty and make useful inferences and predictions even for random events. In quantum physics, we can predict the probability distribution of a particle’s movements with remarkable accuracy, but we can never be certain of its exact behavior on any particular observation.

When faced with a problem too complex to be understood directly, one of the best tools we have to begin to untangle it is to collect a random sample and closely study the results. In scientific experiments aiming to assess causality (for example, whether a new dieting method causes weight loss), randomness is a critical ingredient. A valid experiment requires randomization both in (1) selecting a sample from the target population to study, and (2) assigning the subjects to “treatment” versus “control” groups. True randomization ensures that on average a sample resembles the population, enabling us to make valid inferences about that population.4

Without truly randomized sampling in our experiments, we are likely to generate biased and misleading results. If, for instance, all the subjects in our clinical trial were American adult men (as was the case for decades), our sample may not be representative of the patients we intend to treat.

Randomness is also the explanation behind regression to the mean: when there is some amount of randomness involved in an event, we should expect extreme outcomes to be followed by more moderate outcomes, because some extreme results are simply blind luck. And luck is transitory.

For example, because our body weight fluctuates daily, the heaviest participants in a new diet study are certainly more likely to have a consistent weight problem (an inherent trait), but they are also more likely to have been at the high-end of their weight range on the day we first weigh them (a random fluctuation). Therefore, the heaviest patients at the beginning of the study should, on average, be expected to lose some weight over time, regardless of the treatment being studied.5 To get a useful signal, we need to compare the results of the “treatment group” to those of a “control group” that did not try the diet. Otherwise, any “discovery” we make could simply be the (predictable) result of randomness!
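A small simulation makes the point; every number in it is invented. Each person has a stable “true weight” plus a random daily fluctuation. If we enroll the heaviest tenth of people as measured on day one and simply weigh them again later, their average measured weight falls with no treatment at all.

```python
import random
from statistics import mean

random.seed(7)

def measure(true_weight):
    # Observed weight = stable trait + a random daily fluctuation of a few pounds.
    return true_weight + random.gauss(0, 4)

true_weights = [random.gauss(180, 15) for _ in range(10_000)]

day_one = [(measure(w), w) for w in true_weights]
heaviest = sorted(day_one, reverse=True)[:1_000]        # "enroll" the heaviest 10% as measured on day one

day_one_avg = mean(observed for observed, _ in heaviest)
later_avg = mean(measure(w) for _, w in heaviest)       # re-measure the same people on a later day

print(f"enrolled group, day-one average: {day_one_avg:.1f} lb")
print(f"same group, later average:       {later_avg:.1f} lb   (no diet, just regression to the mean)")
```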

Solving problems with chaos

Whenever we encounter a problem we’re not sure how to solve, injecting a bit of randomness into the process can often unearth unique and unexpected solutions. If the solution seems elusive, we should ask ourselves whether we can simply try something, learn from whatever happens, and adjust from there.

Nature itself has mastered the art of trial-and-error. In evolutionary biology, random variation in the copying of genes enables the incredible adaptations and life forms we observe in nature. First, the imperfect process of copying genes from parent to offspring creates random mutations, with no regard to what problems those variants might solve. Over time, nature will “select” for the genes most successful at causing themselves to be replicated in the future, such as those that cause better brain function in humans, prettier feathers in peacocks, or longer necks in giraffes.6

Remarkably, without any intentional “design,” randomness breathes complexity, resilience, and beauty into the world. Driven by evolutionary forces, incredibly complex systems—from human beings to organizational cultures to artificial intelligences—can emerge and function without anyone having consciously designed each of their elements.

In the realm of computer science, programmers have embraced randomness as a problem-solving tool. Randomized algorithms can prove extremely useful when we are stuck. For example, testing randomly chosen candidate values can help crack equations that resist direct analysis. Many effective “optimization” (or “hill-climbing”) algorithms apply random changes whenever the search looks like it might be stuck on a local peak: we can “jitter” the system with a few small changes, or apply a full “random restart.”7 Netflix built a well-known resiliency tool called Chaos Monkey, which randomly shuts down live production servers and services to reveal how the overall system copes with failure.8
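As a concrete toy illustration of the jitter-and-restart idea: the bumpy one-dimensional “landscape” below (an invented function, chosen only because it has many local peaks) usually traps a single greedy climb, while restarting from random points and keeping the best result reliably finds a much higher peak.

```python
import math
import random

random.seed(3)

def landscape(x):
    # A bumpy 1-D "fitness landscape" with many local peaks; the best region is near x = 0.
    return math.sin(5 * x) + 0.5 * math.sin(13 * x) - 0.05 * x * x

def hill_climb(x, step=0.01, iterations=2_000):
    """Greedy local search: accept a small random move only if it improves the score."""
    best = landscape(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if landscape(candidate) > best:
            x, best = candidate, landscape(candidate)
    return best

single_climb = hill_climb(random.uniform(-10, 10))
with_restarts = max(hill_climb(random.uniform(-10, 10)) for _ in range(25))

print(f"single climb from one random start: {single_climb:.3f}")
print(f"best of 25 random restarts:         {with_restarts:.3f}")
```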

***

Embracing randomness can help us to unlock creativity by exploring new ideas and approaches, to eliminate errors of causality, and even to better understand the natural world. We have a choice: we can be astonished or distressed or in denial that the world is unpredictable, or we can admit that we will never have perfect knowledge—and then turn randomness to our advantage.

Critical Rationalism: An un-sexy term for the literal way we achieve progress

“I may be wrong and you may be right, and by an effort, we may get nearer to the truth.”

Karl Popper, The Myth of the Framework (1994, pg. xii)

Scientific theories are explanations—statements about what is there, what it does, and how and why. The distinctive creativity of human beings manifests in our capacity to create new explanations, to create knowledge.

But how do we create knowledge? In short, it consists of guessing, then learning from our mistakes—a form of trial-and-error. Unfortunately, two common misconceptions in particular plague our understanding of this process, leading us to embrace bad ideas.

The induction misconception

One of the classic fallacies about knowledge creation is that of “inductivism”—the belief that we “obtain” scientific theories by generalizing or extrapolating repeated experiences, and that every time a theory makes accurate predictions it becomes more likely to be true. But no such process actually exists.

Adapted from The Fabric of Reality (Deutsch, 1997, pg. 59)

First of all, the aim of science is not simply to make predictions about experiences; it is to explain reality. Even completely invalid theories can make accurate predictions.

Consider Bertrand Russell’s famous story of the chicken. A chicken observed that the farmer stopped by every day to feed her, “extrapolating” from this observation that the farmer would continue to do so. Every day that the farmer came to feed the chicken, the chicken became more confident in her “benevolent farmer” theory. Then, one day, the farmer came and wrung the chicken’s neck. Unfortunately, the extrapolating chicken had the wrong explanation for the farmer’s behavior. We cannot assume that the future will mimic the past without a valid explanation for why the past behaved as it did, and why we should expect that behavior to continue.1

Second, we are capable of creating explanations for phenomena that we never experience directly (such as stars, black holes, or dinosaurs), and for phenomena that are radically different from what has been experienced in the past. What is needed is an act of creativity. We did not “derive” atomic bombs or airplanes from past experience; we discovered good explanations about them, then we created them.2

So much for inductivism.

The justification misconception

Another key fallacy is that of “justificationism”—the misconception that in order for knowledge to be valid, it must be “justified” by some authoritative source. But this raises the question: what is this ultimate source of truth? A leader? A religious text? An institution? Nature itself?

For much of human history, we relied on authority figures to tell us what was true and right, based on the presumed wisdom of the leaders of our tribe, government, church, etc. Their supposedly infinite knowledge conferred on us feelings of certainty and social status.

The break from this authoritarian tradition began with the boldness of ancient philosophers such as Aristotle, but it truly accelerated in the 16th and 17th centuries with revolutionary thinkers such as Galileo Galilei, Francis Bacon, and Isaac Newton. These thinkers helped pave the way for the “Enlightenment,” an intellectual movement that advocated for individual liberty, religious tolerance, and a rebellion against authority with regard to knowledge.

The reigning theory of how knowledge progresses

Standing on the shoulders of the Enlightenment giants, a 20th-century Austrian philosopher named Karl Popper—a towering thinker who nonetheless remains under-appreciated by the general public—transformed our understanding of the growth of knowledge with his theory of critical rationalism.

Popper’s theory advocated for the opposite of justificationism, called “fallibilism,” which recognized that while there are many sources of knowledge, no source has authority to justify any theory as being absolutely true. Tracing all knowledge back to its ultimate source is an impossible task, because this leads to an infinite regress (“But how do we know that source is justified?”).

Instead of asking, “What are the best sources of knowledge?”, Popper recommended that we ask, “How can we hope to detect and eliminate error?”3

“What we should do, I suggest, is to give up the idea of ultimate sources of knowledge, and admit that all knowledge is human; that it is mixed with our errors, our prejudices, our dreams, and our hopes; that all we can do is grope for the truth even though it may be beyond our reach.”

Karl Popper, Conjectures and Refutations (1963, pg. 39)

Popper acknowledged the inherent asymmetry between the justification of a theory and the refutation of one. Whereas it is impossible to definitively prove theories to be correct, we are sometimes able to prove them wrong. Let’s return to our extrapolating chicken. No matter how many days she observed that the farmer came by to feed her, she would never be able to definitively justify her “benevolent farmer” theory. But the one day that the farmer wrung her neck definitively refutes her theory. Sorry, chicken…

Error-correction, not ultimate truth

With this understanding, we arrive at the way in which knowledge actually progresses: through a problem-solving process of conjecture and refutation, also known as trial-and-error.

This process resembles that of biological evolution, in which nature “selects” for the random genetic mutations that are most successful at causing their own replication.4 Ideas, too, are subject to variation and selection.

Knowledge creation begins with creative conjectures—unjustified guesses or hypotheses that offer tentative solutions to our problems. We then subject our guesses to criticism, by attempting to refute them. We eliminate those that are refuted. We tentatively preserve the rest, but they can never be positively justified or proved. The only logically justified statements are tautologies—such as, “All people are people”—which assert nothing.

Adapted from The Fabric of Reality (Deutsch, 1997, pg. 65)

It is not “ultimate truth” that we are after, for even if such a thing did exist, there is no source that can verify it as unquestionably true. Our sense organs themselves are highly fallible. The best we can do is use our creativity to propose new guesses, then subject our best guesses to critical tests and discussion, using logic and the scientific method. When our tests successfully refute a theory, we abandon it. If we discard an old theory in favor of a newly proposed one, we tentatively deem our problem-solving process to have made progress.5

Einstein was right that he was wrong

It is easy to overlook how recently (relatively speaking) science started to embrace the fallibility of all its theories. For this, we have Albert Einstein to thank.

For the 200+ years before Einstein, Isaac Newton’s theory of gravity had unprecedented experimental success; it was widely regarded as the “authoritative” theory of gravity. That is, until Einstein’s theory of general relativity (1915) destroyed the authority of Newton’s theory by showing that it was, in fact, merely a flawed approximation.

To this day, general relativity reigns as the dominant theory of gravity and spacetime, but Einstein himself was clear from the beginning that his theory was essentially conjectural. He did not regard general relativity as “true,” but merely as being a better approximation to the truth than Newton’s theory! We should take note that one of the greatest thinkers in history understood that his own revolutionary theory would inevitably be replaced.6

***

The overarching lesson is that everything is tentative, all knowledge is conjectural, and any good solution may also contain some error. Far from being a pessimistic view, Popper’s theory provides the very foundation for progress: error-correction.

We obtain the fittest available theory not by the justification of some unattainably perfect theory, nor by induction from repeated observations, but through the systematic removal of errors from our theories and the elimination of those which are less fit. Only a very few theories succeed, for a time, in a competitive struggle for survival.7

Relativity: Perspective is paramount

One of the most foundational scientific concepts is the idea of realism, the common-sense view that there exists an objective physical reality independent of any individual’s own consciousness.1 But that doesn’t mean that everything appears the same to everyone; in fact, one of the deepest theories of physics tells us the exact opposite.

Albert Einstein’s groundbreaking theories of relativity demonstrated that time and distance are relative notions, and that they are really two parts of the same thing—a fourth dimension we refer to as “spacetime.” A point in spacetime, called an “event,” is both a location and a moment.

Einstein’s “special theory of relativity” (1905) challenged our intuitive understanding of time and space. According to relativity, observers moving at different speeds will get different answers when measuring lengths and durations.2 For instance, time passes more slowly for an observer who is moving quickly than for one who is stationary: as measured by the stationary observer, the moving observer’s watch would quite literally tick more slowly.3

In other words, time is relative. The only thing that remains fixed is the speed of light—around 186,000 miles per second in a vacuum.

The so-called “twin paradox” provides an incredible example of relative time. Imagine identical twin sisters together on Earth. If one twin hops in a rocket and flies directly away from Earth at a very high speed, while the other one remains stationary on Earth, when the spacefaring twin returns she will be younger than her sister on the ground!4
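As a rough worked example (the speed and travel time here are illustrative, not taken from the text), the standard time-dilation relation of special relativity connects the two twins’ elapsed times:

$$\Delta t_{\text{traveler}} = \Delta t_{\text{Earth}}\,\sqrt{1 - \frac{v^{2}}{c^{2}}}$$

If the rocket cruises at 80% of the speed of light while ten years pass on Earth, the traveling twin’s clocks record only 10 × √(1 − 0.64) = 6 years, so she steps out of the rocket about four years younger than her sister.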

Even beyond physics, relativity as a concept teaches us that everything we perceive depends on our individual vantage point, which rarely paints a complete picture.

Tricks of perspective

We experience relativity whenever we are riding in a car, train, or airplane. When we are traveling in smooth, linear motion at constant velocity, we will not perceive the speed of our movement without an external frame of reference (such as a view out the window), whereas an outside observer could clearly observe this movement. This is also why we don’t “feel” the rotation of the Earth.

Suppose that you are a passenger on a rocket traveling at a uniform velocity and you toss a ball up in the air and then catch it. You will only perceive the vertical change in the ball’s position. Now suppose instead that you are a stationary observer watching the rocket pass by overhead (and you have special x-ray vision into the rocket). You will notice the ball’s horizontal movement as well, because the rocket carries it across your field of vision. The passenger does not notice this horizontal movement, because the passenger and the ball are moving at the same velocity as the rocket!
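A small numerical sketch (in Python, with made-up speeds) makes the two viewpoints concrete: the same toss shows no horizontal motion in the rocket’s frame, but drifts steadily sideways in the ground observer’s frame.

```python
# Toy comparison of two reference frames for the same ball toss
# (classical, low-speed picture; all numbers invented for illustration).
ROCKET_SPEED = 100.0   # rocket's horizontal speed relative to the ground (m/s)
TOSS_SPEED = 5.0       # vertical speed of the toss (m/s)
G = 9.8                # gravity-like pull that brings the ball back down (m/s^2)

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    y = TOSS_SPEED * t - 0.5 * G * t ** 2   # vertical position, same in both frames
    x_rocket = 0.0                          # passenger sees no horizontal motion
    x_ground = ROCKET_SPEED * t             # ground observer sees steady horizontal drift
    print(f"t={t:.2f}s  rocket frame: ({x_rocket:5.1f}, {y:5.2f})   "
          f"ground frame: ({x_ground:6.1f}, {y:5.2f})")
```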

Even the most beautiful theory is fallible

Observing that his theory of special relativity did not fit with Isaac Newton’s 200-year-old theory of gravity, Einstein published another paper in 1915 providing a complete solution: the “general theory of relativity,” one of the most powerful and elegant theories ever produced by mankind. This elegant theory posits that space can expand and contract, and that it curves in the presence of matter, such as planets or stars.5

Einstein’s groundbreaking theory transformed our understanding of physics and the universe, superseding Newton’s theory of mechanics and gravity that had dominated our thinking for centuries. This is perhaps the best example to remind us that every theory can be wrong. To this day, Newton’s predictions remain incredibly accurate for most practical circumstances; his theory was just incomplete.

“It was a shocking discovery, of course, that Newton’s laws were wrong, after all the years in which they seemed to be accurate. Of course it is clear, not that the experiments were wrong, but that they were done over only a limited range of velocities, so small that the relativistic effects would not have been evident.”

Richard Feynman, The Feynman Lectures on Physics (1963, Vol. I pgs. 16-1—16-2)

Despite its revolutionary impact, we already know that general relativity cannot be a complete description of reality. Although it explains the motions of larger objects (from planetary orbits to billiard-ball trajectories) with great accuracy, it breaks down when describing the bizarre behaviors of microscopic particles. That realm belongs to quantum theory, the other pillar of physics, which—perplexingly—cannot explain gravity.

Fortunately, scientific knowledge progresses not by discovering unattainably perfect theories, but by the repeated toppling of our best theories by stronger, more testable, more unifying ones. We can simultaneously regard both general relativity and quantum theory as our best current explanations in their respective domains, and expect that both will eventually be revised, unified, or replaced!

No single perspective rules

The concept of relativity holds profound implications beyond physics, particularly by underscoring the value of diverse perspectives in our everyday thinking and decision making, including in social systems such as organizations or governments.

Everything (except the speed of light) depends on one’s perspective; there is no “ruling” frame of reference. There is no single, shared “present” in the universe, only the appearance of one among objects that are close to us and moving at similar speeds.

Relativity teaches us that the knowledge an observer can have about a system to which he or she belongs is limited. When we are immersed in our own system, we can become blind to events outside of our immediate experience, and we may not be able to easily detect developments in our system (e.g., culture change, unconscious bias) that outsiders might observe much more readily.

Just as we require a frame of reference in order to notice the rotation of the Earth, we can frequently benefit from an external perspective, whether through formal observational methods, independent research, or even through an outside consultant, coach, or therapist. We must be open to other perspectives if we want to truly understand ourselves and our environment.

“Your personal experiences make up maybe 0.00000001% of what’s happened in the world but maybe 80% of how you think the world works.”

Morgan Housel, investor

***

We can also draw inspiration from relativity to practice more empathy; our concepts of “rationality”, “morality”, or “duty” can be highly relative to the contexts we live in. Before we pass judgment or claim to truly understand something, we should pause to consider alternative perspectives. Ask experts from various backgrounds. Seek out the best arguments against your ideas. The perspective that led us to our initial conclusions is rarely the only valid perspective to take!

Complex Systems: Why simple solutions don’t work

“Complex systems—the term of art for many interacting agents, whether buyers and sellers in markets, employees and managers in companies, or the atoms and molecules of a turbulent river—have earned that term for a reason. Their most interesting questions rarely have simple answers.”

Safi Bahcall, Loonshots (2019, pg. 227)

Complex systems are collections of simpler components that are interconnected in such a way that they “feed back” onto themselves to create their own complex (or “emergent”) patterns of behavior—collective behaviors which cannot be defined or explained by studying the individual elements.

Complex systems are everywhere. Examples include humans, animals, plants, insect colonies, brains, organizations, industries, economies, ecosystems, galaxies, and the Internet. Their basic operating units are feedback loops—causal connections between the stock levels of the system and the rules or actions which control the inflows and outflows.

Complex systems are inarguably one of the most valuable mental models to understand, because most of what we do takes place in complex systems. Yet we routinely ignore and underestimate their dynamics! Adopting “systems thinking” can help us enact positive change and make drastically better decisions.

Producing complex behavior

Consider the population of a country. Let’s define the flows that control a nation’s population as the birth rate and death rate (assuming, for simplicity, that there is no migration). If the birth rate exceeds the death rate, the population will grow—a reinforcing (positive) feedback loop. But it cannot grow forever.

Eventually, the system will produce its own balancing (negative) feedback loops to attempt to constrain future population growth. For instance, excessive population levels will eventually strain the country’s resources, potentially leading to increased deaths due to lower living standards, food shortages, or degraded health care. Sensing these problems, people may choose to have fewer children in the future (or the government may mandate it). Savvy innovators may create new technologies to mitigate society’s challenges. The cumulative effect of all this negative feedback is likely to be a moderation of population growth.

This example illustrates how the relationships between a system’s stocks and flows can produce unique, higher-order patterns of behavior. Amidst constant change, the system attempts to “self-regulate” into a long-term equilibrium.
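A minimal stock-and-flow sketch of this population example (in Python, with entirely made-up rates) shows both loops at work: the birth rate drives the reinforcing loop, while resource strain feeds a balancing loop that lowers births and raises deaths until growth slows toward a long-run equilibrium.

```python
# Toy stock-and-flow model of the population example (all rates invented).
population = 1_000_000            # the "stock"
carrying_capacity = 10_000_000    # resource limit that drives the balancing loop

for decade in range(1, 31):
    strain = population / carrying_capacity     # 0 = no strain, 1 = fully strained
    birth_rate = 0.30 * (1 - strain)             # reinforcing loop weakens as strain grows
    death_rate = 0.10 + 0.10 * strain            # balancing loop strengthens as strain grows
    population += population * (birth_rate - death_rate)
    print(f"Decade {decade:2d}: population = {population:,.0f}")
```

Run over a few simulated decades, the growth rate shrinks as strain accumulates and the population settles toward a stable level, which is the self-regulating pattern described above.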

Why we get systems so wrong

Systems are incredible: they can adapt, respond, pursue goals, repair themselves, and preserve their own survival with lifelike behavior even though they may contain non-living things. Human society itself is a remarkable example, with networks of individuals, organizations, and governments acting individually and collectively to create the conditions for human thriving.

However, systems are also frustrating: the goals and behaviors of their subunits (individual countries, organizations, groups, leaders, citizens, etc.) may create system-level outcomes that no one wants, such as hunger, poverty, environmental degradation, economic instability, unemployment, chronic disease, drug addiction, and war.1

It is easy to fall into traps of overconfidence and illusions of control when dealing with complex systems. We prefer a world that we can explain with simple, coherent narratives and familiar patterns to one riddled with randomness and uncertainty.

The uncomfortable truth is that accurately predicting and influencing the behavior of complex systems is extremely difficult. When multiple feedback loops are jostling for dominance, small changes in the environment can produce large, nonlinear changes in the system. Consider the impossibility of perfectly forecasting stock market crashes, terrorist attacks, or even the weather.

The environment changes over time as we and others interact with it, and as the behavior of one element reverberates into subsequent behaviors of other elements, or even of other systems. Consequently, our knowledge of complex systems is necessarily piecemeal and imperfect.2

We cannot control complex systems, but we can seek to understand them in a general way and prepare to adapt when they inevitably surprise us.

The consequences of non-system solutions to system problems

We should be extremely cautious about pursuing rigid, short-term solutions to issues that are fundamentally systems problems, because we cannot be certain that such actions won’t cause more unintended harm than good.

Many well-intended policies seeking to address such problems may “sound” good but have only short-term benefits, or fail to achieve their goal altogether. Worse, they could exacerbate the problem or even create entirely new problems.

There are no “easy” solutions to problems such as public health, economic growth, homelessness, public education, or terrorism.

Consider the following examples of non-systems solutions gone wrong:

  • The U.S. experiment with the prohibition of alcohol in 1920 led to a violent spike in crime, and alcohol consumption actually increased.3
  • In the 1930s, Australia introduced the cane toad species to control the cane beetle, which was a major pest for sugar cane. With few natural predators, the cane toad itself became invasive. It not only failed to control the cane beetles, but also devastated other species with its toxic skin and created entirely new ecological problems.4
  • Several Asian governments exerted extreme state control over their economies in the late 20th century, including Korea (1960s), China (1979), and Vietnam (1990s). These policies were very effective at keeping inequality down, but at an enormous cost in terms of growth—reducing living standards for everyone!5
  • China’s decades-long “One Child” policy indoctrinated many Chinese parents to believe that their resources are best devoted to one child, permanently changing the country’s population dynamics. A cultural preference for boys encouraged sex-selective abortions and contributed to a lopsided gender ratio. And brutal enforcement tactics included illegal forced abortions and sterilizations.6

A better approach to solving complex problems

“If you wish to influence or control the behavior of a system, you must act on the system as a whole. Tweaking it in one place in the hope that nothing will happen in another is doomed to failure.”

Dennis Sherwood, Seeing the Forest for the Trees (2002, pg. 3)

Before we attempt to nudge a complex system safely towards change, we must look beyond the “event” level (today’s happening) in order to understand the system’s patterns of behavior over time (its dynamics). This requires substantial observation and study. With this knowledge, we can begin to uncover and act on the systemic structures that give rise to those behaviors.7

It is here, at the system level, where we are most likely to unearth high-leverage places for intervention and anticipate unintended consequences. We call this problem-solving approach “systems thinking”—one of the most valuable mental frameworks available to us.

For example, let’s contrast China’s blunt “One Child” policy, which aimed to curb population growth, with Sweden’s population policies during the 1930s, when Sweden’s government grew concerned about a substantial decline in the birth rate. Sweden could have adopted, say, a simple “Three Child” policy to push parents toward faster population growth. Instead, the Swedish government recognized that there would never be agreement about the appropriate size of a family. But there was agreement about the quality of child care: they determined that every child should be wanted and nurtured.

Under this principle, they adopted policies that provided widespread sex education, free contraceptives and abortion, free obstetrical care, simpler divorce laws, support for struggling families, and substantial investments in education and health care. Since then, Sweden’s birth rate has fluctuated up and down, but there is no crisis, because people are focused on a much more important goal than the number of humans in Sweden: the quality of life of Swedish families.8

***

The fact that we live in a world of unpredictable and interconnected complex systems should be humbling and encourage us all to embrace lifelong learning. We should experiment incrementally, monitor vigilantly, and be willing to adapt based on what we learn. Be skeptical of seemingly straightforward solutions to complex problems. Consider what second- and third-order side effects there could be, and have the courage to question popular convention.