Optimism: The irrationality of discounting human creativity

“(T)here is no fundamental barrier, no law of nature or supernatural decree, preventing progress. Whenever we try to improve things and fail, it is… always because we did not know enough, in time.”

David Deutsch, The Beginning of Infinity (2011, pg. 212)

In today’s news and social media environment, optimism is scarce. We are constantly bombarded with claims that society is heading in the wrong direction, that our children will be worse off, that some problem or trend will ensure our demise. These pessimistic messages poison the conversation around a variety of threats: climate change, artificial intelligence (AI), nuclear apocalypse, extraterrestrial invaders, etc.

It’s true that problems are inevitable. It’s also true that there’s no guarantee we will solve these problems. However, we frequently behave (often unknowingly) as if we are not capable of solving them. The truth is that we humans have the gift of creating new knowledge. The future is neither doomed to tragedy nor destined to bliss, because the knowledge that will determine the future has not yet been created.

Pessimism is self-reinforcing. The (implicit) assumption that we can’t or won’t create the knowledge to solve our problems discourages us from even trying. On the other hand, optimism, the belief that we are capable of solving problems, is not merely a more pleasant way to approach life; it is the more rational approach to the future.

The futility of pessimism and prophecy

Pessimism is an ancient plague. Indeed, in the Biblical book of Ecclesiastes, King Solomon lamented the futility of life and knowledge:

“For with much wisdom comes much sorrow; the more knowledge, the more grief.”

Ecclesiastes 1:18

Many ancient philosophers and modern thinkers have echoed the king’s dread. Sure, ignorance may be blissful for some, but King Solomon had it backwards: the creation of knowledge is the only way to eliminate the world’s sorrows.

The famous population theorist Thomas Malthus offers a great example. In 1798, Malthus built a model of population growth and agricultural growth which led him to prophesy that excessive population growth destined humanity to mass famine, drought, and war in the 19th century. He believed that the poor and uninformed procreated imprudently, that periodic checks on the birth rate were necessary, and that people were unlikely to behave as required to avert these disasters.1 Malthus and other prominent thinkers believed he had discovered the end of human progress.

Malthus’s population growth predictions were actually fairly accurate. However, his prophesied dooms never materialized. Instead, the food supply grew at an unprecedented rate in the 19th century due to remarkable innovations such as plows, tractors, combine harvesters, fertilizers, herbicides, and pesticides.2 Living standards rose considerably. Today, the human population is roughly eight times larger than in Malthus’s time, yet famine mortality rates have plummeted.

The principle of optimism

The mistake Malthus made remains common today: failure to account for the potential for humans to create new knowledge, new technology. Whenever we make predictions of the future but ignore this key factor, we inevitably devolve into pessimism—and our predictions become prophecies.

The brilliant physicist David Deutsch offers a prescription, a worldview he calls the “principle of optimism”:

“All evils are caused by insufficient knowledge.”

David Deutsch, The Beginning of Infinity (2011, pgs. 212-213)

By “evils,” Deutsch is not referring to Hitler or Thanos; he is referring to unsolved problems. He argues convincingly that the only permanent barriers to solving problems are the immutable laws of physics. As long as a solution does not require breaking these laws (for example, if it required traveling faster than the speed of light), then humans have the potential to create the knowledge needed to solve any problem, given time.3 Fortunately, creating new knowledge is a distinctive human specialty.

AI doomsday?

Let’s consider one of the most common areas of pessimism today: AI. Over the years, many renowned thinkers—from Alan Turing to Elon Musk—have proposed various doomsday theories that AI will make humans obsolete or, worse, conquer and enslave them.

Aaron Levie, Founder/CEO of Box, offered a thought-provoking counter-argument to the common assertion that AI will simply replace human workers in droves. Consider the claim that if AI could make a company 50% more efficient in a given function, it would also eliminate 50% of the workers in that function. Levie rightly criticizes these naively pessimistic arguments:4

  • First, AI-displacement claims assume that companies are already operating with the maximum amount of labor they would or should have, if budget weren’t a constraint. In reality, companies may prefer to utilize the efficiency gains from AI to scale up their operations, which might involve hiring more humans—not fewer. New business growth may promote even more investment.
  • Second, AI-displacement claims ignore the probability that competitors will also act in response to productivity gains, pressuring other firms to increase their own productivity—even at the expense of profits. Competition may demand they retain, or even expand, their workforces.

Levie is right. These AI-displacement arguments often ignore the second-order feedback effects that are characteristic of complex systems. We cannot simply assume that if AI leads to higher productivity, then humans will become obsolete. Rather, the impact of AI productivity gains on the demand for labor is unknowable, because the knowledge that humans will create in the future is unknowable. As with most new technologies, it’s likely that AI reduces the demand for labor in certain domains, increases it in others, and even opens up entirely new domains.

Most AI doomsday claims involve textbook pessimism: they implicitly assume that we will create no new knowledge and simply accept our fate—in other words, that progress is over. Instead of prophesying our imminent obsolescence, we would be better off testing and improving new technologies, investigating how to deploy and control them responsibly, and learning how to adapt and reconfigure our societies to new circumstances. We have, in fact, made such adaptations countless times throughout history.

If AI does make humans redundant or conquer the world, it won’t be because we were incapable of solving the problem. The ultimate impact that AI has on humans is up to, well, humans.

A rational preference for the unknown

Pessimism also tends to emerge when we face decisions between the familiar and the unfamiliar. It’s quite easy to dismiss new ideas or possibilities simply because we’re more comfortable with the status quo.

A fascinating thought experiment from computer science offers a compelling case for why, under many conditions, we should actually prefer the unknown to the known—that is, why optimism can be rational.

In the “multi-armed bandit problem,” you enter a casino filled with slot machines, each with its own odds of paying off. Your goal: maximize total future profits. To do so, you must constantly balance between trying new machines (exploring) and cashing in on the most promising machines you’ve found (exploiting).

The problem has a fascinating solution, the “Gittins index,” a measure of the potential reward from testing any given machine—even one we know nothing about.5 We should prefer a machine with an index of, say, 0.60 to one with 0.55. The only assumption required is that the next pull is worth some constant fraction of the current pull. Let’s say 90%.

Here’s the fascinating part: an arm with a record of 0-0 (a total mystery) has an “expected value” of 0.50 but a Gittins index of 0.703. The index tells us we should prefer the complete unknown to an arm that we know pays off 70% of the time! The mere possibility of the unknown arm being better boosts its value. In fact, a machine with a record of 0-1 still has a Gittins index over 0.50, suggesting we should give the one-time loser another shot.6
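
For the computationally curious, the index can be approximated with a short program. The sketch below is illustrative: it assumes Bernoulli slot machines, a uniform (“know-nothing”) prior over each machine’s payoff probability, and the 90% discount factor from above, and it uses the standard “retirement” framing, in which the index is the guaranteed payoff rate that would make us exactly indifferent to exploring the unknown machine.

```python
from functools import lru_cache

# Assumptions for this sketch: Bernoulli slot machines, a uniform Beta(1, 1)
# prior over each machine's payoff probability, and a discount factor of 0.9
# (each future pull worth 90% of the current one, as above).
GAMMA = 0.9
HORIZON = 200  # truncate the infinite game; GAMMA**200 is negligible

@lru_cache(maxsize=None)
def value(wins, losses, lam, depth):
    """Discounted value of the uncertain machine when we may, at any point,
    'retire' to a known machine paying lam on every pull forever."""
    retire = lam / (1.0 - GAMMA)
    if depth >= HORIZON:
        return retire
    p = (wins + 1) / (wins + losses + 2)  # posterior mean payoff probability
    pull = (p * (1.0 + GAMMA * value(wins + 1, losses, lam, depth + 1))
            + (1.0 - p) * GAMMA * value(wins, losses + 1, lam, depth + 1))
    return max(retire, pull)

def gittins_index(wins, losses, tol=1e-4):
    """Binary-search for the guaranteed payoff rate that makes us indifferent
    between the known machine and pulling the uncertain one at least once."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        value.cache_clear()
        p = (wins + 1) / (wins + losses + 2)
        pull = (p * (1.0 + GAMMA * value(wins + 1, losses, lam, 1))
                + (1.0 - p) * GAMMA * value(wins, losses + 1, lam, 1))
        if pull > lam / (1.0 - GAMMA):
            lo = lam  # still worth exploring, so the index must be higher
        else:
            hi = lam
    return (lo + hi) / 2.0

print(gittins_index(0, 0))  # should land near the 0.703 quoted above
print(gittins_index(0, 1))  # should land just above 0.50
```

This is only an approximation, but it should reproduce the flavor of the numbers quoted above.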

The Gittins index offers a formal rationale for preferring the unknown. Exploration itself has value for the simple reason that new things could be better, even if we don’t expect them to be.

This type of reasoning carries over to all “explore/exploit” decisions. For example, how should companies balance between milking their current profitable products today versus researching and developing new ones? Or, how should societies allocate resources between ensuring energy availability with traditional fossil fuels versus investing in low-carbon alternatives like solar or nuclear? The Gittins index illustrates that the mere potential for progress means that trying new things is often the rational approach to the future.

***

Rational optimism is not naïve: we aren’t guaranteed to solve our problems. We could fail or destroy ourselves. However, we should be endlessly encouraged by the fact that the universe does not preclude us from solving our problems! We have all the tools we need.

Progress emerges from learning and creating the right knowledge to address our problems, current and future. Assuming otherwise, or—equally dangerous—failing to remember that creative problem-solving is a human specialty, leads us inevitably to pessimism, prophecy, and error. As Winston Churchill said of his optimism, “It does not seem to be much use being anything else.”

Memes: The evolutionary battle of ideas

“In reality, a substantial proportion of all evolution on our planet to date has occurred in human brains. And it has barely begun. The whole of biological evolution was but a preface to the main history of evolution, the evolution of memes.”

David Deutsch, The Beginning of Infinity (2011, pg. 380)

Memes are ideas that act as replicators, the broader term for any entity capable of causing itself to be copied. These aren’t limited to funny internet posts. Memes could include jokes, languages, cultural traditions, artistic movements, business strategies, advertising jingles, scientific theories, conspiracy theories, religions, documents, or recipes. Each of these ideas contains (often inexplicit) knowledge for causing its own propagation across human minds, through an evolutionary process of creation and recreation.1

The leaders who transform political, social, scientific, or corporate systems—for better or worse—share the ability to successfully spread their ideas to others. Indeed, all human progress (and suffering) has relied on the creation and replication of memes—including those underlying democracy and autocracy, science and dogma, morality and evil.

For those striving to become more effective strategists and decision makers, fluency with memetics (the study of memes) better equips us to recognize and cultivate good ideas—and, more importantly, to avoid bad ones.

Genetic vs. memetic evolution

An enlightening way to explore memes is by contrast to the original replicators: genes, the bits of DNA that are copied across generations in living organisms.

Genes are the basis of biological evolution, which occurs through the imperfect copying of genes from parents to offspring, followed by a “selection” process in which nature ruthlessly filters out the gene variants that are less successful at causing their own replication.

[Figure: evolution by replication, variation, and selection]

Through some 600 million years of evolution, these remarkable molecules have given rise to the complex systems of control and feedback we see throughout nature in plants, animals, and bacteria. Genetic evolution, however, is slow—operating on timescales far too long for us to notice.

Memes—a concept coined by biologist Richard Dawkins and elucidated by physicist David Deutsch—also operate through replication, variation, and selection. First, someone transmits a meme (a joke, theory, recipe) to someone else. The recipient then either re-enacts a version of that meme for others or does not. If its new holders recreate the meme, they may introduce further variations. “Selection” takes place as the competing “meme variants” achieve differential success in causing their own replication.

Here, however, the meme-gene metaphor crumbles. Whereas genes mutate randomly, with no regard to what problems they might solve, we can create new memes with conscious foresight. Moreover, memes undergo random and intentional variation not only when we express them to others, but also within our own minds. Our creative faculties enable us to subject our ideas to thousands of cycles of variation and selection before we ever enact a variant! Finally, we can transmit a meme immediately, and to anyone, not just our children. For these reasons, meme evolution is orders of magnitude faster than gene evolution.2

[Figure: internal meme evolution]

We can create, vary, and discard our ideas with remarkable speed. This ability, in fact, helps explain the “ascent of man,” and especially our accelerating technological progress in the ~400 years since the Enlightenment. Our knowledge growth is no longer bound by genetic timescales.

How ideas spread

But why do some memes flourish, while most perish? Unfortunately, as with genes, the memes that survive need not be “good,” or even beneficial. They simply must be better at displacing competing memes from the population of ideas.
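
A deliberately crude toy model can make the selection logic concrete (the transmissibility numbers and population size below are arbitrary assumptions, not a serious simulation of culture): two variants of the same idea compete for a fixed number of minds, differing only in how readily they get passed on.

```python
import random

random.seed(0)

# Toy assumptions: each meme variant has a fixed "transmissibility" -- the
# chance that a holder passes it on in a given round. Usefulness plays no
# direct role here; only replication success does.
population = ["plain"] * 90 + ["catchy"] * 10   # two variants of one idea
transmissibility = {"plain": 0.10, "catchy": 0.30}

for round_number in range(60):
    spread = []
    for meme in population:
        spread.append(meme)                      # the holder keeps the meme
        if random.random() < transmissibility[meme]:
            spread.append(meme)                  # ...and passes a copy on
    # Only 100 "minds" are available, so variants compete for them (selection).
    population = random.sample(spread, 100)

print(population.count("catchy"))  # usually close to 100 after 60 rounds
```

The “catchy” variant wins not because it is truer or more useful, but simply because it replicates better, which is precisely the dynamic behind the two strategies described below.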

Memes successfully replicate through one of two basic strategies: (1) by helping their holders (rational memes), or (2) by inhibiting their holders’ critical abilities (anti-rational memes).3

1) Rational memes

Memes can help their holders solve problems by conferring useful knowledge, such as a recipe for bread, the knowledge of how to raise a child, instructions for constructing shelter, or a good scientific theory like Einstein’s theory of relativity. Because rational memes like these undergo countless cycles of creative variation and critical selection, they evolve to become increasingly useful. They prevail over alternative memes not because they have been immune from criticism, but because they have withstood criticism and evolved accordingly. Memetics itself is robustly criticized4—as it should be.

[Figure: rational meme replication]

Rational memes thrive in open, dynamic societies and organizations—where people can freely critique, vary, and discard them.

2) Anti-rational memes

Memes that can’t withstand rational scrutiny may still spread by incorporating knowledge to prevent such scrutiny in the first place. These anti-rational memes include a myriad of ideologies and dangerous yet contagious ideas, such as conspiracy theories, autocracy, crackpot religions, and discriminatory or violent cultural beliefs. These memes propagate not by being more useful, but by successfully shielding themselves from criticism.

[Figure: anti-rational meme replication]

On their own, conspiracy theories shouldn’t survive for long once the factual evidence inevitably contradicts them. That is why they almost always include an additional theory: that the conspirators and the news media are “in” on the conspiracy together—so you can’t trust any information from either of them!5 This anti-rational “protective coating” discourages holders of the conspiracy meme from taking any contradictory evidence seriously.

Consider also the case of North Korea, a one-party state led by a totalitarian dictatorship. The legitimacy of the Kim dynasty rests on an intricate system of anti-rational memes and illiberal policies that suppress criticism and dissent, including “divine right,” mass surveillance, arbitrary arrests, torture camps, rigged elections, political punishments, and state-run media.6 The result is a wholly static society that compares miserably to its free and democratic neighbor, South Korea.

Though North and South Korea possess similar natural resources and share the same ethnic makeup and language, the lives of their citizens could not be more different. South Koreans enjoy a robust democracy, strong civil liberties, and average incomes nearly 30x higher than their Northern neighbors. Meanwhile, North Koreans suffer from inexcusably high rates of poverty, corruption, starvation, disease, and infant mortality.

Rational meme organizations

It is not just governments, but all institutions that should be evaluated based on how well they facilitate error-correction—companies included. Creating dynamic, durable organizations requires an environment in which rational memes can evolve and anti-rational memes cannot suffocate change.

Such organizations must cultivate three key traits: (1) criticism, (2) experimentation, and (3) information-sharing.

  1. Criticism — When a leader’s authority rests on suppressing criticism, errors will go uncorrected, and stagnancy and decay are inevitable. This is one reason why the modern “board governance” corporate structure evolved: the board of directors acts as an error-correction mechanism that can remove incompetent leaders. Beware any institution in which individuals have unconstrained power or cultivate an environment in which criticizing or questioning them is unsafe. A great positive example is Netflix, which has institutionalized guiding principles for open, frequent feedback and even disagreement—not just with peers and subordinates, but with senior management.7 Netflix’s dynamic “feedback culture” enables employees to promptly address errors and improve themselves.
  2. Experimentation — Every organization must balance efforts to exploit its current businesses and memes with experiments to explore new ones. Unfortunately, when the environment feels stable, many large organizations stop experimenting because it seems costly and inefficient.8 Such complacency is a death sentence in dynamic competitive environments. Consider ChatGPT-maker OpenAI, which promotes experimentation using a unique approach that embeds researchers directly into product teams, shortening the feedback loop between ideation and development.9
  3. Information-sharing — Companies with a culture of transparency and documentation facilitate criticism and improvement of their memes. For rational memes to evolve efficiently, information and ideas should be codified in shareable, collaborative documents. Amazon is the paragon here. All meetings start with a pre-read of a standardized document format that summarizes an idea, an analysis, or a potential solution to a problem.10 This practice facilitates shared understanding, promotes diverse input and criticism of ideas, and creates an easily replicable artifact for every meeting and decision.

***

When we die, we leave behind genes and memes. Within a couple of generations, our genes will be forgotten. Memes, however, can live on.

Culturally, professionally, and politically, we should ask ourselves what memes we are creating, spreading, and allowing into our minds. Do they foster progress and understanding? Have we adequately considered alternatives? Are they somehow shielded from criticism by ourselves or others?

If we find ways to contribute positive ideas, behaviors, and culture to the world, we may leave a memetic legacy that long outlives our genes.

Counterfactual Thinking: Think like a robot can’t

A counterfactual is a “what-if” scenario in which we consider what could have been or what could happen, rather than just what actually happens. What if I had never met my partner? What if the US had never invaded Iraq? How might our customers react to a price increase?

Thinking in counterfactuals is a quintessential exercise in human creativity. By imagining what is possible or what could have been under different circumstances, we can unlock new solutions, better evaluate past decisions, and uncover deeper explanations of the world than we could by analyzing only what happens.

No, AI isn’t about to take over the world

To appreciate the power of counterfactual thinking, consider the capabilities and limitations of artificial intelligence (“AI”) technologies, specifically generative chatbots such as ChatGPT.

These chatbots rely on intricate machine learning (“ML”) models to simulate human conversation. For instance, if we feed a chatbot millions of lines of dialogue from the Internet—including questions, answers, jokes, and articles—it can then generate new conversations based on patterns it has learned and convincingly mimic human interactions, using mindless and mechanical statistical analysis.
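
To see just how mechanical this can be, here is a drastically simplified stand-in: a toy bigram model, nothing like the huge neural networks behind real chatbots, trained on a made-up two-sentence “corpus.” It merely counts which word tends to follow which, then strings words together.

```python
import random
from collections import defaultdict

random.seed(1)

# A drastically simplified, assumption-laden stand-in for "learning patterns
# from dialogue": a bigram (word-pair) model. Real chatbots use enormous
# neural networks, but both generate text from statistics of their training data.
training_text = (
    "what is the weather today . the weather today is sunny . "
    "what is the best way to learn . the best way to learn is practice ."
)

# "Training": count which word follows which.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

# "Generation": start from a prompt word and repeatedly sample a likely next word.
def generate(start, length=10):
    output = [start]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break
        output.append(random.choice(options))
    return " ".join(output)

print(generate("the"))   # plausible-sounding but purely statistical recombination
```

The output can sound plausible, but nothing in the program understands weather or learning; it is pattern-matching all the way down.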

However, despite rapid improvements, AI chatbots struggle mightily with genuine counterfactual thinking. Unlike humans, their ability to explain the world or dream up entirely new scenarios is constrained by their explicit programming.1 They cannot disobey, or ask themselves a different question, or decide they would rather play chess. Instead, they construct convincing responses to prompts by recombining patterns from their training data. Until we can fully explain how human creativity works—a milestone we are currently far from reaching—we won’t be able to program it, and AI will remain a remarkable but incomplete imitation of human-level creative thought.2

“Becoming better at pretending to think is not the same as coming closer to being able to think. … The path to [general] AI cannot be through ever better tricks for making chatbots more convincing.”

David Deutsch, The Beginning of Infinity (2011, pgs. 157-158)

Contrary to the popular AI “doomsday” paranoia, this idea paints a hopeful picture. While digital systems continue to automate routine tasks3—such as bookkeeping, analytics, manufacturing, or proofreading—our counterfactual abilities enable us to push the boundaries of innovation and creativity, with AI as our aid, not our replacement.

We should, therefore, dedicate ourselves to solving problems that require our unique creative and imaginative powers—to design solutions that even the most powerful AI cannot. We did not invent the airplane, nuclear bomb, or iPhone by regurgitating historical data. We imagined good explanations for how they might work, then we created them!

Error-correcting with counterfactuals

Over countless generations of genetic evolution, our brains have developed remarkable methods of learning. The first, more “direct” method is rote trial-and-error, which can help monkeys figure out how to crack nuts or chess players devise winning strategies.

The second, which is a human specialty, is through simulation—using hypothetical scenarios to evaluate potential solutions in our minds. Because blind trial-and-error can sometimes lead to tragedy, an effective simulation is often preferable, and sometimes necessary.4 This strategy is evident in flight simulators, surgical practice, war games, and simulations of nuclear reactions and natural disasters.

In fact, counterfactuals are crucial to the knowledge creation process. It always starts with creative guesswork (counterfactuals) to imagine tentative solutions to a problem, followed by criticism of those hypotheses to correct or eliminate bad ones. We evaluate a candidate theory by assuming that it is true (a counterfactual), then following it through to its logical conclusions. If those conclusions conflict with reality, then we can refute the theory. If we fail to refute it, we tentatively preserve it as our best theory, for now.

Let’s try it out with a problem of causality. We’ve all heard that “correlation does not imply causation,” but our instinct to quickly explain things in terms of linear cause-effect narratives still misleads us. To truly establish causality, we must turn to counterfactual reasoning. In general, we should conclude that an event A causes event B if, in the absence of A, B would tend to occur less often.5

Consider the claim that vaccinations cause autism in children, a tempting headline for the conspiracy-minded. Following our counterfactual logic above: if this theory were true, we should expect autism to be more common among vaccinated children. However, the evidence suggests that rates of autism are essentially equivalent between vaccinated and unvaccinated children.6 In reality, vaccination administration and the onset of autism simply happen (coincidentally) around the same age.
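
The counterfactual check itself is almost embarrassingly simple to run once the data exist. The numbers below are invented purely for illustration (they are not from any real study); the point is the comparison.

```python
# Illustrative only: made-up counts, not data from any real study. The point is
# the counterfactual comparison itself: does B ("autism diagnosis") occur more
# often when A ("vaccinated") is present than when it is absent?
vaccinated = {"autism": 250, "total": 10_000}
unvaccinated = {"autism": 26, "total": 1_000}

rate_with_a = vaccinated["autism"] / vaccinated["total"]         # P(B | A)
rate_without_a = unvaccinated["autism"] / unvaccinated["total"]  # P(B | not A)

print(f"Rate with A:    {rate_with_a:.3f}")
print(f"Rate without A: {rate_without_a:.3f}")
print(f"Risk ratio:     {rate_with_a / rate_without_a:.2f}")
# A risk ratio near 1 means B occurs about as often without A as with it --
# the counterfactual test gives no support for "A causes B."
```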

“The science of can and can’t”

A thrilling application of counterfactuals comes from physics, where theoretical physicists David Deutsch and Chiara Marletto are pioneering “constructor theory,” a radical new paradigm that aims to rewrite the laws of physics in terms of counterfactuals, statements about what is possible and what is impossible.

Traditional physical theories, such as Einstein’s general relativity and Newton’s laws, have focused on describing how observable objects behave over time, given a particular starting point. Consider a game of billiards. Newton’s laws of motion can predict the paths of the balls after a cue strike, based on initial positions and velocities. However, they remain silent on why a ball can’t spontaneously jump off the table or turn into a butterfly mid-motion.

Constructor theory goes a level deeper. Instead of merely predicting the balls’ trajectories, it might explain why certain trajectories (such as flying off as a butterfly) are impossible given the laws of physics. By focusing on the boundaries of what is possible, counterfactuals enable physicists to paint a much more complete picture of reality.

Using counterfactuals, constructor theory has offered reformulated versions of the laws of thermodynamics, information theory, quantum theory, and more.7

***

In summary, we should embrace our beautifully human capacity to imagine worlds and scenarios that do not exist. Counterfactual thinking can spark creativity and innovation, help us reflect on the past, enable better critical evaluations, and even reimagine the laws of physics. Plus, it is the best defense we have against automating ourselves away!

What world will you dream up next?

Scientific Method: Why the world doesn’t speak Chinese

For much of human history, we relied on authority figures to tell us what is true and just, based on the presumed wisdom of the leaders of our tribe, government, church, etc. The break from this authoritarian and anti-progressive tradition began with the boldness of ancient philosophers such as Aristotle, but truly accelerated in the 16th and 17th centuries with revolutionary thinkers such as Francis Bacon, Galileo Galilei, and Isaac Newton. These leaders helped shape the “Enlightenment,” an intellectual movement that advocated for individual liberty, religious tolerance, and a rebellion against authority with regard to knowledge.

What emerged was an enduring tradition of criticism and of seeking good explanations in an attempt to understand the world—a tradition we call science, which has led us to remarkable progress in the last ~400 years.

The best method of criticism

Thanks to the brilliant yet underappreciated 20th-century philosopher Karl Popper, we have a full-fledged theory of knowledge creation—the theory of critical rationalism. We do not obtain knowledge by the charity of some authoritative source passing the “truth” down to us. In fact, there is no such authoritative source. All knowledge is fallible.

Instead, we create knowledge through an iterative process of trial-and-error. First, we conjecture (guess) tentative solutions to our problems. Then, we criticize those theories, attempting to disprove them. We discard refuted theories and try to replace them with better ones. If we succeed in replacing a refuted theory with a better one, then we can tentatively deem our efforts to have made progress.1

“What we should do, I suggest, is to give up the idea of ultimate sources of knowledge, and admit that all knowledge is human; that it is mixed with our errors, our prejudices, our dreams, and our hopes; that all we can do is grope for the truth even though it may be beyond our reach.”

Karl Popper, Conjectures and Refutations (1963, pg. 39)

Criticism is the step in this process that helps us root out wrongness. The characteristic (though not the only) method of criticizing candidate theories is through experimental testing—through the scientific method.

After we postulate a theory, we perform a crucial experiment, one for which the old theory predicts one observable outcome and the new theory another. We eliminate the theory whose predictions turn out to be false.2

For instance, our “new” theory could be that a particular dieting method is effective for losing weight. Our “old” theory could be that the dieting method does nothing (the dreaded “null hypothesis”). We would run an experiment and compare the results of a randomly selected group who used the method to a randomly selected “control” group who didn’t. If the treatment group’s results aren’t sufficiently better than the control group’s, then we reject the theory that the dieting method is effective.
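
As a sketch of what “sufficiently better” can mean in practice, here is one common approach, a permutation test, run on invented weight-change numbers (illustrative only): it asks how often a gap at least as large as the observed one would appear if the diet truly did nothing.

```python
import random

random.seed(42)

# Invented illustration data: weight change (kg) after 12 weeks. Negative = loss.
treatment = [-3.1, -2.4, -4.0, -1.2, -2.8, -0.5, -3.6, -2.2]  # used the diet
control   = [-0.8, -1.5,  0.4, -0.9,  0.2, -1.1, -0.3, -1.8]  # did not

observed_diff = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation test: if the diet did nothing (the null hypothesis), group labels
# are arbitrary, so shuffle them and see how often a gap this large appears by chance.
combined = treatment + control
count_as_extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(combined)
    fake_treatment = combined[:len(treatment)]
    fake_control = combined[len(treatment):]
    diff = sum(fake_treatment) / len(fake_treatment) - sum(fake_control) / len(fake_control)
    if diff <= observed_diff:   # at least as much extra weight loss as observed
        count_as_extreme += 1

p_value = count_as_extreme / trials
print(f"Observed difference: {observed_diff:.2f} kg, p-value: {p_value:.4f}")
# A small p-value means chance alone rarely produces such a gap; a large one
# means we cannot reject the "diet does nothing" theory on this evidence.
```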

The scientific method embraces objectivity, curiosity, careful observation, fierce skepticism, analytical rigor, and continuous improvement.

Good vs. bad explanations

Critically, our scientific theories (guesses) must meet two key criteria. First of all, they must be falsifiable (or testable)—that is, they must be capable of conflicting with possible observations. If no conceivable event would contradict the theory, it cannot be scientific.3

As an example, consider the hypothesis “Scorpios are creative and loyal.” Would a single uncreative, disloyal person born between October 23 and November 21 refute the theory? Would 1,000? How uncreative or disloyal would they have to be, and how would we know? Unfortunately, the conditions under which these kinds of astrological predictions would be false are never specified; therefore, they cannot be scientific (sorry, astrologers…).

Second, our scientific theories must be what physicist David Deutsch calls “good explanations,” those that are hard to vary while still accounting for what they purport to account for. When we can easily tweak our theories without changing their predictions, then testing them is almost useless for correcting their errors. We can toss these out immediately without experiment.4 Examples of easily variable explanations include assertions such as “The gods did it,” or “It appeared out of thin air,” or “Because I said so” (sorry, parents…). These kinds of claims are easily varied to explain, well, anything.

Why the world speaks English

Historians have long debated why the “scientific revolution” originated in the West, given that many technological and political innovations originated in the Indian, Islamic, and (especially) Chinese empires. For centuries, the Chinese outperformed the Europeans in applying natural knowledge to solve human problems. But it was the emergence and proliferation of the scientific method across Western Europe in the 17th century that sparked the tradition of criticism and the wave of innovation that has revolutionized human society. Why?

Our best theory attributes the West’s scientific supremacy to the structure of its knowledge creation practice—that is, to the scientific method. Despite enormous creativity in China, political battles and the Song emperors’ personal interests smothered the work of the early innovators. By contrast, in 1660, the English established the Royal Society of London, which openly shunned authority and embraced science as a path toward prosperity. The Royal Society inspired a generation of new scientists (including Isaac Newton) who would ultimately propel the English to a commanding lead in the scientific race.

“Nullius in verba” (Latin for “take nobody’s word for it”)

Royal Society of London motto (1660)

If the Chinese emperors had embraced a tradition of criticism, the scientific revolution might have occurred 500 years sooner. And the world might be speaking Chinese, instead of English.5

***

All knowledge is fallible. We have no authoritative source of “absolute truths.” But that is not what science is about. The real key to science is that our explanatory theories can be improved, both through the creation of new theories, and through criticism and testing of our existing theories—that is, through the scientific method.

The quest for good explanations, guided by a tradition of criticism and a rejection of authority over knowledge, is the source of all progress. It embodies the spirit of science and of the Enlightenment.6 For me, there is perhaps no more worthy calling!

Abstraction: Wrong models are extremely useful

“A truly wise person is not someone who knows everything, but someone who is able to make sense of things by drawing from an extended resource of interpretation schemes.”

Sönke Ahrens, How to Take Smart Notes (2017, pg. 116)

Everything we know about the world is a model, a simplified representation of something. This includes every word, language, map, math equation, statistic, document, book, database, and computer program, as well as our “mental models.”1 We use models constantly to simplify the world around us, to create knowledge, and to communicate. Our mental models, the collection of theories and explanations that we hold in our heads, provide the foundation upon which we interpret new information.

The process by which we create models is called “abstraction,” which helps us approximate complex problems with solvable ones that are simple enough to (hopefully) enable us to make a decision or find an answer. Our human ability to create useful abstractions (knowledge) is the reason why we are such a successful species on Earth.

However, we seldom think about how we fail at abstraction, or how to do it better. We commit abstraction errors whenever we over-simplify or over-complicate problems. We fail by trying to remember isolated facts, instead of trying to understand concepts and draw meaningful connections with other ideas. Finally, we place undue faith in our preferred models by failing to recognize that all models are—as we shall see—wrong.

Practicing good abstraction can help us generate truly creative solutions, while avoiding costly mistakes.

“Consider a spherical cow”

“Any model… must be an over-simplification. It must omit much, and it must over-emphasize much.”

Karl Popper, The Myth of the Framework (1994, pg. 172)

To be useful, models must ignore (abstract away) certain variables or features in order to highlight others.

An old joke from theoretical physics involves a dairy farmer struggling with low milk production. Desperate, the farmer asks for help from the local academics. A team of theoretical physicists investigates the farm and collects vast amounts of data. Eventually, the team’s leader returns and tells the farmer, “I have the solution, but it works only in the case of spherical cows in a vacuum.”

The metaphor refers to the tendency of physicists (and other scientists) to use models and abstractions to reduce a problem to the simplest possible form to enable potentially helpful calculations, even if the simplification abstracts away some of the model’s similarity to reality.2

Abstract thinking requires a balancing act: we must be rigorous and systematic, but also make conscious tradeoffs with realism. We isolate a simple part of the problem, abstracting away all irrelevant details, and calculate the answer. Then, we put our “spherical cow” answer back into context and consider whether other factors might be important enough to overturn our conclusions.

Zooming in and out

Imagine how hopeless it would be if we could only learn reductively—that is, by analyzing things into their constituent parts, such as atoms. The most basic events would be overwhelmingly complex. For example, if we put a pot of water over a hot stove, all the world’s supercomputers working for millions of years could not accurately compute what each individual water molecule will do.

But reduction is not the only way to solve problems. Many incredibly complex behaviors cannot be simply “derived” from studying the individual components. We can’t analyze one water molecule and predict the whole pot’s behavior. Good explanations, it turns out, can exist at every abstraction layer, from individual particles to whole systems and beyond! High-level simplicity can “emerge” from low-level complexity.3

Let’s revisit our pot of water on the stove. Predicting the exact trajectory of each molecule is an impractical goal. If we are instead concerned with the more useful goal of predicting when the water will boil, we can turn to the laws of thermodynamics, which govern the behavior of heat and energy. Thermodynamics can explain why water turns into ice at cold temperatures and gas at high ones, using an idealized abstraction of the pot that ignores most of the details. When the water temperature crosses certain critical levels, it will undergo the phase transitions of freezing, melting, or vaporizing.4 The laws of thermodynamics themselves are not reductive, but abstract; they explain the collective behaviors of lower-level components such as water molecules.
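
As a crude “spherical pot” estimate (every number below is an assumption chosen for illustration, and all heat losses are ignored), the thermodynamic abstraction turns an impossible molecular simulation into a few lines of arithmetic.

```python
# A "spherical cow" estimate: when will the pot boil? We ignore every molecule
# and every heat loss, and keep only a handful of high-level quantities.
# All numbers below are illustrative assumptions, not measurements.
mass_kg = 1.5                    # water in the pot
specific_heat = 4186             # J per kg per degree C (water)
start_temp_c = 20.0
boil_temp_c = 100.0
burner_power_w = 1800            # watts delivered to the water (assumed lossless)

energy_needed_j = mass_kg * specific_heat * (boil_temp_c - start_temp_c)
time_to_boil_s = energy_needed_j / burner_power_w

print(f"Energy needed: {energy_needed_j / 1000:.0f} kJ")
print(f"Estimated time to boil: {time_to_boil_s / 60:.1f} minutes")
# Roughly 4-5 minutes -- wrong in the details (real burners lose heat), but
# useful: no molecular simulation required.
```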

We can use this trick of “zooming” in and out between abstraction levels to generate creative explanations and solutions in many contexts, including in business. For instance, if we analyze our business only at the company level, we risk ignoring key factors that emerge at higher levels (such as the industry or the economy) or at lower levels (such as a product, team, individual, or project).

There is no “correct” level of analysis. A good strategist considers various perspectives in order to understand the true nature of the problem and triangulate on the highest-leverage places for intervention.

Wrong models and doomsday prophecies

“All models are wrong, some are useful.”

George Box, statistician

By simplifying things, models help us reach a decision even when we have limited information. But because they are simplifications, models will never be perfect. Consider the following examples:

  • Résumés and interviews are (flawed) models to predict a candidate’s success in a role.
  • Maps help us navigate the world by simplifying its geography. (Imagine how useless a “perfect,” life-sized map would be.)
  • Statistical models such as the normal distribution and Bayesian inference can help us make data-driven decisions, but they require us to make a bunch of iffy assumptions and simplifications.
  • Our best economic models are potentially accurate in some contexts, and wildly inadequate in others.

Failing to recognize the limitations of our models can lead to massive errors.

In 1798, the famous population theorist Thomas Malthus created a model of population growth and agricultural growth which led him to prophesy that the 19th century would bring about mass famine and drought. The model asserted that because the population grows exponentially while sustenance only grows linearly, humanity would soon outgrow its ability to feed itself. Malthus and many others thought he had discovered an inevitable end to human progress.
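
In numerical terms, the model’s logic looks something like this (the starting values and growth rates are illustrative assumptions, not Malthus’s actual figures).

```python
# Illustrative numbers only -- chosen to show the shape of Malthus's model.
# Population doubles every generation ("geometric" growth); the food supply
# gains a fixed amount each generation ("arithmetic" growth).
population = 1.0   # arbitrary units
food = 1.0

print("generation  population  food")
for generation in range(9):
    print(f"{generation:10d}  {population:10.1f}  {food:4.1f}")
    population *= 2      # geometric / exponential
    food += 1            # arithmetic / linear

# Population rapidly overwhelms food in this model -- unless something
# changes the growth rates.
```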

In hindsight, Malthus’s population projections were fairly accurate, but due to a critical error of abstraction, he wildly underestimated agricultural growth, which actually outpaced population growth. His simplified model abstracted away a variable that turned out to be decisive: humans’ ability to create new technology.5 Indeed, we achieved incredible agricultural advances during this time, including plows, tractors, combine harvesters, fertilizers, herbicides, pesticides, and improved irrigation.6 Today, famine mortality rates are a fraction of what they were when Malthus made his pessimistic prophecy, even though the global population is roughly eight times larger.

Not only are models inherently imperfect, but we can also never be 100% certain that our model is the best one to use in the first place!7 We must not let our current models shut out fresh ideas or alternative explanations. The ongoing replacement of our best theories and models with new and better ones is not a bug in the knowledge creation process; it is a feature.8

For instance, Einstein’s groundbreaking theory of general relativity ended the 200-year reign of Isaac Newton’s theory of gravity. Already, we know that general relativity and quantum theory, our other deepest theory in physics, are completely incompatible with one another. One day, both will be replaced.

***

Once we accept that models are valuable yet imperfect and impermanent, we can harness them to our advantage. Good strategists can generate creative solutions by analyzing complex problems from various perspectives, perhaps reframing the issues altogether, and by carefully considering which factors are critical and which ones can be safely abstracted away. And we can use our models the right way: as potentially helpful tools in certain circumstances, not as ultimate truth.

Critical Rationalism: An un-sexy term for the literal way we achieve progress

“I may be wrong and you may be right, and by an effort, we may get nearer to the truth.”

Karl Popper, The Myth of the Framework (1994, pg. xii)

Scientific theories are explanations—statements about what is there, what it does, and how and why. The distinctive creativity of human beings manifests in our capacity to create new explanations, to create knowledge.

But how do we create knowledge? In short, it consists of guessing, then learning from our mistakes—a form of trial-and-error. Unfortunately, two common misconceptions in particular plague our understanding of this process, leading us to embrace bad ideas.

The induction misconception

One of the classic fallacies about knowledge creation is that of “inductivism”—the belief that we “obtain” scientific theories by generalizing or extrapolating repeated experiences, and that every time a theory makes accurate predictions it becomes more likely to be true. But no such process exists.

[Figure adapted from The Fabric of Reality (Deutsch, 1997, pg. 59)]

First of all, the aim of science is not simply to make predictions about experiences; it is to explain reality. Completely invalid theories might make accurate predictions.

Consider Bertrand Russell’s famous story of the chicken. A chicken observed that the farmer stopped by every day to feed her, “extrapolating” from this observation that the farmer would continue to do so. Every day that the farmer came to feed the chicken, the chicken became more confident in her “benevolent farmer” theory. Then, one day, the farmer came and wrung the chicken’s neck. Unfortunately, the extrapolating chicken had the wrong explanation for the farmer’s behavior. We cannot assume that the future will mimic the past without a valid explanation for why the past behaved as it did, and why we should expect that behavior to continue.1

Second, we are capable of creating explanations for phenomena that we never experience directly (such as stars, black holes, or dinosaurs), and for phenomena that are radically different from anything experienced in the past. What is needed is an act of creativity. We did not “derive” atomic bombs or airplanes from past experience; we discovered good explanations for how they could work, then we created them.2

So much for inductivism.

The justification misconception

Another key fallacy is that of “justificationism”—the misconception that in order for knowledge to be valid, it must be “justified” by some authoritative source. But this raises the question: what is this ultimate source of truth? A leader? A religious text? An institution? Nature itself?

For much of human history, we relied on authority figures to tell us what was true and right, based on the presumed wisdom of the leaders of our tribe, government, church, etc. Their supposedly infinite knowledge conferred on us feelings of certainty and social status.

The break away from this authoritarian tradition began with the boldness of ancient philosophers such as Aristotle, but truly accelerated in the 16th and 17th centuries with revolutionary thinkers such as Galileo Galilei, Isaac Newton, and Francis Bacon. These leaders helped shape the “Enlightenment,” an intellectual movement that advocated for individual liberty, religious tolerance, and a rebellion against authority with regard to knowledge.

The reigning theory of how knowledge progresses

Standing on the shoulders of the Enlightenment giants, a 20th-century Austrian philosopher named Karl Popper—a towering thinker who nonetheless remains under-appreciated by the general public—transformed our understanding of the growth of knowledge with his theory of critical rationalism.

Popper’s theory advocated for the opposite of justificationism, called “fallibilism,” which recognized that while there are many sources of knowledge, no source has authority to justify any theory as being absolutely true. Tracing all knowledge back to its ultimate source is an impossible task, because this leads to an infinite regress (“But how do we know that source is justified?”).

Instead of asking, “What are the best sources of knowledge?”, Popper recommended that we ask, “How can we hope to detect and eliminate error?”3

“What we should do, I suggest, is to give up the idea of ultimate sources of knowledge, and admit that all knowledge is human; that it is mixed with our errors, our prejudices, our dreams, and our hopes; that all we can do is grope for the truth even though it may be beyond our reach.”

Karl Popper, Conjectures and Refutations (1963, pg. 39)

Popper acknowledged the inherent asymmetry between the justification of a theory and the refutation of one. Whereas it is impossible to definitively prove theories to be correct, we are sometimes able to prove them wrong. Let’s return to our extrapolating chicken. No matter how many days she observed the farmer coming by to feed her, she would never be able to definitively justify her “benevolent farmer” theory. But the day the farmer wrung her neck definitively refuted it. Sorry, chicken…

Error-correction, not ultimate truth

With this understanding, we arrive at the way in which knowledge actually progresses: through a problem-solving process of conjecture and refutation, also known as trial-and-error.

This process resembles that of biological evolution, in which nature “selects” for the random genetic mutations that are most successful at causing their own replication.4 Ideas, too, are subject to variation and selection.

Knowledge creation begins with creative conjectures—unjustified guesses or hypotheses that offer tentative solutions to our problems. We then subject our guesses to criticism, by attempting to refute them. We eliminate those that are refuted. We tentatively preserve the rest, but they can never be positively justified or proved. The only logically justified statements are tautologies—such as, “All people are people”—which assert nothing.

[Figure adapted from The Fabric of Reality (Deutsch, 1997, pg. 65)]

It is not “ultimate truth” that we are after, for even if such a thing did exist, there is no source that can verify it as unquestionably true. Our sense organs themselves are highly fallible. The best we can do is use our creativity to propose new guesses, then subject our best guesses to critical tests and discussion, using logic and the scientific method. When our tests successfully refute a theory, we abandon it. If we discard an old theory in favor of a newly proposed one, we tentatively deem our problem-solving process to have made progress.5

Einstein was right that he was wrong

It is easy to overlook how recently (relatively speaking) science began to embrace the fallibility of all its theories. For this, we have Albert Einstein to thank.

For the 200+ years before Einstein, Isaac Newton’s theory of gravity had unprecedented experimental success; it was widely regarded as the “authoritative” theory of gravity. That is, until Einstein’s theory of general relativity (1915) destroyed the authority of Newton’s theory by showing that it was, in fact, merely a flawed approximation.

To this day, general relativity reigns as the dominant theory of gravity and spacetime, but Einstein himself was clear from the beginning that his theory was essentially conjectural. He did not regard general relativity as “true,” but merely as being a better approximation to the truth than Newton’s theory! We should take note that one of the greatest thinkers in history understood that his own revolutionary theory would inevitably be replaced.6

***

The overarching lesson is that everything is tentative, all knowledge is conjectural, and any good solution may also contain some error. Far from being a pessimistic view, Popper’s theory provides the very foundation for progress: error-correction.

We obtain the fittest available theory not by the justification of some unattainably perfect theory, nor by induction from repeated observations, but through the systematic removal of errors from our theories and the elimination of those which are less fit. Only a very few theories succeed, for a time, in a competitive struggle for survival.7