Summary of The Incerto by Nassim Taleb

Summary: 12 min

Books reading time: 33h05

Score: 10/10

To give a broad view of Nassim’s thesis and ideas, I compiled the Incerto into one concise article, linking each book to the next.

So far, the series is composed as follows:

  • Fooled by Randomness
  • The Black Swan
  • The Bed of Procrustes
  • Antifragile
  • Skin in the Game

Side note: due to its nature, The Bed of Procrustes has not been included in this article.

Enjoy!


Summary of The Incerto

Part I: Fooled by Randomness

Life is mostly random, but we don’t see it because our brains are wired to understand it as a series of logical events.

These mental shortcuts, called heuristics, help us make sense of the world, but in a way that doesn’t represent it truthfully.

Let’s take narratives as an example.

Narratives are compelling stories we tell ourselves. A narrative links a series of random actions to each other to make them appear as a logical sequence of events.

Eg: when I tell myself my own life story, I say that I studied for one year but didn’t like it, so I took a gap year. I make it sound like the gap year was a logical thing to do, while it was in fact completely random.

Part of the job of building narratives is creating “past predictions.”

Making past predictions means looking back at historical events and framing them as logical outcomes that led to the situation we’re in today. Past predictions make it sound like everything that happened in the past was “bound to happen”.

Eg: looking back today, many say that 9/11 was perfectly predictable. It wasn’t. It only appears predictable in hindsight, because of past predictions.

So, what do we use past predictions and narratives for?

We use them as sources of information to predict the future. Since we believe that the future will resemble the past, we look at the past hoping to catch a glimpse of the future.

Unfortunately, this is another fallacy. The future is random and unpredictable.

It’s so unpredictable that the overwhelming majority of predictions we make end up being wrong. This is the first problem. The second problem is that we have a very narrow understanding of history.

History as we know it is not perfect. It’s an account of some of the events that happened in the past. It is missing all of the events that didn’t happen (in an alternative universe, the Titanic didn’t sink) as well as the events we don’t know about.

The past, as a result, is only a little better known than the future and the present.

The present isn’t well-known at all.

Few are aware that they’re living history while it is happening. These moments are only recognized later (eg: no one knew at the beginning of WWI that it was the beginning of WWI).

Nobody can predict the future, as we said.

People who do, and who somehow end up being right every time (eg: “legendary investors”), are called lucky fools.

Lucky fools exist because enough people tried to do what they did that, statistically, at least one of them was bound to succeed.

Eg: if 10 000 people try to become day traders, at least one will succeed out of mere luck.

Out of these 10 000 people, the successful ones, the “survivors”, are hailed as “legendary investors” by the press, while their success was likely due to mere luck instead of skill. One person eventually wins the lottery because there are enough players for that to happen.
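
To make the arithmetic behind lucky fools concrete, here is a minimal sketch in Python (my own illustration, not from the book; the numbers are assumptions) in which every trader’s yearly result is a pure coin flip:

```python
import random

random.seed(42)

N_TRADERS = 10_000  # hypothetical number of aspiring day traders
N_YEARS = 10        # length of the track record we look at

# Skill plays no role in this toy model: each year is a 50/50 coin flip.
survivors = sum(
    all(random.random() < 0.5 for _ in range(N_YEARS))
    for _ in range(N_TRADERS)
)

print(f"Traders profitable every single year for {N_YEARS} years: {survivors}")
# Expected value: 10_000 * 0.5**10 ≈ 10 "legendary investors" by luck alone,
# while the ~9,990 who failed along the way are never written about.
```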

The rest of the day traders (or the lottery losers), those who failed, are called silent evidence.

They’re evidence that predicting the future isn’t possible. But they’re not taken into account in the press, of course. Failure is invisible.

Business books are always about people who succeed, never about those who lose (except for What I Learned Losing a Million Dollars). Business books don’t take silent evidence into account.

Silent evidence is another reason why we can’t trust history. Since we can’t trust history, it would be foolish to believe we can predict the future based on the past.

And even if we had a perfect picture of the past, we still wouldn’t be able to use it to predict the future since the past does not resemble the future.

There will always be events in the future that have never happened before, which makes them impossible to predict.

These events, when unpredictable and with high impact, are called Black Swans.

Part II: The Black Swan

Black Swans are unpredictable high-impact events that seemed like they were predictable in hindsight.

Eg: Harry Potter, as a literary success, was a Black Swan.

When the impact of a Black Swan is negative, it can bring down an entire system.

Eg: a financial crisis could bring down the entire economy.

Black Swans happen in environments where a single event can have an impact disproportionate to its size. That environment is called Extremistan.

Extremistan is the term given to a category of randomness where one unit of a sample can disproportionately influence the average of the sample.

Eg: wealth. Compare the average wealth of 1 million people chosen at random on Earth with the average wealth of the same group plus Elon Musk: the group that includes Elon Musk will have a much higher average. One person can move the average by himself because wealth is in Extremistan.

The other type of randomness, where Black Swans (normally) don’t happen, is called Mediocristan. In Mediocristan, one unit will never be able to disproportionately influence the average.

Eg: weight. No one will ever be fat enough to influence the average weight of one million people taken randomly.
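
Here is a minimal sketch of that difference (my own illustration, not from the book; the distributions and figures are assumptions, chosen only to contrast a thin-tailed sample with a heavy-tailed one):

```python
import random

random.seed(0)

N = 1_000_000

# Mediocristan: body weight in kg, thin-tailed (roughly normal).
weights = [random.gauss(70, 15) for _ in range(N)]

# Extremistan: wealth in USD, heavy-tailed (toy Pareto distribution).
wealth = [random.paretovariate(1.5) * 10_000 for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

# Add one extreme unit to each sample (figures are rough orders of magnitude).
heaviest_person_kg = 600
one_billionaire_usd = 200_000_000_000

print(f"Weight: the mean moves by a factor of "
      f"{mean(weights + [heaviest_person_kg]) / mean(weights):.4f}")
print(f"Wealth: the mean moves by a factor of "
      f"{mean(wealth + [one_billionaire_usd]) / mean(wealth):.1f}")
# The average weight barely moves; the average wealth jumps severalfold.
# A single unit can dominate the average only in Extremistan.
```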

Black Swans are dangerous because they are unpredictable. The only thing we know about them is that they will happen – but we know neither when nor how.

Since a Black Swan is one event that has a high impact, there is asymmetry at play. If you bet on a Black Swan and it happens (eg: a financial crisis), you can earn a lot.

Environments with asymmetric payoffs, such as working as a movie star, singer, author, or athlete, are environments where payoffs can be huge. The problem is that payoffs in these environments are not evenly distributed.

Eg: 1% of writers sell 99% of books.

It’s Pareto’s Law.

Succeeding in Extremistan is therefore both difficult and random (that is, you need to be lucky).

Jobs in Extremistan carry the most risks because losers lose big. But they also carry the highest rewards because winners win big.

This is why many people that compete in Extremistan also get safe jobs in Mediocristan to protect their downside.

So, how do Black Swans happen?

Black Swans are mainly random. They happen because a lot of things are constantly happening.

Since so many combinations of these constantly occurring things are possible, chances are that one of them will eventually trigger a Black Swan (if you search long enough, you eventually find what you’re looking for).

Since Black Swans are unpredictable, you will most likely find them when and where you aren’t looking. Sometimes they’re hard to recognize (eg: it took decades to recognize the importance of Darwin’s theory of evolution).

Looking randomly for random things is a process called serendipity. Serendipity is at the origin of most scientific discoveries (almost all of them were made by accident).

By stimulating randomness, you increase the chances of a happy Black Swan happening.

Eg: going to parties, especially those where you don’t know anyone, increases the chances of a Black Swan.

The problem with Black Swans is that they’re not always positive. They can also be destructive.

To protect yourself from Black Swans, you need to position yourself in a way that would make you profit from them. You need to be able to gain strength from variations and randomness that come with Black Swans.

That is, you need to become antifragile.

Part III: Antifragile

The definition of fragile is that which weakens with variation and randomness. The fragile likes quietness and rest. Unlike the fragile, the robust mostly suffers no consequences from variation and randomness. Finally, the antifragile strengthens from randomness.

If the antifragile strengthens from randomness, we can deduce that by the same token, the antifragile weakens with quietness and rest.

Eg: muscles. Muscles get stronger when under stress, and weaken when deprived of stressors. The whole human body does.

Furthermore, when a muscle gets stressed, the body doesn’t create just enough muscle to carry the weight; it creates more muscle than necessary, which leads to hypertrophy.

This principle behind creating more muscle than necessary is called overcompensation. Overcompensation is what leads antifragile systems to become stronger as they are attacked. It’s also what enables innovation.

Innovation is the result of overcompensation created in the aftermath of an effort that was made to solve a problem without enough resources to do so (read this sentence again).

Society is antifragile. When confronted with problems, it overcomes them and becomes stronger.

It often does so at the expense of units that are victims of these problems, or too fragile to resist.

Eg: every plane crash makes all of the other flights safer.

The “sacrifice” of the plane for the sake of the entire system is a tragedy, unfortunately “necessary” for the strength of the system (it’s a metaphor, not to be taken literally).

As we can see, one of the ways an antifragile system becomes stronger is by purging its weakest units (this argument has often been used to justify genocides; it goes without saying that such a principle should never be applied to humans; the most fragile people should always be protected).

The property of antifragility means that the antifragile should be subjected to enough randomness, or it will weaken. The weakening of an antifragile system leads to Black Swans.

How does this happen?

Humans have an inherent urge to tinker with and control things they don’t understand, often to stabilize them and remove variation from the system.

Stabilizing an antifragile system that naturally varies weakens the system.

Eg: the economy. By tolerating a small amount of randomness, we ensure that these variations don’t build up in the system and explode all at the same time, threatening the viability of said system. Unfortunately, in practice, we do the opposite: we don’t tolerate any variation, seek extreme stabilization, then suffer when a Black Swan happens.

Small loose systems with moderate variations will always be better than big stifled systems with no apparent variations. Black Swans tend to happen exactly in the latter.

Excessive control leads to excessive fragility. Unintended consequences of well-intentioned actions (such as stabilizing a system that shouldn’t be stabilized) are called iatrogenics. In this case, the consequence is fragility.

As a result, it is often better not to intervene in antifragile systems than to do so.

Since intervening usually means controlling, and since antifragile systems are complex systems that don’t like control, interventions lead to worse outcomes than the absence of intervention.

The wisdom of antifragility is revealed when we realize we can’t predict what will happen in the future.

What we can do though, is to measure whether a system is robust or antifragile enough so that it can resist what may happen in the future.

Consolidating > predicting.

This also means that we can predict the fall of systems (or their survival) by measuring their (anti)fragility. If a system is fragile, its likelihood of disappearing is high.

So, how do we detect fragility? Fragile systems are systems with asymmetric upsides and downsides, where the upsides are small and the downsides are big.

Antifragile systems are the opposite. The upsides are big and the downsides are small.

Eg: being the first employee in a startup is antifragile. Best case scenario, you make it and become rich. Worst case scenario, you fail and go get another job with the experience you have acquired.

So, how does one become antifragile?

The first thing to do is to limit the downside. The reason is that you can’t enjoy any perks if you’re homeless – or worse, dead. So always protect the downside. Survival first.

Antifragility is best embodied by the barbell strategy. The barbell strategy entails being extremely aggressive on one end (a small exposure with a big potential upside) and extremely conservative on the other, to limit the downside.

Eg: invest 90% of your money in ultra-safe assets and the rest in startups. Best case scenario, the risky 10% returns 100x and multiplies your whole portfolio roughly tenfold. Worst case scenario, you lose 10%. The downside is protected.
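
A quick worked version of that barbell arithmetic (illustrative numbers, not from the book):

```python
# Barbell sketch: 90% in ultra-safe assets, 10% in risky startup bets.
portfolio = 100_000  # hypothetical starting capital
safe = portfolio * 0.90
risky = portfolio * 0.10

# Worst case: the risky slice goes to zero, the safe slice is untouched.
worst_case = safe + risky * 0
# Best case: one of the startup bets returns 100x.
best_case = safe + risky * 100

print(f"Worst case: {worst_case:>12,.0f}  (loss capped at ~10%)")
print(f"Best case:  {best_case:>12,.0f}  (~10x the whole portfolio)")
# The downside is bounded while the upside is effectively open-ended:
# that asymmetry is what makes the barbell antifragile.
```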

Another way to be antifragile is to have options. Options, due to their nature, enable you to choose what to do as life moves forward.

Eg: renting vs buying. Renting offers you highly flexible options (leaving whenever you please), while buying a place doesn’t (mortgage, etc).

Combining options (1) with situations where the asymmetry is positive (2) is called “the philosopher’s stone”. In the long term, it means you will do better than the rest, because of both the positive asymmetry and the options.

Antifragility is often counterintuitive.

Succeeding in an antifragile way is sometimes as simple as “not failing”.

Eg: the first rule of investing is: never lose money.

People think of success as a matter of “adding”. “Working more”, “studying more”, etc.

The reality is often the opposite: success is a matter of subtracting. Less is more.

Eg: smoking less would improve public health much more than any medicine ever would.

As we have seen, antifragility is characterized by big upsides with limited downsides. The only way to be exposed to big upsides is to take risks.

And the only way to take risks is to have skin in the game.

Part IV: Skin in the Game

Learning can’t happen without skin in the game. It is only when you have skin in the game that you are exposed to the consequences of your actions, because you take risks.

Modernity, unfortunately, enables those at the top of the hierarchy (politicians, CEOs, etc) to shift risk from their own shoulders onto the shoulders of another party.

In doing so, they make sure they will benefit from any potential upside while not suffering the consequences of potential downsides.

The power to transfer the consequences of your risks onto others is called having agency.

Eg: in 2008, banks took enormous risks to maximize their returns. When they failed, they were saved by the government, which took over their debt and shifted the burden of liabilities onto taxpayers’ shoulders.

Shifting risk is only possible in a centralized system, because the centralized system has agency: it can shift risk from one party to another.

Had the system not been centralized, there would not be any agency, and banks would have failed without anyone to save them.

Centralization acts as a buffer between risk and actors.

Since the risks (going bankrupt and being saved by the government) were smaller than the rewards (a lot of money), banks kept on taking risks to increase returns.

They behaved this way because the risks weren’t theirs to assume: they had no skin in the game, so they didn’t learn from their mistakes.

So, who did?

The system.

The system learns and adapts to avoid repeating the error.

It wasn’t like this in the past. In the past, laws made sure that whoever did something would suffer the consequences of their actions. Past laws put people’s skin in the game.

Eg: if you built a house that collapsed and killed somebody, you had to die too. This was to ensure you’d build a house that wouldn’t collapse.

Ethics were embedded into the law (not anymore). There was symmetry between risk, reward, and punishment.

The Golden Rule (don’t do to others what you don’t want them to do to you) evolved from this type of symmetry, which subsequently led to Kant’s principles. Kant said that you shouldn’t do anything you wouldn’t want everyone else to do too (eg: dodging the fare in the subway).

These ethics, however, are only valid up to a certain point. We practice ethics with our direct entourage and environment, but we don’t apply them to agents that are “far away” from us. Things don’t scale.

Eg: we prosecute criminals according to the law in our own country, but we bomb them when they’re in foreign countries.

In practice, people stop caring about the well-being of a group once that group grows beyond 150 members, a phenomenon named “the tragedy of the commons”. There is a lot of skin in the game when people are part of small groups, and little when they are part of large ones. The bigger the group, the less skin in the game.

The association of skin in the game and time enables one to learn what works, and what doesn’t.

Eg: the lightbulb. It took Edison 10 000 trials to find out which one worked best (and the 9 999 that didn’t).

That which works (and survives time) is strong. This is the Lindy effect. The Lindy effect states that the longer something has existed, the longer it can be expected to keep existing.

This is why religions should not be disregarded. They didn’t survive because they are Lindy. They survived because the people observing them and holding their ideas survived. Hence, it’s likely that religions can give you higher chances of survival than atheism does – atheism has not yet been proven Lindy.

Lindy shows how theory cannot be split from practice; actions from consequences; and makers from the fruit of their work.

Unfortunately, with specialization, society is driving a wedge between what people do, and who gets to enjoy what they did.

In the past, we used to enjoy the fruits of our work; today we literally work for someone else (and consume the work of someone else too). As a result, we lack feedback on our own work, and lack skin in the game.

Eg: trains are uncomfortable because train designers don’t take the train; train designers have no skin in the game.

People who don’t have skin in the game don’t do good work, and vice versa. If you want people to care, you need to put more of their skin in the game.

Skin in the game is extremely powerful. When an intolerant minority refuses to make concessions on a certain practice or behavior, the tolerant majority naturally meets their demands and adapts to their behavior.

Eg: if one family member refuses to eat GMO, the entire family won’t eat GMO either.

An intolerant minority of only 2% can influence a tolerant majority of 98%. Revolutions start because a very small group of intolerant people one day decides to go through with their plans.
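
A toy sketch of how that minority rule can cascade (my own illustration, not from the book), assuming that any group adopts the stricter preference of its most intolerant member, as in the GMO example, and that groups nest into ever larger groups:

```python
# Minority-rule cascade: a group serves non-GMO food if ANY member insists,
# and groups nest into larger groups (family -> dinner -> neighborhood...).
p_intolerant = 0.02  # 2% of individuals refuse GMO outright
group_size = 4       # hypothetical size of each nested group

p = p_intolerant
for level in range(1, 5):
    # A group stays GMO-tolerant only if all of its members are tolerant.
    p = 1 - (1 - p) ** group_size
    print(f"Level {level}: {p:.1%} of groups end up serving non-GMO")
# With these assumptions, the share rises from 2% to ~99% within a few
# levels of nesting: the intolerant minority ends up setting the rule.
```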

These people often have nothing to lose, which makes them, if not antifragile, at least robust.

They understand that life requires real risks to get anywhere. They’re ready to take them because they have a lot of skin in the game.

This is how we can distinguish practitioners from theoreticians (academics). Theoreticians are paid by the state and have no skin in the game, hence they often say nonsense. Practitioners have skin in the game, and say much more interesting things. Doing > talking.

Anything of value in life comes from people that do, not talk. Doing de facto implies skin in the game, and while skin in the game is always risky, it’s a risk worth living for.

Since the absence of skin in the game does not compel you to do as good a job as you would with skin in the game, the rule to observe is the following.

Don’t do anything without skin in the game.

For more summaries and articles, head to auresnotes.com.

Want more?

Subscribe to my monthly newsletter and I'll send you a list of the articles I wrote during the previous month + insights from the books I am reading + a short bullet list of savvy facts that will expand your mind. I keep the whole thing under three minutes. 

How does that sound? 
