Summary of Skin in the Game by Nassim Taleb

  • Post category: Summaries
  • Post last modified: August 18, 2022

Short summary: 1 min

Long summary: 38 min

Book reading time: 5h46

Score: 10/10

Book published in: 2017



  • The amount of skin people have in the game strongly determines their behaviors.
  • Politics today acts as a transferor of risk from one actor to another.
  • Politicians reap the rewards when what they do works and transfer the cost to another party when they fail.
  • Don’t do anything for which you have no skin in the game.

What Skin in the Game Talks About

Skin in the Game is a book written by Nassim Taleb. It explores situations in life governed by the principle of “skin in the game”. I learned, for example, that the people who have the most to lose often win in the end because they are the most invested, and that taking risks is usually safer in the long term, depending on the type of risk.

Skin in the Game is the last installment of the series of books known as “the Incerto” which deals with risks and asymmetry in real life.

The Incerto is composed of:

  1. Fooled by Randomness
  2. The Black Swan
  3. The Bed of Procrustes
  4. Antifragile
  5. Skin in the Game

Skin in the Game is the best book I have read in 2021.

This book has completely changed the way I think about life, and I am still digesting its lessons as I write these lines.

This is not an easy book. It took me 35 hours to summarize.

It was amazing though!

Get it on Amazon.

Visual Glossary

“=” means “equal”
“<” means “is smaller/worse than”
“>” means “is bigger/better than”

Skin in the game: notion of risk. [Image: this symbol means “risk”.]

Short Summary of Skin in the Game

Those who have the most skin in the game often have a disproportionate impact on a system because they are exposed to consequences. Those who aren’t exposed to the consequences don’t suffer from mistakes, hence, they don’t learn.

Another consequence is that anyone who has no skin in the game, can make catastrophic decisions because they won’t suffer from the consequences. This is the case with centralized systems.

Centralized systems like bureaucracies take everyone’s skin out of the game, hence, they don’t move forward…until they collapse.

This shows how someone taking a risk should always suffer the consequences (good or bad), in order to make life fair. This is why laws exist. If you do X, you will suffer Y. Laws force people’s skin into the game.

When that’s not the case, when risk < reward, rewards will tend to be maximized without taking risk into account, increasing the risk until the system explodes, like the 2008 crisis -> getting people’s skin into the game forces them to do better work.

People feel they have their skin in the game only up to a certain point. Eg: you will feel more responsible in a small company than in a big one -> things don’t scale. Small ≠ big.

Having your skin in the game means that you are exposed to winning, or losing. Those who lose disappear. Those who win are still there. Time acts as a filter. This is called the Lindy effect.

Lindy states that the longer something has existed, the longer it can be expected to exist. This is why when you are looking for best practices, you are likely to find them among the oldest ones.


Summary of Skin in the Game, by Nassim Nicholas Taleb

Skin in the Game is an expression that reflects the idea of having stakes in something. Having stakes means that you are taking a certain amount of risk. It means you have something to lose; it means having exposure to the downside.

You can have stakes in a group of people (your family), a project (your company), or a situation (going out with friends).

That risk is exposure to consequences, no matter if the exposure is positive, or negative.

This book examines the effects of skin in the game in life.

Because I do not like this skin in the game expression, I will call it SIG, exposure, or stakes from now on.

Book 1

Prologue Part 1: Antaeus Whacked

Knowledge cannot be separated from the real world -> theory cannot be separated from practice.

The real world is learned through having SIG (exposure, stakes) in something.

Most scientific discoveries were first made through practice (SIG) and trial and error (the best scientific method), and were theoretically explained afterward.

-> SIG is necessary. You can’t discover something new in the world by philosophizing in your bedroom.

The absence of SIG can be dangerous. It leads people to play with stuff they shouldn’t be playing with – simply because they won’t suffer from consequences if things go wrong…but someone else will.

Since they don’t suffer from the consequences, they don’t pay for their mistakes -> they don’t learn.

Transferring risks to someone else = transferring the lessons away as well.

This explains the wars in Iraq and Afghanistan. Leaders in Washington had no exposure to things happening in the Middle East because they weren’t there -> they didn’t learn.

Risk Transferors

In the past, leaders were risk-takers.

Not anymore. Most of them are risk transferors. Bureaucracy is one example. It is inherently SIG-free.

In fact, centralized systems are traditionally free from any exposure.

-> need for decentralization.

If decentralization doesn’t happen, the system eventually blows up and self-repairs, provided it survives.

Eg: the 2008 banking crisis happened because the system had no exposure to its own actions.

When it collapsed, the people inside the system didn’t suffer one bit, because the risks were transferred to the taxpayers through a government bailout.

This led people to hate free markets while in fact, they should have hated the government.


Free markets have exposure built-in.

[Image: In a free market, taxpayers, companies, banks, and the army have SIG.]

In a real free economy, banks wouldn’t have been saved -> they would have died -> they would not have taken these risks in the first place knowing the government wouldn’t save them.

When a free market loses its exposure feature, it’s often because of governments.

This is why people who hate exposure and risk love big government. It protects them.

[Image: Banks, companies, taxpayers, and the army are exposed and create risk, which the government redistributes unevenly among these actors.]

As we have seen, people don’t learn much from their mistakes, especially in the absence of SIG.

But who does, then?

The system.

Systems learn by removing parts that no longer work, identified after someone made a mistake.

Eg: transportation didn’t get safer because people drive better, but because the system improved after bad drivers made mistakes and created accidents.

The learning of the system is grounded in filtering -> bad drivers are now dead.

SIG keeps human hubris (excessive pride and self-confidence) in check.

Prologue Part 2: A Brief Tour of Symmetry

Exposure symmetry is the idea that one should assume a risk as big as the reward (and that the risk cannot be passed on to someone else).

[Image: Risk should always equal reward.]

Symmetry was the main idea behind one of the oldest written laws, the Code of Hammurabi.

Symmetry was imposed in the law so that nobody could transfer tail risk. Tail risk is an extreme event that doesn’t happen often (a Black Swan) but that creates ruin for the one that is a victim of it.

[Image: Tail risk sits at the tail of the distribution: unlikely to happen, but deadly if it does.]

The most famous example of the symmetry principle within Hammurabi’s law was that if a builder builds a house and the house collapses and kills the owner, the builder should be put to death.

This law made sure that the builder would build a solid house -> it created SIG for the builder.

Symmetry evolved from Hammurabi to the golden rule of “treat others how you want to be treated”, which evolved into Kant’s categorical imperative.

Kant said you could do anything as long as it wouldn’t be a problem if everyone did it in society.


  • Before dodging the fare in the subway, ask yourself if society would work well if everyone did it as well -> the answer is no -> you cannot dodge the fare in the subway.

This rule though, is a problem because it is a universal rule.

And universal rules are great on paper, but disastrous in practice.

Why? Because of one of the most important principles in this book.

Things don’t scale.

Small ≠ large.

In life, we don’t think in planetary, universal terms, but in terms that relate to our direct environment -> we need practical rules.

Symmetry, as we explained, is the idea that risk = reward.

Now, what happens when risk < reward?

2008 is a good example.

Risks are transferred to somebody else -> risks increase -> profits increase -> risks increase -> profits increase -> crash.

When risk < reward, risk will continuously increase to maximize reward, until the system blows up.

This leads to more regulation, which worsens the problem since regulations facilitate risk hiding.

As a result, the pattern becomes as such: risks are transferred to somebody else -> risks increase -> profits increase -> crash -> more regulation -> risks better hidden -> risks are transferred to somebody else -> risks increase -> etc.

[Image: Never shield anybody from risk.]

The Agency Problem

Which leads to the agency problem. Agency is the capacity of one party to move risks around the equation (eg: the government).

[Image: Agency, highlighted in the risk equation.]

Let’s take an example.

The Silver Rule says: do not do to others what you do not want them to do to you.

The extension of this principle could be: avoid taking advice from someone who gives advice for a living, unless there is a penalty for their advice.

Taking advice from someone creates uncertainty, and uncertainty is dangerous due to two elements: the fools of randomness and crooks of randomness.

The fool takes risks he doesn’t understand, thinking his past successes were due to skill while it may have been luck.

The crook transfers the risks to other people.

Economists mainly deal with the crook.

Agency, as a result, is playing with the risk you transfer to other parties.


  • Signing up for insurance when you know you are getting sick. You’re shifting risks (hospital costs, in this case) onto the insurer.
  • Buying a car that actually doesn’t work (the seller has agency in transferring the risk to you).

Now, we need to realize that in the case of the fool, the fool does not know his own interest.

Fools are addicts, workaholics, people who support large governments, the press, the bureaucrats, all people who often act against their own interests.

So, when the system evolves and gets filtered, fools of randomness are purged and stop harming others.

Epistemological Dimension of Exposure

Exposure is about the real world and in the real world, you need to win.

You win by doing, not talking.

That’s the difference between a skilled member of society, and a charlatan. The skilled member wins by doing, not by convincing (the doing does the convincing).

Entire fields of study (economics, social studies) are charlatanic because they have no exposure to what they create (they only talk and don’t do).

They are not connected to the consequences of their actions and mainly try to convince, without exposure.

As a result, we should never focus on what people say, but focus on what they do, since this is tangible.

Forecasting (in words), has no relation to speculation (in deeds).

Understand: there is a difference between predicting the future, and betting on the future (betting implies possible losses -> exposure).

Being wrong when it doesn’t cost anything, does not count, since you have no SIG.

And when you do have SIG in real life, it is difficult to measure. Many exposures are non-linear (rain, which is positive, VS floods, which aren’t).

Simply consider that forecasting “based on science” is charlatanesque, and has always been so.

The Inverse Problem

Let’s talk about the inverse problem.

The inverse problem can only be solved by SIG.

Here’s why. It’s harder for humans to reverse-engineer than to engineer.

Eg: we see the results of evolution, but we cannot replicate them.

Time and its irreversibility require filtering that is only possible if you have SIG.

-> you lose the game only if you play in it.

The weak players lose first and the strong players win last -> time acts like a filter.

[Image: Time is a filter.]

If you don’t play (if you don’t have SIG), you can’t be filtered, because you can’t lose nor win.

-> Without SIG, we fail to get the Intelligence of Time.

What is strong survives -> this is the Lindy effect.

The Lindy effect states that the longer things have existed, the longer they will exist.
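This claim can be made concrete with a toy simulation (my own illustration, not from the book): when lifetimes follow a heavy-tailed, power-law distribution, the expected remaining life of a survivor grows with its age, which is exactly the Lindy property. The distribution and its parameters below are illustrative assumptions.

```python
import random

random.seed(0)

# Assumed model: lifetimes drawn from a Pareto (power-law) distribution,
# the kind of heavy-tailed distribution under which Lindy holds.
ALPHA = 2.5  # tail exponent (illustrative choice; must be > 1 for a finite mean)

def pareto_lifetime():
    # Inverse-transform sampling; minimum lifetime is 1.
    return (1.0 - random.random()) ** (-1.0 / ALPHA)

lifetimes = [pareto_lifetime() for _ in range(200_000)]

def expected_remaining(age):
    # Average remaining life among items that survived past `age`.
    survivors = [x - age for x in lifetimes if x > age]
    return sum(survivors) / len(survivors)

for age in (2, 5, 10):
    print(f"survived {age:>2} years -> expect ~{expected_remaining(age):.1f} more")
```

The printout shows that the older a survivor is, the more additional life it can expect, whereas for a thin-tailed distribution (e.g. human lifespans) the opposite would be true.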

The Intelligence of Time combined with SIG (playing the game) helps define rationality, that is, helps distinguish what works from what doesn’t, and what works is what enables one to survive (as the rest has died, hence, didn’t work).

A practice may appear irrational, but if it has worked for a long time, is it really irrational? No.

-> What works, by definition, cannot be irrational.

What is irrational is what threatens the survival of the collective first, and the individual second (and that’s going against nature).

-> what is rational is what enables collectives and individuals to live for a long time.

As a result, overestimation of tail risk (extreme events that threaten survival) is not irrational, as it is required for survival (or at least, to avoid death).

-> there are risks we cannot afford to take, and risks we cannot afford not to take.

SIG is a necessity, but let’s not apply it to everything.

There is a difference between exposure in the context of war, and exposure in the context of voicing an opinion in a random conversation.

What we want is to focus on the people who take risks without exposure thanks to the structure of the system – and weed them out.

While these people cause a lot of problems, they are, ultimately, rare.


Overall, the statements made in this book go against 150 years of modern thinking – which we will call intellectualism.

Intellectualism is the belief that one can split an action from its consequences; theory from practice; and that a complex system can be fixed by a hierarchical approach.

Intellectualism has scientism for a sibling. Scientism is the interpretation of science as a complication rather than science as a process.

Today, science has been hijacked by peer-reviewed journals and vendors using it to sell stuff – people that talk, and don’t do.

As we have seen, those who talk should instead, do, and only those who do should talk.


As technology progresses, separation between the user and the maker grows -> The maker has less and less SIG.


  • speakers get uncomfortable on stage because the light is shoved in their face the way the police do with a suspect -> light engineers have no exposure (they don’t speak on the stage).
  • train designers don’t design trains well because they don’t take the train.
  • architects don’t live in the buildings they make.

One of specialization’s side effects is the separation between labor and the fruits of the labor (or at least, the enjoyment of what has been created. Eg: the baker does not eat all of his bread).

-> exposure (SIG) brings simple solutions to problems (to feel good on stage, change the lights)

[Image: In a world with SIG, the speaker would adjust the spotlight. In the modern world, specialization drives a wedge between people’s work and its results: the person who adjusts the spotlight does not suffer the consequences of a badly adjusted spotlight.]

People who see complicated solutions don’t have incentives to implement easy ones because they are rewarded for perception, not results.

A bureaucracy will always increase complexity to solve problems because they don’t have SIG.

Things designed by people without SIG tend to grow in complication before they collapse.

People have two brains. One for things they have exposure to, and one for things they do not.

Eg: Checking plane safety because that’s your job VS checking plane safety if you’re a passenger

The first one is boring, the second isn’t.

-> exposure makes things more fun and less boring!

And we also become more creative.

Eg: drug addicts are dumb, but will create very ingenious processes to get their drugs.

-> when we don’t have exposure, we’re dumb, uninterested.

Now, as we said, exposure arises with risks. And the thing about risk is that the strength you gain when facing the risk stays after the risk leaves.

Eg: you must lift up the barbell not to be crushed by it, and once it is back on the rack, you keep the strength you developed to lift it.

Regulations VS Legal Systems

There are two ways to protect citizens against corporations: regulations, and legal systems.

Regulations, because they are additive, restrict freedom and choke life. It’s a problem. Freedom (the freedom to make mistakes) is what enables one to progress and advance.

Legal systems are different since they add exposure to the equation. If you harm me, I sue you.

This led to a very sophisticated, balanced system created out of trial and error. The UK and the US use this system, while the EU has regulations and fines.

Honor and SIG

Finally, exposure is about honor as an existential commitment, and risk-taking as a separation between man and machine and a ranking of humans (the higher the risk, the higher you go in the hierarchy).

-> If you do not take risks for your opinion, you are nothing.

The author considers that the best life is an honorable one. Honor means never doing X in any context, while also always doing Y, whatever the context.

Modernity has destroyed some of that.

Today, the people with exposure are the artisans, those that do what they do because they have self-interest and other incentives to do it.

These are the people you should learn from. The people that did first, and talked second.