What if Adam Grant discussed Think Again with the mentors who taught him how minds actually change—without shame?
Think Again Explained is my invitation to take certainty less personally—and curiosity more seriously. I chose this title because most of our mistakes aren’t caused by a lack of intelligence; they’re caused by a lack of rethinking. We get rewarded for having an answer, defending a position, sounding confident. Over time, opinions harden into identity. And once that happens, changing your mind doesn’t feel like learning—it feels like losing.
What I wanted this book to do was flip that script. “Think again” isn’t a command to doubt everything. It’s a reminder that the world keeps updating—facts change, contexts shift, people reveal new information—and the strongest people aren’t the ones who never change their mind. They’re the ones who can update fast without shame, without ego, and without turning every disagreement into a war.
That’s why this ImaginaryTalks roundtable is the perfect thought experiment. I’m imagining a conversation with people I’d genuinely want in my corner when I’m wrong—thinkers who value truth over status, and who understand how humans actually change: slowly, socially, and often painfully. I’m excited because each guest brings a different lens: psychology, culture, forecasting, leadership, and moral courage. And if we do this well, you won’t walk away with “tips.” You’ll walk away with a new posture toward reality—one where being wrong becomes information, and rethinking becomes a daily advantage.
(Note: This is an imaginary conversation, a creative exploration of an idea, and not a real speech or event.)
Topic 1 — The Science of Changing Your Own Mind

A roundtable with Adam Grant, Daniel Kahneman, Carol Dweck, Julia Galef, and Karl Popper.
The setting is quiet and deliberately unflashy—an academic lounge at dusk with tall windows, a long table, and the soft hum of a city outside. No stage, no podium. Just five minds and one shared premise: if rethinking is a skill, it can be trained.
Adam Grant sits with a thin notebook and a pen he keeps turning between his fingers, like a small metronome for attention. He looks around the table with the rare mix of warmth and precision that makes people feel safe—and slightly exposed.
“I want to start with the thing that hurts,” he says, smiling faintly. “Because the whole book exists because most of us don’t struggle with thinking. We struggle with unthinking—letting an old belief keep driving while we swear we’re in control.”
He leans forward.
“What’s the most reliable signal that I’m defending my identity instead of testing my thinking?”
Daniel Kahneman doesn’t rush. He looks down briefly, as if checking the internal instruments. “A reliable signal,” he says, “is emotional heat. When a belief is challenged and you feel irritation, contempt, or a quick need to correct the other person—your mind has left the realm of inquiry and entered the realm of protection. We call it reasoning, but it is often justification. The brain is remarkably talented at building arguments for conclusions it has already chosen.”
Adam nods. “So the emotion isn’t the enemy,” he says. “It’s the dashboard light.”
Carol Dweck picks up the thread. “Yes. And the kind of heat tells you which identity is threatened,” she says. “In a fixed mindset, being wrong feels like being less. So the system rushes to defend: ‘I knew that already,’ ‘That doesn’t count,’ ‘They’re biased.’ In a growth mindset, being wrong can feel like data. But that’s not automatic—it’s trained. The signal is when you start protecting your image rather than expanding your understanding.”
Julia Galef smiles, a little ruefully. “The simplest signal is when your goal shifts,” she says. “If your goal becomes to win the conversation, to look smart, to avoid embarrassment—then you’re soldier-minding, not scout-minding. Scouts don’t ask, ‘How do I prove I’m right?’ They ask, ‘What’s true, and what would I notice if I were mistaken?’”
Karl Popper tilts his head slightly, as if he’s pleased the conversation has arrived where he lives. “There is a philosophical version of that signal,” he says. “When you stop searching for ways your belief could be falsified. A serious thinker should attempt to refute their own position—at least in part. If you refuse even to imagine what would disconfirm you, you are no longer thinking. You are believing.”
Adam’s eyes brighten. “That’s the line I love,” he says. “When you can’t name what would change your mind, you don’t have a belief—you have a badge.”
He pauses, letting the room settle into the discomfort that contains the lesson.
“Okay,” he says. “Second question. Because people misunderstand this part and fall into two extremes.”
“How do you keep confidence without becoming rigid—what does ‘strong opinions, lightly held’ look like in practice?”
Kahneman answers first, almost gently. “Confidence is frequently mistaken for correctness,” he says. “But in decision science, confidence should be calibrated to evidence. ‘Lightly held’ means you attach your confidence to probabilities, not certainties. You can say, ‘I’m 70% confident,’ and still act. Rigidity is the illusion of 100% when the evidence doesn’t support it.”
Dweck nods. “And confidence should be anchored in the self as a learner, not in the self as a finished product,” she says. “If your worth depends on never being wrong, you will become rigid. If your worth depends on learning, you can be confident and flexible. You can say, ‘I believe this—until I learn something better.’”
Galef leans in, practical. “In practice, it looks like arguing on both sides before you argue with someone else,” she says. “Steelman the opposing view. Ask: ‘If I had to convince myself I’m wrong, what would I say?’ And then… actually feel the force of it. People do this as a performance. The trick is to do it as an experiment.”
Popper gives a small approving nod. “Yes,” he says. “And you should create conditions where you can lose gracefully. Institutions matter. Friendships matter. If you are surrounded by people who punish error, you will hide error. If you are surrounded by people who reward correction, you will seek it.”
Adam smiles. “That’s one of the biggest ‘Think Again’ ideas,” he says. “Rethinking isn’t just an inner skill. It’s a social environment. We need ‘rethinkers’ around us, not cheerleaders.”
He glances down at his notebook, then looks up again.
“Third question,” he says. “This is where I want everyone to be extremely concrete, because advice that isn’t actionable is just entertainment.”
“If rethinking is a daily habit, what’s the simplest routine that actually changes outcomes?”
Galef answers first, because it’s her style. “Do a ‘belief check’ in one sentence,” she says. “Pick one belief you acted on this week—something like ‘I’m bad at follow-through’ or ‘That person doesn’t respect me’—and ask: ‘What evidence would make me update this by 10%?’ If you can’t answer, you’re not tracking reality, you’re repeating a story.”
Kahneman nods. “I would recommend a pre-mortem,” he says. “Before you commit to a decision, imagine it has failed. Now list the reasons. This bypasses some overconfidence. It also forces you to see alternatives. It is not perfect, but it is simple, and it works.”
Dweck offers a different practice. “Replace performance goals with learning goals,” she says. “Instead of ‘I must succeed,’ use ‘I must learn.’ That small shift changes how you interpret feedback. It turns criticism from a threat into a tool. Over time, it changes what you choose to practice—and that changes outcomes.”
Popper’s routine is stark. “Write down what would falsify your view,” he says. “One line. What observation would show you are mistaken? If you cannot state it, you are dealing in dogma. And dogma is the enemy of progress.”
Adam looks around the table, then speaks slowly, as if he’s combining all their answers into one instrument panel.
“So the routines are: a belief check, a pre-mortem, learning goals, and a falsification test.”
He pauses.
“And the emotional skill is learning to treat wrongness as a cost you pay once—instead of a tax you keep paying forever.”
Everyone goes quiet for a moment because that hits too close to home to be merely clever.
Kahneman breaks the silence softly. “People underestimate how good it feels to stop defending,” he says. “They think certainty is comfort. But often certainty is a cage.”
Dweck nods. “And they underestimate how contagious learning is,” she adds. “When one person models updating, it gives permission. It changes the culture of the room.”
Galef smiles. “Yes,” she says. “Rethinking is not surrender. It’s strength with better aim.”
Popper’s voice is calm and final. “To learn is to risk being wrong,” he says. “But to refuse that risk is to remain ignorant.”
Adam sits back, satisfied—but not triumphant.
“Alright,” he says. “That’s Topic 1, and it’s the foundation. Next, I want to go where rethinking becomes interpersonal—how you get someone else to revise a belief without making them feel attacked.”
He taps his pen once.
“Because the easiest way to make someone dig in is to try to win.”
And with that, the room shifts from the science of changing your own mind to the harder art: changing a conversation without breaking a person.
Topic 2 — How to Help Other People Rethink

A roundtable with Adam Grant, Jonathan Haidt, Daryl Davis, Megan Phelps-Roper, and Socrates.
The setting changes. It’s still quiet, but less academic—more public-facing. A small studio with five chairs in a circle, soft lights, and the kind of atmosphere that says: this isn’t about being clever. It’s about staying human while you disagree.
Adam Grant looks around at the faces he’s invited—each one, in a different way, has walked into rooms where people were certain and left those rooms with something loosened.
He opens with the tone of a host who knows the stakes.
“Topic 1 was about changing your mind,” he says. “Topic 2 is harder. Because now ego, identity, and social belonging are in the room.”
He takes a breath.
“What makes people feel safe enough to revise a belief without feeling humiliated?”
Megan Phelps-Roper answers first, her voice measured, as if she’s learned to handle memory carefully. “Safety is dignity,” she says. “If someone feels like changing their mind means admitting they were stupid or evil, they’ll cling tighter. The moment you moralize their intelligence, you trap them. What helped me wasn’t someone dunking on me. It was someone treating me like a person who could still be reached.”
Adam nods. “So the door opens when the person doesn’t have to die socially to walk through it.”
Jonathan Haidt leans forward slightly. “Yes,” he says. “Because beliefs are often linked to group identity. Changing your mind can feel like betrayal. The safest environment is one where the person still belongs. Where they can revise a view without losing their tribe—or without being told they’re now worthless. Also, moral frameworks differ. If you speak only in your own moral language, they hear contempt. If you speak in their moral language, they can listen without shame.”
Daryl Davis speaks calmly, like someone who has practiced calm in rooms where calm was not given. “Respect,” he says. “You can’t ask someone to hear you if you won’t hear them. When I sat with people who hated me, I didn’t start by calling them monsters. I started by asking questions. I made it clear: I’m not here to destroy you. I’m here to understand why you believe what you believe.”
Socrates gives a small nod, pleased. “To feel safe,” he says, “a person must not feel hunted. If you treat them as prey, they will defend themselves. If you treat them as a partner in inquiry, you awaken their better self—the part that wants to see clearly.”
Adam smiles. “That’s a great line: not hunted.”
He moves to the second question, and his tone becomes more surgical.
“When does ‘debate’ backfire—and what’s the alternative conversation style that works better?”
Haidt answers first. “Debate backfires when the goal is status,” he says. “When your audience is not truth but a crowd. Debate rewards confidence, speed, and dominance. It often punishes nuance. So people escalate. They perform certainty. The alternative is what I’d call ‘moral translation’ and curiosity—asking, ‘What’s the concern underneath your position?’ Then you respond to that concern.”
Megan Phelps-Roper nods. “Debate backfired on me because it turned everything into a contest,” she says. “If I lost, I felt humiliated. If I won, I felt justified. Either way, nothing changed. The alternative that worked was long-form conversation—where no one was scoring points. People asked me questions, not to trap me, but to reveal the contradictions I hadn’t noticed.”
Daryl Davis adds, “Debate backfires when you label. When you insult. When you make it personal. Then it’s not about ideas—it’s about pride. The alternative is dialogue. Sit down. Build rapport. Find common ground first. Not to avoid disagreement, but so the disagreement doesn’t become war.”
Socrates looks amused. “Debate backfires when both parties believe they already possess wisdom,” he says. “The better way is to expose ignorance gently—through questions. Not the kind of questions that shame, but the kind that awaken curiosity. If a person discovers the gap in their own thinking, they will revise it themselves. If you announce the gap, they will defend it.”
Adam writes something down. “So: debate is a poor tool for learning,” he says. “Because it rewards performance. The alternative is curiosity, translation, dialogue, and questions that help someone discover their own inconsistencies.”
He nods once, then asks the third question—this one the most practical, because it’s where most people fail.
“How do you challenge someone firmly while still protecting the relationship?”
Daryl Davis answers first, steady. “You separate the person from the idea,” he says. “I can tell you I disagree strongly—and still show you respect. I can say, ‘I think this belief is wrong and harmful,’ without saying, ‘You are irredeemable.’ And I keep showing up consistently. Trust is built when people realize you’re not there to humiliate them.”
Megan Phelps-Roper adds, “And you give them an exit ramp. If the only options are ‘admit you’re awful’ or ‘fight,’ they’ll fight. But if you give them a way to change their mind while saving face—‘I used to think that too,’ ‘A lot of people believe this’—they can move without feeling crushed.”
Haidt nods. “And you start with the moral value you share,” he says. “If you care about fairness, talk fairness. If they care about loyalty, talk loyalty. People interpret challenge through their moral lenses. If you speak in a foreign moral language, even a gentle challenge feels like an attack.”
Socrates answers last, softly. “Firmness without cruelty is the sign of a good soul,” he says. “You can be unwavering about truth while still being gentle. But the secret is humility: you must remain open to the possibility that you, too, are mistaken. Otherwise your firmness is merely arrogance wearing a noble costume.”
Adam sits back for a moment, letting that land. Then he speaks, slowly.
“So here’s what I’m hearing,” he says. “To help someone rethink, you need three conditions: dignity, curiosity, and a path that preserves belonging.”
He looks around the circle.
“And you need to remember this: when people change their minds, they rarely do it because you defeated them. They do it because they felt safe enough to question themselves.”
He pauses.
“Next topic, I want to take this into the workplace—because rethinking isn’t just personal or political. It’s cultural. And culture decides whether truth is rewarded or punished.”
Socrates smiles, as if culture is merely a large conversation. Haidt looks thoughtful, already seeing the group dynamics. Megan looks steady. Daryl looks calm, like he’s practiced patience as a form of courage.
Adam caps his pen.
“Alright,” he says. “Let’s talk about the environments that make rethinking possible—or impossible.”
Topic 3 — Rethinking at Work

A roundtable with Adam Grant, Amy Edmondson, Edgar Schein, Satya Nadella, and Mary Parker Follett.
This room feels different from the first two. Less intimate, more consequential. A clean conference space with glass walls, late afternoon light, and a long table that has seen a thousand meetings—some honest, most performative. Outside, a city moves at full speed. Inside, the question is slower and sharper:
Can a workplace become a place where people update their minds—or will it always reward the appearance of certainty?
Adam Grant opens with a gentle grin, but his eyes are serious. “A lot of organizations say they value learning,” he says. “But what they actually value is being right… publicly.”
He looks around at his guests—each of them has spent years studying the invisible rules that govern what people feel safe to say.
“First question,” Adam says. “What’s the fastest way to spot a workplace that punishes rethinking—even if it claims to value it?”
Amy Edmondson answers first, precise and practical. “Look at what happens after bad news,” she says. “When someone raises a problem, do leaders become curious—or do they become defensive? Do they ask, ‘What are we missing?’ or do they ask, ‘Who caused this?’ Psychological safety isn’t about being nice. It’s about whether people believe they can speak up, admit uncertainty, and report mistakes without being punished.”
Edgar Schein nods. “Yes,” he says. “And watch how the organization treats ignorance. In a learning culture, it’s normal to say, ‘I don’t know yet.’ In a defensive culture, people pretend. They bluff. They cover. You can measure it by how often people ask genuine questions versus how often they make confident declarations.”
Mary Parker Follett leans in, voice calm but firm. “Also watch the meetings,” she says. “In a punishing culture, meetings are performances: people speak to protect territory. In a learning culture, meetings are integrations: people speak to build a shared picture. If the dominant behavior is ‘advocacy’ without ‘inquiry,’ the culture is not learning—it is competing.”
Satya Nadella adds, direct and grounded. “Look at the incentive structure,” he says. “If promotions go to the people who look flawless and never admit mistakes, you’ll get a culture of concealment. If promotions go to the people who learn fast, share what they learn, and help others improve, you get a culture of growth. You can’t fake culture when incentives contradict it.”
Adam nods. “So the tells are: what happens after bad news, how people treat ‘I don’t know,’ whether meetings are performances or integrations, and whether incentives reward learning or ego.”
He smiles slightly. “That’s already enough to diagnose most organizations in five minutes.”
He turns to the second question.
“How do leaders invite dissent without turning every meeting into chaos or politics?”
Follett answers first, because she’s been waiting for this. “You don’t invite dissent as a spectacle,” she says. “You design it as a process. You create structures where opposing views are expected and integrated—where conflict is not a personal fight but a shared search. The enemy is domination. The alternative is integration: we treat differences as raw material for a better decision.”
Edmondson builds on it. “The key is framing,” she says. “Leaders need to say: ‘I have a view, but I may be wrong. I need your help to see what I’m missing.’ Then they must respond well when dissent appears. People notice whether dissent is ‘tolerated’ or genuinely valued. You can’t invite dissent and then punish the first person who speaks.”
Schein adds the deeper layer. “Leaders must learn ‘humble inquiry,’” he says. “Most leaders ask questions that are disguised commands. People can hear it. A real question is one where the leader is truly open to being influenced. That’s rare—and it is the hinge point.”
Nadella nods. “Practically, you also need norms,” he says. “It can’t become a debate club. You can say: ‘Disagree with ideas, not people. Bring evidence. Offer alternatives. If you critique, you also propose.’ That turns dissent into contribution rather than disruption.”
Adam looks pleased. “So dissent is not chaos when it’s structured. And it’s not political when leaders model humility and reward contribution.”
He takes a breath, then asks the third question—this is the one that separates theory from something teams can actually do Monday morning.
“What’s one concrete ritual that reliably turns a team from ‘prove I’m right’ to ‘let’s get it right’?”
Edmondson answers quickly. “After-action reviews,” she says. “Not blame sessions—learning sessions. What did we expect? What actually happened? What surprised us? What will we change next time? When you do that consistently, you normalize learning and remove shame from mistakes.”
Schein offers a complementary ritual. “Leaders go first,” he says. “A leader shares a mistake and what they learned—publicly. That’s not vulnerability theater. It’s a signal: ‘Learning is rewarded here.’ People will not take interpersonal risks unless leaders model it.”
Follett adds a ritual with her trademark clarity. “A ‘decision log’ with revisiting,” she says. “Write down your assumptions when you decide. Then revisit them later and ask: which assumptions held? Which were false? This turns rethinking into a normal, expected part of governance. Without that, organizations pretend decisions were inevitable and never learn.”
Nadella nods. “At Microsoft we leaned hard into a growth mindset,” he says. “A practical ritual is shifting review conversations from ‘prove’ to ‘improve.’ Ask: ‘What did you learn? What will you do differently? Who did you help grow?’ When those questions become routine, the culture changes.”
Adam smiles, writing quickly. “I love that—‘decision logs’ and ‘revisit assumptions’ might be the most underused tool in business.”
He looks up.
“So the rituals are: after-action reviews, leaders modeling learning first, decision logs with assumption revisits, and performance systems that reward improvement and helping others grow.”
He pauses, then says something softer, more personal.
“I think a lot of workplaces don’t fail because people aren’t smart,” he says. “They fail because people are afraid. Afraid of being wrong. Afraid of looking incompetent. Afraid of losing status. And fear is the enemy of rethinking.”
Edmondson nods. “Yes,” she says. “Psychological safety is not comfort. It’s courage made possible.”
Schein adds, “And culture is the sum of what gets rewarded and what gets punished—usually quietly.”
Follett leans back slightly. “If you want people to rethink,” she says, “you must redesign the conditions under which they speak.”
Nadella finishes with a steady practicality. “The goal isn’t to be right,” he says. “The goal is to learn faster than reality changes.”
Adam looks around the table. “Next,” he says, “I want to zoom into uncertainty—because rethinking isn’t just cultural. It’s probabilistic. It’s how we make decisions when we don’t know.”
He caps his pen.
“Let’s go to Topic 4: how to update your beliefs when the world won’t give you certainty.”
Topic 4 — Better Decisions Under Uncertainty

A roundtable with Adam Grant, Philip Tetlock, Annie Duke, Nate Silver, and Richard Feynman.
The room shifts again. This one feels like a quiet analytics lab—clean, minimal, bright enough to think in. A whiteboard wall covered in shapes and probability curves, a long table, and the sense that here, confidence has to earn its keep.
Adam Grant looks around at his guests and smiles like someone who knows the difference between “smart” and “accurate.”
“Uncertainty is where rethinking either becomes a strength,” he says, “or becomes anxiety in a nicer outfit.”
He leans forward.
“How do you train yourself to notice you’re wrong earlier—before the cost gets expensive?”
Philip Tetlock answers first, calm and methodical. “You build an updating habit,” he says. “Good forecasters don’t wait for reality to slam the door. They track signals. They break big beliefs into smaller claims that can be tested. Then they revisit those claims regularly. The key is not brilliance—it’s review. If you never revisit predictions, you never learn.”
Adam nods. “So the speed of noticing you’re wrong is really the speed of feedback.”
Annie Duke picks it up. “And you separate outcomes from decisions,” she says. “Most people learn the wrong lesson because they judge by results. A good decision can have a bad outcome. A bad decision can get lucky. The way you notice you’re wrong earlier is you keep a decision journal: What did I believe? What did I weigh? What would I do if I had to decide again? Then when reality unfolds, you can see whether you were thinking well or just hoping.”
Nate Silver adds, “You also train yourself to think in distributions. People get blindsided because they carry one forecast in their head. One story. If you hold ranges—best case, base case, worst case—you become less shocked. And you notice drift sooner, because you’ve already mapped what ‘off track’ looks like.”
Richard Feynman leans back and smiles faintly, like he’s about to simplify something complicated. “The first principle is that you must not fool yourself,” he says. “And you are the easiest person to fool. The way you notice you’re wrong earlier is you actively hunt for the experiment that could prove you wrong. Not the one that makes you feel good. The one that hurts.”
Adam laughs softly. “That is the most Feynman answer possible.”
He pauses, then asks the second question—because uncertainty often produces a trap: either arrogance or paralysis.
“What’s the difference between healthy doubt and paralyzing doubt?”
Tetlock answers carefully. “Healthy doubt is calibrated,” he says. “It’s a tool. It pushes you to gather more information and revise your probability estimates. Paralyzing doubt is a status emotion—it’s fear of responsibility. It’s the desire to avoid being accountable for being wrong. The cure is to put numbers on uncertainty. Even rough probabilities are better than vague anxiety.”
Annie Duke nods. “Exactly,” she says. “Paralyzing doubt comes from believing decisions should be certain. They’re not. You make the best bet with the information you have. Healthy doubt says, ‘I might be wrong, so I’ll leave room to update.’ Paralyzing doubt says, ‘If I’m wrong, it means I’m incompetent,’ so you freeze.”
Nate Silver adds, “Paralyzing doubt also comes from overvaluing precision. People wait for perfect data. But in the real world, you often need direction, not perfection. Healthy doubt accepts imperfect models and improves them. Paralyzing doubt demands a flawless model before action.”
Feynman shrugs slightly. “You have to be willing to be uncertain and still do something,” he says. “Science doesn’t wait for absolute certainty. It tests. It moves. It refines. Paralyzing doubt is when you stop testing and start worrying.”
Adam nods slowly. “So healthy doubt moves you into experiments. Paralyzing doubt keeps you trapped in your head.”
He shifts to the third question. This one is Adam’s favorite kind: a single question that forces a practical principle.
“If you could give one rule for thinking in probabilities to normal people, what would it be?”
Nate Silver answers first, straightforward. “Always ask: ‘Compared to what?’” he says. “People interpret probability as emotion. ‘I feel like it’ll happen.’ But probability is comparative. If you say ‘likely,’ ask ‘more likely than what alternative?’ If you say ‘risky,’ ask ‘riskier than what option?’ That forces you out of vague language.”
Tetlock offers his rule. “Break the question down,” he says. “Most forecasts fail because the question is too broad. If you ask ‘Will this succeed?’ you’re lost. If you ask ‘Will we hit this milestone by this date?’ you can measure. Smaller questions produce better updates.”
Annie Duke smiles. “Mine is: don’t make binary bets,” she says. “Stop asking ‘will it happen or not?’ Start asking ‘what’s the probability?’ And then, crucially: ‘what would change that probability?’ That turns probability into a living model rather than a label.”
Feynman’s rule is brutally simple. “If you can’t explain what evidence would change your mind,” he says, “you’re not doing probability. You’re doing belief.”
Adam taps his pen. “That’s the bridge back to Topic 1,” he says. “Rethinking is the ability to update. Probability thinking is just rethinking with numbers.”
He pauses, then sums up, as if he’s writing the conclusion in his head.
“What I’m hearing is a ‘certainty detox,’” he says. “We journal decisions, track signals, think in ranges, break big beliefs into testable parts, and we deliberately look for disconfirming evidence.”
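(An aside for readers who want to see “updating” made literal: the move everyone at this table keeps describing is, at bottom, Bayes’ rule. Here’s a minimal sketch—my own illustration, not anything from the book or the roundtable, with made-up numbers—of what it looks like to hold a 70% belief and revise it when disconfirming evidence shows up.)

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a belief after seeing one piece of evidence.

    prior: how confident you were before the evidence (0 to 1).
    p_evidence_if_true: how likely this evidence is if the belief is true.
    p_evidence_if_false: how likely this evidence is if the belief is false.
    """
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical belief: "this project will hit its milestone" — 70% confident.
prior = 0.70
# Evidence: a key dependency slipped. Suppose that's twice as likely
# if the project is off track (80%) as if it's on track (40%).
posterior = bayes_update(prior, p_evidence_if_true=0.40, p_evidence_if_false=0.80)
print(round(posterior, 2))  # prints 0.54 — the 70% belief drops to about 54%
```

The point isn’t the arithmetic; it’s that the belief moved a measured amount instead of flipping to “I was right all along” or “everything is ruined.” That’s the living model Annie describes.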
The room is quiet for a moment, because everyone knows how rare that is in real life.
Tetlock speaks softly. “Most people want to be right,” he says. “Superforecasters want to be less wrong.”
Annie Duke nods. “And they want to be less wrong sooner,” she adds.
Feynman smiles. “And they’re not ashamed of it,” he says. “Because reality doesn’t care about your pride.”
Adam leans back, satisfied.
“Alright,” he says. “Topic 5 is where this becomes public—because rethinking is hardest when the audience is a tribe.”
He caps the pen again.
“Let’s talk about rethinking in the open, where the cost isn’t just being wrong… it’s losing your people.”
Topic 5 — Rethinking in Public

A roundtable with Adam Grant, Hannah Arendt, John Stuart Mill, Steven Pinker, and Václav Havel.
The setting feels intentionally open—an auditorium before a talk begins. Not full, but not empty either. A few scattered silhouettes in the seats, the hum of anticipation, the sense that whatever is said here won’t stay here. There are cameras. There is an audience. There is the invisible pressure that changes how people think: the pressure to belong.
Adam Grant looks around the table and then out toward the dimly lit seats.
“Rethinking in private is hard,” he says. “Rethinking in public is brutal. Because the moment you update your views, someone calls it weakness, betrayal, or hypocrisy.”
He pauses.
“So let’s go right at it.”
“Why do smart groups become irrational tribes—and how do you interrupt that slide?”
Hannah Arendt speaks first, and her clarity feels like cold water. “Because belonging can become more important than truth,” she says. “When people feel uncertain or afraid, they seek certainty in groups. Then the group offers a substitute for thinking: slogans, enemies, and simplified narratives. The tribe gives identity without the burden of judgment. You interrupt the slide by protecting spaces where people can think without fear—where disagreement is not treated as treason.”
Adam nods slowly.
Steven Pinker adds, “There’s also a cognitive angle,” he says. “Humans are built for coalition-building. And moral outrage is a powerful signaling tool—it proves loyalty. Social media amplifies this because outrage travels faster than nuance. Interrupting the slide means changing incentives: reward curiosity, reward correction, reward intellectual honesty. Right now, too many platforms reward certainty and punishment.”
John Stuart Mill leans in. “A society becomes tribal,” he says, “when dissent is treated as a social crime. The moment a person fears speaking, the marketplace of ideas collapses. And then error becomes protected by silence. The antidote is robust freedom of expression—not because every opinion is equally true, but because truth needs challenge to remain alive.”
Václav Havel speaks last, quieter, with that moral steadiness of someone who lived the cost. “Tribalism grows,” he says, “when people stop living in truth. They accept lies for comfort. They repeat phrases they don’t believe because it keeps them safe. The tribe becomes irrational because it becomes performative. You interrupt it when ordinary people choose honesty—even small honesty—despite fear. That is how systems change.”
Adam looks down for a moment, as if he’s letting that sit in his chest.
“Second question,” he says. “Because this is the dilemma of modern life.”
“How do we defend free thought while reducing the social reward for bad-faith certainty?”
Mill answers immediately, because the question is his native soil. “You do not reduce free thought,” he says, “by restricting speech. You reduce bad faith by strengthening norms of argument. Require reasons. Encourage debate that aims at truth rather than humiliation. And perhaps most of all: cultivate education that trains citizens to separate identity from opinion.”
Arendt nods, but tightens the frame. “Bad-faith certainty thrives when people do not distinguish between opinion and fact,” she says. “When facts are treated as partisan possessions, the foundation collapses. The defense of free thought requires a defense of reality—shared standards of evidence and a culture that punishes deliberate lying.”
Pinker adds a modern angle. “We also need what I’d call ‘status rewards for intellectual honesty,’” he says. “Right now, people get applause for confidently saying what their side already believes. What if the applause went to people who said, ‘I was wrong, and here’s what changed my mind’? We can design incentives that elevate accuracy over theatrics.”
Havel’s voice is calm. “The most dangerous certainty is the certainty that excuses cruelty,” he says. “Bad-faith certainty often disguises itself as moral righteousness. The social reward must change: people should lose status for dehumanizing others, even when it feels politically useful. Free thought requires moral limits—not on ideas, but on the treatment of human beings.”
Adam nods. “So we keep the freedom, but we shift the prestige: make honesty admirable again.”
He moves to the third question, and his tone changes. This one is less about systems and more about the individual—the person reading this, feeling the pressure to pick a side and never move.
“What does ‘moral courage’ look like now—when rethinking can cost you your community?”
Havel answers first. “Moral courage is the willingness to stand alone,” he says. “Not forever, but long enough to be faithful to truth. It is refusing to say what you do not believe. It is refusing to hate on command. It is refusing to sacrifice your conscience for belonging.”
Arendt follows, firm. “Moral courage is thinking,” she says. “Not thinking as intelligence, but thinking as judgment. The willingness to ask: ‘What am I doing?’ ‘What am I participating in?’ The banality of evil is not monstrous passion—it is thoughtlessness. Courage is refusing thoughtlessness, especially when it is fashionable.”
Mill adds, “Moral courage is allowing yourself to be unpopular. It is defending the right to speak even for those you disagree with. It is the recognition that a society that cannot tolerate dissent is a society that cannot correct itself.”
Pinker speaks practically. “And it’s building identity around curiosity rather than ideology,” he says. “If your self-worth comes from membership in a tribe, you can’t rethink. If your self-worth comes from being a person who learns, you can rethink publicly without collapsing. The courage is choosing a learning identity.”
Adam sits back. The room is quieter than before. Even the scattered silhouettes in the seats feel still.
He speaks slowly, like he’s choosing each word as a commitment.
“Here’s what I’m taking from this,” he says. “Rethinking in public isn’t just a cognitive skill. It’s a social act. It requires better incentives, better norms, and greater moral courage.”
He looks around the table.
“And it requires people who are willing to say: ‘I’m updating.’ Not as a performance. Not as a brand. But as a responsibility.”
He pauses.
“Because the real opposite of rethinking isn’t ignorance,” he says. “It’s identity addiction—the need to be the same person in front of the same tribe forever.”
Arendt’s eyes are steady. Mill looks quietly satisfied. Pinker looks like he’s already imagining which incentive systems could be redesigned. Havel looks soft, almost sad, but not defeated.
Adam smiles faintly.
“That’s Think Again,” he says. “Not because changing your mind is trendy—but because the world changes whether you do or not.”
And for a moment, in that half-lit room, rethinking doesn’t feel like weakness.
It feels like adulthood.
Final Thoughts by Adam Grant

If this discussion taught me anything, it’s that rethinking is not a single act—it’s a lifestyle with costs and rewards. The cost is ego. The reward is accuracy. Most people treat changing their mind like an embarrassment. But in every field that matters—science, medicine, investing, leadership—the people who win long-term are the people who can update faster than the world changes.
What I didn’t fully appreciate until hearing these perspectives side by side is how social rethinking really is. It’s not just about your brain; it’s about your environment. You can train yourself to keep a decision journal, to think in probabilities, to ask what evidence would change your mind. But if you’re surrounded by people who punish uncertainty—or if you live in a culture that rewards outrage and certainty—your best thinking skills will shrink under pressure. So part of the work is internal, and part of the work is building spaces where learning is safe: families, teams, communities, and public norms that treat revision as maturity, not betrayal.
The most useful thing I’m taking away from this roundtable is a simple practice: treat your beliefs like hypotheses, not possessions. Write down what you think. Write down what would change your mind. Invite disagreement early. Reward correction. And when you feel yourself getting defensive, treat it as a signal—not that you’re right, but that something important is at stake.
Because the goal isn’t to be the person who never changes. The goal is to be the person who keeps getting closer to reality—without losing your humanity along the way.
Short Bios:
Adam Grant — Organizational psychologist and bestselling author focused on motivation, work culture, and how people update beliefs and make better decisions.
Daniel Kahneman — Nobel Prize–winning psychologist who mapped cognitive biases and the two-system view of thinking that shapes judgment.
Carol Dweck — Psychologist known for the growth mindset framework and how beliefs about ability influence learning, resilience, and change.
Julia Galef — Writer and rationality advocate who popularized “scout mindset” as a way to pursue truth over winning.
Karl Popper — Philosopher of science who argued knowledge advances through falsification, not certainty, and urged testing beliefs against disproof.
Jonathan Haidt — Social psychologist who studies moral intuition, tribalism, and why good people disagree about politics and ethics.
Daryl Davis — Musician and bridge-builder known for patient dialogue that reduces extremism and opens space for belief change.
Megan Phelps-Roper — Former Westboro Baptist Church spokesperson who describes how sustained, humane conversation can unravel closed systems of belief.
Socrates — Classical philosopher famed for disciplined questioning that reveals hidden assumptions and moves people toward clearer thinking.
Amy Edmondson — Harvard professor who pioneered research on psychological safety and how teams learn by speaking up without fear.
Edgar Schein — Foundational thinker on organizational culture and “humble inquiry” as a leadership practice for learning and trust.
Satya Nadella — CEO associated with transforming Microsoft’s culture toward growth mindset, experimentation, and collaboration.
Mary Parker Follett — Early management thinker who emphasized integration, shared power, and constructive conflict as the path to better group decisions.
Philip Tetlock — Researcher on forecasting who identified habits of accurate “superforecasters” and the discipline of continual updating.
Annie Duke — Former poker pro and decision strategist who teaches decision-making under uncertainty, reducing outcome bias, and thinking in bets.
Nate Silver — Analyst known for probabilistic forecasting and explaining uncertainty through models, ranges, and calibration.
Richard Feynman — Physicist celebrated for clarity and skepticism, insisting on intellectual honesty and “not fooling yourself.”
Hannah Arendt — Political philosopher who warned about propaganda, thoughtlessness, and the collapse of shared reality in mass movements.
John Stuart Mill — Philosopher of liberty who defended open debate and dissent as essential to truth and social improvement.
Steven Pinker — Cognitive scientist who writes on reason, progress, and the institutions that protect inquiry and evidence-based thinking.
Václav Havel — Dissident and statesman who framed “living in truth” as moral courage, especially when conformity is rewarded.