
Welcome, everyone. It’s a true honor to gather with such remarkable minds to explore a question that is as urgent as it is timeless: What does it mean to be human—now, and in the rapidly approaching future of 2035?
For decades, we’ve marveled at how technology enhances our lives. But today, we are facing a more intimate question—not just how we use technology, but how it is shaping who we are becoming. AI is no longer a distant concept; it is embedded in our relationships, our decisions, our reflections, and even our inner dialogues.
In this series of conversations, we’ll explore how identity, empathy, agency, collaboration, and purpose are all being redefined. Not just by artificial intelligence, but by the very choices we make in response to it. We’re here not simply to predict the future—but to take responsibility for the kind of future we want to live in. This isn’t just a technological conversation—it’s a human one.
Let’s begin.
(Note: This is an imaginary conversation, a creative exploration of an idea, and not a real speech or event.)

Redefining Human Identity in the Age of AI
Moderator: Sherry Turkle
Participants: Andy Opel, Barry Chudakov, Anriette Esterhuysen, Frank Kaufmann, Jonathan Grudin
Sherry Turkle:
Welcome, everyone. As we look toward 2035, we’re faced with an unsettling question: What does it mean to be human when our identities are co-created, even co-managed, by intelligent machines? Let’s begin with Andy. You’ve said AI might help us reclaim intrinsic human values. How do you see this happening?
Andy Opel:
Thanks, Sherry. I believe AI gives us a rare mirror—one that reflects our collective choices. When used with intention, it can remind us of our values: empathy, curiosity, justice. But we must consciously embed these values into the systems we build. Otherwise, we risk building tools that shape us into beings we never intended to become.
Sherry Turkle:
A beautiful point. Barry, you’ve warned of fragmentation—that our identities are splintering under technological strain. What are we becoming?
Barry Chudakov:
We’re becoming a multiplicity. Each person now maintains a portfolio of selves: professional identity, online avatars, AI assistants that speak in our voice. These tools stretch our sense of self to the point of distortion. Without careful reflection, we lose the thread of a coherent “I.” My fear is that this fragmentation becomes not a pathology, but a societal norm.
Jonathan Grudin:
If I may add, Barry—this isn’t just about distortion. It’s about delegation. We’re surrendering who we are to systems that don’t have skin in the game. AI shapes our choices, filters our thoughts, nudges our behavior. And most people aren’t aware it’s happening. Our humanity is becoming passive—we’re being optimized, not understood.
Sherry Turkle:
That hits close to home, Jonathan. Anriette, you’ve always held a global perspective on this. Does this redefinition of identity look different in the Global South?
Anriette Esterhuysen:
Yes, Sherry. Identity in many parts of the world is relational, not individualistic. In South Africa, being human means Ubuntu—“I am because we are.” Yet AI is largely being shaped by Silicon Valley values—individuality, efficiency, monetization. If we let AI define identity, we risk erasing cultural nuance and reinforcing systems that widen inequality. We must globalize the authorship of this new identity.
Sherry Turkle:
Thank you. Frank, you’ve spoken about the “post-work human.” If our jobs don’t define us anymore, what does?
Frank Kaufmann:
Meaning, Sherry. Not function. As we transition away from survival-based identities, we must ask: Why are we here? I believe we’re entering an age of existential opportunity. Freed from the necessity of labor, we can finally explore identity through creativity, spirituality, love, and service. But it won’t come automatically. We must teach future generations that being human is not about productivity, but presence.
Sherry Turkle:
I love that—being human is about presence. But presence requires self-knowledge. So here’s a question for all of you:
📌 Is it still possible to know ourselves in a world where digital versions of us can outperform, outlast, and outvoice us?
Andy Opel:
Only if we remain reflective. If we let AI do all our thinking, we’ll forget how to feel. Reflection must be a civic skill, not a luxury.
Barry Chudakov:
We must reclaim narrative. Stories help us integrate these scattered selves. Without story, identity becomes algorithmic residue.
Jonathan Grudin:
And we must demand transparency. If we don’t understand the tools shaping us, we’re not individuals—we’re experiments.
Anriette Esterhuysen:
And we must co-create with care. AI must be multilingual, multicultural, and multivoiced—or it will reduce humanity to a narrow script.
Frank Kaufmann:
Ultimately, the soul must re-enter the conversation. No matter how powerful AI becomes, it cannot love. And it cannot suffer meaningfully. That’s our sacred difference.
Sherry Turkle (closing):
Thank you. You’ve reminded us that the question of identity is not technological, but spiritual and cultural. We stand at a threshold—not just of innovation, but of introspection. As we reimagine what it means to be human, let us not lose the ability to be humane.
Empathy, Ethics, and the Need for Conscious Evolution
Moderator: Sherry Turkle
Participants: Ray Schroeder, Jan Hurwitch, Anriette Esterhuysen, Giacomo Mazzone, Rabia Yasmeen
Sherry Turkle:
Welcome back, everyone. Today’s topic is especially close to my heart—empathy and ethics in the age of AI. The capacity to care, to make moral choices, and to feel deeply—is that under threat in 2035? Ray, I’d love to start with your perspective. You’ve predicted that AI could usher in a more ethical, other-oriented humanity. What gives you hope?
Ray Schroeder:
Thanks, Sherry. Hope, to me, lives in our expanding awareness. As we interact with intelligent systems, we’re given constant reflections—opportunities to choose again, and choose better. AI can highlight bias, expose injustice, and suggest more compassionate responses. But only if we remain committed to being students of ourselves. Empathy is not lost; it's waiting to be practiced.
Sherry Turkle:
Jan, you’ve said empathy and moral judgment are the key to our survival. That’s a bold claim.
Jan Hurwitch:
Bold, yes—but necessary. We are at an evolutionary tipping point. Technology is outpacing our ethics. If we don’t consciously cultivate empathy—not just for each other, but for future generations, for non-human life—we may build a civilization that works, but not one worth living in. Ethics must no longer be an afterthought. It must become the architecture.
Sherry Turkle:
Beautifully put. Anriette, I want to return to something you said in our last conversation: AI amplifies both our best and worst. How do we keep it from amplifying indifference?
Anriette Esterhuysen:
The problem, Sherry, is systemic. AI is trained on human data—yet that data is full of inequality, violence, and prejudice. If we don't train our systems with justice in mind, we risk mechanizing cruelty. But if we include global, compassionate, intersectional voices in AI development—especially from the Global South—we can rebalance the scales. Empathy must be built into the code, not patched in afterward.
Sherry Turkle:
Giacomo, in your essay, you warned of AI being used to outsource moral judgment. That struck me. Could you expand?
Giacomo Mazzone:
Yes. The danger lies in convenience. When people let AI decide who gets a loan, who is arrested, who is watched—it seems efficient. But moral decisions cannot be delegated to an algorithm. The moment we stop wrestling with ethical complexity, we lose part of our humanity. We must stay in the tension. The messiness of morality is where human greatness lies.
Sherry Turkle:
Well said. Rabia, you’ve talked about the importance of soft skills—like empathy and communication—rising in value. But can they really survive in a world shaped by precision-driven machines?
Rabia Yasmeen:
They must. As AI takes over logic-based tasks, what remains is our relational intelligence. We’re entering a new era where how we show up emotionally will matter more than what we “know.” Education must shift to prioritize listening, connecting, and creating shared meaning. The irony is that AI may finally teach us what humans are uniquely good at.
Sherry Turkle:
Let me ask a question to all of you:
📌 What does a truly empathic society look like in 2035—and how do we get there without losing ourselves in the technology?
Ray Schroeder:
We build feedback into everything. Just like AI learns from us, we must learn from how we treat one another—digitally and face-to-face.
Jan Hurwitch:
We normalize moral reflection. In school, at work, even on platforms—ethics becomes a daily conversation, not a philosophical luxury.
Anriette Esterhuysen:
We redesign systems to reward care over conquest. Empathy must be seen not as soft, but as strong—something that holds societies together.
Giacomo Mazzone:
We resist the urge to “optimize” everything. Some things—grief, forgiveness, love—must remain inefficient.
Rabia Yasmeen:
And we teach children early that feelings are not weaknesses. They’re tools of transformation.
Sherry Turkle (closing):
Thank you. Today we’ve seen that empathy is not in decline—it’s in hiding, waiting for a cultural renaissance. In 2035, if we want to remain human, it won’t be through coding empathy into machines. It will be through remembering how to feel, to falter, and to forgive—together.
Human Agency vs. Technological Dependency
Moderator: Sherry Turkle
Participants: Thomas Gilbert, Warren Yoder, Tracey Follows, Courtney C. Radsch, John Markoff
Sherry Turkle:
Welcome, friends. Today’s conversation is about agency—our ability to make choices, exercise will, and steer our lives. As AI becomes embedded in nearly everything, are we still authors of our own stories? Thomas, let’s begin with you. You’ve said public input into AI design is nearly nonexistent. How does that threaten agency?
Thomas Gilbert:
Thank you, Sherry. The real danger isn’t just what AI does—but who decides what it does. Right now, most decisions about AI systems—what’s built, how it’s trained, whom it serves—are made by a small group of private actors. Without democratic oversight, AI becomes a force we adapt to, rather than shape. Agency isn't lost all at once—it’s surrendered, feature by feature.
Sherry Turkle:
That surrender... it reminds me of Warren’s warning about techno-hype. Warren, you said that science fiction is being used to justify real-world disempowerment. Could you elaborate?
Warren Yoder:
Sure. We've turned tech leaders into prophets and startups into messiahs. In doing so, we mistake marketing for destiny. We don’t question the trajectory—we celebrate it. But hype conceals hard truths: centralization of power, loss of critical thinking, erosion of collective will. Real agency begins when we stop consuming futures and start constructing them—together.
Sherry Turkle:
Tracey, your work has touched on our increasing dependence on AI, even for basic decision-making. What are we risking?
Tracey Follows:
We’re risking not just what we know, but how we know. When we rely on AI to tell us what’s true, what’s next, even what’s real—we outsource our sense-making. And once that goes, autonomy follows. The illusion of choice grows stronger while the capacity for independent thought grows weaker. It's a paradox of freedom: we feel more in control, but we’re making fewer actual decisions.
Sherry Turkle:
Courtney, in your essay you warned about a future where even our thoughts are nudged by predictive AI systems. That sounds chilling. Is there any way back?
Courtney C. Radsch:
It is chilling, Sherry. Especially when surveillance and manipulation are invisible. Our biometrics, our preferences, our routines—all mined and modeled. People feel overwhelmed, so they defer. And that’s the point where agency erodes—not by force, but by fatigue. The way back is radical transparency, robust digital literacy, and new rights: cognitive liberty, emotional privacy, and algorithmic accountability.
Sherry Turkle:
John, you’ve written about the Borg-like assimilation of humans into tech systems. Are we in danger of becoming digital drones?
John Markoff:
Yes, in subtle but real ways. We’ve already seen the “Borg” effect with smartphones—constant connectivity rewires behavior. With AI, the risk deepens. It’s not just what we do with AI—it’s what we stop doing. Like thinking deeply, reading slowly, deciding deliberately. If we want to keep our agency, we must preserve the space to be fully human—and that includes time to pause, reflect, even resist.
Sherry Turkle:
Let me ask this to everyone:
📌 In a world where AI is making more decisions—what specific things must humans still insist on deciding for themselves?
Thomas Gilbert:
Policy. Governance. Human rights. These cannot be outsourced—not even a little. Otherwise, we build systems that eventually decide who matters.
Warren Yoder:
We must decide the purpose of our tools. Without vision, we become spectators to a story we should be writing.
Tracey Follows:
Let’s decide how we relate to time. AI accelerates everything. Humans must slow down—and claim the right to meaningful pacing.
Courtney C. Radsch:
Emotions. Relationships. Our values. If we let machines dictate what we care about, we won’t just lose agency—we’ll lose identity.
John Markoff:
And we must decide what it means to be human. That definition can’t come from code. It must come from within.
Sherry Turkle (closing):
Thank you. Today’s discussion reveals that agency isn’t just about what we can control—it’s about what we refuse to relinquish. In 2035, the most courageous act may not be building smarter machines, but remaining awake, accountable, and unapologetically human.
Creating Complementary AI: Partner, Not Master
Moderator: Sherry Turkle
Participants: Mauro D. Rios, Jim Dator, Wayne Wei Wang, Liselotte Lyngsø, Cristos Velasco
Sherry Turkle:
Welcome again, everyone. In this session, we explore a vision of AI not as a threat or competitor—but as a complement, a partner that enhances what it means to be human. Mauro, let’s begin with you. You’ve called for AI systems to expand our natural abilities, not replace them. What does that look like in practice?
Mauro D. Rios:
Thank you, Sherry. I envision AI as a cognitive extension, much like a prosthetic for the mind. Imagine tools that enhance our memory, focus, even empathy—but are designed specifically to support our needs, not override them. Complementary AI must be purpose-built for collaboration. The key is co-evolution, where machines learn from us, and we grow through them—without losing our essence.
Sherry Turkle:
Jim, you’ve often framed this in evolutionary terms. You once said AI and human intelligence are part of a larger “waltz.” Can you explain?
Jim Dator:
Of course. Human history is a dance between adaptation and augmentation. Fire, writing, the wheel—each transformed us. AI is no different. But unlike past tools, AI learns and “thinks” with us. This changes the tempo. If we choreograph carefully, we create synergy. If not, we stumble. We must teach AI to follow our rhythm, not dictate it.
Sherry Turkle:
Wayne, your work highlights AI as an “augmentation layer.” How do we ensure it uplifts human judgment rather than distorting it?
Wayne Wei Wang:
By embedding feedback loops and cultural context into every system. Complementary AI must ask for human input constantly, not just once at the design stage. It must be trainable across diverse populations, not just the tech elite. When people see AI reflecting their values—local, cultural, spiritual—it earns trust. That’s where true partnership begins.
Sherry Turkle:
Liselotte, I loved your idea of “personal AI as a butler” that enhances well-being. Can you speak more about that?
Liselotte Lyngsø:
Certainly. I see a future where each person has a trusted AI that’s not a spy, not a boss—but a companion. It knows your rhythms, supports your unique strengths, and even helps you rest. It’s like having a life coach, therapist, and creative collaborator in one. But to get there, we must flip the model: AI shouldn’t use our data for profit—it should use it to uplift our purpose.
Sherry Turkle:
Cristos, you’ve warned about losing core human traits in this process. How do we walk that line—embracing augmentation without erasure?
Cristos Velasco:
It starts with boundaries. Complementary AI must preserve human decision-making, even when automation is possible. Creativity, empathy, moral reasoning—these are not bugs in the human system; they’re features. Let AI assist, not imitate. Let it support our growth, not define it. And let’s legally and ethically encode human primacy in decisions that affect lives.
Sherry Turkle:
Let me ask a shared question:
📌 What does a truly “complementary” AI partnership feel like for a person in 2035?
Mauro D. Rios:
It feels like working with an intuitive teammate—one that elevates your ideas and reminds you of your values.
Jim Dator:
Like dancing with a partner who adjusts to your pace and never steps on your toes.
Wayne Wei Wang:
Like standing in front of a mirror that doesn’t distort—but reveals your full potential.
Liselotte Lyngsø:
Like having a silent co-pilot who protects your peace, your purpose, and your possibilities.
Cristos Velasco:
Like being supported, not surveilled. Empowered, not engineered.
Sherry Turkle (closing):
Today’s conversation reveals that the dream isn’t to build machines that mimic us—but to design partners that remind us who we truly are. Complementary AI isn’t just technical—it’s philosophical. It asks: How do we stay human while becoming more?
Meaning and Purpose in a Post-Work Society
Moderator: Sherry Turkle
Participants: Frank Kaufmann, Mark Schaefer, Neil Richardson, David Weinberger, Stephan Adelson
Sherry Turkle:
Welcome to our final conversation. Today we reflect on something timeless yet now urgently redefined: What gives life meaning when work is no longer central? Frank, let’s begin with you. You’ve said that humans must now explore their purpose outside of labor. What does that look like?
Frank Kaufmann:
Thanks, Sherry. Historically, work has been a container for identity. But in a post-work world, we must rediscover being over doing. We’re talking about a spiritual awakening—where creativity, contemplation, relationships, and service become our core identity markers. If AI takes our jobs, it must not take our reason for living. We must become students of meaning, not just tools of productivity.
Sherry Turkle:
Beautiful. Mark, in your writing, you warned of psychological and societal collapse in a future where people feel irrelevant. Can you expand?
Mark Schaefer:
Yes. The danger isn’t just unemployment—it’s disconnection. If you’ve spent your life equating worth with work, then automation doesn’t just steal income—it steals identity. Depression, addiction, even extremism can thrive in that vacuum. We need cultural infrastructures—education, storytelling, rituals—that help people rebuild self-worth in non-economic terms.
Sherry Turkle:
Neil, you’ve spoken of the digital shadow—how our virtual selves might continue after death. Could this offer a new form of purpose?
Neil Richardson:
Yes, I believe so. In 2035, purpose could become multidimensional. You may live a physical life, while your AI-augmented self teaches, comforts, or inspires others long after you’re gone. Our data, our choices, our creativity—it all forms a kind of digital legacy. The question is: Are we curating lives worth echoing? In the post-work era, immortality may not be in the flesh but in the footprint.
Sherry Turkle:
David, you’ve taken an optimistic stance—that AI might actually inspire us to see meaning in new places. How?
David Weinberger:
Because AI can do what we never could: see the invisible. It reveals patterns, truths, connections across domains that eluded us. That kind of insight doesn’t diminish us—it expands us. Imagine discovering truths about the universe through AI... and then expressing them through poetry, music, or spiritual practice. Meaning, in 2035, might not be about mastery—but about wonder.
Sherry Turkle:
Stephan, you’ve said that AI may create a tension between those who embrace it and those who resist it. How does that affect purpose?
Stephan Adelson:
Deeply. Some will find purpose in merging with AI—becoming cybernetic artists, philosophers, even healers. Others will retreat into old values, finding purpose in resisting “mechanized meaning.” But this friction might be healthy. It forces society to define: What is truly human? Is it our capacity to create, to suffer, to transcend? The post-work society might finally let each soul ask—and answer—that question without a paycheck attached.
Sherry Turkle:
A final question to all:
📌 If a child in 2035 asked you, “What should I live for?”—how would you answer?
Frank Kaufmann:
Live to love well. And let that love shape everything you create and care about.
Mark Schaefer:
Live to leave the world a little brighter than you found it. That’s enough.
Neil Richardson:
Live to author your legacy—physical or digital—in a way that whispers truth beyond your years.
David Weinberger:
Live in awe. Ask questions no machine can answer. Then share your joy in the asking.
Stephan Adelson:
Live for balance—between silence and creation, solitude and community, progress and presence.
Sherry Turkle (closing):
Thank you. As AI automates what we do, we are invited—perhaps even required—to rediscover who we are. In this post-work world, meaning is no longer handed to us. It must be grown, nurtured, shared. May we rise not just to build smarter tools, but to become wiser, more awakened humans.
Final Thoughts
Sherry Turkle:
As we come to the close of these deeply moving and complex conversations, I’m struck by a single truth: being human is not something we can take for granted in an age of intelligent machines—it’s something we must actively nurture, protect, and redefine.
Through each of your voices, we’ve seen that our future is not about humans versus AI, but about humans remembering who we are, even as we create what’s next.
We talked about identity—not as something fixed, but as something shaped by values and choices, not just data and devices. We reflected on empathy and ethics as the beating heart of any meaningful civilization. We wrestled with agency in an age of predictive algorithms. We imagined AI not as our master, but as our mirror, our co-pilot. And finally, we asked ourselves: what gives life meaning when traditional roles fall away?
The answers were not simple. They were layered, diverse, and, above all, human.
If there’s one thread running through it all, it’s this: the future we’re building with AI is not pre-written. It will not be determined by lines of code alone. It will be shaped by our courage to care, our willingness to reflect, and our collective insistence on holding on to what makes us human—even as we evolve.
Thank you, each of you, for reminding us that the question of 2035 is not just what can we build—but who will we choose to be.
Short Bios:
Sherry Turkle
Professor of Social Studies of Science and Technology at MIT and author of The Empathy Diaries. Turkle explores how technology influences human identity, relationships, and emotional development.
Andy Opel
Professor of Media Production and social commentator. Opel focuses on how emerging technologies like AI can help us rediscover and align with shared human values.
Barry Chudakov
Founder of Sertain Research, Chudakov examines how technology, particularly digital interfaces, transforms human behavior, memory, and identity.
Anriette Esterhuysen
Internet rights activist and former chair of the UN’s Internet Governance Forum Multistakeholder Advisory Group. She advocates for equitable, inclusive digital development.
Frank Kaufmann
Director of the Values in Knowledge Foundation. Kaufmann explores spiritual and philosophical questions related to meaning, identity, and purpose in the age of AI.
Ray Schroeder
Professor Emeritus and online education pioneer. Schroeder envisions a future where AI encourages greater ethical awareness and compassion in human society.
Jan Hurwitch
Diplomat and human development advocate. Hurwitch emphasizes the importance of empathy, moral growth, and conscious evolution as key to humanity’s survival.
Thomas Gilbert
Futurist and technology ethicist. Gilbert critiques the lack of democratic participation in shaping AI and calls for greater public agency in its development.
Warren Yoder
Policy analyst and civic innovator. Yoder critiques technological hype and champions human complexity, ethics, and meaning as vital counterbalances.
Giacomo Mazzone
Media policy expert and global consultant. Mazzone warns against outsourcing ethical judgment to algorithms and advocates for moral literacy in AI systems.
Rabia Yasmeen
Market and innovation analyst. Yasmeen emphasizes the increasing value of emotional intelligence and empathy in a tech-driven society.
Tracey Follows
Futurist and founder of Futuremade. Follows explores how personal identity and autonomy are challenged by AI and predictive technologies.
Courtney C. Radsch
Journalist and digital rights expert. Radsch focuses on surveillance, freedom of expression, and the tension between AI convenience and cognitive liberty.
John Markoff
Technology journalist and author. Markoff examines the historical trajectory of human-computer interaction and its effect on agency and autonomy.
Mauro D. Rios
AI policy advisor and researcher. Rios advocates for cognitive augmentation tools that enhance, rather than replace, human abilities.
Jim Dator
Futurist and professor emeritus at the University of Hawaii. Dator studies long-term social and technological change, viewing AI as part of human evolution.
Wayne Wei Wang
Strategic foresight consultant. Wang focuses on designing AI systems that integrate cultural context and serve human development.
Liselotte Lyngsø
Futurist and managing partner of Future Navigator. Lyngsø envisions AI as a deeply personal companion that supports human growth and well-being.
Cristos Velasco
Cybersecurity and AI ethics researcher. Velasco emphasizes legal and moral safeguards to protect human qualities in an automated world.
Mark Schaefer
Marketing futurist and author. Schaefer explores the psychological effects of automation and the importance of redefining self-worth in a post-work world.
Neil Richardson
Digital legacy strategist. Richardson investigates how identity, memory, and purpose extend into the digital afterlife through our data and online presence.
David Weinberger
Author and senior researcher at Harvard’s Berkman Klein Center. Weinberger sees AI as a tool for expanding human curiosity, discovery, and awe.
Stephan Adelson
Technologist and spiritual thinker. Adelson explores the inner conflicts and opportunities AI creates for meaning, purpose, and personal evolution.