Arvid Bell:
When we launched North Star, we weren't trying to replace human judgment—we were trying to extend its reach. In a world where hours can determine the fate of millions, what if we had months of foresight instead of moments of panic?
We’ve spent years building simulations that don't just crunch numbers—they model human decision-making under pressure. We ask: What if a leader is tired, isolated, or proud? What if they’re misunderstood or misled?
But with power comes responsibility. This technology can inform peace—or inflame conflict. That’s why we’ve gathered not just engineers, but ethicists, diplomats, historians, and skeptics. Because North Star isn’t just about predicting war. It’s about preserving peace, with clarity and conscience.
Let’s ask the hardest questions now—before history writes them in blood.
(Note: This is an imaginary conversation, a creative exploration of an idea, and not a real speech or event.)
Topic 1: Can AI Prevent World War III?

Moderator: Fareed Zakaria
Speakers: Arvid Bell, Henry Kissinger, Sam Altman, Angela Merkel, H.R. McMaster
Format: 53r1 – 5 participants, 3 crucial questions, randomized response order, 1 topic per segment
Fareed Zakaria:
Let's begin with the core of today's conversation. Arvid, your AI system, North Star, claims to give us foresight in geopolitics. But here’s the big question:
Can AI realistically prevent World War III, or are we overestimating its capabilities?
Sam Altman:
AI can enhance foresight, yes—but prevention still depends on human will. AI can forecast escalation scenarios with stunning accuracy, but if leaders are unwilling to listen or act with humility, those forecasts become just another ignored alarm. So no, AI won’t “prevent” war on its own. But it can become a powerful tool—if we choose to listen.
Angela Merkel:
Technology offers promise, but trust must be earned. In Europe, we’ve seen how mistrust, even with clear data, paralyzes action. AI must be integrated into diplomatic processes cautiously. It’s not a crystal ball—it’s a compass. Useful, yes. But humans must steer.
Arvid Bell:
North Star doesn’t replace diplomacy—it extends its timeline. Our goal is to give leaders months, not hours, to assess risk and respond wisely. We’re not overestimating AI—we’re giving it the role of strategic assistant, not sovereign judge.
H.R. McMaster:
Having fought in actual war zones, I’ll say this: leaders don’t always lack data—they often lack conviction. AI helps expose the real costs of action and inaction. That clarity might change decisions. Might. But AI’s ability to prevent WWIII will ultimately rest on leadership character.
Henry Kissinger:
The irony of intelligence—artificial or human—is that it still requires interpretation. AI could indeed prevent wars. But it may also embolden hawks who misread its output as justification. Whether it prevents or provokes war depends entirely on how wisely it’s used.
Fareed Zakaria:
Let’s deepen that point. How do we ensure that AI in the geopolitical realm isn't misused—especially in moments of panic or pressure? What safeguards or structures must be in place?
Angela Merkel:
International norms. Just as we have treaties for nuclear weapons, we need transparent, enforceable agreements on the use of predictive AI in diplomacy. No black boxes. No unilateral simulations shaping global decisions without oversight.
Arvid Bell:
Exactly. That’s why North Star is built with multi-stakeholder review embedded in every simulation run. Ethical boards, red team assessments, and simulation transparency are essential. We’re designing for accountability—not just accuracy.
Henry Kissinger:
It is wise to embed AI within institutional checks. But history shows that when power is concentrated, even tools of peace can become tools of preemption. AI systems must never be the sole input for existential decisions.
Sam Altman:
I’d add: public education matters too. If society doesn’t understand these systems, we risk panic when predictions leak—or worse, blind obedience. Democratizing access and literacy around peace tech should be a priority.
H.R. McMaster:
Military leaders already run war games. The difference here is speed and depth. I agree with Angela—we need NATO-level standards. But I’d also advocate for real-time human audit trails in every simulation. When a model flags an escalation risk, a diverse team must verify it, not just trust it.
Fareed Zakaria:
Last question. Let’s imagine the system gives a credible warning of escalation—say, 80% probability of conflict. What should be the global response? When is it right to act on AI predictions, and when must we wait?
Sam Altman:
Probabilities are not mandates. We act when multiple sources—AI, human intelligence, diplomatic feedback—converge. The risk lies in treating AI as infallible. Instead, let it provoke dialogue and delay rash choices. But yes—act when there's clarity, not just fear.
H.R. McMaster:
I agree. Action should not always mean intervention. Sometimes, acting on AI means pausing deployments, increasing diplomacy, or setting stricter red lines. The warning is the spark—what we ignite from it must be deliberate, not reflexive.
Arvid Bell:
North Star’s purpose is to trigger earlier deliberation, not dictate action. If it alerts us, the best response might be slowing down, not speeding up. Using AI to avoid war isn’t about launching drones—it’s about launching conversations.
Henry Kissinger:
In crises, timing is everything. Acting on AI must follow the logic of wisdom, not mere prediction. A false alarm acted upon hastily could become the cause of the very war it sought to prevent. We must remain masters of the tool, not its servants.
Angela Merkel:
We must build “response rituals”—processes that kick in when an AI warning arises. No one person should respond alone. No one nation should act unilaterally. That structure alone could prevent disaster, even if the model gets it wrong.
Fareed Zakaria – Final Thoughts:
Today’s conversation reminds us that prediction is not prevention. AI like North Star offers a leap forward in foresight—but it must walk hand-in-hand with wisdom, governance, and restraint. If used wisely, it may become one of the greatest instruments of peace. But if trusted blindly, it could lead us into danger faster than ever before.
Topic 2: Ethics of Simulating World Leaders

Moderator: Martha Nussbaum
Speakers: Arvid Bell, Elon Musk, Yuval Noah Harari, Anne Applebaum, Garry Kasparov
Format: 53r1 – 5 participants, 3 crucial questions, randomized response order, 1 topic per segment
Martha Nussbaum:
Let’s begin with the ethical heart of the matter. North Star simulates the decision-making patterns of real-world leaders—potentially without their consent.
Is it morally justifiable to create AI digital twins of political figures without their permission, even for the goal of peace?
Anne Applebaum:
It’s a slippery slope. While intentions may be noble, simulating a world leader’s behavior without their consent risks crossing into surveillance masquerading as strategy. Democracies should model ethical transparency—not emulate authoritarian data practices.
Elon Musk:
Let’s be honest—if the stakes are preventing nuclear war, I’d rather simulate a dictator than wait for them to act. Consent matters, but existential risk overrides it in some cases. The ethical line is: how is the simulation used? Not merely that it exists.
Arvid Bell:
North Star never mimics personalities to embarrass or exploit. We create decision models—not psychological clones. These are scenario engines, not parodies. Still, I welcome this scrutiny. Consent mechanisms should evolve alongside the tech.
Garry Kasparov:
I’ve lived through Cold War paranoia. The idea of simulating behavior for peace, while imperfect, is preferable to guessing in the dark. But the moment it’s used for preemptive justification—or to attack a nation’s dignity—we lose moral high ground.
Yuval Noah Harari:
Consent isn’t just individual—it’s cultural. Simulating a Western leader might pass legal tests, but what about nations with deep spiritual taboos around imitation? Peace tech cannot be ethically neutral. Its global use demands global values.
Martha Nussbaum:
Let’s go deeper. AI models learn and evolve. What if a simulated leader behaves more rationally—or more dangerously—than the real one?
Should we ever allow AI simulations to influence diplomatic or military policy decisions?
Garry Kasparov:
AI should inform, not decide. I play chess, not roulette. If we trust simulations too much, we’re handing strategy over to patterns, not principles. Use it as a mirror, not a compass. Always keep human judgment in charge.
Arvid Bell:
Agreed. In our work, AI is a second opinion—not a general. No simulation will ever replace the need for seasoned, contextual diplomacy. But it can sharpen awareness, reveal blind spots, and slow reactive escalation. That’s its real value.
Anne Applebaum:
Policymakers already rely on models—economic forecasts, polling, military war games. The danger is pretending AI is more objective than it is. Every model reflects bias. We must build in diverse oversight—journalists, ethicists, even critics.
Yuval Noah Harari:
I worry less about how we use AI and more about who owns the simulations. If only superpowers or tech elites control these tools, they become geopolitical weapons. Transparency, public accountability, and distributed access are vital.
Elon Musk:
We trust AI to land rockets and diagnose cancer—why not to run 10,000 diplomacy simulations and surface the most peaceful paths? Sure, we need safeguards. But the idea that human instinct is always wiser is provably false. AI should guide us more than we care to admit.
Martha Nussbaum:
Final question. North Star is a peace project—but in the wrong hands, could this kind of technology be weaponized?
What must the global community do now to prevent AI leader-simulation from becoming a new form of psychological warfare?
Yuval Noah Harari:
We need a Geneva Convention for AI. Simulating leaders to provoke, deceive, or justify action is digital aggression. Before the tech matures further, we must codify the boundaries—globally.
Arvid Bell:
We’ve already submitted ethical frameworks to international bodies. But I support a broader call: an independent coalition to audit simulation technologies, much like how nuclear treaties inspect arsenals. Trust requires external eyes.
Elon Musk:
I’ll say this plainly—if we don’t set rules, bad actors will abuse this. Imagine simulating Biden and leaking “his plan” to attack. Deepfake diplomacy is a real threat. We need watermarks, audit trails, and ethical AI labs now.
Anne Applebaum:
Governments must be proactive, not reactive. It’s time for democratic alliances to define AI norms. China and Russia are unlikely to hold back on simulations. If the free world doesn’t lead ethically, others will lead manipulatively.
Garry Kasparov:
I’ve faced disinformation firsthand. This technology can enlighten—but it can also deceive. We must remember that peace requires more than innovation. It requires courage to restrain ourselves when we could strike first.
Martha Nussbaum – Final Thoughts:
Simulating world leaders may help us glimpse the future—but without global ethics, we risk rewriting reality itself. The call today is not to abandon peace tech, but to elevate its purpose: toward humility, transparency, and shared responsibility.
Topic 3: The Rise of Peace Tech — Tool for Unity or Control?

Moderator: Thomas Friedman
Speakers: Arvid Bell, Eric Schmidt, Mariana Mazzucato, Ayo Tometi, Marc Andreessen
Format: 53r1 – 5 participants, 3 crucial questions, randomized response order, 1 topic per segment
Thomas Friedman:
We’ve seen war tech dominate the last century—nuclear weapons, cyber attacks, surveillance. But now we have “peace tech”—AI systems like North Star.
Is this truly a shift toward unity, or are we just dressing control in friendlier language?
Marc Andreessen:
Every technology has two faces. The printing press spread freedom and propaganda. Peace tech is no different. It depends on who builds it and why. But yes, the possibility to coordinate global foresight is real—and revolutionary. Let’s not be cynical too soon.
Ayo Tometi:
We have every right to be cautious. When marginalized voices aren’t at the table, “peace” can become a form of control. Whose peace? For whom? If peace tech doesn’t center justice, it risks becoming just another digital colonizer.
Eric Schmidt:
Tech is neither angel nor demon. It’s infrastructure. Peace tech, done right, will democratize insight and speed up diplomatic intelligence. It gives humanity time—time to think, collaborate, negotiate. That’s not control—it’s liberation.
Arvid Bell:
North Star is not here to centralize power. Quite the opposite—it’s designed to decentralize wisdom. By running thousands of simulations, we challenge overconfidence, expose hidden escalation paths, and empower broader coalitions to respond, not just superpowers.
Mariana Mazzucato:
Let’s be honest—tech is often funded to serve power. The peace tech industry must avoid becoming a privatized tool that sells risk predictions to the highest bidder. We need public ownership of peace infrastructure and community-driven models. Otherwise, unity is just branding.
Thomas Friedman:
If peace tech becomes a billion-dollar industry—as some predict—how do we ensure it serves humanity and not just elite interests?
Eric Schmidt:
Governance and open standards. The internet flourished because it had protocols anyone could build on. We should do the same for peace tech—make core systems open source and transparent. Invite innovation, but with global accountability.
Mariana Mazzucato:
And we must rethink value itself. Tech shouldn’t be rewarded just for being scalable—it must be evaluated by social impact. Peace is not a consumer product. It’s a shared public good. Let’s build metrics and funding structures that reflect that.
Arvid Bell:
That’s why we work with NGOs, diplomats, and academics—not just defense contractors. Yes, investment is needed. But the heart of North Star is not profit—it’s foresight. It’s a prototype for a world that plans, not panics.
Marc Andreessen:
Let’s not demonize profit either. Innovation scales when it’s sustainable. What matters is incentive design. If peace tech firms are rewarded for preventing conflict, not exploiting it, you’ll see markets aligned with morality.
Ayo Tometi:
But we’ve seen how quickly good tech gets co-opted. Social media started with connection—then fueled division. We need civil society at the helm. Communities on the ground, not just CEOs in glass towers, must help shape and test these tools.
Thomas Friedman:
Final question. If peace tech does gain global traction, how do we prevent it from becoming a new instrument of surveillance, manipulation, or preemptive war?
Arvid Bell:
We embed transparency into the design. Every simulation in North Star is documented, reviewable, and challengeable. The answer isn’t to avoid AI—it’s to civilize it. You don’t dismantle roads because they might lead to bad places—you build guardrails.
Ayo Tometi:
Transparency is just the first layer. We need consent, inclusion, and access. People being simulated, advised on, or predicted must have a voice. Otherwise, it’s digital empire dressed as humanitarianism.
Eric Schmidt:
Surveillance happens when tools operate in secrecy. I advocate for peace tech commissions at both the national and multilateral levels—transparent and publicly audited. We’ve learned from past mistakes. It’s time to build with wisdom baked in.
Mariana Mazzucato:
We should create international “public digital institutions” that govern peace tech like we do climate agreements. No one country—or company—should own the architecture of global peace. Public investment, public accountability.
Marc Andreessen:
One last point: the worst-case scenario isn’t misuse—it’s disuse. If we build peace tech and bury it in regulation, we miss the chance to save lives. Let’s balance freedom and protection. Don’t slow the future—shape it.
Thomas Friedman – Final Thoughts:
Peace tech is a mirror, not a magic wand. It reflects our intentions, magnifies our choices, and multiplies their consequences. Whether it unites or controls depends on what we build—and why. The next era of diplomacy won’t be won with weapons, but with wisdom.
Topic 4: Predicting Human Emotion in Global Decisions

Moderator: Esther Perel
Speakers: Arvid Bell, Daniel Kahneman, Elie Wiesel, Rana el Kaliouby, Dalai Lama
Format: 53r1 – 5 participants, 3 crucial questions, randomized response order, 1 topic per segment
Esther Perel:
We often think leaders make decisions based on facts—but emotion plays a hidden, powerful role. Regret, pride, fear—these drive history as much as logic.
Can AI ever truly simulate the emotional realities that shape global decision-making?
Elie Wiesel:
Emotion is not a variable—it is a moral universe. Fear of war, yes. But also shame, memory, love for one’s people. These are not simulations. They are sacred. A machine may approximate mood, but never soul.
Rana el Kaliouby:
We’re getting better. AI can now detect micro-expressions, tone, and patterns in speech. But the real challenge is context. A furrowed brow in one culture is worry; in another, it’s concentration. Emotion AI must be culturally literate—not just computationally trained.
Arvid Bell:
North Star does not claim to “feel” like humans. It models behavior under emotional strain. For example, we simulate how sleep deprivation might affect a leader’s risk tolerance. It’s not empathy—it’s probabilistic realism.
Daniel Kahneman:
Emotions are often unconscious. That’s the dilemma. Leaders themselves don’t always know what’s driving them. AI may one day outperform humans in recognizing patterns of irrationality, but interpreting them? That still requires humility—and a human mind.
Dalai Lama:
Emotion must be trained. Even the most powerful leaders are children of their past. Peace begins with awareness of suffering—one’s own and others’. AI may learn patterns, but compassion cannot be computed. It must be cultivated.
Esther Perel:
Let’s go further. If a machine predicts that a leader will act irrationally due to emotional pressure, should policymakers trust the data—or trust the person?
When should emotion-informed simulations influence real-world decisions?
Daniel Kahneman:
Use it as a warning system, not a replacement for dialogue. If AI tells you a leader may act impulsively, engage more—don’t assume more. Use it to slow assumptions, not fuel them.
Arvid Bell:
That’s how North Star is used: to generate questions, not commands. If a simulation suggests heightened aggression under personal stress, the goal is to open back channels, not close them. Emotional intelligence is a response tool, not a trigger.
Dalai Lama:
Trust grows from human connection. If a simulation says someone is angry, speak with them. Do not speak about them. AI should lead us to understanding, not surveillance.
Rana el Kaliouby:
Emotion-aware AI can enhance diplomacy—if paired with empathy. For example, understanding that a leader is likely to prioritize legacy over logic could shift negotiation tone. But use it to inform, not manipulate.
Elie Wiesel:
I worry. If machines begin predicting rage, vengeance, or grief, will we preempt human error—or erase human redemption? People change. A broken man may choose peace. AI cannot see that moment of soul.
Esther Perel:
Final question. Emotions are messy, unpredictable, and sacred.
Should we even try to model emotion in AI—or are we playing with something too human to touch?
Dalai Lama:
Modeling emotion is not wrong—but it must be done with compassion. The purpose should always be peace, not control. Machines can mirror our minds—but only we can awaken our hearts.
Arvid Bell:
Emotion modeling, done ethically, is about care. If it gives leaders more space to breathe before reacting—more time to cool a crisis—then it serves a higher good. But we must never pretend it replaces empathy.
Elie Wiesel:
Respect must be the starting point. Do not reduce pain to patterns. Do not code away conscience. Let AI serve humanity’s better angels—not imitate its worst instincts.
Rana el Kaliouby:
I believe we can teach machines to recognize the dignity behind emotion. But only if we teach them to listen—not just compute. Emotional AI must be accountable to the communities it seeks to understand.
Daniel Kahneman:
Emotion is not chaos—it’s complexity. Modeling it is valuable. But let’s never forget: models don’t weep. They don’t hesitate before launching missiles. That difference should guide every decision about peace tech.
Esther Perel – Final Thoughts:
Peace is not the absence of emotion—it’s how we hold emotion when power is at stake. AI may glimpse the outer patterns of feeling, but only humans carry its burden. Let peace tech serve not just our minds—but our shared humanity.
Topic 5: When AI Gets It Wrong — Who Is Responsible?

Moderator: Catherine D’Ignazio
Speakers: Arvid Bell, Edward Snowden, Kate Crawford, Jack Dorsey, Condoleezza Rice
Format: 53r1 – 5 participants, 3 crucial questions, randomized response order, 1 topic per segment
Catherine D’Ignazio:
We trust peace tech to warn us early, but what happens when it’s wrong? If a simulation predicts escalation and triggers a preemptive strike—or worse, misses a real threat—lives are on the line.
So who is ultimately responsible when peace tech systems fail? The designer, the user, or the code?
Edward Snowden:
Responsibility doesn’t vanish into code. If an AI triggers a fatal decision, the people who built it, deployed it, and chose to follow it are all accountable. You can't outsource moral weight to math.
Kate Crawford:
AI is never neutral. The data, assumptions, and values baked into it are shaped by people. When it fails, we must look upstream—who trained it? Who funded it? Who benefits from its decisions? Responsibility is distributed, but never invisible.
Arvid Bell:
We designed North Star to be a decision-support system, not a decision-maker. But I agree—if it misleads, we are not exempt. That’s why transparency, peer review, and third-party audits are built into every stage. Accountability is part of the code.
Condoleezza Rice:
Let’s be real—leaders make decisions, not machines. If a president acts based on an AI forecast, it’s still that leader’s judgment. Blaming the machine is weak leadership. Tools don’t declare war. People do.
Jack Dorsey:
But tools do shape perception. If AI becomes a dominant narrative engine, it can guide entire nations toward action or inaction. That’s a kind of soft power. So yes, responsibility includes the designers—but also the ecosystems of trust we build around these systems.
Catherine D’Ignazio:
Let’s make it concrete. Imagine a situation: North Star predicts a high chance of peaceful resolution, and a nation stands down—only to be attacked.
What do we say to the public, the victims? Should peace tech be held to the same standards as human intelligence—or higher?
Kate Crawford:
Higher. AI carries the illusion of certainty. “The model said so” creates a false sense of objectivity. So it must be audited more rigorously, and held to stricter standards—because its authority can overwhelm human doubt.
Arvid Bell:
The right response isn’t to punish peace tech—it’s to learn. Our system includes built-in uncertainty scores and recommended human oversight. If North Star were misread or over-trusted, we must fix the culture of usage, not just the software.
Condoleezza Rice:
Let’s remember: bad intelligence isn’t new. The 2003 Iraq war was built on flawed human analysis. What we need are better guardrails—not perfect tools. A president must always have multiple inputs, not one AI-generated forecast driving action.
Jack Dorsey:
We also need public resilience. A society that understands probabilistic thinking won’t be paralyzed by a bad forecast. It’ll demand reflection, not retribution. We should build AI systems and public wisdom in tandem.
Edward Snowden:
If an AI gets it wrong and people die, there must be accountability. Not scapegoats—systems. What were the governance failures? Were dissenting voices ignored? Transparency before, during, and after is non-negotiable. Peace tech must be open-source, inspectable, and challengeable.
Catherine D’Ignazio:
Final question. You’ve all touched on the idea of trust.
How do we build peace tech in a way that earns—and deserves—public trust in the face of inevitable imperfection?
Jack Dorsey:
Start with radical transparency. Don’t just open the code—open the conversations. Let people see the debates, the test failures, the hard calls. When people feel included, they trust more—even when things go wrong.
Arvid Bell:
We’ve launched what we call “ethical simulations”—events where civil society leaders watch a simulation run in real time and ask questions. Trust comes from presence. When peace tech is seen, challenged, and explained, it earns its place.
Edward Snowden:
No trust without control. Decentralize it. Let international bodies, not just governments or corporations, own peace tech tools. When the power is shared, the trust is earned—not demanded.
Condoleezza Rice:
Trust also comes from results. If the system helps avoid one major war, people will take notice. But the system must remain humble—never presented as infallible. That honesty builds more trust than any promise of perfection.
Kate Crawford:
And finally, listen to critics. Invite the skeptics, the ethicists, the human rights lawyers. The systems we trust most are those that are shaped with accountability, not just after failure. Build peace tech with vulnerability, not bravado.
Catherine D’Ignazio – Final Thoughts:
In a world racing toward automated power, peace tech asks us to pause—not just war, but our own certainty. When mistakes happen—and they will—only systems built on openness, humility, and shared ownership will hold. Trust isn’t built on perfection. It’s built on responsibility.
Final Thoughts by Arvid Bell:
After listening to all these brilliant minds, one truth rises above the rest: Peace is not a technological achievement—it’s a moral commitment.
North Star can show us forks in the road, but it cannot choose the path. That task remains ours. The future of peace tech will not be determined by algorithms, but by the wisdom, humility, and courage with which we use them.
Let this be our collective promise: not to build tools that merely warn us of war, but to forge systems that give peace a fighting chance.
Thank you—for imagining responsibly with me.
Short Bios:
Arvid Bell – A former Harvard professor and creator of the peace tech system North Star, Bell specializes in conflict simulation and international negotiation. His work blends diplomacy, systems thinking, and AI innovation to forecast global crises.
Henry Kissinger – A towering figure in 20th-century diplomacy, Kissinger served as U.S. Secretary of State and National Security Advisor. Known for realpolitik, he shaped U.S. foreign policy during the Cold War and remained influential in international affairs until his death in 2023.
Sam Altman – CEO of OpenAI and a leading voice in artificial intelligence, Altman advocates for responsible AI development and global safety frameworks. He is also an entrepreneur and investor with a background in startups and futurism.
Angela Merkel – Chancellor of Germany from 2005 to 2021, Merkel is known for her pragmatic, science-driven leadership and steady guidance through major European crises. She remains a respected voice in global stability and ethics.
H.R. McMaster – A retired U.S. Army lieutenant general and former National Security Advisor, McMaster is a historian and military strategist with deep experience in conflict zones and policy decision-making under pressure.
Elon Musk – A tech entrepreneur and CEO of Tesla and SpaceX, Musk is outspoken on AI risks and planetary security. His visionary thinking often sparks debate on innovation, ethics, and humanity’s long-term survival.
Yuval Noah Harari – A historian and best-selling author of Sapiens and Homo Deus, Harari explores the intersections of technology, power, and the future of humanity. He frequently critiques the misuse of AI and surveillance.
Anne Applebaum – A Pulitzer Prize-winning journalist and historian, Applebaum writes extensively on authoritarianism, propaganda, and democratic resilience. She is a leading voice on disinformation and political ethics.
Garry Kasparov – A former world chess champion and longtime commentator on human-machine competition, Kasparov now advocates for democracy and human rights. He’s an expert on strategy and a vocal critic of authoritarian regimes and technological overreach.
Martha Nussbaum – A moral philosopher and law professor, Nussbaum is known for her work on ethics, justice, and human development. She brings a deeply humane lens to questions of power, responsibility, and global governance.
Eric Schmidt – Former CEO and Chairman of Google, Schmidt now supports AI innovation in national security and diplomacy through philanthropic and strategic initiatives. He advocates for competitive yet ethical AI leadership.
Mariana Mazzucato – An economist and professor, Mazzucato focuses on mission-driven innovation and the public value of technology. She critiques profit-centered tech and calls for inclusive, public-good innovation strategies.
Ayo Tometi – Co-founder of Black Lives Matter and a global human rights activist, Tometi advocates for social justice, inclusion, and equitable tech governance. Her work centers on dignity and accountability in systemic structures.
Marc Andreessen – A venture capitalist and co-creator of Mosaic, the browser that popularized the web, Andreessen is a tech optimist and investor. He supports bold, market-driven innovation and has strong views on scaling technology for global impact.
Daniel Kahneman – A Nobel Prize-winning psychologist and author of Thinking, Fast and Slow, Kahneman pioneered the field of behavioral economics. His research explains how emotions and biases influence high-stakes decisions.
Elie Wiesel – A Holocaust survivor, author, and Nobel Peace Prize laureate, Wiesel dedicated his life to memory, ethics, and human rights. He offered a moral voice for conscience in the face of suffering and inaction.
Rana el Kaliouby – A pioneer in emotional AI, el Kaliouby co-founded Affectiva and leads efforts to bring emotional intelligence into technology. She advocates for ethical human-AI interaction, especially across cultures.
Dalai Lama – Spiritual leader of Tibetan Buddhism, the Dalai Lama promotes compassion, mindfulness, and nonviolence. His teachings on inner peace and human unity remain globally respected across political and spiritual spheres.
Catherine D’Ignazio – A scholar of data justice and co-author of Data Feminism, D’Ignazio explores the social impacts of AI and data systems. She focuses on transparency, ethics, and inclusivity in technology design and governance.
Edward Snowden – A former NSA contractor turned whistleblower, Snowden exposed global surveillance programs and now advocates for privacy, transparency, and decentralized accountability in technology.
Kate Crawford – An AI scholar and author of Atlas of AI, Crawford critically examines the power dynamics behind artificial intelligence. She highlights the environmental, political, and ethical costs of data-driven systems.
Jack Dorsey – Co-founder and former CEO of Twitter and co-founder of Block (formerly Square), Dorsey is a tech entrepreneur advocating for decentralized technologies and ethical digital ecosystems. His work explores the intersection of social platforms and public trust.
Condoleezza Rice – U.S. Secretary of State from 2005 to 2009, Rice brings deep experience in geopolitics, diplomacy, and national security. She is known for her strategic thinking and clear stance on U.S. leadership in global affairs.