

Introduction by Yuval Noah Harari
Throughout history, every technological revolution has reshaped not just how we work, but how we imagine human purpose. The plough transformed tribes into empires, the printing press reshaped religion and politics, and the computer rewired the 20th century. Now, with Project Mercury, we are entering a new chapter: artificial intelligence is learning the language of finance.
At first, it may seem trivial — automating spreadsheets, polishing pitch decks. But as we have seen before, a technology’s small beginnings can unleash massive transformations. If machines now master the apprenticeship of finance, the very way we train, reward, and even trust human judgment will be altered.
The question before us is not only whether AI will make banks faster or more efficient. The deeper question is: what role will remain for human beings in a world where capital is guided by algorithms? Will finance serve society better — or will it widen the chasm between the few and the many?
This conversation is not merely about the death of entry-level analysts or the rise of efficiency. It is about the future of human agency at the heart of global capitalism.
(Note: This is an imaginary conversation, a creative exploration of an idea, and not a real speech or event.)
Topic 1 — AI vs. the Analyst: The Death of Entry-Level Banking?

Moderator: Fareed Zakaria (calm, incisive guide to global and economic issues)
Participants: Jamie Dimon, Sam Altman, Matt Levine, Bill Ackman, Susan Lund
Opening by Fareed Zakaria
“Project Mercury, OpenAI’s secretive initiative, aims to automate much of the work done by junior bankers — financial modeling, pitch decks, even IPO prep. This raises a fundamental question: if machines can do the grunt work, how will tomorrow’s bankers be trained? Let’s explore the consequences of AI replacing the analyst class.”
First Question:
“How does Project Mercury reshape the traditional career ladder in finance?”
Jamie Dimon:
“The banking apprenticeship has always been brutal but essential. You grind through spreadsheets and late nights, and in the process, you absorb the nuances of risk, client demands, and team dynamics. If AI eliminates that first rung, we’ll have smart people entering finance without the scar tissue that makes them good risk managers later. It’s not just about the work — it’s about the discipline and instinct built by repetition.”
Sam Altman:
“I don’t see it as eliminating the ladder but reshaping it. Why should brilliant young graduates waste thousands of hours aligning columns in Excel? If AI does that, analysts can focus on higher-value tasks earlier — strategy, creative deal structures, client interaction. Think of it as compressing the time it takes to get from ‘spreadsheet monkey’ to decision-maker.”
Matt Levine:
“Both are right in their own way. Having done that work, I can say it was miserable but formative. The real question is: does AI remove the drudgery or the apprenticeship itself? Because if junior bankers no longer learn by suffering through messy details, they might end up in the boardroom without knowing where the skeletons hide in the model.”
Bill Ackman:
“For me, the ladder was never about pain — it was about mastery. If you never get your hands dirty in the mechanics of a model, you risk missing subtle errors that can sink billions. But I also think Sam is right: AI can accelerate development. The challenge is how firms redesign training so that young people don’t skip the fundamentals.”
Susan Lund:
“Our research shows that when AI reshapes industries, the winners are firms that build new learning pathways. In consulting, medicine, even law, young professionals can now use AI tools, but they still need structured mentoring. If banks don’t invest in those systems, they’ll end up with senior people who know strategy but not the plumbing of finance.”
Second Question:
“Can AI truly replace the intuition and judgment learned through grunt work?”
Matt Levine:
“No. Models don’t just crunch numbers; they teach you to see the weird quirks — a line item that doesn’t tie, an assumption that feels off. That paranoia becomes intuition. AI is great at scale, but it’s not paranoid — it’s confident. And overconfidence in finance leads to disaster.”
Sam Altman:
“I agree — AI won’t replace intuition, but it can help refine it. When analysts are no longer bogged down by mechanical tasks, they can spend more time testing scenarios, stress-testing assumptions, and learning judgment from seniors. The machine doesn’t teach intuition, but it creates room for humans to cultivate it faster.”
Jamie Dimon:
“I’ve seen too many crises caused by people who relied on models they didn’t fully understand. Intuition isn’t built in a classroom. It’s built by sitting there at 3 AM, realizing your forecast breaks if interest rates move 25 basis points. If AI shields analysts from that pain, they won’t develop the instincts to prevent the next 2008.”
Susan Lund:
“The data supports both views: AI can amplify decision-making, but if human judgment erodes, systemic risks rise. The best outcome is a partnership — AI as co-pilot, humans as interpreters. But firms must intentionally design training programs so people don’t become passive consumers of machine outputs.”
Bill Ackman:
“Let me put it bluntly: intuition is the edge. Everyone will have the same AI models. The winners will be those who can ask the one question the model didn’t consider. If you don’t develop that muscle early, you’ll never catch up. So the industry’s challenge is to make sure AI doesn’t turn bankers into passengers.”
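A brief aside on Dimon’s 25-basis-point example: the sketch below uses invented numbers to show how a deal that looks profitable at one discount rate can turn negative after a quarter-point move. It is a minimal illustration of the kind of sensitivity check he describes, not a model drawn from any real bank.

```python
# Hypothetical illustration: how a 25 bp move in the discount rate
# can flip a marginal deal's net present value. All numbers invented.

def npv(cash_flows, rate):
    """Net present value of year-1..n cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

upfront_cost = 100.5                          # initial investment
cash_flows = [22.0, 22.0, 22.0, 22.0, 22.0]   # projected annual cash flows

for rate in (0.0300, 0.0325):                 # base case vs. +25 basis points
    value = npv(cash_flows, rate) - upfront_cost
    print(f"discount rate {rate:.2%}: NPV = {value:+.2f}")
```

On these hypothetical figures, the net present value swings from about +0.25 to −0.46 on a single 25-basis-point move, exactly the kind of fragility Dimon argues analysts learn to feel by doing the work themselves.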
Third Question:
“What happens to a generation of bankers if the apprenticeship model disappears?”
Susan Lund:
“Historically, when industries automate entry-level work, inequality grows. Some leap ahead, others are left behind. Without a clear ladder, only those with elite connections may get mentorship. That creates a narrower, less diverse talent pipeline for Wall Street.”
Bill Ackman:
“It also risks weakening firms themselves. You can’t have a sustainable industry built only on partners and machines. You need a middle layer of people who’ve learned the trade the hard way. If we lose that, banking becomes brittle.”
Sam Altman:
“I think we’re underestimating human adaptability. Every major innovation — from spreadsheets to Bloomberg terminals — was met with panic. And yet, finance adjusted. Apprenticeship won’t disappear, it will evolve. We’ll see mentorship models that emphasize judgment, storytelling, and strategy rather than endless Excel hours.”
Jamie Dimon:
“Perhaps. But don’t forget: banking is about trust. Clients trust us because we’ve been through the fire. If the next generation skips that fire, trust erodes. Machines can’t replace the credibility that comes from hard-earned experience.”
Matt Levine:
“I’ll add a twist — maybe the best bankers of the future won’t come from banking at all. If AI does the technical work, then skills like persuasion, creativity, or even writing a compelling memo become more important. Imagine the next Goldman partner coming from journalism or philosophy, not from 100-hour weeks.”
Closing by Fareed Zakaria
“What we’ve heard is both caution and optimism. AI may free young bankers from drudgery, but it risks stripping away the very experiences that form intuition, discipline, and trust. The future of finance may not be a choice between humans and machines, but a challenge to reinvent how we teach judgment in a world where the grunt work is gone.”
Topic 2 — The Profit Paradox: Efficiency at What Cost?

Moderator: Gillian Tett (Financial Times columnist and anthropologist, known for connecting finance to broader society)
Participants: Ray Dalio, Satya Nadella, Raghuram Rajan, Shoshana Zuboff, Anne Wojcicki
Opening by Gillian Tett
“Project Mercury promises enormous efficiency — automating financial modeling, accelerating deal execution, and cutting costs. But efficiency is never neutral. It reshapes incentives, redistributes power, and can create blind spots. Tonight we’ll ask: is the pursuit of efficiency worth the risks it brings?”
First Question:
“Will automation make banking more profitable, or just cut labor costs temporarily?”
Ray Dalio:
“Profit is never about efficiency alone. It’s about productivity gains being reinvested into innovation. If Project Mercury just saves costs without expanding value creation, profits will plateau. The danger is firms pocket the savings without building long-term resilience. Then you get fragile systems, not robust ones.”
Anne Wojcicki:
“I see parallels in biotech. AI accelerates lab work, yes, but the real breakthrough comes when it uncovers patterns we couldn’t see before. If banks treat AI only as a headcount reducer, they’ll miss its potential to spot risks and opportunities earlier. That’s where real profitability lies.”
Satya Nadella:
“Technology must be seen as a growth engine, not just a cost cutter. When Microsoft embraced cloud AI, it wasn’t to shrink staff — it was to empower developers and open new markets. If banks see AI as replacement rather than augmentation, they’re aiming too low. Profitability will come from creating new products and services powered by AI.”
Raghuram Rajan:
“I’d caution that in finance, unlike tech, productivity gains often become races to the bottom. Banks that cut costs fastest can pressure others to follow, creating systemic fragility. If AI makes every bank’s models identical and faster, profits may converge — and the industry could be left competing on margins rather than innovation.”
Shoshana Zuboff:
“And let’s not ignore power. Efficiency in the digital age often means surveillance. If banks are driven to scrape ever more data to feed AI, profits may come at the expense of trust. Efficiency is seductive, but if it erodes social legitimacy, long-term profitability will suffer.”
Second Question:
“Could efficiency gains reduce innovation and risk management quality?”
Satya Nadella:
“Efficiency and innovation aren’t opposites. Done right, AI gives teams more bandwidth to explore ideas. The risk isn’t the tool; it’s the culture. If management says ‘cut costs’ and nothing else, then yes, innovation suffers. But if AI becomes an assistant for human creativity, innovation flourishes.”
Raghuram Rajan:
“Innovation thrives on friction. Paradoxically, when processes are too smooth, people stop questioning assumptions. Risk management also depends on skepticism. If AI smooths away the grunt work, it may smooth away the very doubts that prevent disaster. Efficiency can sterilize the culture of inquiry.”
Shoshana Zuboff:
“That’s precisely the trap of surveillance capitalism. The narrative is ‘more efficient, more accurate,’ but it blinds us to the human dimension of risk. Models may look flawless until they’re tested against real-world shocks. Efficiency becomes a mask for vulnerability.”
Ray Dalio:
“I’ve lived through cycles where efficiency was celebrated — only to collapse into crisis because risk was underestimated. Remember Long-Term Capital Management in the ’90s? Brilliant models, maximum efficiency, near disaster. Innovation is not just about speed — it’s about resilience. AI must be tested for failure, not just celebrated for efficiency.”
Anne Wojcicki:
“I’d argue the opposite: AI can make us more innovative in risk management if used wisely. In genomics, algorithms uncover rare but critical patterns humans miss. Why not finance? The danger isn’t efficiency itself — it’s lack of diversity in the data and lack of humility in the people using it.”
Third Question:
“Are we heading toward ‘too much AI’ at the expense of human oversight?”
Shoshana Zuboff:
“Absolutely. When systems become opaque, oversight weakens. Workers defer to the machine, managers defer to the metrics, and regulators defer to industry experts. That’s how we drift into what I’ve called the ‘age of surveillance capitalism.’ Too much AI means too little democracy in decision-making.”
Ray Dalio:
“We must remember the principle of balance. AI should complement human judgment, not dominate it. If leaders start trusting machines over seasoned intuition, we risk a crisis born not of human error, but of human abdication. Oversight must evolve to remain as powerful as the technology it governs.”
Anne Wojcicki:
“From a scientific perspective, I don’t fear ‘too much AI’ — I fear too little integration with human values. If AI in finance is tuned only for efficiency, then oversight becomes a checklist. But if we design it to serve human goals — trust, fairness, opportunity — then oversight becomes part of the system itself.”
Satya Nadella:
“Too much AI is not the problem — too little human accountability is. Every model should be explainable, auditable, and contestable. If we lose sight of that, we’re no better than black-box trading algorithms that amplify volatility. Humans must remain firmly in the loop.”
Raghuram Rajan:
“History tells us that financial markets always chase efficiency until they hit a wall. The next crash will likely involve AI models trusted too much, tested too little. Oversight must be proactive, not reactive. Otherwise, we will again learn the hard way.”
Closing by Gillian Tett
“Our discussion has revealed a paradox: efficiency promises profits, but can also hollow out resilience and innovation if pursued blindly. AI may cut costs, but true profitability lies in balancing speed with skepticism, growth with governance, and efficiency with ethics. The question is not whether AI will dominate finance, but whether finance will remain human enough to steer it wisely.”
Topic 3 — Regulation in the Age of AI Finance

Moderator: Christiane Amanpour (seasoned journalist known for sharp, probing questions)
Participants: Gary Gensler, Christine Lagarde, Timnit Gebru, Michael Lewis, Sheila Bair
Opening by Christiane Amanpour
“Project Mercury points to a future where AI builds the models that drive markets, shape IPOs, and influence risk. But if algorithms begin to make financial decisions at scale, who is responsible when things go wrong? Tonight we’ll explore whether our regulatory systems can keep pace with AI-driven finance.”
First Question:
“Should AI-built financial models be regulated differently than human models?”
Gary Gensler:
“Financial models are already subject to scrutiny, whether built by analysts or software. But AI introduces opacity. If a model cannot be explained, it cannot be trusted. My view is simple: if a firm uses AI in finance, that model must be explainable and auditable. Otherwise, accountability disappears.”
Michael Lewis:
“I agree in spirit, but I’d note that humans have always hidden behind models they barely understood. Look at the mortgage-backed securities in 2008. AI doesn’t change the core problem — it amplifies it. The difference is speed. AI can make the same mistakes faster, at scale.”
Christine Lagarde:
“At the European Central Bank, we view AI models as both a promise and a risk. They may reduce bias, but they can also encode new forms of bias. Regulation cannot treat them as ordinary models — we must insist on transparency, data governance, and ethical standards. Otherwise, confidence in financial stability erodes.”
Timnit Gebru:
“The very idea that AI can be regulated like a spreadsheet is dangerous. These are not static models — they are dynamic, constantly updating systems. They need oversight as living entities. Otherwise, hidden biases, discriminatory outcomes, and systemic risks multiply under the guise of objectivity.”
Sheila Bair:
“I’d add that from a crisis management perspective, complexity is the enemy. In 2008, regulators didn’t even understand the exposures on bank balance sheets. If AI creates another layer of inscrutability, we’ll be flying blind when the next downturn hits. Yes, they must be regulated differently — more rigorously.”
Second Question:
“How can regulators keep up with algorithms evolving faster than rules?”
Christine Lagarde:
“This is the heart of the matter. Rules are slow, algorithms are fast. One solution is to shift from prescriptive regulation to principles-based regulation. Instead of chasing every technical update, we enforce broad standards: fairness, accountability, transparency. Then firms must prove compliance dynamically.”
Gary Gensler:
“I believe regulators must embrace technology themselves. At the SEC, we’re investing in AI to detect anomalies, monitor trading patterns, and stress-test models. To regulate AI, regulators must use AI. Otherwise, we’ll always lag behind the industry.”
Michael Lewis:
“I’ll be blunt: regulators will always be behind. Wall Street hires PhDs to exploit loopholes faster than governments can close them. The only counterweight is a cultural one — whistleblowers, investigative journalists, and skeptical outsiders. Rules alone won’t save us.”
Timnit Gebru:
“I’d caution against over-reliance on regulators playing technological catch-up. We also need independent watchdogs — civil society, academia, labor unions — reviewing these systems. When finance self-polices, it hides risks until it’s too late. True accountability must come from multiple centers of oversight.”
Sheila Bair:
“In my FDIC years, I learned that crisis hits when complexity outpaces oversight. Regulators must keep it simple: if a bank cannot explain its AI system in plain English, it shouldn’t use it. This principle forces firms to slow down until regulators and the public catch up.”
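A brief aside on Gensler’s point about regulators adopting AI themselves: the sketch below, on invented data, shows the simplest version of the idea, a rolling statistical screen that flags trading volumes far outside their recent range. Real market-surveillance systems are vastly more sophisticated; this is only a toy.

```python
# Hypothetical illustration: flag trading days whose volume sits far
# outside its recent rolling average. Data and threshold are invented.
import statistics

volumes = [1_000, 1_050, 980, 1_020, 990, 1_010, 5_200, 1_000, 970]
WINDOW, THRESHOLD = 5, 3.0  # look-back length and z-score cutoff

for day in range(WINDOW, len(volumes)):
    window = volumes[day - WINDOW:day]
    mean = statistics.mean(window)
    spread = statistics.stdev(window)
    z = (volumes[day] - mean) / spread
    if abs(z) > THRESHOLD:
        print(f"day {day}: volume {volumes[day]:,} (z = {z:+.1f}) flagged")
```

On this toy series only the 5,200-volume day is flagged, and note that the spike then inflates the rolling statistics for the days that follow, a small example of why such screens still need the human skepticism the panel keeps returning to.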
Third Question:
“Who is accountable if an AI-driven model triggers a financial crisis?”
Timnit Gebru:
“The myth of AI is that responsibility dissolves into the machine. But accountability must remain human. If a model causes discriminatory lending or systemic collapse, the firm deploying it is responsible. We cannot allow ‘the AI did it’ to become a defense.”
Michael Lewis:
“History shows us that when disasters strike, accountability gets blurred. Traders blame managers, managers blame regulators, regulators blame markets. With AI, that fog thickens. Unless we demand personal liability for executives, accountability will be endlessly deferred.”
Gary Gensler:
“The law is clear: firms are accountable for the systems they use. If an AI model triggers misconduct, the liability is no different than if a human analyst had made the error. My concern is making sure the public knows this, so firms cannot hide behind technological complexity.”
Christine Lagarde:
“I would expand this to the global level. Finance is not national, it is transnational. If an AI model designed in Silicon Valley destabilizes a bank in Europe, who answers? We need international frameworks for accountability, not fragmented national responses.”
Sheila Bair:
“I’d go further: accountability must be enforced before crisis, not after. That means stress tests, contingency planning, and clear executive responsibility. If CEOs know their careers are on the line, they will not outsource critical risk decisions to machines without human oversight.”
Closing by Christiane Amanpour
“Tonight’s debate shows a tension between speed and stability, innovation and accountability. AI in finance may offer unparalleled efficiency, but it risks obscuring responsibility at precisely the moment when clarity is most needed. If regulation is to keep pace, it must be bold, global, and relentless in keeping humans — not machines — accountable for the future of finance.”
Topic 4 — Democratizing Finance or Concentrating Power?

Moderator: Kara Swisher (sharp, provocative tech journalist who presses leaders on accountability)
Participants: Elon Musk, Cathie Wood, Yanis Varoufakis, Jack Dorsey, Alicia Garza
Opening by Kara Swisher
“Project Mercury and AI-driven finance raise a core question: does automation level the playing field, or does it further entrench power in the hands of the few? Will AI democratize markets, or make them even less accessible? Let’s dig in.”
First Question:
“Could AI level the playing field for smaller firms and startups, or entrench big banks further?”
Cathie Wood:
“I’m optimistic. AI gives smaller firms access to analytical power they could never afford before. Just as cloud computing democratized data, AI can democratize financial modeling. The key is access — if these tools are open, startups can compete with Wall Street giants.”
Elon Musk:
“Sure, but let’s be honest: big banks will always have more data, more engineers, more GPUs. Efficiency won’t necessarily trickle down. Unless AI is decentralized, the rich will just get richer. That’s why I keep pushing for open-source approaches — otherwise you’re just reinforcing monopolies.”
Yanis Varoufakis:
“The language of democratization is seductive, but history says otherwise. Technology in finance has almost always concentrated power — from high-frequency trading to derivatives. Without deliberate redistribution, AI will serve the elite. It won’t empower the many, it will enslave them to algorithms.”
Jack Dorsey:
“I think decentralization is the answer. Just as Bitcoin and blockchain allowed individuals to bypass banks, AI could empower communities if built on open networks. But if these systems are controlled by corporate silos, we’re just replicating old hierarchies with shinier tech.”
Alicia Garza:
“And let’s not forget: democratization isn’t just about access, it’s about equity. If AI tools are accessible only to those with elite education, broadband access, and capital, then marginalized communities remain excluded. Leveling the playing field requires intentional design, not just efficiency.”
Second Question:
“Will access to AI modeling tools democratize investment knowledge for the public?”
Elon Musk:
“I actually think yes — eventually. Imagine retail investors having AI copilots that analyze companies as well as Wall Street analysts. That’s coming. But there’s a catch: if people blindly follow AI advice without understanding it, democratization becomes manipulation.”
Cathie Wood:
“I see it as a huge opportunity. When ordinary investors can model scenarios for Tesla or biotech startups using AI, we’ll see broader participation in innovation. Democratizing knowledge leads to democratizing capital — and that’s how we create inclusive growth.”
Yanis Varoufakis:
“Democratization of knowledge is meaningless if ownership remains concentrated. You can give the public access to tools, but if the structural inequality of finance persists — debt traps, wage stagnation, housing costs — then AI will not liberate, it will pacify. Knowledge without power is decoration.”
Jack Dorsey:
“That’s why transparency matters. If people can see and understand the algorithms, they can use them to make real choices. If it’s a black box, then the public is just trading one set of overlords for another. AI must be transparent, otherwise democratization is just branding.”
Alicia Garza:
“I want to emphasize that democratizing financial knowledge is not just about individuals making stock picks. It’s about communities being empowered to plan budgets, invest locally, and resist predatory lending. If AI doesn’t reach the grassroots, it won’t democratize anything that matters.”
Third Question:
“Or will Project Mercury become a ‘closed club’ advantage for elites?”
Cathie Wood:
“It depends on the rollout. If access is restricted to elite banks and hedge funds, yes, it’s a closed club. But if OpenAI and others license these systems widely, we could see a genuine shift toward financial inclusion. It’s a design choice, not a destiny.”
Elon Musk:
“Let’s be blunt: most of these initiatives are designed for elites. If you pay $150 an hour to train an AI, you’re not building tools for retail investors in Omaha. You’re building toys for billionaires in Manhattan. Unless there’s public pressure, Mercury will serve the few.”
Yanis Varoufakis:
“I’ll go further. Mercury is not just about finance — it’s about control of knowledge. If financial models become proprietary AI products, then the very logic of capitalism is privatized. That’s the ultimate closed club: the privatization of reason itself.”
Alicia Garza:
“And that closed club doesn’t just exclude average investors — it deepens racial and gender inequities in finance. If marginalized groups are locked out of the AI revolution, we’re repeating centuries of exclusion with new tools. Democratization means confronting structural inequity head-on.”
Jack Dorsey:
“That’s why open networks matter. If Mercury stays closed, the future is just Goldman Sachs with better software. But if it’s open, it could be a force multiplier for financial justice. The choice isn’t technological — it’s ethical.”
Closing by Kara Swisher
“We’ve heard a clash of visions. On one side, AI as an empowerment tool, putting Wall Street’s capabilities in everyone’s pocket. On the other, AI as a new gatekeeper, reinforcing inequality and elitism. Democratization is not guaranteed — it will depend on whether leaders design AI for inclusion or control. Project Mercury is not just a question of finance, but of power: who holds it, and who gets left out.”
Topic 5 — The Future of Human Work in Finance

Moderator: Krista Tippett (host of On Being, known for thoughtful, reflective conversations on meaning and human values)
Participants: Daron Acemoglu, Erica Groshen, Andrew Yang, Thomas Piketty, Marc Andreessen
Opening by Krista Tippett
“As AI systems like Project Mercury automate the routine tasks of finance, we’re left with a deeper question: what remains uniquely human? If the analyst’s role is eroded, where will future generations find purpose, meaning, and training in the world of finance? Let’s explore the human side of this transformation.”
First Question:
“If AI takes over entry-level tasks, what unique human skills remain valuable?”
Marc Andreessen:
“Creativity, persuasion, narrative. Finance isn’t just numbers — it’s storytelling. Deals happen because someone convinces someone else to believe. AI can structure a model, but it can’t yet sit across from a CEO and earn trust. That’s a human skill, and it’s irreplaceable.”
Daron Acemoglu:
“I agree in part, but we must not underestimate the subtle expertise lost when routine work is automated. Human skills like skepticism, contextual judgment, and ethical reasoning are vital. The danger is that by stripping away the entry-level grind, we also strip away the training ground for these skills.”
Erica Groshen:
“From a workforce perspective, the most valuable skills will be adaptability and collaboration. AI reshuffles roles constantly. Humans who can bridge disciplines, work across teams, and integrate technology with social intelligence will thrive.”
Andrew Yang:
“I’d say empathy is the real edge. Finance has become increasingly abstract, detached from human needs. AI may handle the mechanics, but people will still crave relationships. A banker who can understand a client’s hopes and fears — that’s irreplaceable.”
Thomas Piketty:
“And let us not forget: the most valuable human skill may be courage. The courage to challenge systems, to question inequality, to design finance that serves society rather than just capital. AI will not give us moral courage. That remains a human responsibility.”
Second Question:
“Can finance reinvent itself to create new ‘learning ladders’ for young talent?”
Erica Groshen:
“Yes, but it requires deliberate investment. Apprenticeship models must evolve. Instead of endless Excel hours, young professionals could rotate through mentorship programs, scenario testing, or client simulations. The ladder isn’t gone — it just needs new rungs.”
Andrew Yang:
“I agree, but I’m worried that Wall Street won’t do this on its own. The incentive is always to cut costs. We may need new institutions — public-private training programs, AI literacy initiatives — to create ladders outside the old firms. Otherwise, we risk a lost generation.”
Marc Andreessen:
“I think the ladders will emerge naturally. Ambitious young people will use AI to leapfrog ahead, creating startups, fintech platforms, or new investment vehicles. They won’t wait for Wall Street to train them — they’ll train themselves. The future ladder is entrepreneurial, not institutional.”
Thomas Piketty:
“I disagree. Self-training is not accessible to all. If learning ladders depend on personal networks or capital, inequality will deepen. We must create collective ladders — educational systems, cooperative institutions — to ensure opportunity is shared, not monopolized.”
Daron Acemoglu:
“And critically, we need ladders that cultivate skepticism, not just speed. If AI makes finance more efficient but also more fragile, young professionals must be trained to ask hard questions about risk and ethics. That kind of learning will not happen by accident.”
Third Question:
“Is the real opportunity in re-humanizing client relationships rather than number-crunching?”
Andrew Yang:
“Absolutely. The analyst of the future won’t be judged on formatting PowerPoint slides but on their ability to connect, empathize, and guide. Finance has to return to its human roots — serving families, entrepreneurs, communities. That’s where AI can never compete.”
Marc Andreessen:
“Yes, but let’s be careful. AI will handle more and more quantitative tasks, which frees humans to focus on persuasion and creativity. But don’t confuse ‘soft skills’ with softness. Building trust at scale, navigating global complexity — these are hard human challenges, and the winners will master them.”
Daron Acemoglu:
“I worry that re-humanization will be rhetoric unless we change incentives. As long as firms reward short-term profits over long-term relationships, AI will be used to maximize efficiency, not humanity. Re-humanization requires rethinking the goals of finance itself.”
Thomas Piketty:
“I’d go further — the opportunity is not just re-humanizing relationships, but re-politicizing finance. If finance continues to concentrate wealth, humanizing it will mean little. We need finance that redistributes opportunity, not just polishes client relationships.”
Erica Groshen:
“I’d add a pragmatic point: yes, human connection is the opportunity, but it must be taught and valued. Firms must design career paths that reward mentoring, communication, and ethics as much as technical brilliance. Otherwise, we risk producing a generation that is AI-fluent but humanly illiterate.”
Closing by Krista Tippett
“Our conversation reveals a crossroads. AI is not just a tool that changes tasks; it’s a force that reshapes values. If routine work disappears, the unique work left for humans may be empathy, creativity, courage, and ethics. But these don’t emerge automatically — they must be cultivated. The future of finance is not about whether AI replaces humans, but whether humans rise to meet what AI cannot.”
Final Thoughts by Kristalina Georgieva

What we have heard in this discussion is the story of both promise and peril. AI in finance can indeed make systems more efficient, more transparent, and even more accessible. But it can also deepen divides, concentrate power, and create fragilities we do not yet understand.
As Managing Director of the IMF, I have seen how financial tools can either lift millions out of poverty or drive them deeper into debt. The difference lies not in the technology itself, but in how humanity chooses to govern it.
Project Mercury is not just about bankers and algorithms — it is about people. It is about the young graduate wondering if there is still a place for her on Wall Street. It is about the small business owner who dreams of fairer access to capital. It is about the societies that depend on trust, stability, and shared prosperity.
The future of finance must remain human-centered. We must ask not only what AI can do, but what it should do. And we must build ladders, not walls — ladders of opportunity for the next generation, ladders of inclusion for the underserved, ladders of trust between institutions and the public.
If we succeed, Project Mercury will not just automate models — it will model a future where technology serves humanity, not the other way around.
Short Bios:
Jamie Dimon
Chairman and CEO of JPMorgan Chase, Dimon is one of the most influential voices in global banking, often weighing in on markets, regulation, and the future of finance.
Sam Altman
CEO of OpenAI, Altman is a tech entrepreneur and investor leading the development of advanced artificial intelligence systems, with a focus on their broad economic and societal impact.
Matt Levine
A former Goldman Sachs investment banker turned Bloomberg columnist, Levine is known for his witty, insightful analysis of Wall Street, financial culture, and regulation.
Bill Ackman
Founder and CEO of Pershing Square Capital Management, Ackman is a billionaire hedge fund manager known for his bold bets, activist strategies, and outspoken commentary on markets.
Susan Lund
An economist and former McKinsey partner who led research at the McKinsey Global Institute, Lund specializes in global labor markets, productivity, and the impact of technology on jobs and economic growth.
Ray Dalio
Founder of Bridgewater Associates, the world’s largest hedge fund, Dalio is a billionaire investor and author of Principles, known for his systems-based approach to markets and economics.
Satya Nadella
CEO of Microsoft, Nadella has overseen the company’s transformation into a cloud and AI powerhouse, emphasizing responsible technology adoption and enterprise innovation.
Raghuram Rajan
Former Governor of the Reserve Bank of India and ex-IMF Chief Economist, Rajan is a leading academic and policy thinker on financial crises, development, and global markets.
Shoshana Zuboff
Harvard Business School professor emerita and author of The Age of Surveillance Capitalism, Zuboff is a leading critic of how big tech reshapes society and economics.
Anne Wojcicki
CEO and co-founder of 23andMe, Wojcicki is a biotech entrepreneur pioneering consumer genomics and navigating the balance of data, science, and business innovation.
Gary Gensler
Chair of the U.S. Securities and Exchange Commission (SEC), Gensler is a regulator with deep experience in financial markets, digital assets, and AI oversight.
Christine Lagarde
President of the European Central Bank and former head of the IMF, Lagarde is a global leader in finance and monetary policy, known for championing stability and international cooperation.
Timnit Gebru
A computer scientist and AI ethics researcher, Gebru is the founder of the Distributed AI Research Institute (DAIR) and a leading voice on bias, accountability, and fairness in AI systems.
Michael Lewis
Best-selling author of The Big Short, Flash Boys, and Moneyball, Lewis is a celebrated storyteller who exposes the human and systemic flaws in financial and economic systems.
Sheila Bair
Former Chair of the Federal Deposit Insurance Corporation (FDIC), Bair guided the U.S. through the 2008 financial crisis and is a strong advocate for financial stability and consumer protection.
Elon Musk
CEO of Tesla and SpaceX and owner of X (formerly Twitter), Musk is one of the world’s most prominent tech entrepreneurs, known for bold innovation and controversial views on AI, finance, and society.
Cathie Wood
Founder and CEO of ARK Invest, Wood is an influential investor known for backing disruptive technologies and advocating for democratized access to innovation-led investing.
Yanis Varoufakis
Economist, author, and former Greek finance minister, Varoufakis is a critic of global capitalism, debt systems, and economic inequality, bringing a radical perspective to finance debates.
Jack Dorsey
Co-founder of Twitter and Block (Square), Dorsey is a tech entrepreneur focused on decentralization, digital payments, and financial access through blockchain and Bitcoin.
Alicia Garza
Co-founder of Black Lives Matter and Principal at the Black Futures Lab, Garza is an activist and organizer advocating for equity, inclusion, and empowerment in politics and economics.
Daron Acemoglu
MIT economist and co-author of Why Nations Fail, Acemoglu researches technology, inequality, and political economy, warning of the dangers of unregulated AI on jobs and society.
Erica Groshen
Former Commissioner of the U.S. Bureau of Labor Statistics, Groshen is a labor economist specializing in employment trends, automation, and workforce development.
Andrew Yang
Entrepreneur, former U.S. presidential candidate, and author of The War on Normal People, Yang advocates for universal basic income and policies to address automation-driven job loss.
Thomas Piketty
French economist and author of Capital in the Twenty-First Century, Piketty is a global thought leader on wealth inequality, taxation, and the long-term dynamics of capital accumulation.
Marc Andreessen
Co-founder of Andreessen Horowitz, Andreessen is a venture capitalist and early internet pioneer, championing bold tech-driven disruption and writing extensively on the future of AI.