This lecture was delivered at the London School of Economics in December 2024. For a video recording of the lecture, view here.
I’ve got Heraclitus on my mind today. And we will return to Heraclitus, even if we can never return to exactly the same point.
When I first visited LSE, I was a master’s student at Oxford, on a possible path to becoming a philosophy professor. It was the early 90s. Around then, the first text message was sent. Intel released its Pentium processors. And Tim Berners-Lee introduced the World Wide Web to the public.
Today, here with you, I’ve returned to LSE as a technologist, investor and founder. Globally, about 26 billion texts are sent daily. NVIDIA’s H100 GPUs are around 600,000 times more powerful than Pentium processors. And about 70% of the global population uses the internet.
My point is not just to time-capsule technological progress and my professional development, but to put on equal footing how different the world was and how differently I saw it.
Heraclitus is right. One cannot step into the same river twice. Not only because the river is different, but because we, ourselves, are also changed.
As humans, we tend to treat our understanding of the world as static. Or at least, we recognize change in the world more readily than change in ourselves and in our understanding of it.
This gets to the heart of a question that people have grappled with since before human language. And that philosophers, from the pre-Socratics to Popper and beyond, have worked to answer:
How do we come to understand the world?
It’s perhaps our most fundamental and human question. As many of you know, we have empiricists who root knowledge in our sensory experience, rationalists who derive it from reason and innate ideas, and idealists who argue it’s mediated by the mind. The list goes on; our schools of thought seem almost more varied than our thoughts themselves.
As a technologist and humanist, here’s what I’d add and emphasize: humans often overestimate how much our understanding of the world comes from pure reason and perception, and underestimate how much it’s mediated by technology. Not only because of the role and power of technology itself, but because of who we are fundamentally.
We are more than Homo sapiens. If we merely lived up to this scientific classification and just sat around thinking all day, we’d be much different creatures than we actually are.
We humans are Homo techne: humans as toolmakers and tool users.
Technology can expand our vision. Quite literally, telescopes and microscopes help us see farther and deeper than we could otherwise. It transports us, whether by airplane, book, or video
call. And it extends our life, such as through medicine or gene therapy. Things many of us have forgotten are technology—language, currency, wheels—underpin everyday life.
This is as important to who we are as to who we’ll become. We evolve with and through our tools. We shape our tools. Then our tools shape us. In that exchange, our epistemology and metaphysics also evolve—our understanding of the world updates through technology.
And that phenomenon is never more true than with AI. What I have found bewildering is that this moment for AI—this current AI era—is going to be as important an evolution in our epistemology and metaphysics as any other technology we’ve encountered to date.
Why? Well, consider just how central humans are to how an AI model is built. Our corpus of online human knowledge is ingested to build foundational models. Reinforcement training—or the ways models make decisions and craft outputs—is guided by interactions with us. What we prompt AI to generate is consumed by us, and blended back into humanity’s digital canon.
This shift now challenges how we have understood the world for millennia: through discussions with humans. In essence, one human says something they believe to be true about the world, and it garners the agreement of fellow humans. Now with AI, we have a super competent extension of people and application of our knowledge. Moreover, AI is not a mere static tool—we continue to improve how it learns and generates. How will we collectively use it to shape our understanding of the world?
This is a question for all of us. Because, of all our technologies, these foundational AI models might be the best technological approximation of us as a collective: the good, the bad, and the ugly. The full range of us, with all our commonalities and differences—especially as access to AI continues to grow for more people. This makes discussions of how AI benefits society and humanity much more interesting, but also much more complicated.
And that’s what I want to focus on today: what AI means for society. On this topic, there are three important questions that I hope to address:
- Where does the value of technology stem from?
- What might AI disrupt within society?
- How might it change dynamics between societies?
Let’s start with the first question about a generational technology like AI—and the origin of its value. This may be a good place to start, because it helps us gauge whether or not we have agency in defining its value, or whether it’s an innately good or bad tool.
There are primarily two schools of thought. The first holds that technology is value-neutral: it isn’t inherently good or ill—it’s about how people use it. The second holds that technology is value-laden—that it is inherently good or bad.
I believe it’s door three: a blend of both. Those who believe technology is value-neutral tend to overlook the ethical complexities inherent in technological development and deployment.
Consider cigarettes or the atomic bomb: both are products of human ingenuity, but their societal effects raise profound moral questions. Cigarettes, despite their economic contributions, have fueled a global public health crisis, while the atomic bomb fundamentally altered the fabric of geopolitics, introducing existential risk.
Let’s for a second imagine if Nazi Germany had developed the atomic bomb before the United States during World War II. The shape and deployment of that technology in a fascist regime would have had catastrophic consequences, likely remaking the post-war global order in ways antithetical to democracy and human rights. In this context, the value-neutral stance collapses.
But the value-laden approach isn’t quite right either. Advocates of this position may see AI as a panacea for humanity’s greatest challenges or, conversely, as a harbinger of dystopia. Yet this perspective is equally flawed. Technology is not a raw substance with immutable intrinsic properties of morality or utility; it can and must be shaped, refined, and integrated with human values to achieve specific outcomes.
Alfred Nobel’s dynamite is neither inherently constructive nor destructive. Humans use it when making tunnels and buildings, and when fighting wars. And this year, under Nobel’s name, AI pioneers have won his acclaimed prize for advancements in chemistry and physics. As wielders of AI, we determine—and can reward—the impact we want.
So, is technology value-neutral or value-laden? Neither. The truth is it’s value-sculpted. It has its initial and inertial properties as a technology, which we humans then whittle and carve. We shape it—not like clay, but like marble. It takes muscle, intention, and repetition. And we must respect and acknowledge its properties, while hewing it to our purpose.
And our sequence in sculpting matters, too. To start, taking a value-neutral approach—rooted in scientific rigor and factual verification—is essential in the early stages of technological development. We must examine the logical structures underlying AI, but rigorously revise our hypotheses and approach once it is in the hands of people. Iterative deployment—or inviting the public to participate in the development process for AI—accelerates this learning. This overall approach mirrors the scientific method: systematic, objective, and methodical.
However, as AI integrates into society, the value-laden perspective becomes indispensable. We must ask: How can AI be shaped to prioritize human well-being, both now and in the future?
How can it amplify our collective capabilities while minimizing harm? For instance, AI in healthcare should aim not only to diagnose diseases more accurately but to ensure equitable access to these advancements, irrespective of socioeconomic status.
The geopolitical implications of getting this sequence of value-neutral and value-laden approaches right are significant. The societies that build and deploy transformative technologies like AI wield considerable influence over the global order. This underscores the geopolitical importance of AI:
it is not merely a tool but a driver of power dynamics. Just as the printing press upended the religious and political structures of early modern Europe, AI has the potential to reshape economies, governance, and international relations.
History reminds us, however, that transitions catalyzed by transformative technologies are rarely smooth. Again, the printing press, while enabling unprecedented dissemination of knowledge, also precipitated decades of religious conflict. AI, too, will bring disruption. Yet, just as the printing press ultimately became indispensable, AI can create a more interconnected and empowered global society—if we manage its transition wisely.
Let me start by saying that this transition will not be easy. And it’s good that we are concerned about it, as it will be painful in parts and places. Humans as a species are historically bad at transitions—but we can navigate better knowing that. Transitions are both hard and important for societies each time we integrate a new technology. And transformative technology eventually becomes indispensable to humans. This transition to our AI future will happen regardless of our planning and coordination. But we should be thoughtful and intelligent about it.
To best navigate this disruption, we must advance the positive use cases of AI and foster smoother integration into society. This requires moving beyond binary debates about AI’s inherent value and focusing instead on our agency with it.
If we harness AI correctly and collectively, society will experience superagency. That’s what happens when a critical mass of individuals, personally empowered by AI, begin to operate at levels that compound through society.
In other words, it’s not just that some people are becoming more informed and better equipped thanks to AI. Everyone is, even those who rarely or never use AI directly. You may not be a doctor, but suddenly your doctor can diagnose seemingly unrelated symptoms with AI precision. You might not repair cars, but your mechanic’s AI agent can now instantly diagnose the cause of that weird sound when your car accelerates. Even ATMs, parking meters, and vending machines become multilingual geniuses that understand and adjust to your preferences.
That’s the world of superagency. These enhancements and enrichments across professions, industries and sectors don’t just add up for society—they transform it. This evolution is not only inevitable, but already underway. And we have the opportunity to make this as much—or more—about human amplification as human replacement. We can design with superagency in mind—rather than chase it from behind—as it arises in society.
As the world of superagency starts to more fully emerge, we’ll hear the following question asked, repeatedly and at an increasing pitch: “What gives you the right to disrupt society?” The query often carries a sharp edge of skepticism, even indignation. After all, no one voted to invite this wave of technological upheaval.
Yet disruption does not spring from a vacuum. It is rooted in foundational rights that underpin free societies: the right to build a company, to develop a product, to offer that product to the public, and the public’s right to engage with it. These rights, while essential, do not create disruption on their own. Disruption occurs at the intersection of supply and demand, and at the inflection point of product-market fit. A technology disrupts when it resonates with people: when they adopt it, pay for it, and incorporate it into their lives. Without demand, even the most ambitious innovation falters.
As I speak, some of you may be sensing technological determinism or the mighty wheel of capitalism—but I assure you that we have a choice. And while the choice to engage with AI as an individual can be a personal preference, the choice not to engage with AI as a society is consequential.
Societies that resist participation merely delay their integration until the tail end of adoption, losing the opportunity to sculpt the technology in its formative stages. They will also delay the benefits that AI can bring to the health, wealth and happiness of generations of their people. However, inevitability does not imply passivity. Heraclitus’ river is ever-changing, but so are we—we can decide how we move through it. Just as a sailor navigates by tacking according to the wind rather than relinquishing the helm, so must we steer the course with AI. If disruption is happening, the pressing question becomes: What shape will it take?
Some disruptions are easier to imagine than others. We have line of sight into how AI can democratize access to critical resources at scale. For instance, AI-powered medical assistants can bring quality healthcare to underserved or remote regions, where skilled practitioners are scarce or overburdened. Similarly, AI-driven tutors can make personalized education accessible to millions, adapting lessons to individual needs in ways traditional classrooms may not. Tools like these amplify human agency, as well as address systemic inequities.
Yet alongside these positive transformations, AI must be safeguarded against dehumanizing applications. The same technologies that accelerate drug development can be weaponized for bioterrorism. The same technologies that provide highly personalized, customized services can be used to surveil. The same technologies that can amplify a personal brand can be used for deepfakes that can manipulate public opinion and sow mistrust. These risks cannot be eliminated, but they can be mitigated by AI itself, as well as through thoughtful oversight.
Beyond these first-order effects, more profound and complex disruptions await. The transformation of work, for example. How do we make sense of a technology that may eliminate jobs and sectors, but also create new occupations and industries?
History offers instructive parallels, like the loom. The advent of the power loom transformed England. It produced cloth 40x faster than a skilled weaver. The cost of cotton decreased by 80% over fifty years due to mechanization. Textiles, particularly cotton, became the largest industrial sector in Britain, and accounted for roughly 40% of England’s exports.
On a societal level, the power loom was undeniably transformational for England—and for generations of people who benefited from the innovation. While productivity soared, the transition was painful for those whose livelihoods were rendered obsolete. The innovation displaced countless handweavers, sparking the Luddite movement in 19th-century England. Until soldiers and laws were deployed to stop them, the Luddites burned down factories, killed factory owners, and destroyed thousands of power looms.
Amidst the transformational change, the machine itself made for a convenient target. The technology, of course, paved the way. But according to author Brian Merchant—and a number of historians he cites—it wasn’t so much technology, or even specific machines that these weavers were resisting. Instead it was the factory system, its exploitative working conditions, and the regimentation and seeming loss of liberty this new way of life demanded.
So how do we address the underlying systems to make more fertile ground for innovation that clearly benefits society? How do we navigate the immediate costs of disruptions and accelerate the benefits throughout society?
Let me offer three ways. The first is how we, as society, view this technology. The second is how we deploy this technology. And the third is how we manage this technology.
I hope my remarks so far have already started to illustrate how we, as a society, should view AI. Rather than an existential threat, AI can be a GPS of the mind and usher in a new cognitive industrial revolution—if we continue to sculpt it.
While I do enjoy a metaphor, I am actually very intentionally invoking GPS—or Global Positioning System technology. Back in the early 70s, the US Department of Defense began work on what would eventually become GPS. The technology used radio signals from multiple satellites in medium Earth orbit to pinpoint the geographic coordinates of receivers on the ground. By the end of the decade, the U.S. Air Force had a fledgling version of the system running, for military use only.
Then, in 1983, the Soviet military shot down a Korean passenger jet that had flown off course into Soviet airspace. In the hope of averting similar catastrophes, U.S. President Reagan announced that whenever GPS became fully operational, the United States would also make it available for civilian use. Years later, President Bill Clinton fully executed on that promise, granting the public the full power and capabilities that GPS had to offer. These acts from two presidents—from different sides of the aisle—paved the way for a free global public utility that has become an indispensable resource for navigating the twenty-first century.
Today, all of us use GPS. So much so that it works in the background and in ways that we may not even be aware of. Turn-by-turn navigation is the most common way we benefit from GPS, but it’s far from the only one. The precise timing information GPS provides is used to synchronize clocks in telecom networks, in ways that help keep mobile phone calls clear and lag-free. During natural disasters and other emergencies, first responders use GPS-enabled drones to locate missing people, quickly map stricken areas, and even deliver supplies to those
who cannot be easily reached. Precision-farming techniques that GPS enables make a variety of organic produce more affordable.
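For those curious about the mechanics behind all of this, the core idea is simple: a receiver times signals from satellites whose positions are known, converts those times into distances, and then solves for the one location consistent with all of them. Here is a minimal, purely illustrative sketch of that solve (the locate function and the coordinates are invented for the example; a real receiver would also estimate its clock bias and use four or more satellites):

```python
import numpy as np

def locate(satellites, ranges, guess, iterations=10):
    """Estimate a receiver's position from satellite ranges via Gauss-Newton."""
    x = np.array(guess, dtype=float)
    for _ in range(iterations):
        diffs = x - satellites                    # vectors from each satellite to the current guess
        dists = np.linalg.norm(diffs, axis=1)     # ranges predicted by the current guess
        residuals = dists - ranges                # mismatch with the measured ranges
        jacobian = diffs / dists[:, None]         # how each predicted range changes with position
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        x -= step                                 # nudge the guess toward the measurements
    return x

# Invented satellite positions (km) and the ranges a receiver on Earth's
# surface would measure to them (illustrative numbers only).
sats = np.array([[15600.0,  7540.0, 20140.0],
                 [18760.0,  2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0]])
true_pos = np.array([6370.0, 0.0, 0.0])
measured = np.linalg.norm(sats - true_pos, axis=1)

# A rough starting guess near Earth's surface picks out the physical solution.
print(locate(sats, measured, guess=[6000.0, 0.0, 0.0]))   # ~ [6370, 0, 0]
```

The point is not the code itself; it’s that a few distance measurements, combined well, turn raw signals into knowledge of where you stand.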
So what does this extended detour—ironically about GPS—have to do with AI?
First, it maps out a clear example of the positive outcomes that can result when the government embraces a pro-technology, pro-innovation perspective and views private-sector entrepreneurship as a strategic asset for achieving public good.
Second, it’s also a great example of how we can effectively leverage our capacity to turn Big Data like geographic coordinates and time stamps into Big Knowledge that can be used to provide context-aware guidance in many aspects of our lives.
Third, and most importantly for democracy, it reinforces individual agency. It’s true that we all carry around a tracker in our pockets, one with a mic and camera. A device that can be used to surveil. But on the other hand, we have a tool that nearly ensures we never get lost again.
Ok, so if we can agree on this way of viewing AI, let’s now dig into how we might integrate it into society. How can we deploy AI in a way that minimizes costs and accelerates gains in society?
Let’s go back to two years ago, when ChatGPT was released. It was magical in both its utility and creativity. You could ask it to write an essay for you. Or critique an essay that you wrote. You could have it compose practice questions for an upcoming interview at a company. Or create a personalized, epic poem for a relative’s birthday. This was just the start.
For good reason, ChatGPT’s capabilities got much acclaim. It was exceptional for individuals. But, for me, it was equally extraordinary in how it was deployed to the public.
When it was released, ChatGPT was powerful and functional, but far from perfect. In fact, for those who were keeping score, it was the fourth major model in OpenAI’s GPT series. So why does this matter?
OpenAI could have developed this new technology behind closed doors until a small cadre of experts had decided that it was performing in sufficiently effective and perfectly safe ways. But instead it took opportunities to invite the public to participate in the development process.
This is called iterative deployment. Individual users were now at the very heart of the experience. And, just as important, it gave them opportunities to have experiences that they’ve sought or designed. This marked a critical shift in AI development and human empowerment. Iterative deployment allows for what Thomas Jefferson called “consent of the governed,” which, applied in an AI context, is about how people embrace or resist new technologies, along with the new norms and laws they ultimately inspire. If the long-term goal is to integrate AI safely and productively into society instead of simply prohibiting it, then citizens must play an active and
substantive role in legitimizing AI. That is how we get a highly accessible, easy-to-use AI that explicitly works with you and for you, rather than on you.
But once we release AI into the world, how do we continue to manage it, as a society? I believe that the most effective way is through iterative deployment. But many—especially here in Europe—may instinctively reach for regulatory action. And while I’m not unconditionally opposed to government regulation, I still believe that the fastest and most effective way to develop safer, more equitable, and more useful AI tools is through iterative deployment. This allows us to take smaller risks with AI to better navigate any big risks.
When I say we must take small risks to navigate big risks, I should mention that, both as individuals and as a society, we are always taking risks—whether we know it or not. It’s a common misconception that we can steer clear of risk. In reality, stopping or pausing to avoid risk is itself a risk—and most often a more perilous one than embracing risk in the first place.
So if we are destined to always take risks, our focus should not be on avoiding them, but navigating them. And one of the wisest ways to do so is to use small risks to negotiate big risks. Taking smaller risks more often is less of a risk—and allows for iteration, discussion, and continual improvement.
That’s what American economist Hyman Minsky suggested, particularly with his concept of the Minsky Moment. The Minsky Moment is the point in time when a sudden decline in market sentiment leads to an abrupt, big market crash, marking the end of a period of economic prosperity. To overly simplify it: the Minskyan thesis is that stability creates instability—and that maximizing stability in the short run leads to instability in the mid and long term. Too many safeguards in a financial system can actually make it more brittle. And when things break, nobody’s prepared and it becomes a huge event.
We can learn from the Minsky Moment as we think about this era in AI. This means finding the right level of AI safeguards and regulations, not only to encourage progress but to better fortify a system that has more and more AI in it. We must take small risks to navigate big risks.
A lot of that is through iterative deployment, making AI accessible to a diverse range of users with different values and intentions—at regular intervals. But to avoid the Minsky Moment in AI, I’d hope we’d collectively shift our focus toward measurement and conversation, rather than just regulation. And to be clear, I’m not saying “no regulation!” Just that we find ways to measure twice, and cut once. That we cycle through more conversations before cycling through more regulation. In short: let’s regulate first by measuring. When governments say, “we’re worried about this part of AI,” the first question we reach for is “how can we measure this worry or bad outcome?” versus “oh no—how quickly can we pause or stop AI?”
This shift in public-sector response and reaction to AI is critical, not only for our own countries but for nations around the world, which are also having this conversation. This brings us to our third and final question: how might AI change dynamics between societies?
On the global stage, AI is poised to redefine the dynamics between nations, not just through the lens of military might—a common historical analogy—but through the subtler and arguably more relevant lens of economic power. When transformative technologies have historically reshaped societies, they have often done so by amplifying productivity, altering the balance of trade, and fundamentally redefining what it means to participate in the global economy. AI will be no different, though the magnitude and complexity of its effects will be unparalleled.
The military metaphor is tempting, and not without precedent. History reminds us that societies with superior weaponry often gained dominance over those without. This has led to an enduring focus on “hard power”—the ability to coerce or control through military means. Yet AI, while relevant to defense and security, extends far beyond this. Its true significance lies in its potential to act as an economic amplifier, redefining soft power as Joseph Nye conceptualized it. Nations capable of integrating AI into their economies will not only enhance their global influence but also transform their citizens into hyper-productive participants in a fast-evolving global market.
Consider how digital technologies such as the internet, mobile phones, and cloud computing have already reshaped global commerce and connectivity. Even rural farmers, historically disconnected from major economic hubs, now use mobile apps to optimize their crop sales or forecast weather patterns. These incremental improvements have fundamentally altered how individuals and societies participate in the global economy. AI will amplify this transformation exponentially. Nations that embrace this cognitive industrial revolution—analogous to the industrial revolution of the 18th and 19th centuries—will secure disproportionate wealth and stability. Just as countries that industrialized early came to dominate global trade, those that lead in AI development will shape the contours of 21st-century geopolitics.
However, the international response to AI is far from uniform. Global dialogues, such as those at the United Nations, reveal stark contrasts in how different regions approach this technology.
In the West, the primary question is often, “Should we allow this?” Policymakers and citizens alike grapple with ethical dilemmas, privacy concerns, and fears of overreach.
In the Global South, by contrast, the plea is more urgent: “Can you please include us?” For many nations in this bloc, AI represents not just an opportunity but a potential lifeline to leapfrog decades of developmental hurdles.
Meanwhile, China asks, “How can we use this to enhance governance and expand global influence?” Its investments in AI-driven surveillance and smart city technologies exemplify a vision of AI as a tool for centralizing power and asserting dominance.
Russia’s focus, though overlapping, leans heavily toward leveraging AI for geopolitical influence, often with destabilizing intent, such as the potential for cyberattacks targeting energy grids or communication systems.
These divergent approaches underscore a broader reality: the wants and needs of global societies regarding AI are becoming increasingly varied. Rogue players, whether states or
non-state actors, add another layer of complexity by seeking to weaponize AI for bioterrorism, hacking, or disinformation campaigns.
This fragmented landscape raises a key question: how can the international community align around shared goals while addressing these divergent priorities?
History offers a partial roadmap. The post-WWII era demonstrated the value of inclusivity in global governance. Institutions like the UN and frameworks like the Marshall Plan aimed not only to rebuild, but also to onboard diverse nations into a shared vision of progress. This inclusivity fostered stability and cooperation, benefiting both dominant powers and smaller states.
The same principle applies to AI. While the US and Europe understandably prioritize their own leadership in developing AI tools and models, we must also recognize the importance of including other nations in this process. Doing so not only establishes goodwill but also mitigates the risk of creating a two-tiered global system, where some nations benefit disproportionately while others fall behind.
Yet inclusivity is easier said than done. Bureaucratic processes in democratic systems, particularly in the West, often slow down decision-making. In an arena as competitive and fast-moving as AI, this can be a liability. Striking the right balance between speed and deliberation is crucial, but if we err towards one, let it be speed. That’s our best chance of shaping this new AI era for good, especially since this transformation won’t slow down.
As we match its pace and move swiftly, we can rely on the foundational principles of Western democracy—systems of checks and balances—to wield power responsibly, even as we shape it. And in this system, the checks and balances are not just governmental, but overlapping networks of the public and private sectors, the press, and non-profits. All of us are participants. All of us are beneficiaries.
The challenges ahead are still daunting, but so are the opportunities. Framing the dialogue around AI as technologically positive and forward-looking is essential. European nations, in particular, have an opportunity to lead by example, shaping conversations that emphasize collaboration, ethical innovation, and inclusivity. The questions we ask today will define the world we build tomorrow. What safeguards should be embedded into AI to ensure it serves humanity? How can international partnerships accelerate the benefits of AI while mitigating its risks? And how can we ensure that AI development reflects a diverse array of cultural values and perspectives?
As we start to answer these questions on AI, we will only get more questions. But we will also get more global progress, wealth, and opportunities.
AI is not a stagnant river. In fact, it’s perhaps our fastest moving body of water. And soon to be the broadest and most far-reaching, with its tributaries extending throughout society. Like the Nile and Euphrates, it can be the cradle of our civilization, if we continue to build with it.
And so we return to Heraclitus—and stand on the river banks once more. What will we do?
Cultures have so many parables around rivers. There’s a Buddhist one that highlights the need to let go of tools once their purpose is served. And a Sufi one that teaches discernment in challenges. And an African one that underscores faith in the unseen. There are countless more from Christian, Hindu, indigenous and many more communities.
They all can inform us, but I think we need a modern parable—one crafted for this AI era. One that resonates with both Heraclitus then and us today.
I hope to contribute a line to this modern parable, but the truth is we must write it together. Within our society. With other societies.
There will be many drafts. Many authors. And even more readers. But I hope the spirit of it matches these lines from T.S. Eliot:
“We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.”
We will draw from and traverse this AI river many times in the coming decade. It’ll serve us to remember that even as we arrive where we started with AI, we need to approach the technology—and our understanding of it—anew, over and over again.
I think Eliot knew this. Elsewhere, in the same book as the previous stanza, he says: “The river is within us, the sea is all about us.”
We are Homo techne. When we cross the river, we are deepening our understanding of technology and ourselves. And there’s something more transformative and powerful ahead: the sea. Let’s not cease from exploration. In technology, let us not cease from iterative deployment. Modern society depends on it.
Thank you.