May / June / 2025

Historian Yuval Noah Harari

Hailed as one of the most significant thinkers of our time since the publication of his debut book Sapiens, Yuval Noah Harari has returned to Korea after eight years with his latest work, Nexus. In the face of ecological collapse, escalating geopolitical tensions and the rise of AI (friend or foe), humanity stands adrift amid the rapids of reality. So what opportunities remain for us as a global community? We explore this topic with Harari and evolutionary biologist Dayk Jang.

AI Is No Longer a Tool: It’s an Agent

Q

Today’s topic is a huge one. We discuss whether coexistence between humans and AI is possible, along with the meaning, essence and future of AI. At a time when not only Korea but the entire world is navigating extraordinarily turbulent times, I believe your new book Nexus offers answers and insights to our questions.

If I had to pick just one thing we must all know about AI, I would say this: “AI is not a tool. It is an agent.” From the stone implements of the Stone Age to the atomic bomb, every technological artifact created by humans thus far has been merely a tool, entirely subject to our control. AI, however, has the ability to make decisions on its own. It also comes up with new ideas and new inventions. AI is different from any previous scientific or technological revolution. Therefore, if we think we can control AI in the same way, we fundamentally misunderstand it.
Next, let’s talk about ourselves as humans. To manage AI, we are now at a point requiring greater cooperation than at any other time in human history. However, the reality is that globally, humans are rapidly losing trust in one another. In other words, at the heart of the AI revolution lies the “paradox of trust.” As I travel across the world, whenever I meet the leaders driving the AI revolution, I ask, “Why are you in such a hurry?” Without fail, they reply, “We are fully aware of [the risks], but we cannot trust our human competitors.”
They mean that if they proceed slowly, investing in safety and establishing regulations, they will fall behind in the AI race. Then I ask again, “Can you trust AI, a superintelligent entity?” And they reply that they can. We need to change our priorities. We must build a foundation upon which humans can trust humans. An AI created and trained on the basis of such trust will be the only one we can truly depend on.

Q

That seems like a remarkably sharp insight. After publishing Sapiens, you interviewed many Big Tech owners and CEOs and appeared to have quite amicable relationships. However, since the publication of this new book, Nexus, there seems to be a palpable sense of tension or conflict.

I still engage in extensive dialogue with Big Tech companies. These leaders communicate frequently not only with me but also with other historians and philosophers. They too are afraid, precisely because they understand AI better than perhaps anyone else. But the aforementioned issue involving the paradox of trust persists. I don’t think Big Tech entrepreneurs are inherently evil or our enemies. Again, the task we must prioritize now is restoring trust among these companies. The paramount goal is for tech corporations to collectively navigate the AI revolution in a safe and responsible manner.


Espresso Made Before You Press the Button

Q

Where would you draw the line in defining what constitutes AI?

The crux is whether it possesses “agency.” An espresso machine is merely an automated device. It would be AI if it said something like this: “I’ve learned a bit by observing you, Mr. Harari, for several weeks. Based on that, I concluded you’d likely want an espresso right now. I already brewed you a cup before you could even press the button.” Let’s follow the scenario to the next day: “Thinking you might enjoy it, this time I created a new menu item called ‘Bespresso.’” AI is equipped with the capability to create things autonomously, without human intervention; this is probably fine at the level of coffee.
However, if AI reaches the point of creating medicines, weapons… entirely new things, perhaps even novel religions or ideologies, that will escalate into a profoundly significant issue.

Q

Does providing automatic recommendations necessarily equate to being an autonomous entity? I believe we still retain the freedom to reject AI’s suggestions. In other words, AI might provide optimized means to achieve a certain goal, but it isn’t yet at the stage of setting the goal itself. And that, surely, is a threshold humans must never cede to AI. In what sense, then, do you view AI as an autonomous agent?

Firstly, recommendations themselves wield immense power, and increasingly so. Consider a city like Seoul. Previously, we learned the routes ourselves and navigated around, didn’t we? But now, we often follow an app with little thought. This serves as a prime illustration of how AI recommendations are steadily gaining influence.


The Future Remains Undetermined

Q

You’ve also warned about the peril of AI expanding based on databases containing inherent biases. How do you foresee AI impacting global inequality moving forward?

Ultimately, it hinges on how humans decide to utilize it. AI undeniably possesses the potential to severely exacerbate problems of polarization and inequality. If a handful of corporations or nations monopolize AI’s power, we could witness a repetition of events akin to the 19th-century Industrial Revolution. It’s conceivable that the two or three countries leading in AI could dominate the rest of the world militarily and economically, attempting a form of “AI imperialism.” In such a scenario, if war were to break out, it would resemble the conflicts of the 19th century, except that it would be waged between armies equipped with AI technology and those without.

Q

There’s an ongoing debate within Korean society arguing that if AI generates unverifiable content indiscriminately, the importance of legacy media might actually increase rather than diminish. What are your thoughts on this?

I don’t believe we necessarily need to revert to older technologies. What’s crucial is the mechanism: “Is this information trustworthy, and is there a system in place to discern reliability?” Legacy media generally possessed such mechanisms. The issue lies with new media, where creators sometimes operate with an astonishingly naive view of information — the notion that “if you just throw the doors wide open, truth will inevitably surface on its own.” This is a big mistake. If the information market is completely unregulated, truth will likely sink to the very bottom. Instead, cheap, simplistic, yet attractively packaged falsehoods will flood the space and utterly dominate the market.
Algorithms possess no inherent interest in truth; their sole focus is on maximizing user engagement. In reality, maximizing engagement often means pressing our emotional buttons, such as anger, greed or fear, to instantly capture our attention. While this might be an acceptable metric for entertainment content, it is not suitable for journalism. The fundamental problem with social media currently is the complete blurring of lines between drama, entertainment programming and news. Politicians, too, are heavily swayed by this. The primary concern shifts from whether the disseminated content responsibly engages with the world and strives for truth, to simply how to elicit greater user participation. We urgently need to distinguish between entertainment and news.

Q

Thank you for sharing your deeply insightful perspectives today. A final question: Is humanity’s future one of laughter or tears?

The future is not yet decided. It ultimately depends on the decisions we make collectively in the coming years. Excessive optimism breeds complacency. Conversely, excessive pessimism fosters irresponsibility. The critical element is taking responsibility. Right now, multiple paths toward the future still lie open before us. What truly matters above all is embracing accountability for the course of our lives. In the end, it is this commitment that will shape the future we inherit.

  • Yuval Noah Harari is an Israeli historian and world-renowned author. Through global bestsellers like Sapiens, Homo Deus, and 21 Lessons for the 21st Century, he has offered profound insights into human history, the future, and the changes brought by technological development.
  • Dayk Jang is the Dean and Chair Professor of the Startup College at Gachon University. An authority in the fields of bioethics and the philosophy of science, he has continuously pursued research exploring the relationship between humans, technology and society.
  • Moderated by Dayk Jang
  • Photography by Lim Harkhyoun