In a remarkable conversation that feels even more relevant today, tech pioneer Mustafa Suleyman and historian Yuval Noah Harari sat down to discuss the future of artificial intelligence and its implications for humanity. Their dialogue, which bridged the gap between technological innovation and historical perspective, offered unique insights into both the promises and perils of AI development.
The discussion began with Suleyman painting a picture of AI’s rapid evolution. He described how the field has moved from classification capabilities to generative AI, with models becoming exponentially more powerful. But it was Harari’s striking response that set the tone for the conversation:
“What we just heard is basically the end of human history – not the end of history, the end of human-dominated history. History will continue with somebody else in control.”
This stark assessment wasn’t meant as a doomsday proclamation but rather as a recognition of an unprecedented shift in human affairs. For the first time in history, we’re facing technology that can not only make decisions independently but also create new ideas at a scale far beyond human capability.

The conversation then explored the positive potential of AI, from transformative improvements in healthcare to accelerating scientific discovery and addressing climate change. Suleyman emphasized how everyone would eventually have a “personal intelligence” – a capable AI assistant helping them become smarter and more effective. However, Harari introduced an important counterpoint about the relationship between intelligence and wisdom:
“Homo sapiens at present is the most intelligent entity on the planet. It simultaneously also is the most destructive entity on the planet, and in some ways also the most stupid entity on the planet – the only entity that puts the very survival of the ecosystem in danger.”
This observation led to a crucial discussion about regulation and control. Both speakers agreed on the need for new frameworks to govern AI development, but they also highlighted the challenges of implementing effective oversight in a world divided by geopolitical tensions.
The conversation took a particularly interesting turn when discussing the financial sector’s vulnerability to AI disruption. Harari painted a scenario where AI could create financial instruments so complex that no human could understand them – reminiscent of the 2008 financial crisis but potentially far more devastating.
Perhaps the most compelling part of the discussion centered on the question of containment and control. Suleyman advocated for a precautionary principle, suggesting certain capabilities should be taken “off the table” entirely. Harari, meanwhile, compared the situation to an alien invasion:
“This is what we are facing, except that the aliens are not coming in spaceships from planet Zircon – they are coming from the laboratories.”
The discussion concluded with both speakers addressing why they continue their work despite the risks. Suleyman emphasized his commitment to developing safe AI systems, while Harari stressed the importance of investing in human consciousness and potential alongside artificial intelligence development.
Looking back at this conversation from our vantage point in 2025, many of their predictions and concerns have proven prescient. The rapid advancement of AI capabilities, the challenges of regulation, and the need for balance between innovation and safety remain central to our global discourse about artificial intelligence.
Their dialogue reminds us that we’re not just developing new tools – we’re potentially reshaping the very nature of human agency and control over our future. As we continue to navigate this transformation, the intersection of technological innovation and human wisdom becomes increasingly crucial.

The conversation between Suleyman and Harari serves as both a warning and a call to action. It suggests that our challenge isn’t just to develop more powerful AI systems, but to ensure that this development aligns with human values and interests. As we move forward, their insights remind us that the decisions we make today about AI development and regulation will shape not just our immediate future, but potentially the entire trajectory of human history. As Suleyman poignantly concluded:
“This is an inevitable unfolding over multiple decades – the coming wave is coming… my contribution is to try to demonstrate in the best way that I can a manifestation of a personal intelligence which really does adhere to the best safety constraints that we could possibly think of.”
What do you think? Reach out and connect with our team of thought leaders to hear our perspective on what artificial intelligence means for your organization in the years ahead.

