The Dangers of Letting AI Take Too Much Control in Our Information Networks
As I sat down to consider the rapid advancements of artificial intelligence in recent years, I couldn’t help but think of the unsettling warnings outlined in Yuval Noah Harari’s Nexus. The book delves deeply into the complex interplay between technology, society, and human autonomy, exploring how humanity could unwittingly surrender control to the very tools we’ve built to serve us. Reflecting on Harari’s arguments, I see clear dangers in allowing AI to dominate our information networks.
First, there’s the issue of data centralization and control. AI thrives on data—vast amounts of it. But who owns this data? Who controls the algorithms that process it? When AI takes over our information networks, the centralization of data becomes inevitable. A few powerful corporations or governments could effectively control the flow of information, deciding what we see, what we believe, and ultimately, how we think. Harari warns of a world where power shifts from individuals to those who control data, creating a new aristocracy of tech elites.
This leads me to the second danger: manipulation at scale. AI can tailor information with unprecedented precision, crafting messages that exploit our cognitive biases and emotional vulnerabilities. In Nexus, Harari discusses the concept of “hacking humanity,” where our choices are no longer entirely our own. Imagine living in a world where AI-driven networks know you better than you know yourself—predicting your preferences, steering your decisions, and subtly nudging you toward outcomes you didn’t consciously choose. Democracy, already fragile, could buckle under the weight of algorithmic influence.
Then there’s the threat of algorithmic opacity. As AI grows more complex, the systems that govern our networks could become increasingly opaque, even to their creators. Harari emphasizes the danger of blind faith in technology, where society treats algorithms as infallible without understanding how they operate. This lack of transparency could lead to catastrophic consequences when errors occur, as no one would know how to fix them—or worse, whom to hold accountable.
Perhaps the most haunting danger is the loss of human agency and creativity. Harari highlights the importance of retaining our humanity in the face of technological advancement. If AI controls our information networks, it risks turning individuals into passive consumers of information, stripping away our ability to critically analyze, question, or innovate. When everything is curated for us—our news, our entertainment, even our beliefs—we risk losing the ability to think independently.
But the future is not yet written. Harari’s Nexus doesn’t just outline dangers; it serves as a call to action. We must design AI systems with ethical safeguards, prioritize transparency, and ensure that control remains distributed rather than centralized. Most importantly, we must invest in education that equips individuals to navigate an AI-driven world with critical thinking and resilience.
The question isn’t whether AI will shape our future—it already is. The real question is whether we will allow it to shape us in ways that erode our autonomy, creativity, and humanity. The choice, for now, remains in our hands.
Please reach out if you would like to discuss further. I’ll continue sharing my thoughts on this fascinating topic in future blog posts!