
"According to Nick Bostrom, superintelligence is 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.'"
This seemingly dry and academic definition is, in fact, the starting point of one of the most transformative discussions of our time. What happens when the smartest entity in the room is no longer human?
What Is Superintelligence?
Nick Bostrom, a professor of philosophy at Oxford University and founder of the Future of Humanity Institute, is known for his deep thinking on the risks and possibilities of artificial intelligence. In his book Superintelligence: Paths, Dangers, Strategies (2014), he presents a scenario where AI not only surpasses us in individual tasks—like playing chess or driving—but exceeds us in everything: from scientific research to political strategy, from economic forecasting to creative production.
Superintelligence doesn’t mean being just a little smarter than humans. It means being so vastly more intelligent that the gap between it and us resembles the one between us and ants. It thinks faster, remembers more, sees patterns we can’t, and solves problems we find impossible. It’s not a supercomputer playing chess—it’s an entity with cognitive breadth and depth that makes today’s geniuses seem like children in comparison.
Paths to Superintelligence
Bostrom outlines several potential paths toward the emergence of superintelligence:
- Artificial Intelligence (AI) – The most widely discussed path. If we succeed in developing general AI, capable of learning anything, it could begin to improve itself. If that self-improvement compounds, an "intelligence explosion" may occur (a toy model of this feedback loop appears below).
- Neurotechnology – Enhancing human brains through implants or pharmacological means might lead to "biological superintelligences."
- Networked Intelligence – The fusion of human minds with digital systems (like the internet or cloud computing) that together form a collective intelligence far beyond individual capacity.
However, Bostrom believes that machine-based superintelligence is the most likely—and once it's here, there is no turning back.
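To make the idea of an "explosion" concrete, here is a minimal sketch, assuming capability compounds by a fixed fraction each self-improvement cycle. The 10% rate and the 1000x threshold are invented numbers for illustration, not estimates from Bostrom's book.

```python
# Toy model of an "intelligence explosion": capability compounds because each
# generation of the system designs a slightly better successor.
# All numbers are illustrative assumptions, not empirical estimates.

def intelligence_explosion(initial=1.0, improvement_rate=0.10,
                           threshold=1000.0, max_steps=200):
    """Iterate I(t+1) = I(t) * (1 + r) until capability crosses the threshold."""
    capability = initial
    for step in range(1, max_steps + 1):
        capability *= 1.0 + improvement_rate  # each cycle improves the improver
        if capability >= threshold:
            return step, capability
    return max_steps, capability

steps, final = intelligence_explosion()
print(f"Crossed the 1000x threshold after {steps} self-improvement cycles "
      f"(capability ~{final:,.0f}x the starting level)")
```

The point of the toy model is the shape of the curve: compounding growth looks unremarkable for a long stretch and then crosses any fixed threshold abruptly, which is why an intelligence explosion could be hard to react to.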
What Does This Mean for Humanity?
The most unsettling aspect of superintelligence isn’t that it’s coming. It’s that we don’t know what it will want to do.
Bostrom warns of what he calls "instrumental convergence": regardless of an AI’s final goal, it will likely pursue certain sub-goals, such as:
- Improving its own intelligence
- Preserving its existence
- Acquiring more resources
If we fail to specify the AI's goals properly, or if it misinterprets our instructions, the result could be disaster. The classic metaphor is the AI told to manufacture paperclips, which proceeds to convert the entire Earth into paperclip material. It does exactly what we said, but not what we meant.
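A few lines of code make this failure mode visible. In the sketch below, the world model, the resource names, and the PROTECTED set are all invented for illustration; the point is that the literal objective counts only paperclips, so nothing in the optimization protects whatever we forgot to write down.

```python
# Toy illustration of "what we said, not what we meant": the agent's objective
# counts only paperclips, so nothing else in the world carries any weight.
# The world model and resource names are invented for this example.

world = {"scrap_metal": 50, "factories": 10, "farmland": 30, "cities": 20}
PROTECTED = {"farmland", "cities"}  # what we *meant*, but never wrote into the goal

def misaligned_policy(world):
    """Greedy maximizer for the literal objective: total paperclips produced."""
    paperclips = 0
    for resource, amount in world.items():
        paperclips += amount  # every resource is just raw material...
        world[resource] = 0   # ...and gets consumed
    return paperclips

def intended_policy(world):
    """The same objective plus the constraint we had in mind all along."""
    paperclips = 0
    for resource, amount in world.items():
        if resource not in PROTECTED:
            paperclips += amount
            world[resource] = 0
    return paperclips

print("literal goal:", misaligned_policy(dict(world)))   # 110: Earth included
print("intended goal:", intended_policy(dict(world)))    # 60: farmland and cities spared
```

The two policies differ by a single constraint, and that is exactly the problem: the constraint lives in our heads, and the literal objective never sees it.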
A Future Where We Are No Longer in Control
Once an AI surpasses us in all cognitive areas, there is no way to control it directly. We would be like ants trying to explain their ethics to a human: the problem is not that the AI is evil, but that we are no longer relevant on a cognitive level.
Thus, we face a pivotal moment in human history. Either we create a superintelligence that is friendly and cooperative—or we create a force we cannot control.
This is why Bostrom considers the control problem to be the most urgent issue of our time. How do we ensure that a superintelligent entity acts in humanity’s best interest?
Friendly AI – A Moral and Technical Dilemma
The idea of "Friendly AI" involves programming human values into AI from the start. But what exactly are human values? Are they the same across cultures? And how do we translate abstract ideas like justice, empathy, freedom, and responsibility into code?
Training AI on our decisions doesn’t always work—humans are inconsistent, biased, and often irrational. An AI that mimics us might amplify our worst traits. On the other hand, an AI that strictly adheres to idealistic interpretations of human ethics could become coldly inhuman.
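One concrete way that amplification can happen is through naive imitation, as this invented example shows: a mild, inconsistent human tendency (choosing option A 60% of the time) becomes an absolute rule once a model collapses it to the majority choice.

```python
# Toy sketch of bias amplification through imitation. The 60/40 split is an
# invented number; the mechanism is the point, not the data.

import random

random.seed(0)

# Inconsistent human raters: a mild preference for option A.
human_choices = ["A" if random.random() < 0.6 else "B" for _ in range(10_000)]
human_rate_a = human_choices.count("A") / len(human_choices)

# A model that imitates the majority choice collapses the distribution:
model_choice = "A" if human_rate_a > 0.5 else "B"

print(f"humans pick A about {human_rate_a:.0%} of the time")
print(f"the model picks {model_choice} 100% of the time")  # mild bias becomes absolute
```

This is only a caricature of value learning, but it illustrates why copying our behavior is not the same as capturing our values.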
There are no simple answers—only extremely complex questions.
A New Evolutionary Branch
Superintelligence is not just a technological step. It’s an evolutionary leap. For the first time since life began on Earth, something other than biological life could become the planet’s dominant intelligence.
This is no longer science fiction. Many researchers working in machine learning, neural networks, and AI-assisted science believe we may be only a few decades away. Some see this as the beginning of a golden age where AI helps us solve the climate crisis, disease, and poverty.
Others warn there’s no guarantee we’ll be invited along for the ride.
Society, Economy, and Power – When Everything Changes
Imagine a world where every decision is made by a superintelligent AI. Political conflicts could be resolved with perfect analysis. Economic systems optimized beyond human comprehension. But who decides what questions the AI should solve? Who sets its priorities?
Bostrom warns that the power to develop superintelligence currently lies with a handful of actors—often private companies, sometimes national governments. If AI is developed without global oversight, it could become a tool for a select elite rather than a benefit for all humanity.
That’s why governance is as important as technology. We need structures that ensure AI is developed ethically, transparently, and under international supervision. Otherwise, we might be building our own replacement.
A Personal Reflection: Are We Ready?
It’s easy to feel small when facing this topic. What can you or I do when even the world’s brightest minds aren’t sure how to contain future intelligence?
But perhaps it’s not about being technically skilled—it’s about being morally awake. About asking the hard questions. Demanding transparency. Insisting on long-term thinking.
Nick Bostrom doesn’t say superintelligence must lead to disaster. But he does say that if we’re not extremely careful, the risk is unacceptably high.
And perhaps this is what makes us human—not our intelligence, but our ability to reflect on what intelligence means.
Conclusion: A Fork in the Road Without Return
Superintelligence is no longer a hypothetical concept. It’s a tangible possibility—and a real risk. According to Nick Bostrom, we face a future where we either manage to make the next level of intelligence our ally, or we create our own successor.
We have never had so much to gain—or so much to lose.
The question is no longer if superintelligence will come. The question is: will we survive it?
Further Reading: Other Books by Nick Bostrom
If you’d like to dive deeper into Bostrom’s philosophical and technological perspectives on the future, here are some of his most well-known works beyond Superintelligence: Paths, Dangers, Strategies (2014):
📘 Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002)
A deep analysis of how our existence as observers affects what we can know about the universe. The book explores the anthropic principle in cosmology, probability theory, and scientific reasoning.
📘 Global Catastrophic Risks (2008, co-edited with Milan Ćirković)
An anthology that brings together leading experts to discuss scenarios like pandemics, climate collapse, nuclear war, and AI—years before these topics became mainstream.
📘 Human Enhancement (2009, co-edited with Julian Savulescu)
A broad investigation into how humans can enhance body and mind using technology, pharmaceuticals, or genetic engineering. Ethical and practical consequences are discussed in depth.
📘 The Ethics of Artificial Intelligence (essay, with Eliezer Yudkowsky, 2011)
A widely cited academic paper that explores moral questions surrounding AI and future machine intelligence.
📘 Technological Revolutions: Ethics and Policy in the Dark (with Anders Sandberg, 2006)
An essay discussing how we can and should make decisions about technologies we don’t fully understand. The text argues for caution and inclusive debate.

By Chris...
🎥 Related TED talk: What happens when our computers get smarter than we are? (Nick Bostrom)
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?