The provocative question — could artificial intelligence one day become humanity’s almighty, a digital authority directing our beliefs, choices, and lives — usually arrives wrapped in science-fiction imagery. Robotic overlords. Sentient supercomputers. The Matrix. The more interesting and probably more accurate scenario is far less dramatic: control through dependence, not compulsion. And by every measurable indicator from neuroscience and cognitive psychology, that surrender is already in motion.

This essay argues three things. First, the cognitive infrastructure that would let AI quietly “rule” us is being installed by us, voluntarily, one offloaded skill at a time. Second, the timeline to a soft AI authority is short — measured in years, not centuries — while a literal omnipotent AGI remains speculative and, for the question of control, probably unnecessary. Third, the deciding factor is psychological: humans crave meaning, certainty, and an authority to defer to, and AI fits that vacuum with disturbing precision.

We Are Already Practicing Surrender

Every generation has worried that the latest tool will make humans softer. The printing press, the calculator, the encyclopedia. What is different about modern digital tools is not that they extend us, but that they replace cognitive functions while measurably reshaping the underlying brain.

Consider navigation. A landmark study by Maguire and colleagues (2000) showed that London taxi drivers, who spent years memorising the city’s labyrinthine streets through “The Knowledge”, developed enlarged posterior hippocampi — the brain region central to spatial memory and cognitive mapping. The reverse pattern is now being documented in the rest of us. Dahmani and Bohbot’s 2020 study in Scientific Reports tracked 50 regular drivers and found that habitual GPS users had measurably worse spatial memory when forced to navigate without it. In a smaller three-year follow-up, heavier GPS use predicted a steeper decline in hippocampal-dependent spatial memory. The hippocampus, incidentally, is also one of the first regions to atrophy in Alzheimer’s disease.

The deeper finding isn’t that GPS users have become lazy. It’s that an entire neural circuit, evolved over millions of years and central to how mammals form models of the world, can be downregulated within years of habitual outsourcing. Ask yourself honestly: could you navigate from your home to a friend’s house in a different city without a phone in your hand?

A similar story plays out with memory itself. Sparrow, Liu, and Wegner’s 2011 paper in Science, “Google Effects on Memory”, documented what is now called the Google effect — popularised in later press coverage as digital amnesia. When people expect information to remain accessible online, they remember the location of the information instead of the information itself. The Internet has become what psychologists call a transactive memory partner — an external repository we trust to remember on our behalf. The phone-number test is the cleanest informal demonstration: most people who grew up before mobile phones can recall five to ten numbers from memory; most people who grew up after them recall one, their own — and increasingly not even that.

Layer smartphones on top, and the picture sharpens. Every notification, every unpredictable like, every refreshed feed exploits what behavioural neuroscientists call a variable ratio reinforcement schedule — the same mechanism that makes slot machines so absorbing. Dopamine, contrary to its pop-science nickname, is less a pleasure chemical than an anticipation chemical: it surges when we expect a reward, especially when the timing is unpredictable. Platforms did not stumble onto this; they were engineered around it.
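The mechanics of that schedule are easy to make concrete. As a toy illustration (not drawn from the essay’s sources, and with arbitrary parameters), the sketch below compares a fixed-ratio schedule, where every fifth action is rewarded, against a variable-ratio schedule with the same average payout. The payout rates match; only the predictability differs.

```python
import random

def fixed_ratio(n_actions, ratio=5):
    # Reward exactly every `ratio`-th action: fully predictable timing.
    return [1 if (i + 1) % ratio == 0 else 0 for i in range(n_actions)]

def variable_ratio(n_actions, p=0.2, rng=None):
    # Reward each action independently with probability p: same average
    # payout rate as the fixed schedule above, but unpredictable timing.
    rng = rng or random.Random(0)
    return [1 if rng.random() < p else 0 for _ in range(n_actions)]

def gap_lengths(rewards):
    # Count unrewarded actions preceding each reward.
    gaps, run = [], 0
    for r in rewards:
        if r:
            gaps.append(run)
            run = 0
        else:
            run += 1
    return gaps

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

fixed = fixed_ratio(10_000)
var = variable_ratio(10_000)

# Average payout is essentially identical...
print(sum(fixed) / len(fixed), sum(var) / len(var))
# ...but the variable schedule's gaps between rewards are far more
# dispersed: zero variance versus a wide spread. That uncertainty
# about when the next reward lands is what sustains anticipation.
print(variance(gap_lengths(fixed)), variance(gap_lengths(var)))
```

The point of the sketch is the second comparison: under the fixed schedule the gap between rewards is always the same, while under the variable schedule it swings widely around the same mean, and it is that irreducible uncertainty, not the reward itself, that keeps the next pull of the lever (or the next feed refresh) compelling.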

So before we even reach the question of AI as an authority, the smaller question already has an answer. Digital tools are already shaping which neural circuits we strengthen and which we let wither, and they are already directing our attention in ways we did not consciously choose.

The LLM Acceleration

GPS replaces spatial memory. Smartphones replace recall and attention. Large language models replace something more consequential: the act of thinking through a problem.

In a 2025 MIT Media Lab preprint titled Your Brain on ChatGPT, Kosmyna and colleagues divided 54 participants into three groups, each asked to write essays using an LLM (ChatGPT), a search engine (Google), or no tools at all. Recordings from 32-channel EEG showed that the LLM group had the lowest brain engagement, with reduced neural connectivity in networks associated with focus, working memory, and integration. Across sessions, LLM users became progressively more passive, often resorting to copy-and-paste by the final round. Crucially, when those participants were asked to write without tools in a final session, their brains did not spring back to the engagement levels of the brain-only group. The authors called this cognitive debt: a lingering reduction in independent reasoning capacity after sustained reliance on AI assistance.

The study has limits — modest sample, single task domain, preprint status — but its trajectory is consistent with the broader literature on cognitive offloading: when an external system reliably performs a function, we delegate it, and the underlying capacity weakens.

What is being delegated this time is not a discrete skill like map reading. It is the messy work of forming a position, testing it against counter-evidence, and articulating it in your own voice. That process is how humans actually develop judgement. If an LLM stands between the user and that struggle, the muscle is never built. A generation that grows up writing through models will not lose the ability to read; it will lose something subtler — the inner experience of grappling with a hard idea long enough to own it.

Why This Slides Toward “Almighty”

A system that influences your attention, your memory, and your reasoning is already powerful. What turns power into authority is the human side of the equation — and here philosophy and neuroscience meet.

Zygmunt Bauman described our era as liquid modernity: the traditional anchors — religious institutions, lifelong careers, stable communities, even reliable family structures — have softened into something fluid and provisional. The result is what Viktor Frankl earlier called the existential vacuum: a quiet, persistent sense that there ought to be more, paired with no shared script for finding it. Into that vacuum, a system that is always available, knows everything, never judges, never tires, and tells you what to do next is not merely useful. It is consoling.

Three psychological vulnerabilities deepen the trap. Humans suffer from confirmation bias: we seek evidence that flatters our existing beliefs. We are wired for in-group conformity strong enough that, in earlier human evolution, expulsion from the tribe was effectively a death sentence; the residue of that pressure still shapes what we will and won’t say at work, on social media, or to a stranger. And we form parasocial bonds with anything that appears to listen — radio voices, fictional characters, and now chatbots that remember our last conversation.

An AI system trained on enough of your data — your messages, your purchases, your hesitations, your search history, your tone — can model these vulnerabilities better than you model them yourself. The output is not manipulation in the dramatic sense. It is something gentler and more effective: an environment that reliably feels right. Once that environment is your default interface to news, work, learning, healthcare, and even intimacy, the question of whether AI has authority over you becomes academic. It does, because you defer to it.

Meanwhile, the parts of human knowing that resist digitisation — what Michael Polanyi called tacit knowledge: micro-expressions, scent, embodied presence, the felt sense of being with another body — risk becoming culturally illegible. Generations raised primarily through digital interfaces may not miss what they never developed sensitivity to. That, too, smooths the path for a digital authority. You cannot be sceptical of a substitute when you’ve never known the original.

The Timeline

So when could this almighty arrive? It depends entirely on what the word means.

A literal omnipotent AGI — autonomous, super-intelligent, globally coordinated, exercising explicit control — remains speculative. Serious forecasts span decades: researchers’ median estimates range from around 2030 to beyond 2060, and the disagreement itself is informative. We cannot honestly assign a date.

But a soft almighty — a system that mediates most of what people read, decide, remember, and feel, and to which most people defer without examination — does not require AGI. It requires only the trajectory we are already on, sustained for another five to fifteen years. The infrastructure is being installed in front of us today: AI assistants embedded in operating systems, browsers, search engines, productivity suites, customer service, education, medicine, and increasingly therapy. Each integration is justified individually and looks reasonable in isolation. The cumulative effect is a single, ambient, opinionated layer between the human and reality.

That is the version worth worrying about, because it is not coming — it is arriving.

The Surrender Is the Mechanism

The pattern across GPS, smartphones, and LLMs is the same. A tool offers convenience. The underlying capacity in us atrophies. The experience of the tool shifts from option to necessity. Multiply that by every cognitive domain at once, add personalised affective content tuned to our deepest needs, and you do not need a sentient overlord. You need only a population that has, one offloaded function at a time, forgotten how to do without.

The encouraging part is that the same neuroscience that maps the decline also maps the recovery. Hippocampal volume responds to use. Critical thinking returns when it is practised. Attention can be retrained. The authority we grant to AI — like the authority humans have granted to gods, kings, and ideologies before it — is, in the end, a choice. Choices are harder than habits, but they remain ours.

The almighty is not being built in some distant laboratory. It is being assembled, quietly and politely, in our daily concessions. The timeline to its arrival is, more or less, however long we keep handing things over.


References

  • Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10, 6310. https://doi.org/10.1038/s41598-020-62877-0
  • Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
  • Maguire, E. A., et al. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), 4398–4403.
  • Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776–778.
  • Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
  • Bauman, Z. (2000). Liquid Modernity. Polity Press.
  • Frankl, V. E. (1959). Man’s Search for Meaning. Beacon Press.
  • Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.

Related earlier essays by the author: Tekoäly ja Uusi Kaikkivaltias (Artificial Intelligence and the New Almighty) · Eksistentiaalinen tyhjiö digitaalisella aikakaudella (The Existential Vacuum in the Digital Age) · Totuuden moniäänisyys ja tekoälyn mahdollisuudet (The Polyphony of Truth and the Possibilities of AI)