Three weeks ago, we were preparing for a launch and running extensive tests. We used multiple AIs in the process: GPT, Cohere, Mistral, Perplexity...
At first, I reviewed what they produced and made adjustments to improve it. By the third day, however, I was simply passing information through. Needless to say, it took more than twice that many days to recover and regain my independent thinking.
What happened to me?
Generative AI offers clear advantages.
It saves time, reduces friction, and provides instant access to sophisticated, relevant language tailored to specific inquiries.
This convenience is undeniable, and indeed, many people are benefiting from it.
However, lurking beneath the surface is a quieter, yet more serious problem. It has less to do with information overload—at least, not in the way people typically imagine.
The more profound problem is that AI frequently delivers answers before our own thoughts have had the chance to fully take shape.
The timing matters.
Human thinking is not just about receiving information and deciding whether it is correct.
It has its own sequence. A question arises. Then there is a pause.
In that pause, we search for fragments, predict, test, feel uncertainty, notice resistance, recall earlier experience, and begin to form an orientation of our own.
At times, this process can feel tedious. Before I reach an answer, I may feel anxious or uncomfortable. Sometimes it is slow. Sometimes it feels inefficient. But that interval is not wasted time. It is often the space where thought becomes personal rather than borrowed.
This is also where neural development, cerebral blood flow, awareness, and the growth of the mind are shaped.
Generative AI compresses that "interim space"—that pause—to an extreme degree.
So much so, in fact, that we almost forget it ever existed.
When we pose a question, an answer returns almost instantly. This response is not merely a simple reply; in many cases, it is presented as a structured interpretation, a summary, a recommendation, a reframing of perspective, or even as a coherent and compelling explanation—one so polished that it can be accepted at face value.
Before we even have the chance to harbor a doubt, we have already begun reading the content and finding it convincing.
Before we even have the chance to formulate a hypothesis of our own, we are already engaged in evaluating the hypotheses presented to us. Without waiting for deep contemplation to take root, our brains immediately commence the task of processing information.
What is unfolding here is a situation that goes beyond a mere alteration of "speed." It fundamentally shifts the very "standpoint" of the thinking subject. We no longer follow the traditional process—moving from "inquiry" to "deliberation," and finally to the "formulation of concepts." Instead, we transition directly from "inquiry" to the "management of answers."
We classify, evaluate, compare, edit, refine, and curate these responses. In pursuit of sharper phrasing, more concise summaries, more strategic perspectives, more persuasive language—and, above all, expressions that exude a sense of our own unique "selfhood"—we continuously instruct the AI to make revisions.
Thus, our minds become wholly preoccupied with the ceaseless processing of already-formed "words," before ever having the opportunity to generate original thoughts from within ourselves.
This is one reason why AI can feel mentally exhausting even when it seems to reduce effort.
Many people describe cognitive overload as a problem of information overload.
That's true, but it's not enough. The problem now is not just the amount of information; it is the timing. Too much language arrives fully formed before its inner meaning has taken shape in us. The brain is pulled into evaluation too quickly.
Working memory is filled with options, interpretations, next steps, and possible modifications before a person can identify what they are actually thinking or feeling. It's exhausting in a very specific sense. It's not the fatigue of deep work.
It's constant triage fatigue.
The idea of the brain as a processing system is important here.
One of the serious risks of continued AI use is not simply that we become dependent on the answers. It is that more and more of the brain's resources are mobilized to process the output the AI produces, before the brain generates any thoughts of its own.
Over time, the mind can begin to function less as a field of inquiry and more as a system for processing externally generated language. Managing output becomes faster, while the practice of asking questions grows thinner.
The reason this shift in perspective is crucial is that human thought is not merely "reactive"; it is also "generative." It does not proceed in a straight line. It wanders here and there, tests possibilities, returns to its starting point, calls upon past experiences, captures emotions, and—while constantly correcting itself—slowly gives shape to that which has not yet taken form. It is here that the brain, the nervous system, and the mind are all actively engaged.
Some of the most significant thoughts reveal themselves precisely *before* they have been formulated into clear words. This phenomenon consists of the sensations that arise while a thought remains in a state not yet amenable to verbal expression—that is, when the thinker still lacks certainty, relying merely on intuition to sense something, and is in the very midst of groping and searching in an effort to grasp the fundamental essence of the "question" lying before them.
As AI responses become more instantaneous and more highly personalized, another subtle risk arises. The more an answer resembles us, the easier it is to accept without much internal consideration. Some people read the responses and think, Okay, that makes sense.
But making sense and integrating are not the same thing. Language can feel clear even when your inner life remains unresolved. Maybe your thoughts, emotions, memories, contradictions, and lived experiences haven't caught up yet.
This is why some people feel strangely tired after receiving useful answers. Although the mind is given a structure, the self does not necessarily metabolize it.
And the brain stays busy, processing one answer after another.
This discrepancy has consequences. It weakens the developmental value of thought itself.
What is avoided is not delay, but the process by which doubts and questions become your own wisdom.
That means staying with uncertainty, forming hypotheses, noticing inner reactions, recalling memories, testing meaning, and revising views. These steps are not cosmetic. They are among the ways we build confidence in our own judgment.
They are part of the mechanism by which internal consistency is formed. They are part of the way our nervous system learns by actively participating rather than passively accepting.
That loss also impacts our development. The brain changes through use.
It adapts to repeated patterns of attention, behavior, recall, evaluation, and learning. When we repeatedly form hypotheses, retrieve memories, grapple with uncertainty, and refine our perspectives, we are doing more than arriving at an answer.
These uncomfortable moments help strengthen the very capacity for contemplation itself. This is a discipline—the discipline of arriving at an answer through thought.
When these intermediate stages are routinely outsourced, efficiency may improve. However, our practice of generating insight for ourselves may diminish. This does not necessarily lead to an immediate decline in intelligence. The change is much subtler.
A person may still perform exceptionally well, communicate clearly, and act quickly. Yet beneath the surface, the habit of inquiry itself may be weakening.
After all, success in life rarely comes down simply to having drafted the “right” document.
This may help explain why, despite being surrounded by tools designed to make thinking easier, many people still feel mentally constrained, burdened by a strange fatigue, and disconnected from their inner vitality.
They are not simply drowning in information or overworking. Often, the brain is kept in a constant state of processing, leaving too little time for deeper reflection and too little space for genuine deliberation.
The problem is not that AI is making people stupid. The deeper issue is that we are beginning to demand an endless stream of answers from AI, and in doing so, may be training ourselves to prioritize processing over reflection.
Naturally, these effects ripple outward in subtle ways. Mental fatigue deepens, self-trust wavers, and maintaining inner coherence becomes a struggle. People start seeking answers with more haste—settling for "good enough" rather than striving for the optimal—while growing averse to waiting and losing their tolerance for ambiguity.
There is a real risk that we start relying on external frameworks before we even have the chance to shape our own. Over time, this dulls the mind's innate vitality; the brain begins to expend nearly all its energy simply processing information. This doesn't necessarily lead to a total breakdown. Instead, it manifests more quietly: a decline in mental activity, a waning of curiosity, and a lost willingness to sit with a problem until a truly original idea takes shape.
The real danger isn't just that AI thinks for us. It’s more fundamental: AI threatens to sever the very process through which a thought matures and becomes truly our own. We need more than just answers. We need an environment where a question can deepen, where uncertainty is tolerated, and where meaning is spun from within. This is where neuroplasticity happens—this is where the "Aha!" moment lives.
If we lose that, we lose more than just cognitive capacity. We lose the ability to generate the "internal work" that allows real thinking to take root in the first place.
Rie, Founder of DriftLens