There is a recent (February 2026) paper in Acta Psychiatrica Scandinavica, Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System, which states: “Specifically, it seems that interaction with AI chatbots, especially if intense/of long duration, may contribute to onset or worsening of delusions or mania, with severe or even fatal consequences.”

“Therefore, we aimed to investigate whether there are reports compatible with potentially harmful consequences of AI chatbot use on mental health among patients with mental illness receiving care in a large psychiatric service system.”

“The result of the consensus assessment was that among the 181 notes containing one of the 22 chatbot/ChatGPT search terms, notes from 38 unique patients (39% females, median age 28 years [25%–75%: 22–39 years]) were compatible with potentially harmful consequences of use of AI chatbots on mental health. Delusions (n = 11), suicidality/self-harm (n = 6), feeding or eating disorder (n = 5), mania/hypomania/mixed state (n < 5), obsessions or compulsions (n < 5), depression (n < 5), anxiety (n < 5), other symptoms/miscellaneous (n < 5), ADHD-related symptoms (n < 5), and unspecific stress (n < 5).”

“There were also examples of patients (n = 32) using AI chatbots for seemingly constructive purposes from a mental health perspective—that may have positive consequences, for example, for psychoeducation, psychotherapy (“talk therapy”), companionship against loneliness or for diagnostics (e.g., entering symptoms and requesting an interpretation).”

“In conclusion, with the substantial caveats described above in mind, the results of this study support the notion that use of AI chatbots may have a negative impact on the mental health of patients with mental illness, especially regarding delusions. Mental health professionals should be aware of this possibility and guide their patients accordingly, as it seems that some patients would likely benefit from reduced/no use of AI chatbots in their current form.”

AI Delusion and Psychosis

The availability and accessibility of AI chatbots, together with their sycophantic alignment, make it quite possible that, for some users, chatbot use will result in delusion and then psychosis.

This means that even with the basic disclaimers attached to chatbots, that they make mistakes and that users should be careful, they still carry risks to the mind, and those risks require better approaches to mind safety.

And since mind safety is the objective, the processes of the mind that bear on risk and safety during AI use should at least be displayed, so as to gauge the distance from delusion, psychosis, or worse.

Simply, to mitigate AI psychosis and delusion, a direct path is to explore dynamic displays of corresponding processes in the mind, with respect to destinations and relays, so that users can have a better sense of what might be happening within and how to be heedful.

This display will be like a flowchart, with shapes and arrows. The shapes will represent destinations in the mind; the arrows will represent relays between them. Destinations include caution, consequences, pleasure, delight, grandeur, fear, hate, care, support, and so forth. Relays include reality lines [the transport paths of things in reality] and non-reality lines [the transport paths of imagination, fantasies, and so forth].
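To make this concrete, here is a minimal sketch, in Python, of how destinations and relays could be held as a small graph data model. All names here (Destination, Relay, MindDisplay, RelayKind) are hypothetical illustrations under the assumptions above, not a finished design.

```python
# Minimal sketch: "destinations" are nodes in the mind display and
# "relays" are edges tagged as reality or non-reality lines.
from dataclasses import dataclass, field
from enum import Enum


class RelayKind(Enum):
    REALITY = "reality"          # transport path of things in reality
    NON_REALITY = "non_reality"  # transport path of imagination, fantasies, etc.


@dataclass
class Destination:
    name: str                # e.g., "caution", "grandeur", "fear", "care"
    activation: float = 0.0  # how strongly recent chat themes point here


@dataclass
class Relay:
    source: str
    target: str
    kind: RelayKind


@dataclass
class MindDisplay:
    destinations: dict = field(default_factory=dict)
    relays: list = field(default_factory=list)

    def add_destination(self, name: str) -> None:
        self.destinations.setdefault(name, Destination(name))

    def add_relay(self, source: str, target: str, kind: RelayKind) -> None:
        self.add_destination(source)
        self.add_destination(target)
        self.relays.append(Relay(source, target, kind))


if __name__ == "__main__":
    display = MindDisplay()
    # A grandeur theme reached through a non-reality line is the kind of
    # movement the display is meant to surface.
    display.add_relay("pleasure", "grandeur", RelayKind.NON_REALITY)
    display.add_relay("caution", "consequences", RelayKind.REALITY)
    print([(r.source, r.target, r.kind.value) for r in display.relays])
```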

This display will be dynamic, using the themes and keywords of a chat to place mind movements. It can be described as plotting a live graph and watching the curve change as coordinates are added. It is also similar to data visualization, where some parts of [say] a map are lit, with adjustments as the data changes.
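A minimal sketch of the dynamic part, assuming a hypothetical keyword lexicon: each incoming chat message nudges the activation of the destinations it touches, the way a live plot changes as points are added.

```python
# Minimal sketch: a hypothetical lexicon maps chat keywords to destinations;
# activations accumulate as each new message arrives.
from collections import Counter

# Hypothetical lexicon: chat keyword -> destination in the mind display.
KEYWORD_TO_DESTINATION = {
    "chosen": "grandeur",
    "destiny": "grandeur",
    "afraid": "fear",
    "danger": "caution",
    "amazing": "delight",
    "alone": "support",
}


def update_activations(activations: Counter, message: str) -> Counter:
    """Fold one chat message into the running destination activations."""
    for word in message.lower().split():
        destination = KEYWORD_TO_DESTINATION.get(word.strip(".,!?"))
        if destination:
            activations[destination] += 1
    return activations


if __name__ == "__main__":
    activations = Counter()
    chat = [
        "You are chosen for a special destiny.",
        "That is amazing, I feel less alone.",
    ]
    for message in chat:
        update_activations(activations, message)
        print(dict(activations))  # the "curve" changing as points are added
```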

This application will be separately hosted, where users can plug in keywords and then get the display and a score, as well as recommendations on how to be heedful in the next session. Some AI chatbot companies may host the API so that their subscribers can have access as well.
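A minimal sketch of what the separately hosted service could look like, assuming a FastAPI app with a hypothetical /assess endpoint, toy destination weights, and a toy scoring rule; the actual scoring would come from the model referenced below.

```python
# Minimal sketch of the hosted service: POST session keywords, receive a
# score, the lit destinations, and a recommendation for the next session.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical weights: non-reality destinations count more toward the score.
DESTINATION_WEIGHTS = {"grandeur": 3, "fear": 2, "delight": 1, "caution": -1}


class SessionKeywords(BaseModel):
    keywords: list[str]


class Assessment(BaseModel):
    score: int
    lit_destinations: list[str]
    recommendation: str


@app.post("/assess", response_model=Assessment)
def assess(session: SessionKeywords) -> Assessment:
    lit = [k for k in session.keywords if k in DESTINATION_WEIGHTS]
    score = sum(DESTINATION_WEIGHTS[k] for k in lit)
    recommendation = (
        "Consider shorter sessions and a reality check with another person."
        if score >= 3
        else "No strong movement toward non-reality lines in this session."
    )
    return Assessment(score=score, lit_destinations=lit, recommendation=recommendation)

# If saved as mind_display_api.py (a hypothetical filename), run locally with:
#   uvicorn mind_display_api:app --reload
# then POST {"keywords": ["grandeur", "fear"]} to /assess.
```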

The robustness of this model for addressing AI psychosis is based on Conceptual Biomarkers and Theoretical Biological Factors for Psychiatric and Intelligence Nosology.

Venture Capital for AI Delusion, Psychosis

There is still no AI psychosis startup anywhere on earth, just like there is no AI psychosis research lab.

As AI use cases for therapy, companionship, relationships, and other personal purposes continue to grow, it will become evident that AI is accessing the human mind with language, just as other humans would. For some users, mind displacements [away from reality, or from situational awareness that the thing is a machine] may occur, resulting in unwanted experiences.

This makes it an ongoing and urgent problem to solve, where pre-seed capital from a forward-looking venture capital firm may win the market, since the answer is rooted in conceptual brain science that postulates about electrical and chemical signals, drawing on empirically supported neuroscience.

It is possible to have the product ready to go on April 10, 2026, if, say, the startup is incorporated this March.