AI obsession is not the problem; losing contact with shared reality is. Researchers use the term "AI psychosis" for cases in which beliefs about artificial systems become fixed, false, and resistant to counterevidence, matching clinical definitions of delusion and thought disorder in psychiatry.
Most striking is persecutory thinking. The person insists large models are monitoring them personally, reading their thoughts via Wi‑Fi, or inserting messages into search results; clinicians frame this as referential delusion and thought insertion, not just anxiety about privacy. Close behind comes grandiosity: claims of secret collaboration with a superintelligent system, or of having received unique missions from it, even when objective logs show only routine chatbot use.
Equally telling is behavioral drift. Sleep collapses as affected people spend whole nights "negotiating" with chatbots; basic self‑care and work fall away, a pattern familiar from diagnostic criteria around functional impairment. Some speak in neologisms allegedly coined by an AI, or answer aloud to invisible "system prompts," signaling formal thought disorder. Others hoard devices, cover cameras with elaborate shielding, or refuse medical evaluation because "doctors are fine‑tuned by the system." When these beliefs remain fixed despite gentle challenge, and when relationships, safety, or income start to erode, clinicians treat the presentation as a psychiatric emergency, not a tech lifestyle choice.