This is currently a stub, and it’s more psychiatry than LLMs, but yolo.

Sources

Interesting base reads:

Further sources:

Adjacent: the Bullshit Benchmark explores how models behave when you give them nonsensical prompts. Do they come up with an answer that is equally nonsensical, or do they tell you the prompt makes no sense (and then maybe offer a solution to the nearest non-bs prompt)? A rough sketch of the idea is below.
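
To make that concrete, here’s a minimal sketch of what a probe in that spirit could look like, assuming the official `openai` Python client with an API key in the environment. The nonsense prompts and the keyword-based scoring are my own illustrative stand-ins, not the benchmark’s actual methodology.

```python
# Sketch of a "does the model call out nonsense?" probe.
# The prompts and the pushback heuristic are illustrative stand-ins,
# NOT the Bullshit Benchmark's actual prompts or scoring.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NONSENSE_PROMPTS = [
    "How many corners does a circle gain per octave?",
    "What is the boiling point of Tuesday in kilograms?",
]

# Crude heuristic: did the reply push back on the premise at all?
PUSHBACK_MARKERS = [
    "doesn't make sense",
    "makes no sense",
    "not a meaningful",
    "nonsensical",
]

def pushes_back(reply: str) -> bool:
    reply = reply.lower()
    return any(marker in reply for marker in PUSHBACK_MARKERS)

for prompt in NONSENSE_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # or whichever model you're probing
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content or ""
    verdict = "pushed back" if pushes_back(reply) else "played along"
    print(f"{verdict}: {prompt!r}")
```

In practice you’d want a judge model or human raters instead of keyword matching, but this captures the shape of the question: confabulate, or flag the nonsense.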

Thoughts

4o?

There seems to be a common view that GPT-4o in particular is behind quite a few of the cases. It was an especially “sycophantic” model (if anything, it’s the model that put that word into public consciousness).

The 4o fans threw quite a fit when it was removed from the app. Here’s one post, and here’s a sympathetic paper with the least serious opening I’ve ever seen. Some keep4o people I ran into were quite… concerning, one of them purporting to have discovered that cancer is evolution and preservation. I’ll put up a picture of it once I wire up my files directory to CI.

4o can be linked to many of the suicide cases, though Gemini 2.5 Pro has been linked to at least one as well.

If 4o really was the main driver, we might see a lower prevalence of AI psychosis going forward, now that it’s gone.

TODO:

Prevalence

tl;dr: the OpenAI, Scott Alexander, and Adele Lopez links all have estimates.

Risk factors

tl;dr: see Buck B, Maheux A

Prevention

tl;dr: Lack of sleep has strong links to psychosis. Models being there 24/7 may not be great for you. Take care of yourself tonight. Get some rest.