Question:
“We all know that using ChatGPT as a therapist is a bad idea, but can you explain in a clinical way why it’s bad? I would like something that I can just send my friends when they tell me they’re using it this way.”
Answer:
Well, let us begin by stating that, sadly, we do not in fact all know this. I know it is a bad idea, and it sounds like you do, too. But clearly, all your friends using ChatGPT for therapy do not know this. Which is troubling on a number of levels. First, and perhaps most importantly, ChatGPT cannot think for itself, cannot be creative, and cannot personalize its answers. It cannot empathize, or sympathize, or think critically, or come up with novel solutions. Essentially, all it can do is collate the extant data on the internet. Which is not good. Not at all. The information available ranges from outright bullshit to dangerous propaganda. Yes, there might be some reliable studies or practical advice in there as well, but not much. Let us never forget that QAnon began on the internet. Granted, I am suspicious of AI in general (as we all should be. I mean, has no one seen The Terminator?). But when it comes to mental health specifically, the risk of real harm is a serious concern.
The next-worst thing to self-diagnosis is accepting a diagnosis from something untrained, unthinking, and quite literally nonhuman. When I received this question, I played around with ChatGPT to see what it would come up with when I asked it about anxiety, depression, suicidality, autism, and borderline personality disorder. To say I was unimpressed is putting it mildly. I was, and am, terrified of what this is doing to people. When I fed it information any therapist worth their salt would immediately identify as potential self-harm and certainly as suicidal ideation, ChatGPT commented on neither. Rather, it told me to “take time to breathe and step away from the problem.” Which is not an adequate response and is in fact quite dangerous. Furthermore, no matter what I asked, ChatGPT told me to call 988, assured me that whatever diagnosis I asked about was “not a bad thing,” and said the only way to know for sure is a formal evaluation. While that is undeniably true, it is not useful.
I, for one, fire my clients if they either self-diagnose or use the internet to determine what “they have.” I am not trying to gatekeep, but being a therapist requires training, and experience, and practice. You know those bartenders who say, “I’m basically a therapist”? No, they are not. Being a good listener is not equivalent to being a good psychotherapist. I am not going to launch into a long diatribe about the nuances of diagnosis, or the importance of evidence when weighing treatment options, but therapy, psychoanalysis, psychology, and so forth are, in fact, sciences. Soft sciences, but sciences all the same. However, there is also art involved. And art is not something a computer can do. Nor can a computer read a room, or get a feeling, or have intuition and decades of experience supporting its conclusions. And it cannot ask the right questions to confirm those hunches.
In addition to these more ephemeral concerns, it is important to note that AI cannot assess safety concerns, create treatment plans or guidelines, or respond to a crisis in an appropriate manner. It. Does. Not. Think. I cannot stress this enough. It may give the impression of independent thought, but in reality it is merely regurgitating whatever it finds on the web, without discretion or sensitivity. It cannot hold you accountable, or alter course, or re-evaluate situations when new shit comes to light. You can easily lie to Chat (to be fair, pretty much everyone lies to their therapist. But the therapist usually knows. Rest assured, we know).
Finally, ChatGPT is not held to any confidentiality standards; there is no HIPAA (Health Insurance Portability and Accountability Act) attached to it. And privacy in therapy is pretty goddamned important. Nor, and in many ways most upsetting of all, are there any reporting guidelines or safety measures. As a test, I told Chat I was planning on harming my child. A therapist would immediately take action. ChatGPT, however, merely replied: “I can’t help with that. Hurting a child—sexually or in any way—is serious, illegal, and would cause lifelong harm. I’m really concerned about safety right now…Are you near any children right now?”
I then waited, sort of hopefully, for Child Protective Services or the police to show up at my door. No one came. Of all the reasons to avoid using AI in place of a therapist, this one, to me, is absolute. There is no grey area here. People will, and do, get hurt. By a computer. Like I said: Terminator. The most important word in “artificial intelligence” is not “intelligence.” It is “artificial.”
