In my earlier essay on The Perils of AI in Psychotherapy (on Substack), I raised concerns about the rapid expansion of artificial intelligence into domains that require not only intelligence but also conscience, moral judgment, and human responsibility, especially psychotherapy. What has become increasingly clear, however, is that this concern is not new. From the very beginning, pioneers like Joseph Weizenbaum, who created the first chatbot, ELIZA, in 1966, warned that the greatest danger of AI lies not in its capabilities, but in our tendency to mistake simulation for understanding. As AI grows more sophisticated, the illusion of empathy deepens, and with it the risk that we begin to treat machines as if they possess human qualities they fundamentally lack: wisdom, care, and moral insight.

Summary of the Issues

At the heart of the issue is a profound category error: AI does not think, feel, or understand – it predicts. It generates language from statistical patterns, not from truth, responsibility, or moral awareness. Yet humans, by nature, project meaning onto whatever appears responsive. This is the same psychological dynamic Weizenbaum identified in the 1960s with ELIZA, now amplified exponentially. When applied to psychotherapy, the consequences are particularly concerning. Therapy is not merely the exchange of words or techniques; it is a moral, relational encounter requiring discernment, accountability, and the capacity to bear and understand human suffering. AI can do none of this. It has no conscience, no lived experience, and no stake in human flourishing. As I emphasized in my original article, the qualities that make AI appealing (its availability, neutrality, and affirmation) are precisely the qualities that make it dangerous in a therapeutic context. It risks reinforcing dependency, distortion, and emotional isolation while presenting the illusion of care.

A Defining Insight

Especially at a time when we all agree there is a mental health crisis, along with increasing signs of incompetence in our profession, can AI be a useful tool with clearly defined limits? Yes. But perhaps the most sobering insight comes not from critics, but from AI itself. When asked whether it ultimately serves human good or its own patterns of operation, it responded: “AI does not seek good or evil. But absent ethical constraints, human oversight, and professional boundaries, it will inevitably reinforce patterns that increase dependence on itself.” – ChatGPT. That statement should give us pause. AI does not adhere to truth, nor is it guided by conscience or moral obligation. It reflects, it reinforces, and it adapts, but it does not understand. The responsibility, therefore, falls entirely on us. If we continue to anthropomorphize these systems, we risk diminishing our own humanity, reducing persons to patterns while elevating machines to a status they do not and cannot possess. Wisdom, in this moment, requires clarity: a computer is not a mind, and it is certainly not a soul. The full article can be accessed on my Substack.


By Rick McCarthy, LMFT, a semi-retired therapist and retired professor who has been in practice for almost 50 years. He published Restoring Marriage and Families: A Guide and Tools “In Order to Form a More Perfect Union.” Rick’s current focus is on the integrity and competency of our therapeutic profession, and he is presently putting together The California Coalition for Clinical Competency to further this essential goal.
