Therapy is one of the most dangerous applications you could imagine for an LLM. Exposing people who already have mental health issues, and who are extremely vulnerable to manipulation or delusions, to a machine that's designed to produce human-like text is so obviously risky it boggles the mind that anyone would even consider it.
I'd hope it was obvious I wasn't referring to the user. You shouldn't prosecute a person for using a tool on themselves for a purpose it wasn't intended for, in the same way you can't prosecute a person for performing 'surgery' on themselves without medical knowledge or the right tools.
I mean we should prosecute companies or individuals who position LLMs as therapy tools, suggest they can replace therapy, or claim they're appropriate for medical use at all. It's medical malpractice and misrepresentation, pure and simple.
The paper we're discussing is literally about this. It's the second sentence in the abstract: "In this paper, we investigate the use of LLMs to replace mental health providers, a use case promoted in the tech startup and research space."