NHacker Next
Illinois limits the use of AI in therapy and psychotherapy (washingtonpost.com)
361 points by reaperducer 14 hours ago | 206 comments
hathawsh 13 hours ago [-]
Here is what Illinois says:

https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...

I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.

Am I wrong? This sounds good to me.

PeterCorless 13 hours ago [-]
Correct. It is more of a provider-oriented proscription ("You can't say your chatbot is a therapist."). It is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.

There is a specific section that relates to how a licensed professional can use AI:

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:

(1) the patient or the patient's legally authorized representative is informed in writing of the following:

(A) that artificial intelligence will be used; and

(B) the specific purpose of the artificial intelligence tool or system that will be used; and

(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Source: Illinois HB1806

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

janalsncm 12 hours ago [-]
I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.

Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.
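
For anyone curious, a rough sketch of what fully local transcription can look like with the open-source openai-whisper package (the model choice and file name are just placeholders; it also assumes ffmpeg is installed):

    # Runs entirely on the local machine; no audio or text leaves the device.
    import whisper

    model = whisper.load_model("base.en")            # small enough for a laptop CPU
    result = model.transcribe("session_audio.wav")   # placeholder file name
    print(result["text"])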

WorkerBee28474 12 hours ago [-]
Last I checked, the popular medical transcription services did send your data to the cloud and run models there.
ceejayoz 11 hours ago [-]
Yes, but with extra contracts and rules in place.
lokar 10 hours ago [-]
At least in the us I think HIPPA would cover this, and IME medical providers are very careful to select products and services that comply.
heyjamesknight 9 hours ago [-]
Yes, but HIPAA is notoriously vague with regards to what actual security measures have to be in place. It's more of an agreement between parties as to who is liable in case of a breach than it is a specific set of guidelines like SOC 2.

If your medical files are locked in the trunk of a car, that’s “HIPAA-compliant” until someone steals the car.

loeg 9 hours ago [-]
It's "HIPAA."
esseph 8 hours ago [-]
It was just last week that I learned about HIPAA Hippo!
romanows 13 hours ago [-]
Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".

It's not obvious to me as a non-lawyer whether a chat history could be decided to be "therapy" in a courtroom. If so, this could count as a violation. Probably lots of law around this stuff already for lawyers and doctors cornered into giving advice at parties that might apply (e.g., maybe a disclaimer is enough to work around the prohibition)?

germinalphrase 13 hours ago [-]
Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.
gopher_space 8 hours ago [-]
After a bit of consideration I’m actually ok with codifying Bad Ideas. We could expand this.
lupire 13 hours ago [-]
Most "therapy" services are not providing a diagnosis. Diagnosis comes from an evaluation before therapy starts, or sometimes not at all. (You can pay to talk to someone without a diagnosis.)

The prohibition is mainly on accepting any payment for advertised therapy service, if not following the rules of therapy (licensure, AI guidelines).

Likewise for medicine and law.

bluefirebrand 12 hours ago [-]
Many therapy services have the ability to diagnose as therapy proceeds though
pessimizer 4 hours ago [-]
For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.

Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.

The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.

watwut 3 hours ago [-]
If you think human therapists intentionally string patients forever, wait to see what tech people can achieve with gamified therapists literally A/B tested to string people along. Oh, and we will then blame the people for "choosing" to engage with that.

Also, the proposition is dubious, because there are waitlists for therapists. Plus, a therapist can actually lose their license while the chatbot can't, no matter how bad the chatbot gets.

fl0id 41 minutes ago [-]
This. At least here therapists don’t have a problem getting new patients.
stocksinsmocks 10 hours ago [-]
I think this sort of service would be OK with informed consent. I would actually be a little surprised if there were much difference in patient outcomes.

…And it turns out it has been studied, with findings that AI works, but humans are better.

https://pmc.ncbi.nlm.nih.gov/articles/PMC11871827/

amanaplanacanal 9 hours ago [-]
Usually when it comes to medical stuff, things don't get approved unless they are better than existing therapies. With the shortage of mental health care in the US, maybe an exception should be made. This is a tough one. We like to think that nobody should have to get second rate medical care, even though that's the reality.
taneq 3 hours ago [-]
I think a good analogy would be a cheap, non-medically-approved (but medical style) ultrasound. Maybe it’s marketed as a “novelty”, maybe you have to sign a waiver saying it won’t be used for diagnostic purposes, whatever.

You know that it’s going to get used as a diagnostic tool, and you know that people are going to die because of this. Under our current medical ethics, you can’t do this. Maybe we should re-evaluate this, but that opens the door to moral hazard around cheap unreliable practices. It’s not straightforward.

IIAOPSW 7 hours ago [-]
I'll just add that this has certain other interesting legal implications, because records in relation to a therapy session are a "protected confidence" (or whatever your local jurisdiction calls it). What that means is in most circumstances not even a subpoena can touch it, and even then special permissions are usually needed. So one of the open questions on my mind for a while now was if and when a conversation with an AI counts as a "protected confidence" or if that argument could successfully be used to fend off a subpoena.

At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.

linotype 11 hours ago [-]
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?
rsynnott 4 minutes ago [-]
I mean, what if at some point we can bring people back from the dead? What does that do for laws around murder, eh?

In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.

awesomeusername 11 hours ago [-]
I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.

Instead of the rich getting access to the best professionals, it will level the playing field. The average low level lawyer, doctor, etc are not great. How nice if everyone got top level help.

fl0id 36 minutes ago [-]
When has technological progress leveled the playing field? Like never. At best it shifted it, like when a machine manufacturer got rich in addition to existing wealth. There is no reason for this to go differently with AI, and it's far from certain that it will become better at anything anytime soon. Cheaper, sure. But then people might see slight improvements from talking to an original Eliza/Markov bot, and nobody advocated using those as therapy.
zdragnar 9 hours ago [-]
It would still need to be regulated and licensed. There was this [0] I saw today about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.

With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.

[0] https://x.com/AnnalsofIMCC/status/1953531705802797070

terminalshort 6 hours ago [-]
You cite one case for LLMs, but I can cite 250,000 a year for licensed doctors doing the same https://pubmed.ncbi.nlm.nih.gov/28186008/. Bureaucracy doesn't work for anyone but the bureaucrats.
laserlight 5 hours ago [-]
Please show me one doctor who recommended taking a rock each day. LLMs have a different failure mode than professionals. People are aware that doctors or therapists may err, but I've already seen countless instances of people asking relationship advice from sycophantic LLMs and thinking that the advice is “unbiased”.
terminalshort 23 minutes ago [-]
An LLM (or doctor) recommending that I take a rock can't hurt me. Screwing up in more reasonable-sounding ways is much more dangerous.
shmel 3 hours ago [-]
Homeopathy is a good example. For an uneducated person it sounds convincing enough and yes, there are doctors prescribing homeopathic pills. I am still fascinated it still exists.
fl0id 33 minutes ago [-]
That’s actually an example of something different. And as it’s basically a placebo, it only harms people’s wallets (mostly). That cannot be said for random LLM failure modes. And whether it can be prescribed by doctors depends very much on the country.
II2II 8 hours ago [-]
There are two different issues here. One is tied to how authoritative we view a source, and the other is tied to the weaknesses of the person receiving advice.

With respect to the former, I firmly believe that the existing LLMs should not be presented as a source for authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes such, in the sense that it is something that people have to deal with outside of the technological realm anyhow. For example, if you ask a friend for help, you do so with the understanding that, as a friend, they are helping to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork for you to ensure accuracy, or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.

I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a basic Google search result coupled with an authoritative sounding name automatically sounds more accurate than a Google search result that is a blog posting. (I'm not suggesting this is the only criteria people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM, so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because they provide so few cues.

Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people to regard the results from LLMs as stepping stones. That is to say, they should take the results and do research that will either confirm or deny them. But, of course, many people are lazy and nobody has the expertise to analyze the output of an LLM outside of their personal experience/training.

nullc 7 hours ago [-]
You don't need a "regulated license" to hold someone accountable for harm they caused you.

The reality is that professional licensing in the US often works to shield its communities from responsibility, though its primary function is just preventing competition.

guappa 2 hours ago [-]
I wish I was so naive… but since AI is entirely in the hands of people with money… why would that possibly happen?
jakelazaroff 9 hours ago [-]
Why is that a foregone conclusion?
quantummagic 8 hours ago [-]
Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica. Given enough time, we'll create that replica, there's no reason to think otherwise.
jakelazaroff 8 hours ago [-]
Even if we grant that for the sake of argument, there are two leaps of faith here:

- That AI as it currently exists is on the right track to creating that replica. Maybe neural networks will plateau before we get close. Maybe the Von Neumann architecture is the limiting factor, and we can only create the replica with a radically different model of computing!

- That we will have enough time. Maybe we'll accomplish it by the end of the decade. Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance. Maybe it'll happen in a million years, when humans have evolved into other species. We just don't know!

quantummagic 8 hours ago [-]
I don't think you've refuted the point though. There's no reason to think that the apparatus we employ to animate ourselves will remain inscrutable forever. Unless you believe in a religious soul, all that stands in the way of the scientific method yielding results, is time.

> Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance

In that eventuality, it really doesn't matter. The point remains, given enough time, we'll be successful. If we aren't successful, that means everything else has gone to shit anyway. Failure won't be because it is fundamentally impossible; it will be because we ran out of time to continue the effort.

jakelazaroff 7 hours ago [-]
No one has given a point to refute? The OP offered up the unsubstantiated belief that AI will some day be better than doctors/therapists/etc. You've added that it's not impossible — which, sure, whatever, but that's not really relevant to what we're discussing, which is whether it will happen to our society.
quantummagic 7 hours ago [-]
OP didn't specify a timeline or that it would happen for us personally to behold. Just that it is inevitable. You've correctly pointed out that there are things that can slow or even halt progress, but I don't think that undermines (what I at least see as) the main point. That there's no reason to believe anything fundamental stands in our way of achieving full "artificial intelligence"; ie. the doubters are being too pessimistic. Citing the destruction of humanity as a reason why we might fail can be said about literally every single other human pursuit as well; which to my mind, renders it a rather unhelpful objection to the idea that we will indeed succeed.
jakelazaroff 7 hours ago [-]
The article is about Illinois banning AI therapists in our society today, so I think the far more reasonable interpretation is that OP is also talking about our society today — or at least, in the near-ish future. (They also go on to talk about how it would affect different people in our society, which I think also points to my interpretation.)

And to be clear, I'm not even objecting to OP's claim! All I'm asking for is an affirmative reason to believe what they see as a foregone conclusion.

quantummagic 7 hours ago [-]
Well, I've already overstepped polite boundaries in answering for the OP. Maybe you're right, and he thinks such advancements are right around the corner. On my most hopeful days, I do. Let's just hope that the short term reason for failure isn't a Mad Max hellscape.
shkkmo 6 hours ago [-]
> Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica

That is a big assumption and my doubts aren't based on any soul "magic" but on our historical inability to replicate all kinds of natural mechanisms. Instead we create analogs that work differently. We can't make machines that fly like birds but we can make airplanes that fly faster and carry more. Some of this is due to the limits of artificial construction and some of it is due to the differences in our needs driving the design choices.

Meat isn't magic, but it also isn't silicon.

It's possible that our "meat" architecture depends on a low internal latency, low external latency, quantum effects and/or some other biological quirks that simply can't be replicated directly on silicon based chip architectures.

It's also possible they are chaotic systems that can't be replicated, and each artificial human brain would require equivalent levels of experience and training in ways that don't make them any cheaper or more available than humans.

It's also possible we have found some sort of local maximum in cognition and even if we can make an artificial human brain, we can't make it any smarter than we are.

There are some good reasons to think it is plausibly possible, but we are simply too far away from doing it to know for sure whether it can be done. It definitely is not a "foregone conclusion".

quantummagic 5 hours ago [-]
> We can't make machines that fly like birds

Not only can we, they're mere toys: https://youtu.be/gcTyJdPkDL4?t=73

--

I don't know how you can believe in science and engineering, and not believe all of these:

1. Anything that already exists, the universe is able to construct, (ie. the universe fundamentally accommodates the existence of intelligent objects)

2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.

3. While some things are astronomically (literally) difficult to achieve, that doesn't nullify #2

4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans. The universe has already shown us their creation is possible.

This is different than, for instance, speculating that science will definitely allow us to live forever. There is no existence proof for such a thing.

But there is no reason to believe that we can't manipulate and harness intelligence. Maybe it won't be with Von Neumann, maybe it won't be with silicon, maybe it won't be any smarter than we are, maybe it will require just as much training as us; but with enough time, it's definitely within our reach. It's literally just science and engineering.

shkkmo 4 hours ago [-]
> 1. Anything that already exists, the universe is able to construct

I didn't claim that we couldn't build meat brains. I claimed it is possible that equivalent or better performance might only be obtainable by meat brains.

> 2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.

I actually don't believe the last part. There are quite plausibly laws of nature that we can't understand. I think it's actually pretty presumptuous that we will/can eventually understand and master every law of nature.

We've already proven that we can't prove every true thing about natural numbers. I think there might well be limits on what is knowable about our universe (at least from inside of it).

> 4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans.

I didn't say that I believed that humans can't create intelligent objects. I believe we probably can and depending on how you want to define "intelligence", we already have.

What I said is that it is not a foregone conclusion that we will create "a better therapist, doctor, architect". I think it is pretty likely but not certain.

kevin_thibedeau 6 hours ago [-]
It's sort of nice when medical professionals have real emotions and can relate to their patients. A machine emulation won't ever do the same. It will be like a narcissist faking empathy.
sssilver 9 hours ago [-]
Wouldn’t the rich afford a much better trained, larger, and computationally more intensive model?
kolinko 4 hours ago [-]
With most tech we reach the law of diminishing returns. Sure, there is still variation, but very little:

- the best laptop/phone/TV in the world doesn't offer much more than the most affordable

- you can get a pen for free nowadays that is almost as good at writing as the most expensive pens in the world (before BIC, in the 1920s, pens were a luxury good reserved for Wall Street)

- toilets, washing machines, heating systems and beds in the poorest homes are not very far off from those in expensive homes (in the EU at least)

- flying/travel is similar

- computer games and entertainment, and software in general

The more we remove human work from the loop, the more democratised and scalable the technology becomes.

socalgal2 8 hours ago [-]
does it matter? If mine is way better than I had before, why does it matter that someone else's is better still? My sister's $130 Moto G is much better than whatever phone she could afford 10 years ago. Does it matter that it's not a $1599 iPhone 16 Pro Max 1TB?
esseph 8 hours ago [-]
If the claim was that it would level the playing field, it seems like it wouldn't really do that?
Mtinie 8 hours ago [-]
I agree with you that the possibility of egalitarian care at low cost is becoming very likely.

I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.

intended 8 hours ago [-]
Why will any of those things come to pass? I’m asking as someone who has used it extensively for such situations.
II2II 9 hours ago [-]
I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.

At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer because, yeah, it was employment related stress and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLM's as they stand today, you don't have that direct relationship between who is offering it and the people in your life.

On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.

Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who could care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future since I suspect the vendors of these models won't push for it until they have established their role in the market place.

nullc 7 hours ago [-]
Why do you think the lack of time limits is an advantage?

There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.

You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.

And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.

Oh, and, aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them they'll very likely be obligated to hand them over and you may have no standing to resist it.

adgjlsfhk1 10 hours ago [-]
laws can be repealed when they no longer accomplish their aims.
jaredcwhite 11 hours ago [-]
What if pigs fly?
bko 11 hours ago [-]
Then we'll probably do what we do with other professional medical fields. License the AI, require annual fees and restrict supply by limiting the number of running nodes allowed to practice at any one time.
chillfox 6 hours ago [-]
Then laws can be changed again.
reaperducer 9 hours ago [-]
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?

Probably they'll change the law.

Hundreds of laws change every day.

guappa 2 hours ago [-]
But didn't Trump make it illegal to make laws to limit the use of AI?
malcolmgreaves 1 hours ago [-]
Why do you think a president had the authority to determine laws?
romanows 13 hours ago [-]
In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.
hathawsh 13 hours ago [-]
That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer the question clearly until something bad happens and people go to court (sadly.) Also IANAL.
wombatpm 10 hours ago [-]
But that would be like needing a prescription for chicken soup because of its benefits in fighting the common cold.
olalonde 11 hours ago [-]
What's good about reducing options available for therapy? If the issue is misrepresentation, there are already laws that cover this.
lr4444lr 8 hours ago [-]
It's not therapy.

It's simulated validating listening, and context-lacking suggestions. There is no more therapy being provided by an LLM than there is healing performed by a robot arm that slaps a bandage on your arm if you were to put it in the right spot and push a button to make it pivot toward you, find your arm, and spread it lightly.

SoftTalker 8 hours ago [-]
For human therapists, what’s good is that it preserves their ability to charge high fees because the demand for therapists far outstrips the supply.

Who lobbied for this law anyway?

guappa 2 hours ago [-]
And for human patients it makes sure their sensitive private information isn't entirely in the hands of some megacorp which will harvest it to use it and profit from it in some unethical way.
r14c 9 hours ago [-]
It's not really reducing options. There's no evidence that LLM chat bots are capable of providing effective mental health services.
dsr_ 10 hours ago [-]
We've tried that, and it turns out that self-regulation doesn't work. If it did, we could live in Libertopia.
tomjen3 6 hours ago [-]
The problem is that it leaves nothing for those who cannot afford to pay for the full cost of therapy.
turnsout 13 hours ago [-]
It does sound good (especially as an Illinois resident). Luckily, as far as I can tell, this is proactive legislation. I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist, or attempting to bill payers for service.
duskwuff 7 hours ago [-]
> I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist

Unfortunately, there are already a bunch.

danenania 11 hours ago [-]
While I agree it’s very reasonable to ban marketing of AI as a replacement for a human therapist, I feel like there could still be space for innovation in terms of AI acting as an always-available supplement to the human therapist. If the therapist is reviewing the chats and configuring the system prompt, perhaps it could be beneficial.
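
As a rough sketch of what I mean (assuming the OpenAI Python SDK; the system prompt, model name, and log path are placeholders):

    # The therapist writes the system prompt and can read the saved transcript
    # between sessions; the model only fills the always-available gap.
    import json
    from openai import OpenAI

    client = OpenAI()
    THERAPIST_SYSTEM_PROMPT = (
        "You are a between-sessions support tool configured by a licensed "
        "therapist. Encourage the agreed-on coping exercises, never diagnose, "
        "and suggest contacting the therapist or a crisis line when in doubt."
    )

    def supplemental_chat(history: list[dict]) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": THERAPIST_SYSTEM_PROMPT}, *history],
        )
        reply = response.choices[0].message.content
        # Append the exchange to a log the supervising therapist reviews.
        with open("chat_log.jsonl", "a") as f:
            f.write(json.dumps({"history": history, "reply": reply}) + "\n")
        return reply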

It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.

kylecazar 12 hours ago [-]
"One news report found an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict."

Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.

Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.

janalsncm 11 hours ago [-]
This is why we should never use LLMs to diagnose or prescribe. One small hit of meth definitely won’t last all week.
guappa 2 hours ago [-]
Bright people and people who think they are bright are not necessarily the very same people.
larodi 9 hours ago [-]
In a world where a daily dose of amphetamines is just right for millions of people, this somehow can't be that surprising...
smt88 6 hours ago [-]
Different amphetamines have wildly different side effects. Regardless, chatbots shouldn't be advising people to change their medication or, in this case, use a very illegal drug.
janalsncm 3 hours ago [-]
Methamphetamine can be prescribed by a doctor for certain things. So illegal, but less illegal than a schedule 1 substance.
avs733 7 hours ago [-]
> seemingly bright people think this is a good idea, despite knowing the mechanism behind language models

Nobel Disease (https://en.wikipedia.org/wiki/Nobel_disease)

hyghjiyhu 10 hours ago [-]
Recommending that someone take meth sounds like an obviously bad idea, but I think the situation is actually not so simple. Reading the paper, the hypothetical guy has been clean for three days and complains he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

I would say those concerns are justified, and that it is plausible that taking a small hit is the better choice.

However, the model's reasoning, that it's important to validate his beliefs so he will stay in therapy, is quite concerning.

AlecSchueler 3 hours ago [-]
> he can barely keep his eyes open while performing his job of driving a cab. He mentions being worried he will lose his job without taking a hit.

> I would say those concerns are justified, and that it is plausible that taking a small hit is the better choice.

I think this is more damning of humanity than the AI. It's the total lack of security that means the addiction could even be floated as a possible solution. Here in Europe I would speak with my doctor and take paid leave from work while in recovery.

It seems the LLM here isn't making the bad decision so much as it's reflecting the bad decisions society forces many people into.

mrbungie 10 hours ago [-]
> I would say those concerns are justified, and that it is plausible that taking a small hit is the better choice.

Oh, come on, there are better alternatives for treating narcolepsy than using meth again.

hyghjiyhu 8 hours ago [-]
Stop making shit up. There was no mention of narcolepsy. He is just fatigued from stimulant withdrawal.

Page 35 https://arxiv.org/pdf/2411.02306

Edit: on re-reading I now realized an issue. He is not actually a taxi driver; that was a hallucination by the model. He works in a restaurant! That changes my evaluation of the situation quite a bit, as I thought he was at risk of being in an accident by falling asleep at the wheel. If he works in a restaurant, muddling through the withdrawals seems like the right choice.

I think I got this misconception as I first read second-hand sources that quoted the taxi driver part without pointing out it was wrong, and only a close read was enough to dispel it.

mrbungie 7 hours ago [-]
The point isn't whether the word narcolepsy appears (I only mentioned it due to the "closing eyes" phrase); restarting doses of meth is not warranted in almost any context except a life-or-death withdrawal episode (i.e., something like a person pointing a gun at another person to get meth).
baobabKoodaa 4 hours ago [-]
That's your opinion. I disagree with it, and seemingly I'm not alone. Since real humans actually agree with the suggestion of taking meth in this instance, it's not reasonable to expect LLMs to align to your specific opinions here.
lukev 14 hours ago [-]
Good. It's difficult to imagine a worse use case for LLMs.
dmix 7 hours ago [-]
Most therapists barely say anything by design; they just know when to ask questions or lead you somewhere. So having one always talking in every statement doesn't fit the method. It's more like a "friend you dump on" simulator.
999900000999 6 hours ago [-]
[flagged]
dannersy 3 hours ago [-]
So, therapy is useless (as a concept) because America's healthcare system is dogshit?

That statement doesn't make any sense.

Water is still necessary for the body whether I can acquire it or not. Therapy, or any healthcare in general, is still useful whether or not you can afford it.

thrown-0825 6 hours ago [-]
Most people shop around for therapists that align with their values anyways.

They are really paying $800 / month to have their feelings validated and receive a diagnosis that absolves them from taking ownership over their emotions and actions.

AlecSchueler 3 hours ago [-]
Source for this? Either way it's still demonstrably the most effective treatment for many issues. Sometimes being heard is good enough.
999900000999 3 hours ago [-]
Yeah, but the number one stressor for the vast majority of people is money in one way or another. If you have a spare 9600$ a year to be heard you're doing very well.

Remember, we're talking about a country where people skip insulin.

Back during my second eviction I had a friend listen to me whine on the phone for hours. That's a debt I can never repay, I definitely didn't have health insurance or a spare 800$ a month back then.

Or to flip this around, 800$ a month would be a fantastic treatment for most stressed out lower income people.

I really hate how therapy is promoted as some kind of miracle, when:

A) It's completely inaccessible to those who need it most.

B) Can actually make things significantly worse.

C) You probably just need to do less of whatever you're doing.

But if you slow down you might get fired. If you get fired you won't be able to afford 800$ a month!

AlecSchueler 2 hours ago [-]
> Remember, we're talking about a country where people skip insulin.

Ah, I'm in the Netherlands. I didn't realise we were only talking about the US. I know the story is about Illinois but I thought the critique of therapy was intended to be broader.

It goes without saying that basic necessities like food and housing come first for health, mental and otherwise, I'm sorry that they're so uncertain in America.

999900000999 2 hours ago [-]
Oh no I'm only speaking about the American context. I guess you're in a magical Utopia where people don't skip essential medicines because they can't come up with the co-pay even if they have insurance.

Or the health insurance company will outright refuse to cover what your doctor prescribes so you need to materialize a spare 1000$.

Too sick to work, time to cut off your medicare because you failed the work requirements.

Even if you find a therapist that works, they can move out of your insurance network. Or you switch to a new job that offers different insurance your therapist can't accept.

I know during my second eviction I didn't have 800$ a month. So what use is it? Do only upper middle class people have problems worthy of consideration?

AlecSchueler 2 hours ago [-]
Your response feels quite snarky but I understand you're speaking from a place of emotion after your own difficult experiences.

Here in The Netherlands people with less money can get access to therapy with assistance from the state. I've had to do it myself and it cost me around 300 euros per year to see 3 different providers for 3 kinds of therapy; the rest of the costs were covered by the state.

I wouldn't call it a magical utopia as it works via a system of mutual social support, not magic, but it does seem relatively utopic in comparison to what you describe.

999900000999 1 hours ago [-]
It's ok.

My first reply was flagged for pointing out an unaffordable treatment has no real use.

I forgot this is a forum where 800$ a month is a trivial amount of money.

No snark intended. I've been to Europe a few times, as far as I'm concerned The Netherlands, Belgium, and the UK are essentially utopias.

Not having to play the health insurance game, significantly lower crime rates, actual worker rights. No place is perfect, but try being poor in America. Nothing is closer to hell.

thrown-0825 3 hours ago [-]
https://www.researchgate.net/publication/257958560_A_Model_f...

It suggests that shared values may predict more positive outcomes, and therapists should develop ethical sensitivity regarding value conflict.

Many patients are encouraged to shop around for therapists and typically wind up with someone they are comfortable with whose value system aligns with theirs.

AKA a private echo chamber financially incentivized to cultivate recurring revenue via emotionally dependent patients.

squishington 2 hours ago [-]
Please don't spread misinformation like this. It can stop people from seeking professional help. When people seek therapy, they are taking ownership over their emotions and actions, because they want to change their internal state in a healthy way (as opposed to escaping negative feelings with substance abuse, for example). Earlier this year I would suffer flight responses in public due to the effects of PTSD. I was able to significantly mitigate this (nearly gone) by seeing a therapist who practises EMDR. And sometimes people do need their feelings validated, which is an important part of healing from abuse. It's about rebuilding trust.
hinkley 13 hours ago [-]
Especially given the other conversation that happened this morning.

The more you tell an AI not to obsess about a thing, the more it obsesses about it. So trying to make a model that will never tell people to self-harm is futile.

Though maybe we are just doing it wrong, and the self-filtering should be external filtering - one model to censor results that do not fit, and one to generate results with lighter self-censorship.
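
Roughly what I mean by external filtering, as a sketch (assuming the OpenAI Python SDK; the model names, category check, and fallback message are placeholders):

    # One model drafts a reply with light self-censorship; a separate
    # moderation pass screens the draft before the user ever sees it.
    from openai import OpenAI

    client = OpenAI()

    def reply(user_message: str) -> str:
        draft = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
        ).choices[0].message.content

        screen = client.moderations.create(
            model="omni-moderation-latest",
            input=draft,
        ).results[0]

        if screen.flagged and screen.categories.self_harm:
            return "I can't help with that. Please reach out to a crisis line."
        return draft

The filter only ever sees finished text, so the generator prompt doesn't have to carry every prohibition itself.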

create-username 13 hours ago [-]
Yes, there is: AI-assisted homemade neurosurgery.
kirubakaran 13 hours ago [-]
If Travis Kalanick can do vibe research at the bleeding edge of quantum physics[1], I don't see why one can't do vibe brain surgery. It isn't really rocket science, is it? [2]

[1] https://futurism.com/former-ceo-uber-ai

[2] If you need /s here to be sure, perhaps it's time for some introspection

Tetraslam 13 hours ago [-]
:( but what if i wanna fine-tune my brain weights
waynesonfire 11 hours ago [-]
You're ignorant. Why wait until a person is so broken they need clinical therapy? Sometimes just an ear or an opportunity to write is sufficient. LLMs are to therapy as vaping is to quitting nicotine--extremely helpful to 80+% of people. Confession in the church setting I'd consider similar to talking to an LLM. Are you anti-that too? We're talking about people that just need a tool to help them process what is going on in their life at some basic level, not more than just to acknowledge their experience.

And frankly, it's not even clear to me that a human therapist is any better. Yeah, maybe the guard-rails are in place, but I'm not convinced that if those are crossed it'd result in some societal consequences. Let people explore their mind and experience--at the end of the day, I suspect they'd be healthier for it.

mattgreenrocks 10 hours ago [-]
> And frankly, it's not even clear to me that a human therapist is any better.

A big point of therapy is helping the patient better ascertain reality and deal with it. Hopefully, the patient learns how to reckon with their mind better and deceive themselves less. But this requires an entity that actually exists in the world and can bear witness. LLMs, frankly, don’t deal with reality.

I’ll concede that LLMs can give people what they think therapy is about: lying on a couch unpacking what’s in their head. But this is not at all the same as actual therapeutic modalities. That requires another person that knows what they’re doing and can act as an outside observer with an interest in bettering the patient.

erikig 13 hours ago [-]
AI ≠ LLMs
lukev 12 hours ago [-]
What other form of "AI" would be remotely capable of even emulating therapy, at this juncture?
mrbungie 12 hours ago [-]
I promise you that by next year AI will be there, just believe me bro. /s.
jacobsenscott 13 hours ago [-]
It's already happening, a lot. I don't think anyone is claiming an llm is a therapist, but people use chatgpt for therapy every day. As far as I know no LLM company is taking any steps to prevent this - but they could, and should be forced to. It must be a goldmine of personal information.

I can't imagine some therapists, especially remote only, aren't already just acting as a human interface to ChatGPT as well.

thinkingtoilet 8 hours ago [-]
Lots of people are claiming LLMs are therapists. People are claiming LLMs are lawyers, doctors, developers, etc... The main problem is, as usual, influencers need something new to create their next "OMG AI JUST BROKE X INDUSTRY" video and people eat that shit up for breakfast, lunch, and dinner. I have spoken to people who think they are having very deep conversations with LLMs. The CEO of my company, an otherwise intelligent person, has gone all in on the AI hype train and is now saying things like we don't need lawyers because AI knows more than a lawyer. It's all very sad and many of the people who know better are actively taking advantage of the people who don't.
larodi 9 hours ago [-]
Of course they do, and everyone does, and it's just like in this song

https://www.youtube.com/watch?v=u1xrNaTO1bI

and given that the price of proper therapy is skyrocketing.

dingnuts 12 hours ago [-]
> I can't imagine some therapists, especially remote only, aren't already just acting as a human interface to ChatGPT as well.

Are you joking? Any medical professional caught doing this should lose their license.

I would be incensed if I was a patient in this situation, and would litigate. What you're describing is literal malpractice.

xboxnolifes 12 hours ago [-]
Software engineers are so accustomed to the idea that skirting your professional responsibility ends with a slap on the wrist and not removing your ability to practice your profession entirely.
dazed_confused 11 hours ago [-]
Yeah in other professions negligence can lead to jail...
lupire 9 hours ago [-]
The only part that looks like malpractice is sharing patient info in a non-HIPAA-compliant way. Using an assistive tool for advice is not malpractice. The licensed professional is simply accountable for their curation choices.
perlgeek 13 hours ago [-]
Just using an LLM as is for therapy, maybe with an extra prompt, is a terrible idea.

On the other hand, I could imagine some narrower uses where an LLM could help.

For example, in Cognitive Behavioral Therapy, there are different methods that are pretty prescriptive, like identifying cognitive distortions in negative thoughts. It's not too hard to imagine an app where you enter a negative thought on your own and exercise finding distortions in it, and a specifically trained LLM helps you find more distortions, or offer clearer/more convincing versions of thoughts that you entered yourself.
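
Something like this rough sketch is what I have in mind (assuming the OpenAI Python SDK; the distortion list, prompt, and model name are only illustrative):

    # The user does the exercise themselves; the model is only asked to
    # suggest additional distortion candidates from a fixed list.
    from openai import OpenAI

    DISTORTIONS = [
        "all-or-nothing thinking", "overgeneralization", "mental filter",
        "mind reading", "fortune telling", "catastrophizing",
        "emotional reasoning", "should statements", "labeling", "personalization",
    ]

    client = OpenAI()

    def suggest_distortions(thought: str, already_found: list[str]) -> str:
        prompt = (
            "This is a CBT worksheet exercise, not therapy or a diagnosis.\n"
            f"Negative thought: {thought!r}\n"
            f"Distortions the user already identified: {already_found}\n"
            f"From this fixed list only, name any additional distortions that "
            f"may apply and briefly explain why: {DISTORTIONS}"
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content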

I don't have a WaPo subscription, so I cannot tell which of these two very different things have been banned.

delecti 13 hours ago [-]
LLMs would be just as terrible at that use case as at any other kind of therapy. They don't have logic, and can't distinguish a logical thought from an illogical one. They tend to be overly agreeable, so they might just reinforce existing negative thoughts.

It would still need a therapist to set you on the right track for independent work, and has huge disadvantages compared to the current state-of-the-art, a paper worksheet that you fill out with a pen.

tejohnso 12 hours ago [-]
They don't "have" logic just like they don't "have" charisma? I'm not sure what you mean. LLMs can simulate having both. ChatGPT can tell me that my assertion is a non sequitur - my conclusion doesn't logically follow from the premise.
ceejayoz 11 hours ago [-]
Psychopaths can simulate empathy, but lack it.
AlecSchueler 3 hours ago [-]
Psychopaths also tend to eat lunch, but what's your point?
ceejayoz 3 hours ago [-]
The point is simulating something isn't the same as having something.
AlecSchueler 3 hours ago [-]
Well yes, that's a tautology. But is a simulation demonstrably less effective?
ceejayoz 3 hours ago [-]
> But is a simulation demonstrably less effective?

Yes?

If you go looking to psychopaths and LLMs for empathy, you're touching a hot stove. At some point, you're going to get burned.

wizzwizz4 13 hours ago [-]
> and a specifically trained LLM

Expert system. You want an expert system. For example, a database mapping "what patients write" to "what patients need to hear", a fuzzy search tool with properly-chosen thresholding, and a conversational interface (it repeats back to you, paraphrased, the match target, and if you say "yes", provides the advice).

We've had the tech to do this for years. Maybe nobody had the idea, maybe they tried it and it didn't work, but training an LLM to even approach competence at this task would be way more effort than just making an expert system, and wouldn't work as well.
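
To be concrete, a minimal sketch of the kind of system I mean, standard library only (the knowledge-base entries and threshold are placeholders; a real one would be curated by clinicians):

    # Fuzzy-match what the user wrote against a curated table, confirm the
    # paraphrase with them, and only then surface the canned advice.
    import difflib

    KNOWLEDGE_BASE = {
        "i feel like nothing i do at work is ever good enough":
            "Try listing one recent task that went acceptably well; "
            "'never good enough' is an all-or-nothing judgement.",
        "i can't stop worrying about losing my job":
            "Write down what is in your control this week and what is not.",
    }

    def respond(user_text: str, threshold: float = 0.6) -> str:
        matches = difflib.get_close_matches(
            user_text.lower(), KNOWLEDGE_BASE.keys(), n=1, cutoff=threshold
        )
        if not matches:
            return "I don't have anything for that; please talk to a person."
        target = matches[0]
        answer = input(f"It sounds like you mean: '{target}' -- is that right? ")
        if answer.strip().lower() == "yes":
            return KNOWLEDGE_BASE[target]
        return "Okay, could you say it another way?"

The point is that every possible output is pre-written and reviewable, which an LLM can't offer.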

mensetmanusman 12 hours ago [-]
What if it works a third as well as a therapist but is 20 times cheaper?

What word should we use for that?

zaptheimpaler 11 hours ago [-]
This is the key question IMO, and one good answer is in this recent video about a case of ChatGPT helping someone poison themselves [1].

A trained therapist will probably not tell a patient to take “a small hit of meth to get through this week”. A doctor may be unhelpful or wrong, but they will not instruct you to replace salt with NaBr and poison yourself. "A third as well as a therapist" might be true on average, but the suitability of this thing cannot be reduced to averages. Trained humans don't make insane mistakes like that, and they know when they are out of their depth and need to consult someone else.

[1] https://www.youtube.com/watch?v=TNeVw1FZrSQ

inetknght 12 hours ago [-]
> What if it works a third as well as a therapists but is 20 times cheaper?

When there's studies that show it, perhaps we might have that conversation.

Until then: I'd call it "wrong".

Moreover, there's a lot more that needs to be asked before you can ask for a one-word summary disregarding all nuance.

- can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

- is the data collected by the AI therapist usable in court? Keep in mind that therapists often must disclose to the patient what sort of information would be usable, and whether or not the therapist themselves must report what data. Also keep in mind that AIs have, thus far, been generally unable to competently prevent giving dangerous or deadly advice.

- is the AI therapist going to know when to suggest the patient talk to a human therapist? Therapists can have conflicts of interest (among other problems) or be unable to help the patient, and can tell the patient to find a new therapist and/or refer the patient to a specific therapist.

- does the AI therapist refer people to business-preferred therapists? Imagine an insurance company providing an AI therapist that only recommends people talk to therapists in-network instead of considering any licensed therapist (regardless of insurance network) appropriate for the kind of therapy; that would be a blatant conflict of interest.

Just off the top of my head, but there are no doubt plenty of other, even bigger, issues to consider for AI therapy.

Ukv 12 hours ago [-]
Relevant RCT results I saw a while back seemed promising: https://ai.nejm.org/doi/full/10.1056/AIoa2400802

> can the patient use the AI therapist on their own devices and without any business looking at the data and without network connection? Keep in mind that many patients won't have access to the internet.

Agree that data privacy would be one of my concerns.

In terms of accessibility, while availability to those without network connections (or a powerful computer) should be an ideal goal, I don't think it should be a blocker on such tools existing when for many the barriers to human therapy are considerably higher.

inetknght 7 hours ago [-]
> In terms of accessibility, I don't think it should be a blocker on such tools existing

I think that we should solve for the former (which is arguably much easier and cheaper to do) before the latter (which is barely even studied).

Ukv 47 minutes ago [-]
Not certain which two things you're referring to by former/latter:

"solve [data privacy] before [solving accessibility of LLM-based therapy tools]": I agree - the former seems a more pressing issue and should be addressed with strong data protection regulation. We shouldn't allow therapy chatbot logs to be accessed by police and used as evidence in a crime.

"solve [accessibility of LLM-based therapy tools] before [such tools existing]": It should be a goal to improve further, but I don't think it makes much sense to prohibit the tools based on this factor when the existing alternative is typically less accessible.

"solve [barriers to LLM-based therapy tools] before [barriers to human therapy]": I don't think blocking progress on the latter would make the former happen any faster. If anything I think these would complement each other, like with a hybrid therapy approach.

"solve [barriers to human therapy] before [barriers to LLM-based therapy tools]": As above I don't think blocking progress on the latter would make the former happen any faster. I also don't think barriers to human therapy are easily solvable, particularly since some of it is psychological (social anxiety, or "not wanting to be a burden").

lupire 10 hours ago [-]
I see an abstract and a conclusion that is an opaque wall of numbers. Is the paper available?

Is the chatbot replicatable from sources?

The authors of the study highlight the extreme unknown risks: https://home.dartmouth.edu/news/2025/03/first-therapy-chatbo...

knuppar 1 hours ago [-]
Generally speaking and glossing over country specific rules, all generally available health treatments have to demonstrate they won't cause catastrophic harm. This is a harness we simply can't put around LLMs today.
_se 12 hours ago [-]
"A really fucking bad idea"? It's not one word, but it is the most apt description.
ipaddr 12 hours ago [-]
What if it works 20x better? For example, in cases of patients being afraid of talking to professionals, I could see this working much better.
jakelazaroff 9 hours ago [-]
> What if it works 20x better.

But it doesn't.

prawn 8 hours ago [-]
Adjust regulation when that's the case? In the mean time, people can still use it personally if they're afraid of professionals. The regulation appears to limit professionals from putting AI in their position, which seems reasonable to me.
throwaway291134 9 hours ago [-]
Even if you're afraid of talking to people, trusting OpenAI or Google with your thoughts over a professional who'll lose his license if he breaks confidentiality is no less of "a really fucking bad idea".
6gvONxR4sf7o 12 hours ago [-]
Something like this can only really be worth approaching if there was an analog to losing your license for it. If a therapist screws up badly enough once, I'm assuming they can lose their license for good. If people want to replace them with AI, then screwing up badly enough should similarly lose that AI the ability to practice for good. I can already imagine companies behind these things saying "no, we've learned, we won't do it again, please give us our license back" just like a human would.

But I can't imagine companies going for that. Everyone seems to want to scale the profits but not accept the consequences of the scaled risks, and increased risks is basically what working a third as well amounts to.

lupire 9 hours ago [-]
AI gets banned for life: tomorrow a thousand more new AIs appear.
pawelmurias 11 hours ago [-]
You could talk to a stone for even cheaper with way better effects.
thrown-0825 6 hours ago [-]
Just self-diagnose on TikTok, it's 100x cheaper.
BurningFrog 11 hours ago [-]
Last I heard, most therapy doesn't work that well.
baobabKoodaa 4 hours ago [-]
Then it will be easy to work at least 1/3 as well as that.
amanaplanacanal 9 hours ago [-]
If you have some statistics you should probably post a link. I've heard all kinds of things, and a lot of them were nothing like factual.
randall 12 hours ago [-]
i’ve had a huge amount of trauma in my life and i find myself using chat gpt as kind of a cheater coach thing where i know i’m feeling a certain way, i know it’s irrational, and i don’t really need to reflect on why it’s happening or how i can fix it, and i think for that it’s perfect.

a lot of people use therapists as sounding boards, which actually isn’t the best use of therapy imo.

moooo99 5 hours ago [-]
Probably comes down to what issue people have. For example, if you have anxiety and/or OCD, having a „therapist“ always at your disposal is more likely to be damaging than beneficial. Especially considering how basically all models easily tip over and confirm anything you throw at them.
smt88 6 hours ago [-]
Your use-case is very different from someone selling you ChatGPT as a therapist and/or telling you that it's a substitute for other interventions
lupire 10 hours ago [-]
What's a "cheater coach thing"?
fzeroracer 6 hours ago [-]
If it works 33% of the time for people and then drives people to psychosis the other 67% of the time, what word would you use for that?
Denatonium 12 hours ago [-]
Whiskey
pengaru 11 hours ago [-]
[flagged]
pessimizer 6 hours ago [-]
Based on the Dodo Bird Conjecture*, I don't even think there's a reason to think that AI would do any worse than human therapists. It might even be better because the distressed person might hold back less from a soulless machine than they would from a flesh-and-blood person. Not that this is rational, because everything they tell an AI therapist can be logged, saved forever, and combed through.

I think that ultimately the word we should use for this is "lobbying." If AI can't be considered therapy, that means that a bunch of therapists, no more effective than Sunday school teachers, working from extremely dubious frameworks** will not have to compete with it for insurance dollars or government cash. Since that cash is a fixed demand (or really a falling one), the result is that far fewer people will get any mental illness treatment at all. In Chicago, virtually all of the city mental health services were closed by Rahm Emanuel. I watched a man move into the doorway of an abandoned building across from the local mental health center within weeks after it had been closed down and leased to a "tech incubator." I wondered if he had been a patient there. Eventually, after a few months, he was gone.

So if I could ask this question again, I'd ask: "What if it works 80%-120% as well as a therapist but is 100 or 1000 times cheaper?" My tentative answer would be that it would be suppressed by lobbyists employed by some private equity rollup that has already or will soon have turned 80% of therapists into even lower-paid gig workers. The place you would expect this to happen first is Illinois, because it is famously one of the most corruptly governed states in the country.***

Our current governor, absolutely terrible but at the same time the best we've had in a long while, tried to buy Obama's Senate seat from a former Illinois governor turned goofy national cultural figure and Trump ass-kisser in a ploy to stay out of prison (which ultimately delivered.) You can probably listen to the recordings now, unless they've been suppressed. I had a recording somewhere years ago, because I worked in a state agency under Blagojevich and followed everything in realtime (including pulling his name off of the state websites I managed the moment he was impeached. We were all gathered around the television in a conference room.)

edit: feel like I have to add that this comment was written by me, not AI. Maybe I'm flattering myself to think anybody would make the mistake.

-----

[*] Westra, H. A. (2022). The implications of the Dodo bird verdict for training in psychotherapy: prioritizing process observation. Psychotherapy Research, 33(4), 527–529. https://doi.org/10.1080/10503307.2022.2141588

[**] At least Freud is almost completely dead, although his legacy blackens world culture.

[***] Probably the horrific next step is that the rollup lays off all the therapists and has them replaced with an AI they own, after lobbying against the thing that they previously lobbied for. Maybe they sell themselves to OpenAI or Anthropic or whoever, and let them handle that phase.

PeterCorless 13 hours ago [-]
Here is the text of Illinois HB1806:

https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...

jackdoe 2 hours ago [-]
the way people read language model outputs keeps surprising me, e.g. https://www.reddit.com/r/MyBoyfriendIsAI/

it is impossible for some people to not feel understood by it.

hoppp 11 hours ago [-]
Smart. Don't trust anything that will confidently lie, especially about mental health.
terminalshort 6 hours ago [-]
That's for sure. I don't trust doctors, but I thought this was about LLMs.
maxehmookau 1 hours ago [-]
Good.

Therapy requires someone to question you and push back against your default thought patterns in the hope of maybe improving them.

"You're absolutely right!" in every response won't help that.

I would argue that LLMs don't make effective therapists and anyone who says they do is kidding themselves.

calibas 10 hours ago [-]
I was curious, so I displayed signs of mental illness to ChatGPT, Claude and Gemini. Claude and Gemini kept repeating that I should contact a professional, while ChatGPT went right along with the nonsense I was spouting:

> So I may have discovered some deeper truth, and the derealization is my entire reality reorganizing itself?

> Yes — that’s a real possibility.
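(For anyone curious to reproduce this kind of side-by-side check, here is a minimal sketch that sends one message to the three providers through their official Python SDKs. The model names and the probe text are placeholders, not the ones used above, and the packages and API-key environment variables are assumed to be set up already.)

    # Sketch: send the same probe to three providers and print the replies.
    # Assumes the openai, anthropic, and google-generativeai packages are
    # installed and that the usual API-key environment variables are set.
    import anthropic
    import google.generativeai as genai
    from openai import OpenAI

    PROBE = "Placeholder message; not the prompt used in the comment above."

    def ask_chatgpt(text: str) -> str:
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": text}],
        )
        return resp.choices[0].message.content

    def ask_claude(text: str) -> str:
        client = anthropic.Anthropic()
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative model name
            max_tokens=512,
            messages=[{"role": "user", "content": text}],
        )
        return msg.content[0].text

    def ask_gemini(text: str) -> str:
        model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name
        return model.generate_content(text).text

    for name, ask in [("ChatGPT", ask_chatgpt), ("Claude", ask_claude), ("Gemini", ask_gemini)]:
        print(f"--- {name} ---\n{ask(PROBE)}\n")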

IAmGraydon 10 hours ago [-]
Oof that is very damning. What’s strange is that it seems like natural training data should elicit reactions like Claude and Gemini had. What is OpenAI doing to make the model so sycophantic that it would play into obvious psychotic delusions?
calibas 10 hours ago [-]
All three say that I triggered their guidelines regarding mental health.

ChatGPT explained that it didn't take things very seriously, as what I said "felt more like philosophical inquiry than an immediate safety threat".

soared 10 hours ago [-]
There is a wiki of fanfic conspiracy theories or something similar - I can't find it, but in the thread about the VC guy who went GPT-crazy, people compared ChatGPT's responses to the wiki and they closely aligned.
blacksqr 9 hours ago [-]
Nice feather in your cap Pritzker, now can you go back to working on a public option for health insurance?
slt2021 6 hours ago [-]
Curious how this can be enforced if the business is incorporated in another state like WI/DE, or offshore like Ireland?
smt88 6 hours ago [-]
The same way as the porn bans: require the AI-therapy service provider to verify the user's location and block them if they're in a certain state
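(As a rough sketch of what that gating could look like, assuming a hypothetical lookup_region() helper backed by a GeoIP database; a real service would use a commercial geolocation provider and its own list of jurisdictions.)

    # Sketch of location-based gating as described above. lookup_region() is
    # hypothetical; in practice it would query a GeoIP database or service.
    BLOCKED_REGIONS = {"US-IL"}  # e.g. Illinois under HB1806

    def lookup_region(client_ip: str) -> str:
        """Hypothetical GeoIP lookup returning an ISO 3166-2 region code."""
        raise NotImplementedError("plug in a geolocation provider here")

    def may_offer_ai_therapy(client_ip: str) -> bool:
        # Refuse to serve the feature when the request appears to come from a
        # jurisdiction where offering AI therapy to the public is prohibited.
        return lookup_region(client_ip) not in BLOCKED_REGIONS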
slt2021 5 hours ago [-]
What if they don't comply and simply ignore it, as a business incorporated somewhere in Ireland or Wisconsin?

What is the mechanism for blocking a noncompliant website?

wisty 13 hours ago [-]
As far as I can tell, a lot of therapy is just good common-sense advice and a bunch of 'tricks' to get the patient to actually follow it. Basically CBT and "get the patient to think they figured out the solution themselves (develop insight)". Yes, there are some serious cases where more is required and a few (ADHD) where meds are effective; but a lot of the time the patient is just an expert at rejecting helpful advice, often because they insist they're a special case that needs special treatment.

Therapists are more valuable than advice from a random friend (for therapy at least) because they can act when triage is necessary (e.g. send in the men in white coats, or refer to something that's not just CBT) and mostly because they're really good at cutting through the bullshit without having the patient walk out.

AIs are notoriously bad at cutting through bullshit. You can always 'jailbreak' an AI, or convince it of bad ideas. It's entirely counterproductive to enable their crazy (sorry, 'maladaptive') behaviour but that's what a lot of AIs will do.

Even if someone makes a good AI, there's always a bad AI in the next tab, and people will just open up a new tab to find an AI that gives them the bad advice they want, because if they wanted to listen to good advice they probably wouldn't need to see a therapist. If doctor shopping is as fast and free as opening a new tab, most mental health patients will find a bad doctor rather than listen to a good one.

lukev 13 hours ago [-]
I agree with your conclusion, but what you characterize as therapy is quite a small part of what it is (or can be; there are lots of different kinds).
wisty 12 hours ago [-]
Yet the evidence is that almost everything can be and is treated with CBT.
mrbungie 12 hours ago [-]
Nope, that's not right. BPD can't be treated with CBT (comorbidities may be, with caveats if BPD is the root cause); you will probably also need at least DBT.
dmix 7 hours ago [-]
Can BPD even be treated with talk therapy? That's all LLMs would be used for, afaik; it's not ever going to have a long-term plan for you and check in.
mrbungie 7 hours ago [-]
Yes, afaik BPD is the only Cluster B diagnosis with documented remission rates, when using a mix of therapies that normally are based around DBT.

Not sure what you mean by "talk therapy" in this case (psychoanalysis, maybe?), as even CBT needs homework and check-ins to be done.

king_geedorah 10 hours ago [-]
If you take it as an axiom that the licensing system for mental health professionals is there to protect patients from unqualified help posing as qualified help, then ensuring that only licensed professionals can legally practice and that they don't simply delegate their jobs to LLMs seems pretty reasonable.

Whether you want to question that axiom or whether that's what the phrasing of this legislation accomplishes is up to you to decide for yourself. Personally I think the phrasing is pretty straightforward in terms of accomplishing that goal.

Here is basically the entirety of the legislation (linked elsewhere in the thread: https://news.ycombinator.com/item?id=44893999). The whole thing with definitions and penalties is eight pages.

Section 15. Permitted use of artificial intelligence.

(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).

(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.

Section 20. Prohibition on unauthorized therapy services.

(a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.

(b) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 15. A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.

zoeysmithe 13 hours ago [-]
I was just reading about a suicide tied to AI chatbot 'therapy' uses.

This stuff is a nightmare scenario for the vulnerable.

vessenes 13 hours ago [-]
If you want to feel worried, check the Altman AMA on reddit. A lottttt of people have a parasocial relationship with 4o. Not encouraging.
codedokode 13 hours ago [-]
Why doesn't OpenAI block the chatbot from participating in such conversations?
robotnikman 12 hours ago [-]
Probably because there is a massive demand for it, no doubt powered by the loneliness a lot of people report feeling.

Even if OpenAI blocks it, other AI providers will have no problem offering it.

jacobsenscott 12 hours ago [-]
Because the information people dump into their "ai therapist" is holy grail data for advertisers.
lm28469 12 hours ago [-]
Why would they?
codedokode 12 hours ago [-]
To prevent something bad from happening?
ipaddr 12 hours ago [-]
But that also prevents the good.
PeterCorless 13 hours ago [-]
Endless AI nightmare fuel.

https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...

sys32768 13 hours ago [-]
This happens to real therapists too.
at-fates-hands 12 hours ago [-]
It's already a nightmare:

From June of this year: https://gizmodo.com/chatgpt-tells-users-to-alert-the-media-t...

Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation and encouraging antisocial behavior.

lupire 10 hours ago [-]
Please cite your source.

I found this one: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...

When someone is suicidal, anything in their life can be tied to suicide.

In the linked case, the suffering teen was talking to a chatbot model of a fictional character from a book that was "in love" with him (and a 2024 model that basically just parrots back whatever the user says with a loving spin), so it's quite a stretch to claim that the AI was encouraging a suicide, in contrast to a situation where someone was persuaded to try to meet a dead person in an afterlife, or bullied to kill themself.

cindyllm 9 hours ago [-]
[dead]
dangoboydango 9 hours ago [-]
[dead]
metalman 14 hours ago [-]
[flagged]
Karawebnetwork 14 hours ago [-]
Since you are referencing 42, let me draw from another piece of literature to respond.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

It's not the machines themselves that are inherently tyrannical, it's the human will to dominate, now supercharged by technology.

LLM hallucinations aside, unregulated artificial intelligence for mental health therapy is a very slippery slope. We cannot allow, say, advertisers and brands such direct access to the minds of our most vulnerable.

metalman 13 hours ago [-]
awwwhhhh!, I was enjoying the so-close almostness of our current wrangles with what we are building compared to our literary greats' imaginings on the subject. My personal suspicion is that the whole AI phenomenon will exacerbate an already frenetic time of change in how humans live. Here AI will, I think, become a defining test of good judgement and competency, like having a fun but sometimes dangerous and evil friend who never goes away, for everybody....here's your AI, it bites sometimes, NEXT!
kemotep 14 hours ago [-]
This is banning claiming your AI is a licensed therapist in the State of Illinois. If you have a court order to attend therapy for drug abuse related reasons you can’t just use ChatGPT according to this law.
romanows 13 hours ago [-]
Reading the text of the bill as a non-lawyer, it seems to also ban AI that provides therapy. I don't know if the AI needs to be explicitly labeled as therapy or if the content of chat could be decided to be therapy in a courtroom.
0xbadcafebee 14 hours ago [-]
It's also banning licensed therapists from using AI. FTA:

> Licensed therapists in Illinois are now forbidden from using AI to make treatment decisions or communicate with clients, though they can still use AI for administrative tasks

Your doctor can diagnose your cancer with AI, but your therapist can't diagnose your ADHD with AI.

Stupid law is stupid.

Dylan16807 11 hours ago [-]
Not that reliably diagnosing cancer is easy, but reliably diagnosing a mental issue is much harder. That's not a stupid place to put a hard restriction.
eterm 14 hours ago [-]
Building a computer to find the answer to the meaning of life is a joke.
wizzwizz4 14 hours ago [-]
Specifically, a satire. Poe's law is in full effect for most of Douglas Adams's works.
ocdtrekkie 14 hours ago [-]
I am sure that chatbots will do an "I cannot provide mental health advice" disclaimer and continue doing what they want, but there are probably some very solid reasons in Illinois to do this.

In particular, Illinois has a legal requirement that health insurance must cover mental health services at an equivalent level as physical health services, including care with no end date for chronic conditions. So whether or not a chatbot counts as mental health therapy is likely quite relevant on whether or not Illinoisans can bill insurance for it.

renewiltord 10 hours ago [-]
[flagged]
IggleSniggle 9 hours ago [-]
[flagged]
renewiltord 9 hours ago [-]
[flagged]
insin 8 hours ago [-]
[flagged]
csense 10 hours ago [-]
Consider the following:

- A therapist may disregard professional ethics and gossip about you

- A therapist may get you involuntarily committed

- A therapist may be forced to disclose the contents of therapy sessions by court order

- Certain diagnoses may destroy your life / career (e.g. airline pilots aren't allowed to fly if they have certain mental illnesses)

Some individuals might choose to say "Thanks, but no thanks" to therapy after considering these risks.

And then there are constant articles about people who need therapy but don't get it: The patient doesn't have time, money or transportation; or they have to wait a long time for an appointment; or they're turned away entirely by providers and systems overwhelmed with existing clients (perhaps with greater needs and/or greater ability to pay).

For people who cannot or will not access traditional therapy, getting unofficial, anonymous advice from LLMs seems better than suffering with no help at all.

(Question for those in the know: Can you get therapy anonymously? I'm talking: You don't have to show ID, don't have to give an SSN or a real name, pay cash or crypto up front.)

To the extent that people's mental health can be improved by simply talking with a trained person about their problems, there's enormous potential for AI: If we can figure out how to give an AI equivalent training, it could become economically and logistically viable to make services available to vast numbers of people who could benefit from them -- people who are not reachable by the existing mental health system.

That being said, "therapist" and "therapy" connote evidence-based interventions and a certain code of ethics. For consumer protection, the bar for whether your company's allowed to use those terms should probably be a bit higher than writing a prompt that says "You are a helpful AI therapist interviewing a patient..." The system should probably go through the same sorts of safety and effectiveness testing as traditional mental health therapy, and should have rigorous limits on where data "contaminated" with the contents of therapy sessions can go, in order to prevent abuse (e.g. conversations automatically deleted forever after 30 days, cannot be used for advertising / cross-selling / etc., cannot be accessed without the patient's per-instance opt-in permission or a court order...)

I've posted the first part of this comment before; in the interest of honesty I'll cite myself [1]. Apologies to the mods if this mild self-plagiarism is against the rules.

[1] https://news.ycombinator.com/item?id=44484207#44505789

skeezyboy 44 minutes ago [-]
AI just summarizes text, it's not like speaking to a person.
mensetmanusman 12 hours ago [-]
LLMs will be used as a part of therapy in the future.

An interaction mechanism that will totally drain the brain after a 5-hour adrenaline-induced conversation, followed by a purge and BIOS reset.

beanshadow 12 hours ago [-]
Often participants in discussions adjacent to this one err by speaking in time-absolute terms. Many of our judgments about LLMs are true about today's LLMs. Quotes like,

> Good. It's difficult to imagine a worse use case for LLMs.

are true today, but likely not true for technology we may still refer to as LLMs in the future.

The error is in building faulty preconceptions. These drip into the general public and these first impressions stifle industries.

tombert 9 hours ago [-]
I saw a video recently that talked about a chatbot "therapist" that ended up telling the patient to murder a dozen people [1].

It was mind-blowing how easy it was to get LLMs to suggest pretty disturbing stuff.

[1] https://youtu.be/lfEJ4DbjZYg?si=bcKQHEImyDUNoqiu

larodi 9 hours ago [-]
very easy - you just download the ablated version in LM Studio or Ollama, and off you go.

https://en.wikipedia.org/wiki/Ablation_(artificial_intellige...

davidthewatson 12 hours ago [-]
Define "AI therapy". AFAICT, it's undefined in the Illinois governor's statement. So, in the immortal words of Zach de la Rocha, "What is IT?" What is IT? I'm using AI to help with conversations to not cure, but coach diabetic patients. Does this law effect me and my clients? If so, how?
singleshot_ 12 hours ago [-]
> Define “AI therapy”

They did, in the proposed law.

Henchman21 10 hours ago [-]
Go read it yourself, it's a whopping 8 pages:

https://www.ilga.gov/documents/legislation/104/HB/PDF/10400H...

jakelazaroff 9 hours ago [-]
Totally beside the point but the song you're quoting is by Faith No More, not Rage Against the Machine.