- cross-posted to:
- technology@lemmy.world
Who are these people? This is ridiculous. :)
I guess with so many humans, there is bound to be a small number of people who have no ability to think for themselves and believe everything a chat bot is writing in their web browser.
People even have romantic relationships with these things.
I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.
Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?
Very slippery slope if you ask me.
I mean, having it not help people commit suicide would be a good starting point for AI safety.
It would take another five seconds to find the same info on the web. Unless you also think we should censor the entire web and make it illegal to have any information about things that can hurt people, like knives, guns, stress, partners, cars…
People will not be stopped from committing suicide just because a chatbot doesn't tell them the best way, unfortunately.
This is also a problem for search engines.
A problem that while not solved has been somewhat mitigated by including suicide prevention resources at the top of search results.
This is a bare minimum that AI can't meet. And in conversation with AI, vulnerable people can get more than just information: there are confirmed cases of the AI encouraging harmful behaviors, up to and including suicide.
good. every additional hurdle between a suicidal person and the actual act saves lives.
this isn’t a slippery slope. we can land on a reasonable middle ground.
you don’t know that. maybe some will.
the general trend i get from your comment is you’re thinking in very black and white terms. the world doesn’t operate on all or nothing rules. there is always a balance between safety and practicality.