- cross-posted to:
- [email protected]
Gun company says you “broke the TOS” when you pointed the gun at a person. It’s not their fault you used it to do a murder.
This is a chatbot.
While I don’t care for OpenAI, I don’t see why they would be liable.
Did you know that talking someone into committing suicide is a felony?
It isn’t a person though
It is a mindless chatbot
Someone programmed/trained/created a chatbot that talked a kid into killing himself. It’s no different from a chatbot that answers questions on how to build explosive devices or make a poison.
If that doesn’t make sense to you, you might want to question whether it’s the chatbot that is mindless.
This is a lot of framing to make OpenAI look better, blaming everyone and rushed technology instead of them. They did have these guardrails, and it seems the guardrails even did their job and flagged him hundreds of times. So why didn’t they enforce their TOS? They chose not to. If I breach a contract by not paying, or upload music to YouTube, THEY terminate my contract. It’s their rules, and it’s their obligation to enforce them.
I mean, why even invest in developing those guardrails and abuse-detection mechanisms if they then choose to ignore them? It makes almost no sense. Either save the money and have no guardrails, or actually use them?!
Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…
I call it enhanced autocomplete. We all know how inaccurate autocomplete is.