• wavebeam@lemmy.world
    3 months ago

    Gun company says you “broke the TOS” when you pointed the gun at a person. It’s not their fault you used it to commit murder.

          • freddydunningkruger@lemmy.world
            3 months ago

            Someone programmed/trained/created a chatbot that talked a kid into killing himself. It’s no different from a chatbot that answers questions on how to create explosive devices or make a toxic poison.

            If that doesn’t make sense to you, you might want to question whether it’s the chatbot that is mindless.

  • hendrik@palaver.p3x.de
    3 months ago

    This is a lot of framing to make OpenAI look better: blame everyone else and the rushed technology instead of them. They did have these guardrails. The guardrails even seem to have done their job and flagged him hundreds of times. So why don’t they enforce their own TOS? They chose not to. When I breach my contracts and don’t pay, or upload music to YouTube, THEY terminate my contract with them. It’s their rules, and it’s their obligation to enforce them.

    I mean, why did they even invest in developing those guardrails and abuse-detection mechanisms if they then choose to ignore them? It makes almost no sense. Either save the money and have no guardrails, or actually make use of them?!

    • ShadowRam@fedia.io
      3 months ago

      Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…