• 0 Posts
  • 43 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • I think the point [email protected] is trying to make is that, if I were to send you digital information and then demanded that you delete it, one would have a potentially harder time convincing people that it’s not within your rights to demand remuneration.
    Especially with how US-centric and -representative the international media landscape has become.
    Even though in most(?) European countries, I imagine (I didn’t actually check), I could sue you for damages, perhaps reduced because I caused the issue in the first place, should you publish the information after I asked you to delete it.

    But with the power imbalance at play here, the police can just roll in and arrest the guy. That allows them to be terminally stupid in the best case, or malevolent in the worst. They could just as well claim that they sent someone secret information, that the person refused to comply with the request for deletion, and that they were arrested for it.
    Depending on how little oversight there actually is, that is either the end of the story or, when they’re asked for proof of this series of events, the “proof” turns out to have been “accidentally deleted” during the investigation. How clumsy.


  • I know they’re not synonymous. But at some point someone left the marketing monkeys in charge of communication.
    My point is that our current “AI” is inadequate at what we’re told is its purpose, and should it ever become adequate (which the current architecture shows no sign of being capable of), we’re in a lot of trouble, because then we’ll have no way to control an intelligence vastly superior to our own.

    So our current position on that journey is bad and the stated destination is undesirable, so it would be in our best interest to stop walking.


  • It’s quite bad at what we’re told it’s supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.
    It’s also quite bad at not doing what it’s not supposed to do. The “guardrails” that are supposed to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or by some form of “social” engineering.
    And on top of all that, we don’t actually understand how these models work at a fundamental level. We don’t know how LLMs “reason”, and there’s every reason to assume they don’t actually understand what they’re saying. Any attempt to have an LLM explain its reasoning is of course for naught, as the same logic applies: it just makes up something that approximately sounds like a suitable line of reasoning.
    Even for comparatively trivial networks, like the ones used for handwritten digit recognition, which we can visualise entirely, it’s difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns, others seem to be just noise.
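    A minimal sketch of what that kind of inspection can look like, assuming a tiny scikit-learn MLP on the 8×8 digits dataset (my own illustrative choice, not something from the thread): train it, then render each hidden neuron’s first-layer weights as an image and see which ones look like stroke detectors and which look like noise.

    ```python
    # Illustrative only: a small digit-recognition network whose first-layer
    # weights we can look at directly.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()  # 1797 grayscale digit images, 8x8 pixels each
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mlp.fit(digits.data, digits.target)

    # Each column of the first weight matrix is one hidden neuron's "template".
    # Some resemble stroke/edge detectors; others look like plain noise.
    fig, axes = plt.subplots(2, 8, figsize=(8, 2))
    for ax, weights in zip(axes.ravel(), mlp.coefs_[0].T):
        ax.imshow(weights.reshape(8, 8), cmap="gray")
        ax.axis("off")
    plt.show()
    ```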