• misk@piefed.social (OP) · 15 points · 6 days ago

    They are now pretending that what they have created is something on the verge of becoming sentient, with dignity worth protecting.

    • Tm12@lemmy.ca · 9 points · 6 days ago

      In reality, they're just ending policy-violating chats early to save resources.

      • themeatbridge@lemmy.world · 3 points · 6 days ago

        That’s probably part of it, and all of this is pretty silly.

        But maybe there’s an upside: if people stop being shitty to chatbots, we can normalize live customer service agents ending interactions when they become abusive. Maybe Claude is monitoring live agent conversations, making and documenting the decision to terminate the call. Humans have a higher threshold for abuse and will often tolerate shitty behavior because they err on the side of customer service. If the decision is an automated process, that protects the agent.

        Of course, all of this is wishful thinking on my part. It would be nice if new tech wasn’t used for evil, but evil is profitable.

  • Kairos@lemmy.today · 3 points · 6 days ago

    I guarantee you it’s not the model doing that. Maybe it’s a secondary model trained to detect that sort of thing, but not the one actually generating the tokens.
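
    Something like this, purely as a sketch of what I mean: a small separate classifier gates the chat before the generating model ever runs. Every name, threshold, and the keyword check here is made up for illustration, not anything Anthropic has described.

    ```python
    # Hypothetical sketch: a secondary classifier decides whether to end the chat;
    # the main model only generates tokens. All names and thresholds are invented.
    from dataclasses import dataclass

    ABUSE_THRESHOLD = 0.9  # invented cutoff, not a real parameter


    @dataclass
    class ModerationResult:
        score: float            # 0.0 = benign, 1.0 = clearly abusive
        end_conversation: bool


    def score_abuse(message: str) -> float:
        """Stand-in for a small, separately trained abuse classifier.
        Here it's just a keyword heuristic so the example runs."""
        abusive_terms = {"idiot", "worthless", "shut up"}
        hits = sum(term in message.lower() for term in abusive_terms)
        return min(1.0, hits / 2)


    def moderate(message: str) -> ModerationResult:
        score = score_abuse(message)
        return ModerationResult(score=score, end_conversation=score >= ABUSE_THRESHOLD)


    def handle_turn(message: str, generate_reply) -> str:
        """The gate runs before generation; the generating model is never
        the thing deciding to terminate the conversation."""
        result = moderate(message)
        if result.end_conversation:
            return "[conversation ended by moderation layer]"
        return generate_reply(message)


    if __name__ == "__main__":
        print(handle_turn("shut up you worthless idiot", lambda m: "..."))
        print(handle_turn("what's the weather like?", lambda m: "Sunny, probably."))
    ```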