• Tm12@lemmy.ca · 6 days ago

    In reality, it's just ending policy-violating chats early to save resources.

    • themeatbridge@lemmy.world · 6 days ago

      That’s probably part of it, and all of this is pretty silly.

      But maybe there's an upside: if people stop being shitty to chatbots, we could normalize live customer service agents ending interactions when they become abusive. Imagine Claude monitoring live agent conversations, making and documenting the decision to terminate the call. Humans have a higher threshold for abuse and will often tolerate shitty behavior because they err on the side of customer service. An automated process would protect the agent.

      Of course, all of this is wishful thinking on my part. It would be nice if new tech wasn’t used for evil, but evil is profitable.