• FauxLiving@lemmy.world · 11 days ago

      This is a peer-reviewed study backed by experimental data showing that it, in fact, doesn’t “make shit up”.

      The hallucinations you’re referring to are artifacts of Transformer-based language models. This is a clustering-constrained attention multiple-instance learning algorithm. They are two completely different things.
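      To make the distinction concrete, here is a minimal sketch of the attention-based multiple-instance learning pooling that clustering-constrained attention MIL builds on, assuming PyTorch; the class name, layer sizes, and dimensions are illustrative, not taken from the paper’s code:

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Aggregates a bag of instance features into one bag-level embedding."""
    def __init__(self, feature_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        # Small network that assigns an attention score to each instance
        self.attention = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, instance_features: torch.Tensor):
        # instance_features: (num_instances, feature_dim), e.g. patch embeddings from one slide
        scores = self.attention(instance_features)            # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)                # normalize over the instances
        bag_embedding = (weights * instance_features).sum(0)  # attention-weighted average
        return bag_embedding, weights                         # weights show which instances mattered

# One "bag" of 100 patch embeddings -> a single bag-level embedding
pooled, attention_weights = AttentionMILPooling()(torch.randn(100, 512))
```

      The output is a classification over the bag plus interpretable attention weights, not free-form generated text, which is why the Transformer-LM style of hallucination doesn’t apply here.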

      If you’re going to repeat “ai bad” memes, at least understand simple machine learning concepts like “not every instance of machine learning is ChatGPT”.

        • FauxLiving@lemmy.world · 11 days ago

          What answer could I give that would make me wrong?

          I single-handedly invented neural networks, discovered Transformers, and am the majority shareholder of every AI company on the planet.

          None of that changes the fact that you said something that is ignorant and unrelated to anything in the article.

    • remotelove@lemmy.ca · 11 days ago

      This kind of ML/AI work has been going on for a while now. Text recognition, image recognition, pattern detection, predictive analysis, and many more types of work actually benefit a ton from this recent LLM fad.

      The biggest difference in output between more “normal” AI systems and LLMs is that LLMs seem much more confident in their incorrect responses. With something more traditional, say facial recognition, you can see immediately when the model decides a hand-drawn smiley face is an actual human face.
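      A hedged sketch of what that visibility looks like in practice: a traditional recognition pipeline returns an explicit similarity score next to its decision, which you can threshold and inspect. The function name, embedding size, and threshold below are illustrative assumptions, not any specific library’s API:

```python
import numpy as np

def match_face(embedding: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> dict:
    """Compare a detected-face embedding against a reference-face embedding."""
    # Cosine similarity serves as the confidence that both embeddings are the same person
    score = float(embedding @ reference /
                  (np.linalg.norm(embedding) * np.linalg.norm(reference)))
    return {"is_match": score >= threshold, "confidence": score}

# A hand-drawn smiley face would produce a low score, so the bad match is
# visible in the output itself; an LLM's fluent answer carries no such number.
print(match_face(np.random.randn(128), np.random.randn(128)))
```

      A mismatch shows up as a low confidence value right beside the decision, which is exactly the kind of presentation issue raised below.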

      AI/ML systems have always been error-prone in one way or another. The biggest issue I see is how results from these systems are presented.