• FauxLiving@lemmy.world
    19 days ago

    This is a peer-reviewed study backed by experimental data showing that it, in fact, doesn’t “make shit up”.

    The hallucinations you’re referring to are artifacts of Transformer-based language models. This is a clustering-constrained attention multiple-instance learning algorithm. They are two completely different things.

    If you’re going to repeat “AI bad” memes, at least understand simple machine learning concepts like “not every instance of machine learning is ChatGPT”.

      • FauxLiving@lemmy.world
        18 days ago

        What answer could I give that would make me wrong?

        I single-handedly invented neural networks, discovered Transformers, and am the majority shareholder of every AI company on the planet.

        None of that changes the fact that you said something that is ignorant and unrelated to anything in the article.