cross-posted from: https://lemmy.ml/post/35078393

AI has made the experience of language learners way shittier because now people will just call them AI on the internet.

Also, imagine trying to learn a language and not being able to tell whether the problem is your own lack of knowledge or whether what you’re reading is actually AI slop that doesn’t make sense.

  • givesomefucks@lemmy.world · 5 days ago

    Nah, there’s a difference between chatbot and not knowing a language.

    They’re predictive text, so grammar is actually one of their strong suits. The problem is the words don’t actually mean anything.

    Someone online would likely spend a little time trying to understand it, run it through a translator, realize it was slop, and move on relatively quickly.

    • TranquilTurbulence@lemmy.zip · 4 days ago

      Children aren’t obsessed with technically correct punctuation, whereas LLMs are. For example, they just love to use em dashes — like this.

  • spongebue@lemmy.world · 5 days ago

    Nah. If I’m learning a new language, I’m going to speak like a toddler at first. I’m more likely to be accused of that than of being an LLM, which can churn out long paragraphs that are only minimally accurate.

  • garbagebagel@lemmy.world · 4 days ago

    As a person who learned English as a second language, I would say probably not. If anything, a human’s grammar/conjugations might be off if they’re learning a new language. A machine, as others have pointed out, would have proper grammar but might be nonsensical.

    • x00z@lemmy.world · 4 days ago

      What if you use AI to translate some text because you can’t express yourself well enough yet?

      • Flax@feddit.uk · 4 days ago

        I think translation machines have always used similar language models

    • Apytele@sh.itjust.works · 4 days ago

      Depends on what you call a mistake. I was interacting on ich_iel a while back (I don’t speak German, but the meme had a clear meaning to me as a fellow nurse) and they said ChatGPT had almost perfect grammar, but that it translated “median basilic vein” as the “median basil vein” (like the herb).

      • Flax@feddit.uk · 4 days ago

        Haven’t machine translators always been some form of language model? Pretty sure the technique was invented by training a system on pairs of human translations of the same document/work.

  • psx_crab@lemmy.zip · 4 days ago

    The way LLMs write compared to people who are just learning a second language is probably going to be significantly different, so I doubt that will be the case. Unless the person learned the second language from an LLM, then yeah.

      • Engywook@lemmy.zip · 4 days ago

        Only if it’s people you care about or depend on. Anyone else can royally fuck themselves with their opinions.

    • Perspectivist@feddit.uk · 4 days ago

      Every single person on earth. There’s nothing more inherently human than caring about what others think.

      I get what you mean though - I also try not to let other people’s judgment affect what I do, but I’d be lying to myself if I claimed it doesn’t, and especially if I claimed I don’t care.