Two weeks ago, a user asked on the official Lutris GitHub “is lutris slop now”, noting an increasing number of “LLM generated commits”. The Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
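For context on what removing co-authorship means mechanically: Claude Code appends a `Co-Authored-By` trailer line to the commit messages it writes, so stripping attribution is just a message filter. A minimal sketch (the commit subject is made up for illustration; the history-rewrite command is illustrative, not the maintainer's actual workflow):

```shell
# Claude Code's default attribution is a trailer line in the commit message.
# Filtering that line out of a message (subject line is hypothetical):
printf 'Fix runner detection\n\nCo-Authored-By: Claude <noreply@anthropic.com>\n' \
  | grep -v '^Co-Authored-By: Claude'

# Removing it from already-created commits requires rewriting history, e.g.
# with git-filter-repo (destructive; sketch only):
#   git filter-repo --message-callback \
#     'return b"\n".join(l for l in message.split(b"\n")
#                        if not l.startswith(b"Co-Authored-By: Claude"))'
```

Once the trailer is gone, nothing in the commit itself distinguishes generated code from hand-written code, which is exactly the point he is making.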

  • ipkpjersi@lemmy.ml · 4 days ago

    Honestly, unfortunately, I agree. It IS unfortunately helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop. You are responsible for your code, at the end of the day.

    AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.

    • Tony Bark@pawb.socialOP · 4 days ago

      By telling people he expected this and obfuscating the authorship afterwards, he is doing damage in the form of eroding trust in a tool that has otherwise proven reliable.

      • FauxLiving@lemmy.world · 3 days ago

        It seems like you’re glossing over the fact that he was including authorship until he was targeted with a harassment campaign by the anti-AI nutjobs.

        He removed authorship in response to being harassed. His point was that including authorship has only led to harassment, which takes resources away from the actual project. If a person can’t tell that the code was AI generated without a ‘Generated by Claude Code’ tag, then their complaints about AI’s quality seem to fall flat.

      • ipkpjersi@lemmy.ml · 3 days ago

        He removed the authorship specifically because he was attacked for using AI.

        People were already going after him for using AI.

        I have no problem with him using AI personally, because I trust that he is a competent enough dev if he has built and maintained this program thus far. If you don’t trust him specifically because he’s using AI now, and you don’t trust him to review the code the AI produces, then that’s your choice.

        • Tony Bark@pawb.socialOP · 3 days ago

          Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago.

          He knew it was going to be an issue. This wasn’t about being attacked.

    • Venia Silente@lemmy.dbzer0.com · 3 days ago

      AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.

      Don’t excuse the technology. It was created to be useless and wasteful. Every question on an AI engine helps burn down entire forests. Every AI that is kept awake and serving dries up the lagoons and rivers of an indigenous tribe, if not a small town. Every model is built upon the sustained theft of art, code and identity, to the point that the main financiers are proud of it and use it as legal justification.

      People who are evil, made a tool for evil, and those using the tool of evil are doing little more than enabling evil. Number must go up.

      • ipkpjersi@lemmy.ml · 3 days ago

        I’m excusing the technology because it’s specifically not useless; I have found uses for it. I’m not going to demonize the technology when the companies that are abusing it are nearly the entire problem. The real issues are the scale of resources required and the job losses produced by this tech.

        Do you really think running LLMs locally on your GPU is causing irreversible societal harm?

        I know, it’s not popular to say AI isn’t the problem, but honestly, the companies abusing it are the problem.

    • Voroxpete@sh.itjust.works · 4 days ago

      As I’ve said elsewhere here, I really don’t have a problem with people holding a moral stance against the use of genAI. It’s fine to just say “However useful this might be, I don’t want to see it used because I think it has too many ethical costs/consequences.” But blanket accusing all work that involved genAI in any capacity of being “slop” isn’t holding a moral stance, it’s demanding that reality conform to your beliefs; “I hate this, therefore it must be terrible in every respect.”

      If you truly hold a well-founded ethical stance against the use of genAI, that stance shouldn’t be threatened by people doing good and effective work with genAI, because its effectiveness should have nothing to do with your objections.

    • InternetCitizen2@lemmy.world · 4 days ago

      but that’s mostly because of how companies abuse it and less because of the technology itself.

      In any other context, this is the tech that’s supposed to help us into a post-scarcity future.

      • FauxLiving@lemmy.world · 3 days ago

        I agree.

        If you read the anti-AI comments, you’ll find that when they say ‘AI’ they mean ‘LLMs fine-tuned to be chatbots’ and ‘diffusion models that generate bitmaps or video files’.

        They’re seemingly ignorant of all of the other things that Transformers and Deep Neural Networks are used for.

        Remember how there were all of these projects trying to crowdsource an algorithm to fold proteins given an amino acid sequence? Well, a trained neural network ‘AI’ called AlphaFold was created, and it can complete the task with >90% accuracy. THEN, using a network like AlphaFold, another group of scientists made a diffusion model that can be prompted with protein parameters and generate the string of amino acids that would fold into that protein.

        I find it hard to believe that the ‘fuck AI’ crowd understands that ‘AI’ is completely separate from the capitalist frenzy over chatbots and image generation. The vast majority of their complaints are not about the technology, they are about assholes who have a lot of money that are abusing and overhyping the technology in order to get more money.

    • Auli@lemmy.ca · 4 days ago

      If he is using it for the backlog because he is swamped, do you honestly think he is verifying the code?