Two weeks ago, a user asked on the official Lutris GitHub “is lutris slop now”, noting an increasing amount of “LLM generated commits”. The Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • Evotech@lemmy.world (+15/-2) · edited · 2 days ago

    Moral of the story is don’t let Claude do commits. It insists on crediting itself

    Also stop harassing open source developers

    Also be transparent when you have vibecoded commits. There’s no reason to hide it. Just say that parts of your codebase are vibecoded or coded with AI assist, and those who don’t like it can fork it or use something else.

  • JensSpahnpasta@feddit.org (+74/-17) · 3 days ago

    I really hate this new trend of FOSS developers being attacked and harassed for using AI. You might not like if they are using AI. Or you might not like AI at all, but there’s no reason to harass people who are providing you free software. Let them develop it like they want. If you don’t like that they used AI, use another software. Or fork the software before they started using AI. But attacking people like that is not okay on so many levels. It’s not okay to attack people for the software they are using. It’s not okay to attack developers providing a free service and it’s not okay to attack people at all.

  • r1veRRR@feddit.org (+93/-12) · 3 days ago

    From his perspective, he’s investing his free time and likely money into a project for people that are 99% of the time just leechers, as in they never contribute back and only complain.

    Now he has a tool that he feels helps him handle all the FREE labor he’s doing for everyone, and the very same people now want to tell him how to do that FREE labor he does for them.

    I completely understand being pissed off by that.

  • Crozekiel@lemmy.zip (+41/-11) · 3 days ago

    AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which are banned for use by the EPA) which significantly increase health risk to residents that live nearby (cancer and asthma rates already significantly increased). Then you have the ridiculous effects it is having on computer hardware markets, energy and water infrastructure and prices.

    Then after all of that, the AI themselves are hallucinating somewhere in the neighborhood of 25% of the time, and multiple studies have found that people that use them regularly are losing their own skills.

    I can’t figure out why people would choose to use them. I can’t figure out why programming is the one place where people that might have otherwise been considered experts in the field are excited to use them. Writers, artists, lawyers, doctors, basically every other professional field that AI companies have suggested these would be good for, they get trashed by experts in the fields for making garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else it does. Does development suck this much? Do developers have so little idea what they are doing that this seems like a good idea?

  • super_user_do@feddit.it (+71/-19) · 3 days ago

    I understand the hatred towards AI, but people gotta understand that there’s a difference between coding with AI and vibecoding. They are DIFFERENT THINGS! AI is useful; what is not is vibecoding, and neither is shaming a developer with 30 years of real-world experience for using it for once. Using AI is OK if you do it critically and with common sense.

    • PrettyFlyForAFatGuy@feddit.uk (+38/-3) · 3 days ago

      If it’s making commits for you, you’re vibe coding.

      I use it at work for troubleshooting, and if I get it to generate anything for me, I stage the changes and review them before committing myself.

    • flop_leash_973@lemmy.world (+10/-3) · 3 days ago

      You are correct, but people in general are pretty bad at subtlety and grey areas. Just look at the current state of political discourse in the US. Probably half the people that support the likes of Trump do so because they like black/white binary choices and can’t handle shades of grey in their lives emotionally.

    • SigmarStern@discuss.tchncs.de (+7) · 3 days ago

      I totally agree. I’m not an AI hype man. I want to scream whenever I see a PR littered with emojis, bullet lists, and way too much text for a simple change. I hate the discussions about the transformative power of AI, the 10x production gains, all the million tools, agents, skills, plugins, methods I should be using but I am already behind and old and probably unemployed next week, right? Still, AI use is not inherently bad. It gets me unstuck. It finds subtle errors I wasn’t noticing, it writes documentation faster and better than I can. I hate the companies who are pushing it, the methods of its training, but the tool itself is just a tool and sometimes a very useful one. IMHO we shouldn’t shame every open source developer just for using it. As long as they are responsible with it, I’m fine with some AI code in my software.

    • Auli@lemmy.ca (+2/-2) · 3 days ago

      I have news for you: it’s the same thing. There is no difference besides maybe the prompt; the same AI is writing the code. And I do not believe a coder is going over every single line of code.

  • ipkpjersi@lemmy.ml (+35/-9) · edited · 3 days ago

    Honestly, unfortunately, I agree. It IS unfortunately helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop. You are responsible for your code, at the end of the day.

    AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.

  • nialv7@lemmy.world (+189/-13) · 4 days ago

    You can criticise them, but ultimately they are an unpaid developer making their work freely available for the benefit of us all. At least don’t harass the developer.

    • TrickDacy@lemmy.world (+78/-18) · 4 days ago

      You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

      • Zos_Kia@jlai.lu (+75/-4) · 4 days ago

        It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

        I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as github, reddit, lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses and often quits as a result.

        The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

        • TrickDacy@lemmy.world (+28/-1) · 4 days ago

          I see your point. I might also have responded poorly to that, on some level at least.

          • Zos_Kia@jlai.lu (+16) · 4 days ago

            Yeah, same. I’d like to think I’d answer: “I’ll use AI; if you don’t like it you can fork the project, and I wish you good luck. Go share your opinion on AI in an appropriate place.” But realistically there’s a high chance it catches me on a bad day and I get stupid.

        • MousePotatoDoesStuff@lemmy.world (+5) · 3 days ago

          … You’re right. I definitely wouldn’t be above such a response.

          The problem is, a lot of people here - myself included - were/are also being impulsive about their responses to this issue, at least partially due to all the shitty stuff caused by GenAI.

          There might be some toxic people too, I wouldn’t be surprised - but this can happen without them, too.

          • Zos_Kia@jlai.lu (+5) · edited · 3 days ago

            The thing is, toxic people thrive in mob situations and are often found leading or even manufacturing them. I tend to be wary around this kind of setup, as they are easy to get caught up in and hard to get out of.

        • Venia Silente@lemmy.dbzer0.com (+1/-1) · 2 days ago

          The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

          No, it was literally an important question to have answered. And booooy did the dev answer.

          • Zos_Kia@jlai.lu (+1/-1) · 2 days ago

            Is it appropriate to ask a stranger a question by first calling their work “slop”? Is that how you communicate with people? How is that working out IRL?

            Y’all are so immersed in bully culture that this seems normal to you smh

            • Venia Silente@lemmy.dbzer0.com (+1/-1) · 1 day ago

              Wow, so asking whether something is a thing, using the name of the thing being asked about, is “bully culture” now? This is a whole new low level of argument in the pro-AI take.

              • Zos_Kia@jlai.lu (+1/-1) · 1 day ago

                So yes, you think this is normal human behaviour. Good luck with that shit, i hope the world treats you with the same energy.

      • aksdb@lemmy.world (+18/-30) · 4 days ago

        Trolling? They gave a pretty good answer explaining their reasoning.

        • TrickDacy@lemmy.world (+72/-3) · 4 days ago

          I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

          Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.

          • aksdb@lemmy.world (+24) · 4 days ago

            Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

    • S_H_K@lemmy.dbzer0.com (+6) · 3 days ago

      I agree with you. With the current state of things in the world, it’s hard to keep up and easy to complain. I’d say instead of asking the guy not to use AI, ask him what he needs for help. He’s clearly stating that he’s in burnout.
      I don’t have the time or skills to help, so I wouldn’t go complaining.

    • UnfortunateShort@lemmy.world (+9/-6) · 4 days ago

      They are on Liberapay if you want to support the project, btw. Combined with Patreon, they sit at less than $700 a week. That’s like half a dev before tax.

    • 4am@lemmy.zip (+13/-18) · 4 days ago

      They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people whose experience with Linux is “I heard it’s faster for games”.

      It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better; AI tools still aren’t perfect, so you still have to do the legwork. Or at least let your community do it.

      Also, you should let your community make ethics decisions about whether to support you.

      Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.

      • Voroxpete@sh.itjust.works (+24/-5) · 4 days ago

        Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

        • P03 Locke@lemmy.dbzer0.com (+15) · edited · 4 days ago

          In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

          Oh, it’s more than subconscious, as you can see in this thread.

          Lutris developer makes a perfectly sane and nuanced response to a reactionary “is lutris slop now” comment, and gets shit on for it, because everybody has to fight in black and white terms. There are no grey opinions, only battle lines to be drawn to these people.

          What? Are you all going to shit on your lord and savior Linus himself for also saying he uses LLMs? Oh, what, you didn’t know?!?

          • aksdb@lemmy.world (+5/-4) · edited · 4 days ago

            The response is only nuanced until the “good luck” sentence. If he had swallowed that, it would have been an almost perfect response. But that sentence is quite a big “fuck you”.

              • aksdb@lemmy.world (+1) · 3 days ago

                Yes, and I didn’t say that. I even argued in favor of his response throughout this whole post (getting a shit ton of downvotes all along). But I think that doesn’t invalidate my point either: without this one sentence, his whole chain of arguments would have been pretty good and reasonable. It was just unnecessary to then add this snarky remark. It’s understandable if he’s pissed, but just because you are pissed when you say something doesn’t make what you said a clever move.

                • dream_weasel@sh.itjust.works (+1) · 3 days ago

                  I get it. You can’t get by “Ai iS slOp” at top level comments anymore. I get that kind of ending because I would add it… but then I also don’t mind collecting downvotes so ymmv I guess.

            • P03 Locke@lemmy.dbzer0.com (+6) · 3 days ago

              It’s not as much of a “fuck you” as much as “I’m tired of this same fucking response, when all I’m trying to do is get some work done, which I do for fucking free, by the way”.

  • Cyv_@lemmy.blahaj.zone (+212/-3) · edited · 4 days ago

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.social (OP) (+75/-8) · 4 days ago

      I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist, or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

      • Scrollone@feddit.it (+40/-2) · 4 days ago

        Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same as the one on your phone keyboard.

        • yucandu@lemmy.world (+21/-1) · 4 days ago

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • daikiki@lemmy.world (+23/-1) · edited · 4 days ago

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world (+1/-32) · 4 days ago

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

          • BackgrndNoize@lemmy.world (+5) · 4 days ago

            Not even free, just cheaper than an actual employee for now, but greed is inevitable and AI is computationally expensive, it’s only a matter of time before these AI companies start cranking up the prices.

      • Vlyn@lemmy.zip (+20/-5) · 4 days ago

        You might genuinely be using it wrong.

        At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.

        Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output; that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

        Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

        Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
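        For reference, the CLAUDE.md that /init generates is just a markdown brief of the project that gets prepended to the context. A minimal, entirely hypothetical example (project name, paths, and conventions below are made up; the real file is generated from your actual repo):

```markdown
# ExampleService

ASP.NET Core (.NET 10) REST API with an Angular frontend.

## Build & test
- `dotnet build` from the repo root
- `dotnet test tests/ExampleService.Tests`

## Conventions
- Nullable reference types are enabled; fix warnings, don't suppress them
- Keep controllers thin; business logic lives in `src/Services`
```

        Because this file rides along with every request, keeping it short and accurate matters more than making it exhaustive.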

        • P03 Locke@lemmy.dbzer0.com (+12/-2) · edited · 4 days ago

          Agreed, I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and shout how shit it is.

          Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.

          It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.

          Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just “me needs problem solvey, go do fix thing!”

          • Zos_Kia@jlai.lu (+2) · 2 days ago

            Just yesterday I had one of those moments of grace that are becoming commonplace.

            Basically I have to migrate a service from a n8n workflow to an actual nodejs server for performance reasons. I spent 15 minutes carefully scoping the migration, telling it exactly what tools to use and code style to adopt. Gave it the original brief and access to the n8n workflows.

            The whole thing was done in 4 minutes and 30 seconds. It even noticed a bug which had been in production unnoticed for the past year. It gave me some good documentation on how to set up the Google service account and the kind of memory usage to expect, so I could dimension the instance accordingly. Another five minutes and I had a whole test suite with decent coverage. I had negotiated with the client that it would take around a week; well, that was the under-promise of the year…

            People who go around telling it doesn’t work are incompetent, out of their minds or straight up lying.

          • Vlyn@lemmy.zip (+6/-1) · 4 days ago

            It’s not really that simple. Yes, it’s a great tool when it works, but in the end it boils down to being a text prediction machine.

            So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)

            • dream_weasel@sh.itjust.works (+3) · edited · 3 days ago

              I feel like there needs to be a dedicated post (and I don’t want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of “statistical” that it doesn’t even mean anything anymore.

              A decent example of a statistical text prediction machine is the middle word suggested by your phone when you’re using the keyboard. An LLM is not that.

              In the most general terms, this kind of language model tokenizes a corpus of text based on a vocabulary (which is probably more than just the words in the dictionary), and uses an embedding model to translate these tokens into vectors of semantic “meaning” that minimize loss in a bidirectional encoding (probably). That base model is then trained against a rubric for one or more topic areas, retrained for instruction-following and explainability, retrained with reinforcement learning and human feedback to provide guardrails, and retrained again to make use of supplemental materials not part of the original training corpus (retrieval-augmented generation). Then it’s distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever), and maybe THEN made available to people to use. There are generally more parts to curriculum learning even than that, but it’s a representative-ish start.
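              As a toy illustration of just the first two of those stages (this is nobody’s real model, just the shape of the idea, with a made-up five-word vocabulary):

```python
import random

# Toy vocabulary; real subword vocabularies run to tens of thousands of entries.
vocab = {"<unk>": 0, "i": 1, "really": 2, "like": 3, "code": 4}

def tokenize(text):
    """Map text to integer token ids (unknown words fall back to <unk>)."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

# Embedding table: one learned vector of "meaning" per token id.
# Random here; in a real model these weights come from training.
random.seed(0)
DIM = 8  # real models use thousands of dimensions
embeddings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in vocab]

ids = tokenize("I really like code")      # [1, 2, 3, 4]
vectors = [embeddings[i] for i in ids]    # one 8-dim vector per token
```

              Everything after that (instruction tuning, RLHF, distillation) is training layered on top of this representation, which is why “statistical text predictor” undersells it.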

              My point being that, yes, it would be nuts to pose ANY question to a predictor that says “with 84% probability, the word that is most likely follows ‘I really like’ is ‘gooning’ on reddit”, but even Grok is wildly more sophisticated than that and Grok is terrible.

              Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.

              • Vlyn@lemmy.zip (+2) · edited · 3 days ago

                The training is sophisticated, but inference is unfortunately really a text prediction machine. Technically token prediction, but you get the idea.

                For every single token/word. You input your system prompt, context, user input, then the output starts.

                The

                Feed the entire context back in and add the reply “The” at the end.

                The capital

                Feed everything in again with “The capital”

                The capital of

                Feed everything in again…

                The capital of Austria

                It literally works like that, which sounds crazy :)

                The only control you as a user can have is the sampling, like temperature, top-k and so on. But that’s just to soften and randomize how deterministic the model is.

                Edit: I should add that tool and subagent use makes this approach a bit more powerful nowadays. But it all boils down to text prediction again. Even the tools are described to the model in plain text.

            • P03 Locke@lemmy.dbzer0.com (+5/-4) · 4 days ago

              but in the end it boils down to being a text prediction machine.

              And we’re barely smarter than a bunch of monkeys throwing piles of shit at each other. Being reductive about its origins doesn’t really explain anything.

              I trust the output as much as a random Stackoverflow reply with no votes :)

              Yeah, but that’s why there are unit tests. Let it run its own tests and solve its own bugs. How many mistakes have you or I made because we hate writing unit tests? At least the LLM has no problem writing the tests, once you know the code works.

              • svtdragon@lemmy.world (+1) · 3 days ago

                I’ve had better luck with using it in a TDD style. “Write a test for this issue, watch it fail, then make it pass.”
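                That red-green loop is the same whether a human or an agent writes the fix. With a hypothetical off-by-one bug in a paginator, it looks like:

```python
def page_count(total_items, page_size):
    """Number of pages needed. The hypothetical bug was plain integer
    division (total_items // page_size), which dropped a partial last page."""
    full, remainder = divmod(total_items, page_size)
    return full + (1 if remainder else 0)

# Step 1: encode the reported issue as a test. Against the buggy version
# this failed (101 // 10 == 10); the fixed version above makes it pass.
assert page_count(101, 10) == 11
# Step 2: keep the existing behaviour pinned down too.
assert page_count(100, 10) == 10
assert page_count(0, 10) == 0
```

                Giving the agent a concrete failing test to chase constrains it far better than a prose description of the bug.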

      • Fatal@piefed.social (+6/-1) · 4 days ago

        At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.

        • Zos_Kia@jlai.lu (+1) · 2 days ago

          I don’t know what they are using, because all agents routinely do that now. I suspect they are fibbing, or tested things out in 2024 and never updated their opinion.

      • CompassRed@discuss.tchncs.de (+9/-7) · 4 days ago

        The symptoms you describe are caused by bad prompting. If an AI is providing over-complicated solutions, 9 times out of 10 it’s because you didn’t constrain your problem enough. If it’s referencing tools that don’t exist, then you either haven’t specified which tools are acceptable or you haven’t provided the context required for it to find the tools. You may also be wanting too much out of AI. You can’t expect it to do everything for you. You still have to do almost all the thinking and engineering if you want a quality project - the AI is just there to write the code. Sure, you can use an AI to help you learn how to be a better engineer, but AIs typically don’t make good high-level decisions. Treat AI like an intern, not like a principal engineer.

          • CompassRed@discuss.tchncs.de (+6/-1) · 3 days ago

            It’s not about stupid or smart. It’s a tool, not a person. If you don’t get the same results that other people get with the same tool, then what could possibly be the problem other than how the person is using the tool?

        • Bronzebeard@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          3
          ·
          3 days ago

          “it’s your fault that it just made up tools that don’t exist” is a bold statement, bro.

          • Zos_Kia@jlai.lu
            link
            fedilink
            English
            arrow-up
            2
            ·
            2 days ago

            The junior analogy comes to mind. If you hire a fresh face and they ship code that doesn’t work, it’s definitely on you, bro.

          • CompassRed@discuss.tchncs.de
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            1
            ·
            3 days ago

            No, it’s not. It doesn’t have intention. It’s literally just a tool. If you don’t get the results you expect with a tool when other people do get those results, then the problem isn’t the tool.

      • yucandu@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        4
        ·
        4 days ago

        I create custom embedded devices with displays and I’ve found it very useful for laying things out. Like asking it to take per-second wind speed and direction updates and build a Wind Rose out of them, with colored sections in each petal denoting the speed… it makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.
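        The petal/section binning described above is exactly the kind of routine layout code in question; a rough Python sketch (the petal count, speed thresholds, and sample format are assumptions for illustration, not the commenter’s actual code):

```python
def wind_rose_bins(samples, petals=8, speed_edges=(2, 5, 10)):
    """Bin (direction_deg, speed) samples into petals x speed tiers for a wind rose.

    Each petal is a compass sector; each speed tier maps to one colored section.
    """
    width = 360 / petals
    counts = [[0] * (len(speed_edges) + 1) for _ in range(petals)]
    for direction, speed in samples:
        # Centre petals on 0 deg, 45 deg, ... so 350 deg and 10 deg both land in the north petal.
        petal = int(((direction + width / 2) % 360) // width)
        # Count how many thresholds the speed exceeds -> which colored section.
        tier = sum(speed > edge for edge in speed_edges)
        counts[petal][tier] += 1
    return counts
```

        Per-second updates would just append to `samples` (or increment `counts` directly) before redrawing the rose.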

      • aloofPenguin@piefed.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        4 days ago

        I had the same experience. I asked a local LLM about using some Qt Wayland stuff for keyboard input; the only documentation was the official one (which wasn’t a lot for a noob), there were no examples of it being used online, and all my attempts at making it work had failed. It hallucinated some functions that didn’t exist, even when I let it do web search (NOT via my browser). This was a few years ago.

        • P03 Locke@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          10
          arrow-down
          3
          ·
          4 days ago

          This was a few years ago.

          That’s 50 years in LLM terms. You might as well have been banging two rocks together.

    • Alex@lemmy.ml
      link
      fedilink
      English
      arrow-up
      27
      arrow-down
      5
      ·
      4 days ago

      I expect because it wasn’t a user - just a random passer by throwing stones on their own personal crusade. The project only has two major contributors who are now being harassed in the issues for the choices they make about how to run their project.

      Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      2
      ·
      4 days ago

      Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

    • Fizz@lemmy.nz
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      7
      ·
      4 days ago

      I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.

      • svtdragon@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 days ago

        It’s mostly not a thing developers do. It’s a thing the tools themselves do when asked to make a commit.

  • PerogiBoi@lemmy.ca
    link
    fedilink
    English
    arrow-up
    2
    arrow-down
    1
    ·
    2 days ago

    Aaaaand just uninstalled lutris. There are many other ways to install windows games and applications that aren’t ensloppified.

  • lohky@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    2
    ·
    3 days ago

    There hasn’t been anything I haven’t been able to run between Heroic and Steam. I didn’t like using lutris anyway. ¯\_(ツ)_/¯

  • Skankhunt420@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    6
    ·
    3 days ago

    Open source stuff is awesome and I really like people improving Linux in their spare time

    But doing it this way is basically saying “fuck you” to the community, which is fucked up.

    Could have talked about how AI helps him or how he uses it for templates or whatever, and damn, even if I didn’t agree with those points either, that’s a lot better than being like “alright, good luck finding it now then, bitch”.

    I wouldn’t mess with anything this guy does anymore after this.

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      7
      ·
      4 days ago

      Yeah, management wants us to use AI at $DAYJOB and one of the strategies we’ve considered for lessening its negative impact on productivity, is to always put generated code into an entirely separate commit.

      Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding this decision) or by the intern that knows none of the project context.

      We haven’t actually started doing these separate commits, because it’s cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
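      A minimal sketch of what that separate-commit convention could look like with plain git; the `Generated-by:` footer and the file name here are made-up illustrations, not an established standard:

```shell
# Hypothetical workflow: machine-generated code goes in its own commit,
# tagged with a footer line so its origin stays queryable later.
git add src/generated_parser.py
git commit -m "Add parser skeleton" -m "Generated-by: claude"

# Reviewers can then list which commits came from the generator:
git log --grep="Generated-by:" --oneline
```

      The footer survives rebases and squashes better than a separate branch would, which is part of why obfuscating it loses information for good.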

    • Holytimes@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      22
      arrow-down
      16
      ·
      4 days ago

      Well when you have a massive problem of harassment, death threats and fucking retarded shit stains screaming at every single dev that is even theorized to use ai regardless if it’s true or not.

      I blame fucking no one for hiding the fact.

      This is on the users not the dev. The users are fucking animals and created this very problem.

      Blaming the wrong people and attacking them is the yuck.

      Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.

      • Auli@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 days ago

        Then just quit; it isn’t worth it. I know AI has its uses and can be helpful.

  • adeoxymus@lemmy.world
    link
    fedilink
    English
    arrow-up
    140
    arrow-down
    41
    ·
    4 days ago

    Tbh I agree, if the code is appropriate why care if it’s generated by an LLM

    • deadcade@lemmy.deadca.de
      link
      fedilink
      English
      arrow-up
      88
      arrow-down
      45
      ·
      4 days ago

      It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

      Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

      If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

      • bookmeat@fedinsfw.app
        link
        fedilink
        English
        arrow-up
        54
        arrow-down
        13
        ·
        4 days ago

        A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

        • wirelesswire@lemmy.zip
          link
          fedilink
          English
          arrow-up
          62
          arrow-down
          2
          ·
          4 days ago

          Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

          • lumpenproletariat@quokk.au
            link
            fedilink
            English
            arrow-up
            20
            arrow-down
            2
            ·
            4 days ago

            More reason to destroy copyright.

            Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            5
            ·
            4 days ago

            The GPL license only exists because copyright fucked over the public contract that it promised to society: Copyrights are temporary and will be given back to public domain. Instead, shitheads like Mark Twain and Disney extended copyright to practically forever.

            • everett@lemmy.ml
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 days ago

              I don’t understand your position here. If we went back to a more reasonable 7 or 14 year copyright term, how would that obviate the need for a license like the GPL, which permits instant use of code provided you share-alike? Those shorter copyright lengths would be pretty reasonable for books or movies, but would still suck for tech.

              • P03 Locke@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                2
                ·
                4 days ago

                We would be faaaaaar less hostile towards copyrights if we had a regular source of RECENT public domain coming out every year.

                I’m not saying that it would make GPL or OSS licenses useless. I’m just saying that the motivation and need for those licenses exist because we don’t live in a society where freely available media and data are commonplace.

          • Luminous5481 "Lawless Heathen" [they/them]@anarchist.nexus
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            10
            ·
            edit-2
            4 days ago

            Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.

            • astro@leminal.space
              link
              fedilink
              English
              arrow-up
              6
              arrow-down
              2
              ·
              4 days ago

              It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.

              • obelisk_complex@piefed.ca
                link
                fedilink
                English
                arrow-up
                2
                ·
                3 days ago

                Precisely this, yes, well said. We all stand on the shoulders of those who came before us, one way or another.

        • Beacon@fedia.io
          link
          fedilink
          arrow-up
          5
          arrow-down
          1
          ·
          4 days ago

          We weren’t all saying copyright altogether was unfair. In fact i think most of us have always said copyright law should exist, just that it shouldn’t be like ‘lifetime of the creator plus another 75 years after their death’. Copyright should be closer to how it was when the law was first started, which is something like 20 years.

          (And personally imo there should also be some nuanced exceptions too.)

        • Bronzebeard@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          3 days ago

          Yeah people making that argument were dumb. Copyright needs to be fixed, not abolished.

      • Ganbat@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        1
        ·
        edit-2
        4 days ago

        If the developer isn’t able to keep up, they should look for (co-)maintainers.

        Same energy as “Just go on Twitter and ask for free voice actors,” a la Vivziepop. A lot of people think this kind of shit is super easy, but realistically, it’s nearly impossible to get people to dedicate that kind of effort to something that can never be more than a money/time sink.

        • prole@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          3
          ·
          4 days ago

          I was under the impression that FOSS developers do it for the love of the game and not for monetary compensation. They’re literally putting the software out for free even though they don’t need to. They are going to be making this shit regardless.

          • Ganbat@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            5
            ·
            4 days ago

            My point was “Help me with my passion project for nothing” is a much harder sell. “Just find some help,” is advice along the lines of “Just get in a plane and fly it.”

          • tempest@lemmy.ca
            link
            fedilink
            English
            arrow-up
            3
            ·
            4 days ago

            That is what they are technically doing, but they often don’t consider the consequences, and they often react poorly when they realize that an Amazon (or whatever) comes along, contributes nothing, and monetizes their work while dumping the support and maintenance on them.

            That is the name of the game though if you use an MIT license.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 days ago

            At this point, teachers do it “for the love of the game”, but they still want to get paid more than minimum wage.

        • deadcade@lemmy.deadca.de
          link
          fedilink
          English
          arrow-up
          3
          ·
          4 days ago

          Absolutely true, but there’s one clear and obvious way: drop support for the project yourself.

          If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

          FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            4 days ago

            XKCD, of course

            If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

            No, they won’t. This line of thinking is how we got the above.

            Their line of work is thankless, and nobody wants to do a fucking thankless job, especially when the last maintainer was given a bunch of shit for it.

        • Vlyn@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          4 days ago

          Hey, if your project is important enough you might get your own Jia Tan (:

      • silver_wings_of_morning@feddit.dk
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        1
        ·
        4 days ago

        Speaking only on the programming part of the slop machine, programmers typically copy code anyways. It’s not an ethical issue for a programmer using a tool that has been trained on other people’s “stolen” code.

      • Goretantath@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        15
        ·
        4 days ago

        Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.

    • Dettweiler@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      42
      arrow-down
      1
      ·
      4 days ago

      It’s all about curation and review. If they use AI to make the whole project, it’s going to be bloated slop. If they use it to write sections that they then review, edit, and validate; then it’s all good.

      I’m fairly anti-AI for most current applications, but I’m not against purpose-built tools for improving workflow. I use some of Photoshop’s generative tools for editing parts of images I’m using for training material. Sometimes it does fine, sometimes I have to clean it up, and sometimes it’s so bad it’s not worth it. I’m being very selective, and if the details are wrong it’s no good. In the end, it’s still a photo I took, and it has some necessary touchups.

    • drolex@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      5
      ·
      4 days ago
      • Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
      • Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
      • Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires
    • criss_cross@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      4 days ago

      If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.

      I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

      • pivot_root@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        4 days ago

        It’s the same for me.

        I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

        I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

        This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.

    • RightHandOfIkaros@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      arrow-down
      1
      ·
      4 days ago

      Personally, I have never seen LLM-generated code that works without needing to be edited, but I imagine for routine blocks of code and very common things it probably does fine. I don’t see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part, leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.

      However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. However, this programmer claims to have 30 years of experience, and if that’s the case then he likely knows this and probably edits the LLM output code himself.

      As I have said before, generative AI is a tool, like Photoshop. I don’t see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because it’s not being used by people that understand what the end result needs to be, and businesses will inevitably lose money.

      • P03 Locke@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 days ago

        However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure.

        That’s not completely true. Claude and some of the Chinese coding models have gotten a lot better at creating a good first pass.

        That’s also why I like tests. Just force the model to prove that it works.

        Oh, you built the thing and think it’s finished? Prove it. Go run it. Did it work? No? Then go fix the bugs. Does it compile now? Cool, run the unit test platform. Got more bugs? Fix them. Now, go write more unit tests to match the bugs you found. You keep running into the same coding issue? Go write some rules for me that tell yourself not to do that shit.

        I mean, I’ve been doing this programming shit for many decades, and even I’ve been caught by my overconfidence of trying to write some big project and thinking it’s just going to work the first time. No reason to think even a high-powered Claude thinking model is going to magically just write the whole thing bug-free.

    • The_Blinding_Eyes@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 days ago

      While I know there is more nuance than this: why should I spend any of my time on something when you spent no time creating it? I know that applies more to the slop, but that’s where I am with most LLM-generated stuff.

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      4
      ·
      edit-2
      4 days ago

      “If” doing all the lifting here.

      If we ignore the mountain of evidence saying the opposite…

    • Kowowow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      4
      ·
      4 days ago

      I want to one day make a game, and there is no way I’m not prototyping it with LLM code, though I would want to get things finalized by a real coder if I ever got the game finished. I’ve never made real progress on learning to code, even in school.

        • Dremor@lemmy.worldM
          link
          fedilink
          English
          arrow-up
          38
          arrow-down
          4
          ·
          edit-2
          4 days ago

          Being a developer, I don’t care if someone else uses my code. Code is like a brick. By itself it has little value; the real value lies in how it is used.
          If I find an optimal way to do something, my only wish is to make it available to as many people as possible. For those who come after.

            • Dremor@lemmy.worldM
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 days ago

              That’s not how LLMs work either.

              An LLM has no knowledge, but it has the statistical probability of one token following another, and given an overall context it creates the statistically most likely text.
              To calculate that probability as accurately as possible you need as many examples as possible, to determine how often word A follows word B. Hence the immense datasets required.
              Luckily for us programmers, computer programs are inherently statistically similar to one another, which makes LLMs quite good at them.
              Now, the programs it creates aren’t perfect, but it lets you write long, boring code fast, and even explain it if you ask it to. This way I’ve learned a lot of new things that I wouldn’t have unless I had the time and energy to screw around with my programs (which I wish I had, but don’t), or dug around Open Source programs’ source code, which would take an average human years.
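              That “how often word A follows word B” idea can be shown with a toy bigram counter in Python (purely illustrative; real LLMs condition on far more than the previous token):

```python
from collections import Counter, defaultdict

def bigram_probs(corpus):
    """For each token, estimate the probability of each token that follows it."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1  # count how often b follows a
    # Normalize counts to probabilities per preceding token.
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in follows.items()}

def most_likely_next(probs, token):
    """Pick the statistically most likely continuation of a token."""
    return max(probs[token], key=probs[token].get)
```

              With a corpus like `["the cat sat", "the cat ran", "the dog sat"]`, “cat” comes out as the most likely token after “the”; scaled up by many orders of magnitude (and with deep context instead of single-token pairs), that is the statistic being learned.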

              Now there is the problem of the ethic use of AI, which is a whole other aspect. I use only local models, which I run on my own hardware (usually using Ollama, but I’m looking into NPU enabled alternatives).

            • Dremor@lemmy.worldM
              link
              fedilink
              English
              arrow-up
              1
              ·
              3 days ago

              I can live with helping some assholes if my contributions help others. At least I don’t make them richer, since I only use local AIs.

        • adeoxymus@lemmy.world
          link
          fedilink
          English
          arrow-up
          25
          arrow-down
          4
          ·
          4 days ago

          Tbh all programmers have been copy pasting from each other forever. The middle step of searching stack overflow or GitHub for the code you want is simply removed

          • galaxy_nova@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            ·
            4 days ago

            Exactly. If someone has already come up with an optimal solution why the hell would I reimplement it. My real problems are not with LLMs themselves but rather the sourcing of the training data and the power usage. If I could use an “ethically sourced” llm locally I’d be mostly happy. Ultimately LLMs are also only good for code specifically. Architecture or things that require a lot of thought like data pipelines I’ve found AI to be pretty garbage at when experimenting

          • wholookshere@piefed.blahaj.zone
            link
            fedilink
            English
            arrow-up
            25
            arrow-down
            4
            ·
            4 days ago

            LLMs have stolen works from more than just artists.

            All public repositories, at a minimum, have been used as training data, regardless of license, including licenses that require all derivative work to be under the same license.

            So there’s more than just Lutris stolen.

            • Lung@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              26
              ·
              4 days ago

              So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?

              • wholookshere@piefed.blahaj.zone
                link
                fedilink
                English
                arrow-up
                8
                ·
                4 days ago

                The fuck you talking about.

                How is using a tool with billions of dollars behind it Robin Hood?

                How is stealing open source projects’ code, regardless of license, stealing from corporations?

                • Lung@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  2
                  ·
                  edit-2
                  4 days ago
                  • he’s not anthropic, and doesn’t have billions of dollars
                  • stealing from open source is not stealing, that’s the point of open source
                  • the argument above is that these models are allegedly trained “regardless of license” i.e. implying they are trained on non-oss code
          • prole@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            4
            ·
            4 days ago

            No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)

  • southsamurai@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    67
    arrow-down
    19
    ·
    4 days ago

    Yeah, this is actually one of the good things a technology like this can do.

    He’s dead right, in terms of slop, if it’s someone with training and experience using a tool, it doesn’t matter if that tool is vim or claude. It ain’t slop if it’s built right.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      29
      arrow-down
      7
      ·
      edit-2
      4 days ago

      It ain’t slop if it’s built right.

      Yeah, but the problem is, is it? They absolutely insist that we use AI at work, which is not only an insane concept in and of itself, but the problem is that if I have to nanny it to make sure it doesn’t make a mistake, then how is it a useful product?

      He says it helps him get work done he wouldn’t otherwise do, but how’s that possible? How is it possible that he is giving every line of code the same scrutiny he would if he wrote it himself, if he himself admits that he would never have got around to writing that code had the AI not done it? The math ain’t matching on this one.

      • P03 Locke@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        26
        arrow-down
        12
        ·
        edit-2
        4 days ago

        the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?

        When was the last time you coded something perfectly? “If I have to nanny you to make sure you don’t make a mistake, then how are you a useful employee?” See how that doesn’t make sense. There’s a reason why good development shops live on the backs of their code reviews and review practices.

        The math ain’t matching on this one.

        The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.

        There’s also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there’s code there. I still have to review it, point out some mistakes, and then go back and refill my drink.

        And there’s so much you can customize with personal rules. Don’t like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
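        As a sketch, such a per-project rules file might look like this (the file name, sections, and directives are invented for illustration; they’re whatever house style you want the model to follow, in whichever file your tool reads):

```markdown
# Project rules (illustrative example)

## Style
- Four-space indentation, no tabs; keep lines under 100 characters.
- Prefer early returns over deeply nested conditionals.

## Workflow
- Run the unit tests after every change; report failures instead of hiding them.
- Never commit directly; leave staging and commits to the human reviewer.
```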

        All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains on how to code for that project, and the more experience you gain in how to communicate with an entity that can understand your ideas. You wouldn’t believe how many people can’t rubberduck and explain proper concepts to people, much less LLMs.

        LLMs are patient. They don’t give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.

        • BorgDrone@feddit.nl
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          3 days ago

          The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.

          But the problem never was typing in the actual code. The majority of coding is understanding the problem you’re trying to solve and figuring out a good solution. If you let the AI do the thinking for you, then you’re building AI slop. You can’t review your way out of it because a proper review still requires that level of understanding the problem. If you just let the AI do the typing for you, there’s very little to be gained there as the time spent typing is negligible.

          AI may be good at building simple, boilerplate-level code. But that’s what we have junior developers for, and we need junior developers because they grow into medior and senior developers.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            1
            ·
            3 days ago

            If you let the AI do the thinking for you, then you’re building AI slop.

            No, for major projects, you start out with a plan. I may spend upwards of 2-3 hours just drafting a plan with the LLM: figuring out options, asking questions when it’s an area I’m not deeply familiar with, sketching out what the modules are going to look like. It’s not slop when you’re planning out what to do and what your end result is supposed to be.

            We are not the same

            People who talk this way have zero experience with actually using LLMs, especially coding models.

            • Auli@lemmy.ca
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              3 days ago

              Oh so I didn’t vibe code a go program that I have no understanding of the language cause I knew what I wanted the program to do in the end. Got you I am now a go developer. I didn’t just ask the ai to do something I new which library I wanted it to use and new what I wanted it to interface with and new exactly what I wanted it to do.

              • P03 Locke@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 days ago

                I didn’t just ask the ai to do something I new which library I wanted it to use and new what I wanted it to interface with and new exactly what I wanted it to do.

                I have no understanding of the language

                No shit… you don’t even have an understanding of the English language. No wonder the LLM didn’t understand you.

          • r1veRRR@feddit.org
            link
            fedilink
            English
            arrow-up
            2
            ·
            3 days ago

            This really depends on the project. For example, if you’re creating a CRUD web app for managing some kind of data, the main tough decisions involve system and data architecture. After that, most other work is straightforward menial work. It doesn’t take a genius to validate a gajillion text fields for a specific min and max length, map them to the correct field in the API, validate on the server again, and write them to the correct database field.
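
            That kind of menial field work can be sketched in a few lines; this is just an illustrative example (the field names and limits are hypothetical, not any real app’s schema):

```python
# Schema-driven length validation: the boilerplate CRUD work described above.
# Field names and limits are hypothetical.

FIELD_LIMITS = {
    "username": (3, 32),   # (min_len, max_len)
    "email": (5, 254),
    "bio": (0, 500),
}

def validate_lengths(payload: dict) -> dict:
    """Return a mapping of field name -> error message for out-of-range values."""
    errors = {}
    for field, (min_len, max_len) in FIELD_LIMITS.items():
        value = payload.get(field, "")
        if not (min_len <= len(value) <= max_len):
            errors[field] = f"{field} must be {min_len}-{max_len} characters"
    return errors
```

            The same check typically runs once in the client and again on the server, which is exactly the kind of repetition being described.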

            I agree that AI might screw companies over in the long run, when there are no more juniors who can become seniors. That doesn’t apply to this case at all.

      • r1veRRR@feddit.org
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        3 days ago

        You do have to consider that this is an open-source developer creating something free in his free time. The app is also not life-or-death. Meaning, his quality standards are UNDERSTANDABLY not as high as if he were working for money on a bank’s payment system.

        In the end, all of the complainers are welcome to do the work themselves. That way, he won’t have to use AI at all.

      • southsamurai@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        2
        ·
        4 days ago

        Well, I’m not a code monkey, between dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don’t really have to pore over every single line. The only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different than reviewing a human’s code that you didn’t write.

        Look at it like this: we automated the assembly of machines a long time ago. It had flaws early on that required intense supervision. The only difference here on a practical level is in how the damn things learned in the first place. Automating code generation is far more similar to that than to LLMs generating text or images, which aren’t logical by nature.

        If the code used to train the models was good, what it outputs will be no worse at scale than some high school kid in an AP class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.

        And being real, Lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill; but that could happen if he got lazy with his own code, too.

        Here’s another concept I’m more familiar with that does relate: writing fiction can take months. Editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and my not having a copy editor).

        My first project back in the eighties, in BASIC, took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.

        Maybe I’m too far behind the various languages, but I really can’t see it being a massively harder proposition to scan and edit the output of an LLM.