If you’re having success with your use cases, then good. Just be sure to verify the results. That’s key, because many people using LLMs aren’t looking hard at what they get; they just copy it as if it were correct. And when LLMs fail, they fail gloriously, because they don’t understand what they’re outputting. They aren’t AGI, even though they’re sold as such.
Yeah, I know. I only use them for “from this, I did this, so apply the same process to this other thing.” They statistically guess the next token, so that kind of “from this work, do the same thing here” task is all they’re really good at. The buggy code I commit is artisanal; the tests are just copied from other tests.