…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this in the past, but gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.
The task was relatively simple, and it involved some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
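To give a flavor of what “subtly broken” means in 3D math, here’s a made-up toy (not my actual code): composing two rotations in the wrong order runs fine, looks right on symmetric inputs, and only fails on certain cases.

```python
import numpy as np

def rot_x(theta):
    # Rotation about the x-axis, column-vector convention.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(theta):
    # Rotation about the z-axis, column-vector convention.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

v = np.array([1.0, 0.0, 0.0])

# Intended: rotate about x first, then z. With column vectors the
# transforms apply right-to-left.
correct = rot_z(np.pi / 2) @ rot_x(np.pi / 4) @ v  # -> [0, 1, 0]

# Subtle bug: same matrices, swapped order. It still runs, and it agrees
# with the correct version whenever the two rotations happen to commute.
buggy = rot_x(np.pi / 4) @ rot_z(np.pi / 2) @ v    # -> [0, 0.71, 0.71]
```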
I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.
The worst part of this is that, throughout all of it, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.
I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?
For reference, I used Opus 4.6 Extended.
my experience with LLMs and numerical computations, like with MATLAB or GNU Octave, has been poor. I assume it’s more of an issue that the data isn’t there: MATLAB has its own proprietary AI (which I don’t believe is trained on users’ code), and Octave has no AI associated with it, so the major LLMs only get trained on whatever users post online or otherwise. Which is why, if you prompt one to do a 3D plot, it will almost always pull something out of its ass.
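For contrast, here’s what a minimal, correct 3D surface plot looks like (sketching it in Python with matplotlib rather than Octave, since that’s where the training data actually exists):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample a radially symmetric surface on a grid.
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # needs matplotlib >= 3.2
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
```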
your feeling of a “mental fog” is my experience with AI in general: the language model explains the ideas well, but then the code editor does some obscure move that makes no fucking sense. Also, because you’re not programming it yourself and learning from your mistakes, it makes you uncertain of your own code. It’s unfortunate to see search engines going to shit because of AI, because AI is not ready.
The solutions it generated were almost write every time
Did you vibe code this post? 😂
In my experience there are three ways to be successful with this tool:
- write something that already exists so it doesn’t need to think
- do all the thinking for it upfront (hello waterfall development)
- work in very small iterations that don’t require any leaps of logic. Don’t reprompt when it gets something wrong; instead reshape the code so it can only get it right
The issue with debugging is that it doesn’t actually think. LLMs pattern match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly say what the code is doing, and LLMs do not write code with that level of observability by default.
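Concretely, by “signals” I mean something like this sketch (names made up): code that logs intermediate state and fails loudly on a broken invariant gives the model, and you, something explicit to match against.

```python
import logging

import numpy as np

log = logging.getLogger(__name__)

def normalize(v: np.ndarray) -> np.ndarray:
    """Scale v to unit length, failing loudly instead of returning NaNs."""
    norm = float(np.linalg.norm(v))
    log.debug("normalize: v=%s norm=%.6g", v, norm)
    if norm < 1e-12:
        # Explicit signal: without this check a degenerate input silently
        # becomes NaNs that only surface three functions later.
        raise ValueError(f"normalize: degenerate vector {v!r}")
    out = v / norm
    assert abs(float(np.linalg.norm(out)) - 1.0) < 1e-9
    return out
```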
Edit: one workflow I’ve had success with is as follows:
- write a gherkin feature file describing the desired functionality, maybe have the LLM create additional scenarios after I define one for it to copy from
- tell the LLM to write tests using those feature files (sketch after this list); it does an okay job but needs help making tests run in parallel
- if the feature is simple, ask the LLM to make a plan and review it
- if the feature is complex, stub out the implementation in code and add TODOs, then direct the LLM to plan. Giving explicit goals in the code itself reduces token consumption and yields better plans
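A minimal sketch of the first two steps, assuming a recent pytest-bdd as the runner (the feature wording and step names are invented for illustration):

```python
# features/slug.feature (the gherkin file from step 1):
#   Feature: Slug generation
#     Scenario: Title with spaces
#       Given the title "Hello World"
#       When I generate a slug
#       Then the slug is "hello-world"

from pytest_bdd import given, parsers, scenarios, then, when

scenarios("features/slug.feature")

@given(parsers.parse('the title "{title}"'), target_fixture="title")
def _title(title):
    return title

@when("I generate a slug", target_fixture="slug")
def _slug(title):
    # Trivial stand-in implementation so the sketch is self-contained.
    return title.lower().replace(" ", "-")

@then(parsers.parse('the slug is "{expected}"'))
def _check(slug, expected):
    assert slug == expected
```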
write something that already exists so it doesn’t need to think
If something already exists, it shouldn’t need to be rewritten.
Doing otherwise is a sign that something has gone wrong.
That was the case before LLMs and it is still the case today.
What they mean is rewrite something that has a LICENSE my company can’t use.
If the rewrite is based on something with a license your company can’t use, then the rewrite likely can’t be used either.
I’m pretty sure if code is AI generated it’s likely considered original, but I’m not a lawyer by any stretch.
Only something created by a human can be copyrighted. (See the copyright status of the monkey selfie for precedent.)
Any code written by an LLM is not copyrightable because a human did not write it.
Also, the company that trained the LLM is likely in breach of the licenses the code falls under.
Absolutely. It’s amazing how many articles showcasing vibe coding are just people reinventing things like a password generator.
I have a full pro model for Kiro at work. It does actually work, but we have custom MCP servers for all the internal tools, context on how to use those tools, style guidelines, etc., and on top of that a lot of AI context files in the codebase to help the AI understand the code and make the correct changes.
I’ve been using it on a side project and it works if you know how to constrain it. It does get things wrong a lot. But the big thing is spec-driven development: you give it a write-up, and it produces a requirements doc and a design doc with a lot of correctness properties to follow when generating code and breaking out the tasks.
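To give a flavor of what I mean by a correctness property (my own paraphrase, not Kiro’s exact output format):

```markdown
### Requirement: slug generation
- WHEN a title contains characters outside [a-z0-9 -], the system SHALL
  strip them rather than raise an error.
- Generation SHALL be stable: the same title always yields the same slug.
```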
I don’t believe people can vibe code unless they can actually code. It’s a whole different way of coding. I still manually edit what it does a lot.
A lot of people explain it like it’s a brand new junior developer. You need to give it as much context as possible, tell it exactly what you want, tell it what you don’t want, tell it why, etc., and it still may not listen exactly.
You just didn’t use the right prompts!!!
/s
The key is having it write tests and iterate by itself, and also managing context in various ways. It only works on small projects in my experience. And it generates shit code that’s not worth manually working on, so it locks your project into being permanently dependent on AI. Permanent dependence on AI, plus AI eventually hitting a brick wall, means you’ll reach a point where you can’t really improve the project anymore. I.e. AI tools are nearly useless.
The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.
The more Emojis in that Readme the better!
you need to fully be able to program to work with these things, in my experience.
you have to explain what you want very specifically, in precise programming terms. I tried a preview of ChatGPT Codex and it’s working better than my free version of Claude, but Codex creates a whole virtual programming environment: you have to connect it to a GitHub repository, then it spins up an instance with the tools you include and actually tests the code and fixes bugs before sending it back to you.
but you still need to be able to find the bugs and fix them yourself. Oh, and I think they work best with Python, but I’ve also used Ruby and Dart and it’s decent.
it’s kinda like a power tool: it’ll definitely help you a lot to fix a car, but if you can’t do it with wrenches it won’t help very much.

I’ve never been able to program in anything more complex than BASIC and command line batch files, but I’m able to get useful output from Claude.
I’m an IT Infrastructure Manager by trade, and I got there through 20 years of supporting everything from desktop to datacenter including weird use cases like controlling systems in a research lab. On top of that, I’ve gotten under the hood of software in the form of running game servers in my spare time.
What you need to get good programs out of AI boils down to 3 things:
- The ability to teach an entity whose mistakes resemble those of a gifted child, showing it where it went wrong a step or ten back from where it’s currently looking.
- The ability to provide useful beta test / debug output regarding programs which aren’t behaving as expected. This does include looking at an error log and having some idea what that error means.
- Comfort using (either executing or compiling depending on the language) source code associated with the language you’re doing things in. This might be as simple as “How do I run a Powershell script or verify that I meet the version and module requirements for the script in question?”, or it might be as complicated as building an executable in Visual Studio. Either way whatever the pipeline is from source to execution, it must be a pipeline you’re comfortable working with. If you’re doing things anywhere outside the IT administration space, it’s reasonable to be looking at Python as the best first path rather than Powershell. Personally, I must go where supported first party modules exist for the types of work I’m developing around. In IT Administration, that’s Powershell.
I’ve made tools which automate and improve my entire department’s approach to user data, device data, application inventory, patch management, and vulnerability management. These are changes I started making with the free product three months ago; two months back I switched to the paid version.
Programming is sort of like conversation in an alien language. For that reason, if you can give precise instructions sometimes you really can pull something new into existence using LLM coding. It’s the same reason that you could say words which have never been said in that specific order before, and have an LLM translate them to Portuguese.
I always used to talk about how everything in a computer was math, and that what interested me more than quantum computing would be a machine which starts performing the same sorts of operations on words or concepts that computers of that day ('90s and '00s when “quantum” was being slapped on everything to mean “fast” or “powerful”) were doing on math. I said that the best indicator when linguistic computing arrives would be that without ever learning to program, I’d start being able to program. I was looking at “Dragon Naturally Speaking” when I had this idea. It was one of the earliest effective speech to text programs. I stopped learning to program immediately and focused exclusively on learning operations from that point forward.
I’ve been testing the code generation abilities of LLMs for about three years. Within the last six months I feel like I’m starting to see evidence that the associations being made internally by LLMs are complex enough to begin considering them the fulfillment of my childhood dream of a “word computer”.
All the shitty stuff about environment and theft of art is all there too, which sucks, but more because our economic model sucks than because LLMs either do or do not suck. If we had a framework for meeting everybody’s basic needs, this software in its current state has the potential to turn everyone with a passion for grammatical and technical precision into a concept based developer practically overnight.
I’ve never been able to program in anything more complex than BASIC and command line batch files, but I’m able to get useful output from Claude.
Chatbots being deemed useful in tasks by people unqualified to make those judgments is a running problem.
I have no qualifications to judge the quality of the generated results, yet the generated results are always of great quality.
Do you seriously not realize how out of touch this sounds?
Of course it sounds out of touch. I didn’t say it, or anything like it. Just like the other commenter, you seem to have stopped after the first sentence.
20 years of IT experience from a support perspective does qualify me to put anybody in the programming space on notice. The tools might not be as good as a talented and well trained dev, but they’re already better than a lazy dev. The output I get from Claude Code takes effort to get running. It just takes less of it than the output from my outsourced offshore MSP.
I recently started using Pro to debug a problem I couldn’t solve. The one thing I need from it is an extra insight, a second opinion (because I’m the only developer), and letting it read the whole folder helped: it identified a problem I hadn’t considered, because it was in a file outside of where I was looking.
I think it’s mostly going to be useful for boilerplate generation, and effectiveness is going to vary wildly based on what language you’re using. JS or Python? It’ll probably do OK. Plenty of open source for it to “learn” from. Delphi? Forget it.
Brief experimentation showed it liked to bullshit if it was wrong, rather than fix things.
Don’t just use it as a drop in replacement for a programmer; use it to automate menial tasks while employing trust but verify with every output it produces.
A well written CLAUDE.md, plus a prompt that restricts it from auto-committing, auto-pushing, and auto-editing without explicit verification before doing anything, will keep everything in your control while also aiding menial maintenance tasks like repetitive sections or user tests.
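For example, the relevant part of mine looks roughly like this (paraphrased; the exact wording is up to you, and remember the model treats it as guidance, not a hard rule):

```markdown
## Workflow rules

- Never run `git commit` or `git push`. Ever.
- Propose changes as diffs and wait for my explicit approval before editing files.
- Match the existing style of the file you are in; do not reformat untouched code.
```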
verify with every output it produces.
I agree that you can get quality output using these tools, but if you actually take the time to validate and fix everything they’ve output then you spend more time than if you’d just written it, rob yourself of experience, and melt glaciers for no reason in the process.
prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification
Anything in the prompt is a suggestion, not a restriction. You are correct that you should restrict those actions, but it must be done outside the chatbot layer. This is part of the problem with this stuff: people using it don’t understand what it is or how it works at all, and are being ridiculously irresponsible.
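For what it’s worth, Claude Code does have an enforcement layer outside the model: permission rules in .claude/settings.json are checked by the tool itself, not the LLM. From memory (check the current docs for the exact syntax), denying commits and pushes looks something like:

```json
{
  "permissions": {
    "deny": [
      "Bash(git commit:*)",
      "Bash(git push:*)"
    ]
  }
}
```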
repetitive sections
Repetitive sections that are logic can be factored down, and should be, for maintainability. Those that can’t be factored can be generated with any number of methods: a list of words can be expanded into whatever repetitive boilerplate with sed, awk, a Python script, etc., and you’ll know nothing was hallucinated because the process was deterministic in the first place.
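E.g., a few lines of Python (field names invented) that expand a word list into accessor boilerplate; every output line is mechanically traceable to an input, so nothing can be hallucinated:

```python
# Expand a list of field names into repetitive accessor methods.
FIELDS = ["name", "email", "created_at"]

for field in FIELDS:
    print(f"    def get_{field}(self):")
    print(f"        return self._{field}")
    print()
```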
user tests.
Tests are just as important as the rest of the code and should be given the same amount of attention instead of being treated as fine as long as you check the box.
I agree it’s not perfect; I still only use it very sparingly. I was just suggesting it as an alternative to trusting everything it does out of the box.
Don’t jump right in to coding.
Take a feature you want, and use the plan feature to break it down. Give the plan a read. Make sure you have tests covering the files it says it’ll need to touch. If not, add tests (can use LLM for that as well).
Then let the LLM work. Success rates for me are around 80% or higher for medium tasks (30 mins–1 hour for me without an LLM, 15–30 mins with one, including code review).
If a task is 5 mins or so, it’s usually hit or miss (since planning would take longer than just doing it). For tasks longer than an hour or so, it depends. Sometimes the code is full of simple idioms and the LLM can easily crush it. Other times I need to actively break it down into digestible chunks.
I haven’t tried any Anthropic models personally.
So far, between the free online chats from OpenAI and DeepSeek, and the smaller models I’ve run on my own machine, the most useful approach I’ve found is to treat it as an overeager student that lacks the first-hand experience needed to see the big picture: asking it questions I’m pretty sure I already know the answer to, and seeing if 1) it “understands” what I’m getting at and 2) it can surprise me with a viewpoint I hadn’t thought of before.
Using them to double-check my own ideas seems to be marginally useful, especially when there’s no qualified human being whose attention I can borrow. Using them as a sort of semantic web search can sometimes get me what I’m looking for faster than Google. If anything, they’re an opportunity to exercise critical thinking; if I can tell where it’s getting things wrong I can be fairly confident that my own understanding of the problem/subject is pretty solid.
Vibe coding, though? I have yet to see it work out. Maybe as some starting slop so that I can get to work refactoring code (and get the ideas flowing) instead of staring at a blank file.
Also working on some 3d maths.
I’ve used the free versions a bit, but not really to the extent that I’d call it vibe coding. The chat bots often know where to find libraries or pre-existing functions that I don’t know about. It’s also okay at algorithms for well defined problems, but it often tells me to be careful not to do something I absolutely need to do, or vice versa. It’s very hit and miss on debugging: it’ll point out obvious stuff (typos) reliably, and it can usually do some iteration stuff, but it usually doesn’t pick up on other things. Once in a rare while it will impress me by suggesting I look at a particular thing, and I think it manages this better in new chats, but it fails on most complex issues. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you’re doing, and test that individual steps are doing what they need to do. The bots can’t really do any sort of planning or breaking a problem down into sub-problems, and they really suck at thinking about 3d stuff.
I think it’s pretty heavily dependent on what you’re trying to do. I’ve gotten a lot of pressure from higher-ups at my company to use Copilot wherever possible, so I’ve spent a lot of time lately having Copilot + Opus write code for me. Most of what I’m doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it’s a pretty good experience.
However, if I ask it to do something totally new, it does okay, more like what you’ve experienced. It takes a lot of hand holding, but it usually gets the job done as long as you’re very descriptive in your prompt. Probably not faster than an experienced developer at the moment, though.