Guys, you can laugh at a joke. The AI doesn’t win just because someone upvoted a meme. Maintainability of codebases has been a joke for longer than LLMs have been around because there’s a lot of truth to it.
Even the most well-intentioned design has weaknesses we didn’t see coming. Some of its abstractions turn out to be wrong. Requirements and feature sets change in ways nobody anticipated. Other parts get over-engineered, making them harder to navigate for no maintainability gain. That’s ok. Perfectly maintainable code would require us to be psychics, and none of us are.
I actually laughed out loud at this meme.
I mean, yes, absolutely I can. So can my peers. I’ve been doing this for a long, long time, as have my peers.
The code we produce is many times more readable and maintainable than anything an LLM can produce today.
That doesn’t mean LLMs are useless, and it also doesn’t mean that we’re irreplaceable. It just means this argument isn’t very effective.
If you’re comparing an LLM to a Junior developer? Then absolutely. Both produce about the same level of maintainable code.
But for Senior/Principal level engineers? I mean this without any humble bragging at all: but we run circles around LLMs from the optimization and maintainability standpoint, and it’s not even close.
This may change in the future, but today it is true (and I use all the latest Claude Code models).
The biggest problem with using AI instead of junior developers is that junior developers eventually become senior developers. LLMs … don’t.
They might, but it does not seem likely to me and is definitely not guaranteed.
It’s more likely than it happening with an LLM, though. Without junior developers the number of future senior devs approaches zero.
Sorry, my wording was very unclear. I was referring to the LLMs having a small but non-zero chance to actually get good enough to replace senior devs.
You are right though, chances for the average junior dev to reach senior status are much better than for LLMs to reach that stage any time soon (if ever).
sir, this is programmer_humor
and some jokes just aren’t funny
😞 Sir this is a Wendy’s.
With LLMs I get work done about 3-5x faster, with the same level of maintainability and readability I’d have gotten writing it myself. Where LLMs fail is architecting stuff out: they can’t see the blind alleys their architecture decisions lead them down. They also can’t remember to activate python virtual environments, like, ever.
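Since that venv complaint comes up so often: one way to sidestep it entirely is to skip shell activation and call the venv’s interpreter directly. A minimal sketch using only the standard library (the `.venv` directory name and the helper are just conventions I made up, not anything an LLM or tool actually does):

```python
import os
import subprocess
import venv


def ensure_venv(path=".venv"):
    # Create the virtual environment if it doesn't exist yet.
    # with_pip=False keeps creation fast; flip it on if you need pip inside.
    if not os.path.isdir(path):
        venv.create(path, with_pip=False)
    # Return the interpreter inside the venv; running commands through it
    # uses the venv automatically, so no `source .venv/bin/activate` needed.
    bindir = "Scripts" if os.name == "nt" else "bin"
    return os.path.join(path, bindir, "python")


interp = ensure_venv()
out = subprocess.run(
    [interp, "-c", "import sys; print(sys.prefix)"],
    capture_output=True,
    text=True,
)
# sys.prefix reported by the venv interpreter points inside .venv
print(out.stdout.strip())
```

The design point is that activation only mutates the current shell’s environment; invoking the venv’s own `python` binary achieves the same isolation without any shell state to forget.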
I think it depends on what you’re writing code for. For greenfield/new features that don’t touch legacy code or systems too much? Sure, I agree with that assessment.
Unfortunately that’s a small fraction of the kind of work I’m required to do, as most of the work in most places doing software dev is adding shit to bloated and poorly maintained legacy systems.
Working in those environments, LLMs are a lot less effective. Maybe that’ll change some day. But today, they don’t know how to reuse code, refactor methods across classes/design patterns, etc. At least, not very well. Not without causing side effects.
No, so let’s vibe unmaintainable code together!

Yus [good image]. Use it to assist and expedite learning (mostly by double checking its output, and debugging its code) to get better. Not as a slave to do your work for you.
Maybe the real slop was the code we wrote along the way
But, I didn’t check any of mine in?
Bah, you both read the same Stack Exchange. But it remembered it byte for byte.
I might not be the best, but I can still do a better job than AI
This is a bold claim I will not make.
If you’re a complete novice, then obviously not, but I think anyone reasonably proficient in a language can identify optimisations that an AI just doesn’t seem to perceive, largely because humans are better at context.
It’s like that question about whether it’s worth driving your car to the car wash if the car wash is only 10 metres away. AIs have no experience of the real world so they don’t inherently understand that you can’t wash a car if it’s not at the car wash. A human would instantly know that that’s a stupid statement without even thinking about it, and unless you instruct an AI to actually deeply think about something they just give you the first answer they come up with.
That’s why they’re pushing for the datacenters: they want to make every query that deep. The tech is here, but the ability to sustain it isn’t. They build the data centers, kick the developers out, depress the education market for it, and then raise the prices.
Companies will be paying the AI companies 60k per year per seat in a decade.
At that price it would be cheaper to use humans
That’s the brilliance. There won’t be a pool of trained young developers by then.
What makes you think there’ll even be anything left a decade from now?
I agree with you. But the tool will output basic code that mostly does what was asked in seconds instead of tens of minutes, if not hours. So now we could argue whether the optimizations you make are worth the added cost of writing the code yourself, or if it’s better to have the tool generate the code and then optimize it.
A tale as old as time. The US nuclear missile codes were 000000, but it didn’t matter. The chain of command was purpose-built, ironically, so the front line soldier in a cold war scenario had to make the last decision to delete all life on the planet. Chain of command doesn’t matter at that point. You are choosing to kill everyone you know from an order from who knows who. The ultimate checksum.
You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.
I don’t understand your point about the soldier on the front line, but I’m interested. If you get a chance, can you elaborate please?
You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.
I’d be careful about these claims. Maybe with our current iteration of “attention-based” LLMs, yes. But keep in mind that our way of processing information is strongly limited compared to how much data is fed to these LLMs in training, so in theory they have a lot more foundation to reason about new problems.
We’re vastly more capable at the moment at interpreting our limited view of foreign code, being actually creative, finding new ways to reason, yes. Capable developers (open source…) have often seen quite a bit more code than the average developer and are highly skilled, still with just a tiny subset of the code that an LLM has seen.
But say these models improve in creativity and “higher-level thought” through whatever means (e.g. more reinforcement learning). Well, let’s just say I’m careful with these claims. These LLMs are already quite a help with stupid boilerplate code (less so with novel stuff or with writing idiomatic, non-redundant code, but compared to 2-3 years ago it’s quite a step already, to the point that they’re actually helpful, disregarding all the hype and the obvious marketing strategies of these AI companies).
When that coworker tells you “hah you must have generated this” but you coded this yourself 👀
“You need to try your best” “This was my best…”
Yes, but only I can maintain it.
I can maintain it. But I won’t.
I could.
I choose not to! Take that, LLM!
Exactly. I’ve been sabotaging the AI with shitty code output since long before LLMs existed. That’s how I play 4D chess. (This is just meant to get a laugh. Some of my code is even quite nice, actually.)
10 PRINT "Hello World!"
20 GOTO 10
EZ
Infinite loop and hard-coded magic constant; this should have a configurable timeout and a resource file the string is read from so we can internationalize the application. Additionally, the use of a GOTO with a hard-coded line number is a runtime bug waiting to happen after unrelated refactors; it’s best to use a looping construct with more deterministic bounds.
*while true gangsign*
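For anyone curious what that mock review would actually produce, here’s a tongue-in-cheek sketch in Python. Everything specific in it is hypothetical: the `messages.properties` resource file, the `greeting` key, and the timeout defaults are all made up to match the review’s suggestions, not any real convention.

```python
import time

DEFAULT_MESSAGE = "Hello World!"


def load_message(path="messages.properties"):
    # Read the greeting from a (hypothetical) resource file so the
    # application can be internationalized; fall back to the hard-coded
    # string the review complained about.
    try:
        with open(path) as f:
            for line in f:
                key, _, value = line.partition("=")
                if key.strip() == "greeting":
                    return value.strip()
    except FileNotFoundError:
        pass
    return DEFAULT_MESSAGE


def greet(timeout_seconds=1.0, max_iterations=None):
    # Bounded loop replacing `20 GOTO 10`: stops at a configurable
    # timeout or iteration cap instead of running forever.
    deadline = time.monotonic() + timeout_seconds
    count = 0
    while time.monotonic() < deadline:
        if max_iterations is not None and count >= max_iterations:
            break
        print(load_message())
        count += 1
    return count


greet(max_iterations=3)  # prints the greeting three times, then exits
```

Roughly thirty lines to replace two, which is arguably the joke making itself.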
yes. yes I can. been doing it for 25 years.
I’m ass at coding and I still can, lmao
Yes.
The code I wrote before LLMs was maintainable, because it was concise clean code.
The code I’ve let LLMs write for me, unless I spend 30x as long telling it to do it right, is a verbose, unreadable, noisy spray.
Whoever upvoted this needs to read some books.
What are these books you speak of? Do they have special features?
Since “some books” did not specify, presumably any books are okay.
Perhaps starting with The Pet Goat.
Apparently it works upside down too. ;)
[Extra points for any who get the reference.]
Evasive deflection.
ouch