There’s a war happening in tech right now. On one side you have people who love vibe coding (aka using AI to write every semicolon). On the other side, people who hate AI from the bottom of their hearts and would mass-report ChatGPT if they could. And somehow I’m standing right in the middle, getting yelled at from both directions.
A Brief History of Letting Machines Type For Me
I started using AI before Cursor existed. I was running local models from Ollama (Llama, WizardCoder) a few weeks before VS Code even introduced Copilot. Back then, there was no way to give an AI context about your entire product. You described exactly what you needed, it handed you a snippet, and you figured out where to put it. It was like ordering furniture from a catalog, except the catalog occasionally sent you a bookshelf when you asked for a chair.
But this wasn’t even new. Years before that, we were doing the same thing with PhpStorm/Sublime Text snippets. Little curated shortcuts that let you tab-expand boilerplate and skip the boring parts of a project. You’d build up a personal library of them like a chef collecting recipes. The idea was identical: “I’ve done this a hundred times, let me skip to the interesting part.” The difference is that Sublime snippets did exactly what you told them, every single time. AI snippets do approximately what you told them, most of the time, with occasional creative liberties nobody asked for.
Then Copilot arrived and auto-complete actually became fun. Then Cursor. Then Claude Code. And here I am today where 100% of my code is AI-generated. Processes I find redundant? Automated with skills, sub-agents, hooks, Claude running headless in the background. My terminal looks like a mission control room for a space program that only launches CRUD apps.
So why am I writing this? Because I’m exhausted by a debate where both sides are screaming past each other, and neither is entirely wrong.
The Case for Vibe Coders
I get it: AI is the easiest path there has ever been to building the app of your dreams with minimal effort. These are people who just want their product out there in the world. Maybe they don’t care about code quality the way we do. They’ll never read the code. The AI wrote it, the AI will maintain it, and if it breaks, the AI will fix it. That’s the theory, anyway.
In my opinion, the real cost is brain atrophy and the slow, comfortable growth of laziness. But that’s their choice. And I still don’t think the rest of the internet should mock and blame them for everything that goes wrong in tech. As if some guy vibe-coding a recipe app in his living room is personally responsible for the collapse of software engineering.
At the end of the day, vibe coders are placing a bet. A bet that AI will eventually be better than human reasoning, and when that day comes, they’ll have a head start in a fully autonomous world. I don’t think that day is coming. But I also wouldn’t go on Twitter to laugh at someone else’s bet.
But the cost is real. Not understanding what’s under the hood hurts in the long run. If you can’t visualize the structure of a building, you can’t envision an expandable blueprint around it. And you definitely can’t fix the plumbing when it bursts at 2 AM on a Friday.
You’ll say AI will fix it. From experience, I can confidently say: when you lose control of the project knowledge, the project is doomed. I’ve lost count of how many times I’ve git reset an entire codebase back to a state I still understood because AI had spiraled into a loop of mistakes with no exit. It fixes one thing, breaks two others, fixes those two, breaks four more. Like a plumber who floods your kitchen trying to fix a leaky tap.
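That escape hatch is nothing fancy, just plain git. A minimal sketch of the ritual (the repo, file contents, and commit messages below are invented for illustration):

```shell
#!/bin/sh
# Sketch: rolling a codebase back to the last state you still understood,
# after an AI fix-break spiral. All names here are made up for the demo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "works" > app.txt
git add app.txt
git commit -qm "last state I still understood"
good=$(git rev-parse HEAD)        # bookmark the known-good commit

echo "broken" > app.txt
git commit -qam "AI fix #1 (broke two other things)"
echo "worse" > app.txt
git commit -qam "AI fix #2 (broke four more)"

git reset --hard "$good"          # discard the whole spiral
cat app.txt                       # prints: works
```

Bookmarking the known-good commit (or tagging it) before you let the agent loose is the cheap insurance here; `git reset --hard` only saves you if you can still name the state you understood.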
And the worst part? During those spirals, it was looking to me for guidance. But I had checked out an hour ago. I was just as lost as it was. Two confused idiots staring at a wall of red errors, except one of them was paying $200 a month for the privilege.
The Case for AI Deniers
I love working with AI. I even enjoy retrofitting old products with new technology. But I completely understand people who’ve been force-fed AI trends until they started gagging. And their reasons aren’t stupid:
They’re losing jobs over it. Their partner is leaving them because ChatGPT told them to “live their best life.” Congratulations, a language model just ended a marriage with a motivational poster quote. They look at AI-generated code and see a sloppy mess. The entire world keeps cramming AI into everything, but not everything can be solved by a probabilistic algorithm. Some things are built on feelings, on lived experience, on stuff that doesn’t reduce to token probabilities.
Or they (actually me) wanted to buy an RTX 5080 Super, but Nvidia decided that selling AI chips is more profitable than letting gamers have nice things. I didn’t sign up for the AI revolution to lose my GPU upgrade path. Nobody warned me about this trade-off.
Or they just hate whatever’s trending and think being permanently against things is a personality trait. We all know someone like this.
I don’t agree with all of these arguments. But I can’t blame people for feeling this way. I’ve always been the kind of person who wants to learn something new every day. But not everyone is wired like that, and for some people this pace is either scary or just plain tiring.
Where I Actually Stand
In this war I’m a centrist. Which means I barely fit into either camp. Great.
Here’s what I believe:
AGI isn’t here. The only way AGI “arrives” is if we collectively lower our IQ until AI looks smart by comparison. LLMs don’t think. They give you the most probable result from their training data. That’s a hard ceiling, not a stepping stone. A mirror can’t decide to show you a better face.
But AI is absurdly useful. It was trained on basically all the data on the internet, and now that power is at your fingertips. If a hundred developers already solved your exact problem and posted it on Reddit three years ago, you don’t have to dig through ten Stack Overflow tabs anymore. AI brings you that solution, customized for your context. If the magic box can generate 100% of my code and even offer me multiple variants to choose from, why on earth would I not use it?
The cost problem is real and nobody wants to talk about it. AI is expensive. People keep hoping costs will drop, but today, with a $200 Claude subscription, you still hit daily limits. And roughly 70% of the world can’t afford that. The promise that everyone will vibe-code their own apps, their own CMS, their own operating system? That’s fantasy. The cost doesn’t bend toward normal people. It bends toward the top 1% who can burn tokens like firewood. Telling a developer in Lagos or Dhaka that they can “just use AI” to compete is either naive or dishonest.
We’re wrong for trusting blindly. Sending all our data to Google and OpenAI without thinking twice is a real problem. But let’s be honest, we already did the same thing with GitHub years ago and nobody said a word. That ship sailed a long time ago.
Some companies are just faking it. Slapping “AI-powered” on everything to pump a stock price isn’t innovation. It’s marketing. And people are starting to notice.
Oh, and AI bots on social media? There’s a special place in hell for the people who run those. Seeing those lame posts and soulless replies online makes me feel like the AI servers need to burn to the ground.
AI Didn’t Fix My Burnout, It Gave Me a Better One
Here’s something nobody talks about in the whole productivity conversation: AI didn’t make my work easier. It made me capable of more work. And more complex work. Which means I now hit burnout harder and more often than ever before.
For most people that sounds awful. For me it’s exactly where I want to be, because I like hard work and I hate boring work.
Before AI, I wrote decent bash scripts. I could set up CI pipelines and automate a process in a few days. Fine, nothing special. Now I do that in minutes and spend the extra time on the actual product, on the parts that matter to customers. What Claude generates for my GitHub Actions is better than what I was writing, and the reason is simple: my experience was below the average of what’s already been published online. AI leveled me up to what’s probably the best practice according to everything that’s been written about it. That’s not a threat to my ego. That’s a win.
The burnout doesn’t come from the AI. It comes from discovering how much ambition you were sitting on once you remove all the friction. The bottleneck was never motivation, it was all the tedious stuff eating your hours. Take that away and suddenly you’re running at a speed your body wasn’t built for. I burn out now because I’m building things I actually care about, at a pace that wasn’t possible before. It’s like being sore after a good workout versus being tired after sitting in traffic. Same exhaustion, completely different feeling.
Still. I need to get better at stopping.
But I Still Need to Outthink It
I know that in certain areas, I need to push past what AI can offer. To find solutions that nobody on the internet and no model has thought of yet. And I’ve had my moments.
There was one time I was going back and forth with Claude Opus about a security approach. I wanted to encrypt a token from the database and expose it in the browser as an auth mechanism. Claude kept saying it was a dead end. “Whatever you do, a hacker can always steal the token and replay it via curl.” Several rounds of me explaining, it pushing back, me refining the idea. Eventually I laid out the full approach (which I’m not sharing here, sorry) and proved it worked. That was the moment Claude told me it was a “genius solution.”
Now, technically it was complimenting the approach, not calling me a genius. But I’m going to take the win anyway because I spent enough tokens arguing with a language model to feel like I earned it.
Yes, AI Generates Slop. That’s a People Problem.
AI produces slop because people throw lazy prompts at it and accept the first output without reading it. They push the hard work to the AI and just want the thing shipped. But if luck hits them and their slop works and people find it useful? Honestly, so be it. Worse products have succeeded on better marketing alone.
What I’m NOT ok with is the cultural shift that came after it. After the whole Garry’s List scandal, a chunk of the internet decided that lines of code don’t matter anymore. That as long as you can ship a site with AI, the code behind it is irrelevant. That’s not a bold new mindset. That’s just lazy thinking dressed up as progress.
Lines of code absolutely matter. Reducing them improves performance, readability, and maintainability. This has been true for decades and AI doesn’t change that. I’m fine with vibe coding. Build your thing, ship it, have fun. But the moment we start normalizing the idea that any code is good enough as long as it runs, we’re not lowering the barrier to entry anymore. We’re lowering the floor. And the floor was already pretty low.
There’s a big difference between “not everyone needs to be a senior engineer to build something” and “engineering standards are outdated.” The first one is empowering. The second one is how you end up with a generation of products that work on demo day and fall apart the week after.
Personally, I have different standards:
I never sign off on a commit I don’t fully understand. Every AI-generated change gets reviewed as if I wrote it myself. My name is on it. When it breaks at 3 AM, the AI won’t be the one getting the call.
I’d rather be late than lost. I’ll burn through tokens in the research phase before I accept an execution path I can’t explain. Understanding comes first. Speed is a side effect of understanding, not a replacement for it.
Fast now doesn’t have to mean legacy later. The shortcut you take today is the technical debt your future self will pay for. AI doesn’t change that. It just makes it easier to pile up debt faster, with a smile on your face.
The vibe coders aren’t villains. The AI deniers aren’t dinosaurs. The truth is somewhere in the middle, where nobody gets to feel righteous and everyone still has work to do.
I just wish more people would meet me here. It’s pretty clear from this side, even if the noise from both camps makes it hard to think.
Because I’m neither a designer nor a native English speaker, I used AI to generate the images in this post and to check my grammar and spelling. Props to Claude!