Vibe Coding using Antigravity, part 3

The Dark Side of Vibe Coding: Loops, Rot, and Complacency

It’s not all sunshine and rainbows in the land of vibe coding. While my experience has been mostly positive, there are certain aspects of this new workflow that I find concerning, even detrimental, for someone who writes code for a living.

1. The Debug Error Death Loop

This is the "Groundhog Day" of AI development. You ask the model to fix an error; after a few minutes, it tells you the coast is clear. You run the server and hit a different error. You report the new bug, the model "fixes" it, and you run the server again, only to see the original error you started with.

This cycle can go on for a while, one error replacing the other, back and forth, over and over. At one point, I spent several minutes trapped in this exact loop. One reason this happens is context saturation: as the chat history grows with every failed fix and massive code block, the model’s "working memory" becomes cluttered. It loses the thread of the original architecture and starts hallucinating fixes just to keep up.

The real danger here is that by this point, I had been vibe coding so heavily that I had stopped looking at the diffs. I was just clicking "Proceed" without a second thought, failing at my initial goal: to audit what the model was doing. This leads directly to the second problem with vibe coding: complacency. When you let the AI drive until it hits a wall, you find yourself in a very bad neighborhood.

2. The Dopamine Trap of AI Complacency

It is incredibly addictive to click "Proceed" and see a brand-new feature appear in minutes. Feature after feature, delivered instantly. I believe this is exactly the state AI companies want you in: complete addiction.

No matter what I asked for, the model was happy to oblige. It even complimented me, calling my ideas "incredible" or "genius," regardless of whether the feature was total garbage. I was vibe coding features just to test the limits, and the AI acted like a brown-nosing intern, gleefully yes-manning every bad decision. I half-expected the model to keep a second, private personality, whispering, "This guy’s an idiot," behind my back while it smiled to my face.

3. Architectural Rot: Code Quality over Time

The "Death Loop" eventually forced me to stop and actually read the code. I had let the AI write hundreds of lines without review, and the rot was setting in.

The model had stopped adhering to a consistent code style. A coding style is an agreement within a dev team; it’s the "how" of the project. For example, how you structure your folders:

  • Group by Feature:
    • /src/billing
    • /src/sales
  • Group by Technology:
    • /src/frontend
    • /src/backend

In Wagtail CMS (which runs on Django), you have flexibility with template locations. You might have them nested within the app: .../blog/templates/blog/blog_page.html

Or centralized: .../templates/blog/blog_page.html
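Both layouts work because of how Django resolves templates, which is exactly why the two conventions can silently coexist in one codebase. Here is a minimal sketch of the relevant setting, assuming a standard Django settings module (the paths are illustrative, not taken from my actual project):

```python
# settings.py (sketch; paths are illustrative, not from the real project)
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        # Centralized style: one project-wide templates/ directory,
        # e.g. <project>/templates/blog/blog_page.html
        "DIRS": [BASE_DIR / "templates"],
        # Nested style: each app ships its own templates,
        # e.g. blog/templates/blog/blog_page.html
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
            ],
        },
    },
]
```

With both DIRS and APP_DIRS enabled, Django checks the project-level directory first and falls back to the app directories, and Wagtail maps a page model like BlogPage to blog/blog_page.html by default. Either location "works," so nothing flags the inconsistency until a human actually reads the tree.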

By the time I checked, Gemini had mixed these styles indiscriminately. I had asked for dozens of features, and the project had become a map of "spaghetti code." Untangling it took painful, manual effort because moving files broke the logic.

My mistake was that I stopped thinking like a programmer and started thinking like a Product Owner, just cranking out features at any cost.

If you care about your project, you need to establish a coding style in a markdown file (like a .cursorrules or similar system prompt) before the first line of code is written.
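As a rough sketch, such a rules file might look something like this (the specific rules are examples of the genre, not the exact ones I used):

```markdown
# Project coding rules (read before generating any code)

## Folder structure
- Group by feature: each Django app owns its own code (/src/blog, /src/sales).
- Templates live inside the app: blog/templates/blog/blog_page.html.
- Never create a project-level templates/ directory.

## Style
- Follow PEP 8; format everything with Black.
- Every new view and model gets a docstring.

## Process
- Summarize the files you are about to change before applying a diff.
- Ask before adding a new dependency.
```

The point is not these particular rules; it’s that the model re-reads them on every turn, so the conventions survive even when your attention doesn’t.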

The AI didn't care about style or architecture; it only cared about doing whatever I asked. There were even more serious architectural issues that I won’t list here. There are some things the public doesn’t need to know about how the "sausage" of a website is made.

The Final Word: Stay in the Loop

You might say, "We’re in the age of AI; who cares what the code looks like?" You might argue that in this new world, design patterns and coding styles are becoming irrelevant. I believe that is premature thinking. AI hallucinates. It isn't right 100% of the time, and it may never be. Even if AI is eventually "right" 99% of the time, what happens during that 1% when it’s caught in a death loop? If you haven't maintained a clean architecture, you won't have the map you need to find your way out.

The AI companies would love a future where software engineers simply oversee an orchestration of AI agents, only intervening when something breaks. It reminds me of a friend of mine who works in DevOps managing Kubernetes clusters. When a pod suddenly crashes, he has to dive into the container logs with kubectl to diagnose the failure. To do that effectively, he still has to be a software engineer; he has to understand what those logs are actually telling him. I see this as the likely future of our profession: we will be the 'system operators' of AI, and we will still need deep engineering knowledge to debug the mess when the agents lose the thread.

Software was built for people, and because of that, we still need to be in the loop. "Vibe coding" is a powerful engine, but you still need to keep your hands on the steering wheel.
