
The Great Prototype Experiment

Here's what nobody tells you about AI-assisted rapid prototyping: After six months and nearly 100 projects, I had to face an uncomfortable truth.

Every prototype feels like success. But is it really?

Why, You Ask?

Picture this: Me, armed with Claude, spinning up project after project like I'm some kind of coding machine gun. A productivity suite here, a file manager there, throw in some web scrapers, automation tools, data processors - you name it, I've probably built a prototype of it in the past six months.

Nearly 100 projects. The dopamine hits were insane. Every idea that popped into my head, no matter how half-baked, could become a working prototype in hours instead of weeks. The velocity felt otherworldly.

But here's the uncomfortable truth that took me way too long to acknowledge: Out of those nearly 100 prototypes, exactly ONE made it to production and actually launched.

One. Literally. One.

That project? mcp-windbg. And guess what made it special? It's the only one where I spent those classic, grinding hours sitting there, actually understanding every line of code, debugging the edge cases, and wrestling with the gnarly implementation details.

The Dopamine Factory

Let me be real with you about what those early days felt like. I'd wake up with some random idea - "What if I built a tool that automatically organizes screenshots by content?" or "I need a better way to manage my development environments" - and by lunch, I'd have a working prototype.

Six months ago, I was writing about vibe coding like I'd discovered fire. The euphoria was real - watching AI refactor entire codebases, generate complex implementations, solve architectural problems in seconds. I was living in that perfect state where "the barrier between thinking and implementing just got so much thinner."

Me: Build me a screenshot organizer that uses OCR to categorize images
Claude: *Generates Python app with OCR integration, file management, GUI*
Me: This actually works perfectly!
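
To give you a flavor of what those lunch-break prototypes looked like, here's a minimal sketch of that screenshot organizer. To be clear, this is my illustrative reconstruction, not the actual prototype - the library choices (Pillow, pytesseract) and the keyword categories are assumptions.

```python
# Minimal sketch of an OCR-based screenshot organizer (illustrative, not the
# original prototype). Requires Pillow, pytesseract, and the Tesseract binary.
from pathlib import Path
import shutil

from PIL import Image
import pytesseract

# Assumed keyword map - a real tool would need something far smarter.
CATEGORIES = {
    "invoice": ["invoice", "total due", "payment"],
    "code": ["def ", "class ", "import "],
    "chat": ["sent", "typing", "online"],
}

def categorize(text: str) -> str:
    lowered = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "uncategorized"

def organize(src: Path, dest: Path) -> None:
    for image_path in src.glob("*.png"):
        # OCR the screenshot, then move it into a folder named after its category.
        text = pytesseract.image_to_string(Image.open(image_path))
        target = dest / categorize(text)
        target.mkdir(parents=True, exist_ok=True)
        shutil.move(str(image_path), target / image_path.name)

# organize(Path("~/Screenshots").expanduser(), Path("~/Screenshots/sorted").expanduser())
```

Thirty-odd lines, works on the happy path, and says nothing about corrupted PNGs or screenshots Tesseract can't read - which is exactly the kind of prototype I'm talking about.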

The rush was incredible. Idea to implementation in hours. No tedious boilerplate. No debugging mysterious dependency issues. No wrestling with documentation. Just pure, frictionless creation.

I felt like a goddamn wizard. Every prototype that worked felt like validation that I'd cracked some secret code of productivity. Friends would ask what I was working on, and I'd rattle off a dozen projects like I was running a software factory.

That euphoric beginning? It was exactly six months ago. Time has a funny way of providing perspective.

The Reality Check: Why Nothing Shipped

But here's where the story gets uncomfortable. Months passed. My project folder grew to nearly 100 directories. And my shipped product count remained stubbornly at zero.

Why? Because prototypes aren't products.

Every time I'd come back to one of these AI-generated prototypes to actually finish it - to handle edge cases, add proper error handling, write tests, or make it production-ready - I'd hit the same wall: I had no clue how the thing actually worked.

The code looked familiar. The structure made sense at first glance. But when it came time to:

  • Debug why it crashed with certain file types
  • Add a feature that required understanding the data flow
  • Optimize performance bottlenecks
  • Handle edge cases the AI hadn't considered

I was lost. Completely, utterly lost.

The One That Made It: mcp-windbg

So why did mcp-windbg succeed when 99 others failed? It wasn't because I spent months on it - it was a small weekend project. But I approached it differently.

I started by vibe-coding with Claude, just like the other prototypes. Got the basic structure working, had it generating and parsing WinDBG commands. But then - and this is the crucial difference - I actually went back and reviewed what the AI had built. Debugged the edge cases myself. Traced through the subprocess communication to understand how it really worked.

When pytest failures popped up, I didn't just ask Claude to fix them. I sat there and figured out why they were failing. When the CDB interaction got weird with certain commands, I debugged it manually until I understood the communication protocol.
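
For context, the core of a tool like this is just subprocess plumbing around CDB, the console version of WinDbg. Here's a rough sketch of the kind of code involved - not mcp-windbg's actual implementation; the install path, the `.echo` marker trick, and the error handling (or lack of it) are illustrative assumptions.

```python
# Rough sketch of driving CDB over pipes (illustrative, not mcp-windbg's code).
import subprocess

# Assumed default install path for the Windows debugging tools.
CDB = r"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe"
MARKER = "---CMD-DONE---"

def open_dump(dump_path: str) -> subprocess.Popen:
    # -z opens a crash dump; piping stdin/stdout lets us drive the session.
    return subprocess.Popen(
        [CDB, "-z", dump_path],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )

def run_command(proc: subprocess.Popen, command: str) -> str:
    # Echo a marker after the command so we know where its output ends.
    # Real code needs timeouts and error handling; this is the happy path only.
    proc.stdin.write(f"{command}\n.echo {MARKER}\n")
    proc.stdin.flush()
    lines = []
    for line in proc.stdout:
        if MARKER in line:
            break
        lines.append(line)
    return "".join(lines)

# proc = open_dump(r"C:\dumps\crash.dmp")
# print(run_command(proc, "!analyze -v"))
```

It looks trivial until a command produces no output, or CDB buffers things differently than you expect - and that's exactly where the manual debugging hours went.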

The result? A weekend project that actually solved a real problem in my daily work. Something I could confidently maintain, extend, and explain to others.

I owned the code because I'd taken the time to understand what Claude had built.

The Brutal Truth About AI-Generated Prototypes

Those 99 failed projects weren't failures because the code was bad. Most of them actually worked pretty well for their basic use cases. They failed because I'd delegated understanding to AI instead of using it as a tool to support my understanding.

Let's break down what actually happens:

  • Week 1: "Holy shit, look at this amazing prototype!"
  • Week 2: "I should probably clean this up and ship it"
  • Week 3: "Hmm, there are some edge cases I need to handle"
  • Week 4: "Why is this crashing? Let me ask Claude to fix it"
  • Week 5: "The fix broke something else. This is getting messy"
  • Week 6: "Maybe I'll just start a new project instead..."

Sound familiar? This cycle repeated so consistently I could set my calendar by it.

The Sitting-There Hours: Why They Matter

You know those classic hours every developer has experienced? The ones where you're just sitting there, staring at code, trying to figure out why something isn't working the way you expect? Those moments when you're debugging line by line, tracing execution paths, and slowly building up a mental model of how everything fits together?

Those hours aren't wasted time. They're investment time.

When you skip those hours - when you let AI generate the solution and just accept it - you never build that intimate knowledge of your own system. You become a tourist in your own codebase.

With mcp-windbg, I didn't spend weeks wrestling with it - it was just a weekend project. But during that weekend, when I hit issues with subprocess communication or weird CDB behavior, I actually debugged them myself instead of immediately asking Claude for fixes. I took the time to understand why certain commands failed and how the debugging session management worked.

That weekend of actually understanding what I was building made the difference between prototype #99 and shipped product #1.

The False Productivity Trap

Here's the thing that really gets me: I felt incredibly productive while building those 99 prototypes. The dopamine hits were constant. Every working prototype felt like a win. My GitHub commit graph looked like a productivity enthusiast's dream.

But productivity isn't about how much code you generate. It's about how much value you ship. And by that metric, I had a 1% success rate.

The issue wasn't using AI - it was how I was using it. Instead of treating it like a powerful debugger or knowledge base that could accelerate my understanding, I was treating it like a replacement for my brain. That's the trap.

How I Use AI Now

I still use AI constantly - probably more than before, actually. But I've learned to use it like any other powerful tool in my development toolkit.

  • AI as a knowledge base: "How does subprocess communication work in Python?" "What are the edge cases for file parsing?" "Show me different approaches to error handling."

  • AI as a debugging partner: "Here's my stack trace, what could be causing this?" "This function isn't behaving as expected, help me trace through the logic."

  • AI for the boring repetitive stuff: Boilerplate code, test scaffolding, documentation generation, refactoring patterns I've done a million times (see the scaffolding sketch after this list).

  • AI as a code reviewer: "Does this implementation handle edge cases properly?" "Are there performance issues with this approach?"
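
To make "the boring repetitive stuff" concrete, here's the sort of test scaffolding I'll happily let AI generate, written against the hypothetical run_command helper from the earlier sketch. Again, this is illustrative - the module name and the fake process are assumptions, not real mcp-windbg test code.

```python
# Illustrative pytest scaffolding for the hypothetical run_command helper.
import io

from windbg_session import run_command  # hypothetical module from the sketch above

class FakeCdbProcess:
    """Stands in for the CDB subprocess so tests run without a debugger installed."""
    def __init__(self, canned_output: str):
        self.stdin = io.StringIO()
        self.stdout = io.StringIO(canned_output)

def test_run_command_stops_at_marker():
    fake = FakeCdbProcess("first line\nsecond line\n---CMD-DONE---\nleftover\n")
    output = run_command(fake, "!analyze -v")
    assert "first line" in output
    assert "leftover" not in output
```

Generating this kind of skeleton is exactly what I delegate now - but deciding what the test should assert is still my job.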

The key difference: I use AI to accelerate my understanding, not replace it.

For anything I might ship, the rule is simple: If I can't explain how the core functionality works without looking at the code, I don't ship it. AI can help me get there faster, but it can't get there for me.

This isn't about being a purist or rejecting AI assistance - it's about maintaining professional competence and shipping software you can actually support.

What Success Actually Looks Like

After six months of this experiment, here's what I've learned about successful AI-assisted development:

  • Delegate the repetitive stuff: Boilerplate generation, test scaffolding, documentation, refactoring patterns you've done before.

  • Use it as a knowledge multiplier: Research APIs, explore different approaches, get explanations of complex concepts, debug tricky issues.

  • Keep the understanding: Architecture decisions, core business logic, data flow design, system integration points.

  • Invest in the sitting-there hours: For anything you plan to ship, spend the time to truly understand how it works.

Think of AI like you'd think of a really good debugger or profiler - it's an incredibly powerful tool that makes you more effective, but it doesn't replace the need to understand what you're building.

The Uncomfortable Question

Here's the question that keeps me up at night: How many developers are building careers on AI-generated code they don't understand?

Because that's what this really comes down to. It's not just about shipping products - it's about professional competence. If you can't debug, modify, or explain the code you're responsible for, what kind of developer are you? More importantly, what happens when that AI assistant isn't available and you need to fix a critical production issue?

I don't have a good answer to that question. I just know I don't want to be that kind of developer.

Wrapping Up

After six months and nearly 100 prototypes with AI assistance, the math is brutal: 99 failed to ship because I treated AI as a replacement for understanding rather than a tool to support it. One succeeded - mcp-windbg - because I used AI the right way: as a powerful assistant that accelerated my learning instead of replacing it.

Remember that uncomfortable truth from the beginning? Every prototype feels like success, but success is measured by shipping, not building. The dopamine hits from rapid prototyping are real, but they're not the same as the satisfaction of maintaining code you actually understand.

The lesson isn't to use less AI. The lesson is to use it better.

AI is like having a brilliant research assistant, debugger, and code reviewer all rolled into one. Use it to explore ideas faster, understand concepts deeper, and eliminate the boring repetitive work that burns you out. But don't delegate the understanding itself.

What's Next?

If this resonates with you, here's what I'd suggest doing right now:

  • Audit your current projects: Look at your recent work. Can you explain how the core functionality works without looking at the code? If not, spend some time diving into those systems.

  • Set a new rule: For anything you plan to ship, implement the "explain it without looking" test. If you can't walk someone through how it works from memory, you're not ready to ship.

  • Change how you prompt: Instead of "Build me X," try "Help me understand how to build X" or "What are the key concepts I need to know for X?"

  • Embrace the sitting-there hours: When you hit a bug or weird behavior, resist the urge to immediately ask AI for a fix. Spend 15-20 minutes trying to understand it yourself first.

Because at the end of the day, someone needs to own the code. AI can help you get there faster, but it can't get there for you.