I Threw Away 1 Million Lines of Code
A Claude update made my codebase obsolete overnight. The lesson became our development philosophy.
Felipe Rosa · March 16, 2026
Guide

I'd been building for years. Not weeks, not a hackathon — years. Over a million lines of code across dozens of repositories, each one solving a problem I'd hit in production, each one shaped by late nights and ugly debugging sessions and the kind of stubbornness that makes you rewrite something three times until it feels right.
Then Anthropic released a model update.
Not a major platform shift. Not a new paradigm. A model update. And suddenly, most of what I'd built was native functionality. The orchestration patterns I'd hand-rolled? Built in. The context management I'd spent months perfecting? Handled. The routing logic that was my pride? A single API parameter now.
Joguei fora. I threw it away.
Not because it was broken. Because it was obsolete. And that's worse — broken code you can fix. Obsolete code just sits there, a monument to yesterday's constraints.
The Moment You Realize Code Is Not the Asset
Here's what nobody tells you about that moment. The first reaction isn't clarity — it's denial. You think: "My implementation is more nuanced. Mine handles edge cases they haven't thought of. I should keep it, just wrap their API with my layer on top."
Actually, let me be more honest. The first reaction is grief. You stare at your commit history — two years of decisions, each one defensible, each one now irrelevant — and you feel the weight of all that time.
But grief has a half-life in this industry. Once it burns off, you're left with a question that changes everything:
What do I actually need to carry forward?
Not the code. The code was always ephemeral — I just didn't want to admit it. What I needed to carry was the understanding: why I'd built each system, what problems it solved, what tradeoffs I'd accepted, what I'd learned from the three wrong approaches before the right one.
That's when corpo leve clicked.
Corpo Leve
"Corpo leve" is Portuguese for "light body." In capoeira, it describes the fighter who moves fastest because they carry the least. No unnecessary tension, no excess weight, no attachment to a position that's already changed.
In software, corpo leve means this: focus on fundamentals, problem context, and approach — carry only that through every technology shift. When the next model update lands, when the next framework emerges, when the next paradigm makes yesterday's code obsolete, you're not starting over. You're applying the same understanding to better tools.
Nadar junto com a baleia. Swim with the whale, don't try to outswim it.
That philosophy became more than a personal recovery strategy. As we applied it across Namastex Labs — 107 repositories, six product lines, three years of production AI systems — it crystallized into five pillars that define how we build now.
Pillar 1: Architecture Over Code Writing
Vibe coding is real. You describe what you want in natural language, an AI writes the system. I've watched junior developers ship in a day what used to take a senior engineer a week.
But here's what vibe coding alone produces: code without security, without scalability, impossible to maintain. I've reviewed PRs where the AI generated a working feature with an SQL injection vulnerability in every query. The code ran. The tests passed. It was a liability.
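The injection case is worth making concrete. Here's a minimal illustration — the table and helpers are hypothetical, but the pattern is exactly the one described above: both functions "work" and pass a happy-path test, yet the first one is a liability.

```python
# Two ways to run the same lookup. The unsafe version is the kind of
# code an AI will happily generate: it runs, it passes tests, and it
# hands every row to anyone who sends a crafted input.
import sqlite3

def find_user_unsafe(conn, name):
    # String interpolation: name = "x' OR '1'='1" returns every row.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the value as a literal.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A reviewer who knows the fundamentals catches the first version in seconds; a reviewer who only checks that the tests pass ships it.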
The paradox is this: AI making code easier to write makes fundamentals more critical, not less. You need to understand databases to know when the AI's schema design will collapse at scale. You need Git discipline to manage the dozens of PRs AI agents generate per day. You need CLI fluency and systems thinking to debug what the AI builds when it breaks at 3am — because it will break at 3am.
We built pgserve because we live this. One command — npx pgserve — and you have an embedded Postgres instance. No Docker, no config files, no DevOps detour. But pgserve exists because we understand Postgres deeply enough to compress it into a one-liner. That's architecture over code: knowing the fundamentals so well that you can make them disappear.
The same thinking produced automagik-tools: point it at any OpenAPI spec and you get an MCP server in 30 seconds. Not because we're fast coders. Because we understand API contracts well enough to automate the translation completely.
The developer who thrives now isn't the fastest typer. They're the clearest thinker.
Pillar 2: The Developer as Agent Orchestrator
My day used to look like this: open editor, write functions, debug, commit, repeat. Individual contributor work. Hands on keyboard, lines flowing.
Now it looks like this: define a wish, structure the plan, delegate to a swarm of agents working in parallel, review the results, course-correct, merge.
The shift from writing code to orchestrating agents is the most disorienting transition I've experienced in twenty years of development. You're not doing less — you're doing differently. You're operating at a higher abstraction layer, and the cognitive skills are completely different.
The critical skill? Context management. Every AI agent has a context window — a finite amount of information it can hold and reason about. Feed it too much and it suffers what we call "context rot": the agent forgets early instructions, contradicts itself, loses the thread. Feed it too little and it makes naive decisions.
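One way to picture the budgeting problem: a sketch of a trimming pass that keeps an agent's context under a token budget by dropping the oldest non-pinned messages first. This is illustrative, not Genie's implementation — token counts here are approximated by word count, where a real system would use the model's tokenizer.

```python
def trim_context(messages, budget):
    """messages: list of dicts with 'role', 'content', and optional
    'pinned'. Pinned messages (system prompt, task spec) always survive;
    everything else competes for the remaining budget, newest first."""
    def cost(m):
        # Crude proxy for token count; swap in a real tokenizer.
        return len(m["content"].split())

    pinned = [m for m in messages if m.get("pinned")]
    rest = [m for m in messages if not m.get("pinned")]

    used = sum(cost(m) for m in pinned)
    kept = []
    # Walk newest-first so recent turns survive and old ones fall off.
    for m in reversed(rest):
        if used + cost(m) > budget:
            continue
        used += cost(m)
        kept.append(m)
    kept.reverse()

    # Reassemble: pinned instructions first, then the surviving turns.
    return pinned + kept
```

Too generous a budget and you invite context rot; too stingy and the agent loses the history it needs. The skill is choosing what gets pinned.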
We learned this the hard way building Genie, our orchestration framework. 43 agents, discovered dynamically — the file path is the identity, zero registry needed. Three collectives (Code, Create, QA), each with specialized agents. Three pluggable executors you swap via one YAML field. The agents run in isolated git worktrees so they can work in parallel without stepping on each other.
But the real lesson wasn't the architecture. It was the boundary enforcement. We have a rule — "Orchestrate, Never Implement" — and we have documented violations of that rule, because even we struggled with the discipline of staying at the orchestration layer. The temptation to drop down and just write the code yourself is strong. Resisting it is the skill.
The output: wishes go in, pull requests come out. The developer defines intent; the swarm delivers implementation.
Pillar 3: The End of SaaS (Agent First)
If AI makes creating software a commodity, what happens to the business model built on selling software?
It collapses.
Traditional SaaS gives you a generic tool and forces you to adapt your operations to fit it. That model worked when building software was expensive — the vendor absorbed the cost of development, and you absorbed the cost of adaptation. But when AI can generate custom software for a fraction of the old cost, why would anyone adapt to a generic tool?
The development model we're moving toward is Agent First: systems that mold dynamically to each company's specific operations. Not "Software as a Service" but its inversion — "Service as a Software." Don't sell a chat platform; deliver customer service already done. Don't sell a CRM; deliver pipeline management that actually manages.
We built KHAL on this principle. It's an enterprise CX platform, but calling it software undersells it. KHAL doesn't give you a dashboard and wish you luck — it deploys AI agents that handle customer interactions end-to-end, with human oversight where it matters. The CX team defines behavior through prompt management with visual diffs and one-click rollback. The tech team defines guardrails. The agents do the work.
The proof is in the setup time. Sierra, the well-funded competitor, quotes 3-6 months for enterprise deployment. KHAL: 15 days. Not because we cut corners — because agent-first architecture eliminates the customization bottleneck. The system molds to the operation; the operation doesn't contort to fit the system.
Pillar 4: Critical Thinking and the Human in the Loop
Here's the trap of full AI automation: it converges toward the mean. Every fully AI-generated website looks the same. Every AI-written blog post reads the same. Every AI-designed system architecture follows the same patterns from the same training data.
Genericidade é o inimigo. Genericness is the enemy.
The antidote is intentional human presence at specific decision points — not everywhere, but at the moments that determine quality. We call this "human in the loop," and it's not a safety net. It's a quality multiplier.
In our content engine, we enforce this through a voice guide with explicitly forbidden patterns. "AI is transforming..." — forbidden. Buzzword salads — forbidden. Influencer tone — forbidden. The AI drafts; the human ensures it doesn't sound like everything else on the internet.
In our development workflow, we enforce it through the ACE Protocol: evidence-based framework editing with semantic deduplication. Before any learning gets committed to our agent framework, we check: is this genuinely new (similarity < 0.70) or a paraphrase of something we already know (similarity > 0.85)? Token budgets are measured before every commit. File sizes are capped at 1000 lines — split or don't ship.
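The threshold logic is simple enough to sketch. The 0.70 and 0.85 cutoffs are the ones above; the toy bag-of-words cosine similarity stands in for real embeddings, which is what an actual pipeline would use.

```python
# Semantic dedup gate: is a candidate learning new, a paraphrase,
# or in the gray zone that needs a human call?
import math
from collections import Counter

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_learning(candidate, existing):
    """Compare a candidate entry against everything already committed."""
    best = max((cosine(candidate, e) for e in existing), default=0.0)
    if best < 0.70:
        return "new"        # genuinely new: commit it
    if best > 0.85:
        return "duplicate"  # a paraphrase: drop it
    return "review"         # gray zone: human in the loop
```

Note that the gray zone between the two thresholds isn't auto-resolved — that's the human-in-the-loop point from this pillar applied to the framework itself.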
In KHAL, we enforce it through configurable human validation: sampling rates per conversation type, approval/rejection with structured feedback, quality metrics per agent. The human doesn't handle every interaction — they handle the ones that teach the system to be better.
AI works best as leverage for human potential, not as a replacement for human judgment.
Pillar 5: Verticalization and Small Models
The headlines are about the big models: GPT-5, Claude Opus, Gemini Ultra. Billions of parameters, general intelligence, benchmark-topping performance.
The value is in the small ones.
Every enterprise has deep, specific knowledge locked in the heads of experienced operators — the support agent who's handled 10,000 edge cases, the sales engineer who knows which integration will break, the logistics coordinator who can predict delays by reading the weather. That knowledge is the moat, and big general models can't replicate it.
The corpo leve approach to models: extract that knowledge into proprietary datasets, then train small, vertical models that are cheaper to run, faster to respond, and dramatically better at the specific task.
We built murmurai for this — self-hosted WhisperX transcription tuned for domain-specific voice. We built automagik-hive to scaffold vertical agents from YAML to running in 30 seconds, with smart CSV RAG that's 450x faster than full reloads. We built juice-router to dispatch LLM calls to the optimal model per task — Opus for complex reasoning, Haiku for fast classification — because one model for everything is the opposite of corpo leve.
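The per-task dispatch idea reduces to a routing table. This is a sketch in the spirit of juice-router as described, not its actual configuration — the task names and model tiers here are assumptions.

```python
# Route each task type to the cheapest model that handles it well,
# with a sensible middle tier as the fallback.
ROUTES = {
    "classification": "small-fast-model",   # cheap, fast, good enough
    "extraction": "small-fast-model",
    "code_review": "mid-tier-model",
    "architecture": "frontier-model",       # complex reasoning: pay for it
}

def pick_model(task_type, default="mid-tier-model"):
    """One model for everything is the opposite of corpo leve:
    route per task, fall back to the middle tier for anything unknown."""
    return ROUTES.get(task_type, default)
```

The table is where the domain knowledge lives: knowing which tasks a small model handles well is exactly the kind of understanding that survives model updates.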
The market is learning what practitioners already know: the moat isn't the model. It's the data and the domain knowledge.
What Do You Actually Need to Carry?
I still think about those million lines sometimes. Not with regret — with clarity.
Every line I wrote taught me something about databases, about orchestration, about the gap between a demo and a production system. The code is gone. The understanding isn't. And when the next model update lands — when it inevitably makes something else I've built obsolete — I'll be ready to let go again.
That's corpo leve. Carry the understanding, shed the code. Move light.
The winners of the AI era won't be the developers with the most code. They'll be the ones who can throw it all away and rebuild by Monday.
FAQ
What is corpo leve in software development?
Corpo leve ("light body" in Portuguese) is a development philosophy introduced by Felipe Rosa at Namastex Labs. It means focusing on fundamentals, problem context, and approach rather than attaching to specific code implementations. When AI model updates make existing code obsolete, developers practicing corpo leve carry their understanding forward and rebuild quickly with better tools.
Is vibe coding enough to build production software?
No. Vibe coding — using natural language to instruct AI to generate code — is powerful for speed but dangerous without fundamentals. At Namastex Labs, we've observed AI-generated code that passes tests while containing security vulnerabilities, poor schema design, and unmaintainable architecture. Database knowledge, Git discipline, CLI fluency, and systems thinking are more critical than ever to guide and review what AI produces.
What does "developer as orchestrator" mean in practice?
Instead of writing code directly, developers define intent and delegate implementation to swarms of AI agents working in parallel. The critical skill becomes context management — structuring tasks so each agent has the right information without overloading its context window. Namastex Labs' Genie framework runs 43+ agents with isolated worktrees, enforcing a strict "orchestrate, never implement" boundary.
How is agent-first development different from traditional SaaS?
Traditional SaaS delivers a generic tool that customers adapt to. Agent-first development delivers outcomes — "Service as a Software" rather than "Software as a Service." Instead of selling a chat platform, you deliver customer service already done. Namastex Labs' KHAL platform demonstrates this with 15-day enterprise deployments versus the industry standard of 3-6 months, because the system molds to the operation rather than the reverse.
Why do small models matter more than big ones for enterprises?
Big general models can't replicate the deep domain knowledge locked in experienced operators' heads. Small, vertically trained models built on proprietary enterprise data are cheaper, faster, and dramatically better at specific tasks. Namastex Labs uses this approach across its stack, routing LLM calls to the optimal model per task rather than using one large model for everything.