We were moving fast.
Features shipped every week.
Stakeholders were happy.
The backlog was finally under control.
Then, almost without noticing...
I read this and nodded… then winced a bit.
Yes, bad code is like a high-interest loan — but the uncomfortable truth is: most teams don’t realize they’re borrowing. Nobody wakes up thinking “today I’ll write something unmaintainable” — it’s usually deadlines, context switching, and just trying to ship.
Where I slightly disagree is this: not all “debt” is the same. Some debt is intentional — you ship fast, learn, and repay it quickly. That’s leverage. The real killer is the silent kind: the hacks that become architecture, the TODOs that become policy, the “we’ll fix it later” that nobody owns. That’s when the interest compounds and starts eating velocity sprint after sprint.
I’ve seen teams blame velocity drops on process, meetings, or even people — when the real culprit was a codebase nobody wanted to touch. When adding a small feature takes 3x longer than it should, you’re not slow — you’re paying interest.
The takeaway for me:
Debt isn’t the problem — unmanaged debt is
Speed isn’t the enemy — unpaid shortcuts are
And refactoring isn’t “nice to have” — it’s how you stop the bleeding
Good article. Just missing one harsh reality:
you don’t notice technical debt when you take it — you notice it when your best engineers start avoiding parts of your system.
Thank you @paolozero , I really appreciate your comment, especially the distinction between intentional and silent debt. That’s a nuance I didn’t fully unpack, and you’re right: not all debt is created equal.
I like how you framed “leverage vs. liability.” I’ve seen that play out too: teams making conscious tradeoffs to learn fast, then actually circling back to clean things up. When that loop exists, debt can be a tool. When it doesn’t… it becomes exactly the kind of drag I was warning about.
Your point about engineers avoiding parts of the system hits hard. That’s usually the moment when debt stops being an abstract concept and becomes a cultural problem. Once people start routing around the code instead of improving it, velocity loss is just the visible symptom.
If I were to extend my own argument after your comment, I’d say: the real danger isn’t just the “interest rate”, it’s losing the team’s willingness to engage with the codebase at all.
Thanks for adding that layer, this is exactly the kind of discussion I was hoping the article would spark.
I really like the framing of bad code as a high-interest loan, it’s one of the clearest ways to explain technical debt to non-engineers.
What stood out to me is how subtle the “interest payments” are. It’s rarely a dramatic failure, more like a constant tax on everything: slower feature delivery, harder debugging, more regressions. As you mentioned, it quietly eats away at team velocity until it becomes the norm.
One thing I’ve seen work well in practice is making that interest visible. Instead of saying “this code is messy,” framing it like:
“This shortcut is adding ~20% extra effort to every change in this area”
suddenly turns a technical concern into a business decision.
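That percentage framing can even be backed by a rough calculation. Here is a minimal, hypothetical sketch of turning "this area feels slow" into a number stakeholders can reason about; all function names and figures are illustrative, not taken from any real project:

```python
# Hypothetical sketch: estimating the "interest rate" of a debt-heavy
# area from the gap between healthy and actual effort per change.
# All numbers below are made up for illustration.

def interest_rate(healthy_hours: float, actual_hours: float) -> float:
    """Extra effort caused by debt, as a fraction of the healthy baseline."""
    return (actual_hours - healthy_hours) / healthy_hours

def monthly_interest(changes_per_month: int, healthy_hours: float,
                     actual_hours: float) -> float:
    """Estimated hours lost to debt in this area each month."""
    return changes_per_month * (actual_hours - healthy_hours)

# A change that should take 5 hours takes 6 in the indebted area:
rate = interest_rate(5.0, 6.0)          # 0.2 -> "~20% extra effort"
lost = monthly_interest(10, 5.0, 6.0)   # 10.0 hours/month paid as interest

print(f"interest rate: {rate:.0%}, hours lost per month: {lost}")
```

The inputs are estimates, so the output is an estimate too, but "roughly 20% extra on every change, about 10 hours a month" is far easier to act on than "this code is messy."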
Also, I appreciate the implicit point that not all debt is bad, it’s the unmanaged, high-interest kind that kills teams. Strategic shortcuts with a repayment plan can be valuable, but most teams underestimate how fast “we’ll fix it later” turns into never.
Curious: have you found any effective ways to quantify or surface this “interest” to stakeholders without it feeling hand-wavy?
Thanks @lucaferri, you captured exactly what I was trying to get at with the “invisible tax” idea.
That’s been my experience too. The danger isn’t the big failure, it’s the normalization of friction. When everything feels just a bit slower, a bit harder, a bit riskier, teams stop questioning it. It becomes “just how things are.”
I especially like your framing of making the interest visible as a percentage cost. That shift from “messy code” to “ongoing business expense” is powerful, because it moves the conversation out of opinion and into tradeoffs.
To your question: yes, but I’ll be honest, it’s never perfectly precise, and trying to over-quantify it can backfire. What I’ve found works is a mix of lightweight signals rather than a single “number”.
Individually, each signal is a bit fuzzy. Together, they tell a story that’s hard to ignore.
Sometimes I’ll even frame it narratively rather than numerically:
“This feature took 3 days. In a healthier part of the system, it likely would’ve taken 1.”
It’s not scientifically exact, but it’s concrete enough for stakeholders to grasp the cost.
The key, I think, is consistency: not proving the exact interest rate, but repeatedly showing that the same areas incur the same kind of drag. Over time, that pattern builds trust and makes the repay-vs-defer conversation much easier.
And you’re absolutely right: most teams don’t decide to carry high-interest debt, they just underestimate how quickly “later” arrives.
Thank you Gavin for your reply, I really appreciate it
This analogy is spot on, @gavincettolo!
Coming from a Cloud Architecture background, I always think of technical debt as "architectural friction." Like you mentioned with the interest payments, eventually, the team is just burning cycles keeping the lights on rather than building new features.
I particularly liked your point about "The Knowledge Silo Tax." In distributed systems, if the code is "spaghetti" and only one person understands the service's state machine, that’s not just a velocity killer—it's a massive operational risk.
I’ve found that the "interest" is most expensive during a scaling event. If your infrastructure isn't clean, a 10x spike in traffic doesn't just slow you down; it breaks the bank (literally and figuratively).
How do you usually advocate for "debt repayment" sprints when talking to non-technical stakeholders who are focused solely on the roadmap?
Thank you @elenchen for your comment!
I love the “architectural friction” framing, that clicks immediately.
When I talk to non-technical stakeholders, I avoid the word “debt” entirely and reframe it in terms they already care about: risk, speed, and cost.
Instead of saying “we need a refactor sprint”, I describe the concrete risk or slowdown the work removes. I also try to tie repayment directly to the roadmap, not against it, so cleanup reads as part of delivering the next features rather than a detour from them.
What usually works best is pairing it with a concrete moment, like before a scaling event or a risky launch, exactly like you mentioned. That’s when the cost of not acting is easiest to understand.
Thank you @gavincettolo
I like your POV and I am curious to read your next articles on these topics. You have earned a new follower :)
The financial model framing is spot-on, and Christie's point about cognitive cost being the real blocker resonates hard.
I'll add one dimension that's magnified this for me: programmatic codebases at scale. I maintain an Astro site that generates 89K+ pages across 12 languages. When the original comparison page templates accumulated debt (thin content, bad internal linking, questionable redirects), the interest wasn't "this feature takes an extra day" — it was "Google is crawling 53,000 pages and rejecting them because the template quality is below threshold."
At programmatic scale, every template-level shortcut compounds across thousands of generated pages simultaneously. One bad decision in a stock page template affects 8,000+ tickers × 12 languages. Eventually I had to remove the entire comparison page type — not refactor it, delete it — because the debt had compounded beyond the point where incremental fixes were worth it.
The takeaway for me: in template-driven systems, the compounding interest rate is multiplied by your page count. Debt that's manageable at 10 pages becomes catastrophic at 100K.
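That multiplication is the whole point, and it can be made explicit. A tiny illustrative model (the entity and language counts echo the example above; treat them as hypothetical):

```python
# Illustrative model: in a template-driven system, one template-level
# flaw touches every generated output, so its cost scales with the
# number of outputs, not with the size of the change.

def affected_outputs(entities: int, languages: int) -> int:
    """Pages affected by a single flawed template."""
    return entities * languages

# A hand-written site: one bad page is one bad page.
small = affected_outputs(10, 1)        # 10 pages affected

# The programmatic case above: ~8,000 tickers x 12 languages.
large = affected_outputs(8_000, 12)    # 96,000 pages affected by one shortcut

print(small, large)
```

Same shortcut, four orders of magnitude more blast radius, which is why debt that is tolerable at 10 pages becomes catastrophic at 100K.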
Thanks for sharing this, @apex_stack , this is a fantastic extension of the idea, and the example makes it very concrete.
I really like how you push the “high-interest loan” analogy further into programmatic systems. At that scale, the impact of technical debt stops being linear and becomes multiplicative. It’s not just that a change is harder, it’s that every small flaw is instantly replicated across thousands of pages, as you described.
Your point about the type of impact changing is especially insightful. In a typical codebase, we tend to feel the cost as slower development. But in your case, the feedback loop is external and much harsher: search engines effectively act as an unforgiving validator of quality. When template debt accumulates, the penalty isn’t just velocity, it’s visibility and reach.
The fact that the only viable option was to delete the entire page type is telling. That’s the “default moment” of the loan, where incremental repayment is no longer enough and you’re forced into a full reset. It’s a powerful illustration of how ignoring compounding debt can eventually remove optionality altogether.
I also really like your takeaway: in template-driven systems, the interest rate scales with distribution. That’s a great mental model: it suggests we should treat templates and generators as high-leverage assets; the kind where quality standards need to be higher, not lower, precisely because of their amplification effect.
This adds an important dimension to the original argument: technical debt isn’t just about time, it’s about surface area and the larger the surface area you’re projecting onto (like 100K+ generated pages), the less forgiving the system becomes.
Really appreciate you bringing in this perspective, it makes the risks of “small” shortcuts much more tangible.
Really well said, Gavin. The surface area framing is exactly the missing piece in most tech debt discussions. In a traditional codebase, debt slows you down linearly — you ship slower. But in a template-driven system generating 100K+ pages, a single bad decision in the generator compounds across every output. The "interest rate" isn't time, it's distribution.
I learned this the hard way when I had to nuke an entire page type (comparison pages) because the template quality was too low and Google started penalizing the whole domain's crawl signals. That wasn't a refactor — it was exactly the "default moment" you described. The debt had compounded past the point where incremental fixes were viable.
The takeaway for anyone building programmatic systems: your templates are the highest-leverage code you own. Treat them like critical infrastructure, not scaffolding.
Developers don't start avoiding areas because they look bad/messy; they avoid them when the cognitive cost of understanding what's safe to change gets too high. Rebuilding the mental model from scratch just isn't worth it.
That's when people stop "leaving the code better than they found it": they make the smallest possible change and get out, because anything more feels too risky.
Over time, the original author becomes the only person who really understands it, because knowledge isn't distributing across the team. Now you've got a bottleneck and a bus factor problem.
That's why I think of code readability as a performance constraint (especially with the large amounts of code we generate with AI these days).
Thanks so much for this thoughtful comment, @christiecosky . You’ve captured something really important that often gets overlooked.
I completely agree: it’s not the “messiness” of code that drives people away, it’s the uncertainty. When the cognitive cost of rebuilding a mental model gets too high, even experienced developers start optimizing for safety over improvement. At that point, “leave the code better than you found it” quietly turns into “don’t break anything and get out.”
What you’re describing is exactly the moment when technical debt compounds. The system stops being a shared asset and starts becoming territory. And as you said, once knowledge stops distributing, you don’t just have a maintainability issue, you get a coordination bottleneck and a real bus factor risk.
I also really like your framing of readability as a performance constraint. That resonates a lot, especially now. With the rise of AI assisted code generation, we’re producing more code than ever, but not necessarily more understandable code. If anything, the gap between code that works and code that can be safely evolved by a team is getting wider.
To me, this reinforces a subtle shift in how we should think about quality: readability isn’t just a nice to have or a matter of taste, it’s what enables continuous change. Without it, velocity might look fine in the short term, but it’s quietly borrowing against the future, which ties back nicely to the high interest loan analogy.
Really appreciate you adding this perspective. It sharpens the argument in a meaningful way.
The loan analogy hits different when you're in fintech — because we literally deal with loans and interest rates, and the parallels are painfully exact.
In our payment platform, we had a "quick fix" in our transaction routing logic from year one. Worked fine at 100 transactions/day. By the time we were doing thousands, that one shortcut was causing cascading retries that inflated our infrastructure costs by 30%. The "interest payment" was invisible until it wasn't.
The hardest part isn't identifying the debt — it's convincing yourself to pay it down when shipping new features feels more urgent. What worked for us: we stopped calling it "refactoring time" and started calling it "reducing the cost of the next feature." Same work, but suddenly leadership gets it.
Thank you @mickyarun ! That’s such a perfect real-world example, especially in fintech where the “interest” is literally measurable.
The cascading retries point is exactly it. The system looks fine at low scale, then suddenly the hidden cost surfaces all at once and hits both infra and reliability.
I really like your reframing. “Reducing the cost of the next feature” is the right mental model because it connects directly to delivery, not just code quality.
I’ve seen the same shift work well. As soon as the conversation becomes about the cost of the next feature rather than abstract code quality, it stops being a tradeoff and starts being an investment.
And you’re right, the hardest part isn’t spotting the debt, it’s choosing to act before it becomes painful. Most teams wait for the spike you described, but the ones that scale smoothly are the ones that treat those early signals seriously.
Something I've been doing that's worked surprisingly well: every PR gets a "debt tag" in the description. Just a quick line like `[debt: new]` or `[debt: reduced]`. It takes 5 seconds, but after a few months you can actually graph the trend.
We noticed our ratio was something like 8:1 (new debt to reduced debt) during a crunch period. That single metric got leadership to approve a dedicated cleanup sprint more than any amount of "we need to refactor" conversations ever did. Numbers talk.
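The tallying itself is trivial to automate. A minimal sketch, assuming the tag convention described above; the sample PR descriptions are made up:

```python
# Count "[debt: new]" vs "[debt: reduced]" tags across PR descriptions.
# The tag format follows the hypothetical convention from the comment
# above; in practice you'd feed this from your Git host's API or git log.
import re
from collections import Counter

TAG_RE = re.compile(r"\[debt:\s*(new|reduced)\]")

def debt_tally(pr_descriptions: list) -> Counter:
    """Tally debt tags found in a list of PR description strings."""
    counts = Counter()
    for text in pr_descriptions:
        for match in TAG_RE.finditer(text):
            counts[match.group(1)] += 1
    return counts

prs = [
    "Add checkout retry logic [debt: new]",
    "Quick fix for invoice export [debt: new]",
    "Extract shared validation helpers [debt: reduced]",
]
counts = debt_tally(prs)
print(counts["new"], ":", counts["reduced"])   # 2 : 1
```

Run weekly, the new:reduced ratio becomes exactly the kind of trend line that convinces leadership without a single "the code is messy" argument.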
The other thing that really clicked for me was framing it not as "refactoring time" but as "reducing the cost of the next feature." When you tell a PM "this cleanup means Feature X ships in 3 days instead of 8," suddenly it's not maintenance anymore - it's an investment with a clear payoff timeline.