"We need to urgently refactor the entire project! This is technical debt!"
"Forget it. We don't touch code that works. Focus on features."
Two CTOs. Two opposite approaches. Both are right. And both are wrong.
The first team spent 3 months refactoring the authentication module. The code became beautiful. Tests green. Architecture—textbook perfect. The business lost $400k in opportunity cost while competitors launched two new features.
The second team ignored "technical garbage" for 2 years. Each new feature took 3 times longer. Bugs quadrupled. The project went into a complete rewrite, losing a year of development.
The question isn't whether to refactor or not. The question is—when, what, and for how much.
I've been collecting technical debt stories for 15 years. Been a consultant on 40+ projects where "debt" killed the product. Let's break down how to measure what seems unmeasurable and make decisions that won't lead you to bankruptcy (neither financial nor moral).
What is Technical Debt (and why everyone gets it wrong)
The definition that actually works
Technical debt is the difference between the current architecture and the ideal architecture for solving current business problems.
Key word: current. Not future. Not hypothetical. Current.
It's like a bank loan:
- ✅ You borrow — get to production faster
- ✅ Pay interest — each new feature costs more
- ✅ Can declare default — rewrite from scratch
- ✅ Can refinance — refactoring
Metaphor from Ward Cunningham (coined the term, 1992):
"Shipping first-time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt."
Types of technical debt (classification)
Not all debt is equal. There's a mortgage, and there's a loan for an iPhone. Let's break it down.
1. Deliberate Debt
What: Conscious decision to write "quick and dirty" for speed.
Example:
import requests

# TODO: Hardcoded for now, move to config later
PAYMENT_API_URL = "https://api.stripe.com/v1"

def process_payment(amount):
    # Temporary workaround for MVP
    # After MVP: add retry logic, fallback, monitoring
    response = requests.post(PAYMENT_API_URL, data={"amount": amount})
    return response.json()

When justified:
- MVP to validate hypothesis
- Critical deadline (conference, exhibition, investor demo)
- Feature needed to measure metrics (A/B test)
Risks:
- TODO becomes NEVER
- "Temporary" workaround lives for years
- Interest accumulates unnoticed
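For contrast, here's roughly what repaying that particular debt looks like once the MVP survives. A sketch, not gospel: the environment variable name and retry settings are placeholders.

```python
import os

import requests
from requests.adapters import HTTPAdapter, Retry

# Post-MVP version: config comes from the environment, transient failures are retried,
# and errors fail loudly instead of silently returning whatever came back.
PAYMENT_API_URL = os.environ.get("PAYMENT_API_URL", "https://api.stripe.com/v1")

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.5)))

def process_payment(amount):
    response = session.post(PAYMENT_API_URL, data={"amount": amount}, timeout=5)
    response.raise_for_status()
    return response.json()
```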
2. Inadvertent Debt
What: Team didn't know a better solution. Naive implementation.
Example:
# Junior thought this was normal
def get_all_users():
users = []
for user_id in range(1, 100000): # Oh my...
user = db.query(f"SELECT * FROM users WHERE id={user_id}")
if user:
users.append(user)
return users
# Production a month later: 503 Server Timeout

How it happens:
- Junior developers without mentorship
- No code review
- Team doesn't know patterns for this task
How to avoid:
- Code review mandatory
- Pair programming for complex tasks
- Team training on best practices
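And the repaid version of that particular debt is one query instead of 100,000 (a sketch, assuming the same db helper as above):

```python
def get_all_users():
    # One round-trip to the database instead of one query per id.
    # (And when you do filter, pass parameters instead of f-strings:
    # the original was also an SQL injection risk.)
    return db.query("SELECT * FROM users")
```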
3. Bit Rot
What: Code was good a year ago. But libraries updated, patterns changed.
Example:
# 2020 — this was trendy
class UserViewSet(viewsets.ModelViewSet):
queryset = User.objects.all()
serializer_class = UserSerializer
permission_classes = [IsAuthenticated]
# 2025 — nobody does this anymore
# There's async views, Pydantic validation, rate limiting out of the box

Causes:
- Dependencies outdated
- Best practices evolved
- New frameworks appeared
When to touch:
- Security vulnerabilities
- Blocks new features
- Prevents scaling
4. Architectural Debt
What: Wrong architecture at system level.
Example:
# Initial architecture (MVP)
Monolith Django App
↓
PostgreSQL
# Reality after 2 years (100k+ users)
Monolith (5M lines of code)
↓
PostgreSQL (1TB, slowing down)
# Needed architecture
API Gateway
↓
├─ User Service (microservice)
├─ Payment Service (microservice)
├─ Notification Service (microservice)
↓
PostgreSQL + Redis + Kafka
When critical:
- Team grew from 3 to 30 people
- Deploy takes 2 hours
- Can't scale horizontally
Technical Debt Metrics (how to measure the unmeasurable)
You can't manage what you don't measure. Here are metrics I use on real projects.
1. Code Churn Rate
What it measures: How often a file changes. High churn = high debt.
Formula:
Code Churn (per file) = number of commits that touched the file in a given period
How to measure:
# Git: top-10 most changed files in 6 months
git log --since="6 months ago" --pretty=format: --name-only \
  | sort | uniq -c | sort -rg | head -10

Example output:
245 src/models/user.py # ← Clear problems here
189 src/api/payments.py # ← Here too
87 src/utils/validators.py
56 src/views/dashboard.py
Interpretation:
- Churn > 100: File rewritten every week. Code unstable.
- Churn 50-100: High activity. Debt may be accumulating.
- Churn < 50: Stable code.
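If you want those thresholds applied automatically (in CI, for example), a small wrapper around the same git command will do. A sketch; adjust the period and threshold to your project:

```python
import subprocess
from collections import Counter

def high_churn_files(since="6 months ago", threshold=100):
    """Return (file, change_count) pairs with more than `threshold` changes since `since`."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in log.splitlines() if line.strip())
    return [(path, n) for path, n in counts.most_common() if n >= threshold]

# Example: print(high_churn_files(threshold=100))
```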
Actions:
# If user.py changes 245 times in 6 months:
# 1. Code review: why so many changes?
# 2. Refactoring: class might be too large (God Object)
# 3. Split: User → UserProfile, UserAuth, UserSettings

2. Cyclomatic Complexity
What it measures: Number of independent execution paths in code.
Formula:
Complexity = Edges - Nodes + 2 (for control flow graph)
In simple terms: one plus the number of decision points (if, for, while, and, or) in a function.
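A quick way to feel the number: every decision point adds one more independent path. The toy function below has three decision points, so its complexity is 4.

```python
def shipping_cost(order):               # base path                 = 1
    if order.total > 100:               # decision point            +1
        return 0
    if order.country == "US":           # decision point            +1
        return 5
    return 15 if order.express else 10  # ternary is a decision too +1
    # Cyclomatic complexity: 4
```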
Tool:
# Python: radon
pip install radon
radon cc -a src/ --total-average
# Output:
# src/models/user.py
# M 245:0 UserModel.validate - D (23) # ← Complexity 23 is bad
# M 112:0 UserModel.save - C (15)

IDEs with built-in support:
Modern IDEs show Cyclomatic Complexity right in the editor:
- PyCharm — shows complexity with highlighting (gray = ok, yellow = attention, red = problem)
- VS Code — via "Python Complexity" or "SonarLint" extension
- IntelliJ IDEA — built-in code analysis with metrics
- Visual Studio — Code Metrics (Analyze → Calculate Code Metrics)
Example in PyCharm:
def process_payment(...): # ← PyCharm shows: Complexity: 23 (gray hint)
    # Hover to see: "Cyclomatic complexity too high"

IDE advantages:
- See metrics while writing code (not post-factum)
- Configure thresholds (warning at > 10, error at > 20)
- No need to run separate commands
Interpretation (Software Engineering Institute standard):
- 1-10: Simple code, easy to test
- 11-20: Medium complexity, bug risk growing
- 21-50: High risk, code unclear
- 50+: Untestable code, refactoring mandatory
Bad code example:
def process_payment(user, amount, currency, method, promo_code=None):
if user.is_active:
if user.balance >= amount:
if currency in ["USD", "EUR", "RUB"]:
if method == "card":
if user.card_verified:
if promo_code:
discount = get_discount(promo_code)
if discount:
amount = amount * (1 - discount)
# ... 50 more lines of nested ifs
return True
return False
# Complexity: 42 🤯

Refactored:
def process_payment(user, amount, currency, method, promo_code=None):
_validate_user(user)
_validate_currency(currency)
_validate_payment_method(user, method)
final_amount = _apply_discount(amount, promo_code)
return _charge_user(user, final_amount, method)
# Complexity of each function: 3-5 ✅

3. Test Coverage
What it measures: % of code covered by tests.
Formula:
Coverage = (Lines executed in tests / Total lines of code) × 100%
Tool:
# Python: pytest-cov
pytest --cov=src --cov-report=html
# Output:
# Name Stmts Miss Cover
# ---------------------------------------
# src/models/user.py 245 89 64% # ← Low coverage
# src/api/auth.py       156     5   97%  # ← Good

Interpretation:
- 80-100%: Good coverage (not a guarantee, but an indicator)
- 50-80%: Medium debt, regression risk
- < 50%: High debt, every change is Russian roulette
Important: 100% coverage ≠ 0% bugs. But 0% coverage = 100% pain when refactoring.
Real-life example:
# Code without tests (coverage 0%)
def calculate_discount(user, cart_total):
if user.is_vip and cart_total > 1000:
return cart_total * 0.2
elif user.orders_count > 10:
return cart_total * 0.1
return 0
# Junior changed logic:
def calculate_discount(user, cart_total):
if user.is_vip or cart_total > 1000: # ← Error: 'and' → 'or'
return cart_total * 0.2
# ...
# Result: VIP client got 20% discount on $50 cart
# Business lost $10k over the weekend

With tests:
def test_vip_gets_no_discount_on_small_cart():
    user = User(is_vip=True, orders_count=0)
    assert calculate_discount(user, 50) == 0  # ❌ Fails with the broken 'or': returns 10

# Junior saw the error BEFORE production

4. Code Duplication
What it measures: % of repeated code blocks.
Tool:
# jscpd (works with Python, JS, etc.)
npm install -g jscpd
jscpd src/
# Output:
# Duplications: 23.4% # ← 23% of code is duplicated
# Files: 156
# Clones: 42

Interpretation:
- < 5%: Normal (sometimes duplication is justified)
- 5-15%: Medium debt
- > 15%: High debt, DRY principle violated
Example:
# ❌ Duplication (same code 3 times)
def create_user(data):
if not data.get("email"):
return {"error": "Email is required"}, 400
if not re.match(r"^[\w\.-]+@[\w\.-]+\.\w+$", data["email"]):
return {"error": "Invalid email"}, 400
# ...
def update_user(data):
if not data.get("email"):
return {"error": "Email is required"}, 400
if not re.match(r"^[\w\.-]+@[\w\.-]+\.\w+$", data["email"]):
return {"error": "Invalid email"}, 400
# ...
def send_invite(data):
if not data.get("email"):
return {"error": "Email is required"}, 400
if not re.match(r"^[\w\.-]+@[\w\.-]+\.\w+$", data["email"]):
return {"error": "Invalid email"}, 400
    # ...

✅ Refactored:
def validate_email(email):
if not email:
raise ValueError("Email is required")
if not re.match(r"^[\w\.-]+@[\w\.-]+\.\w+$", email):
raise ValueError("Invalid email")
return email.lower()
def create_user(data):
email = validate_email(data.get("email"))
    # ...

5. Bus Factor
What it measures: How many people need to get hit by a bus for the project to stop.
Formula:
Bus Factor = Number of key developers who know critical code
Tool:
# Git: who authored each line?
git ls-files | xargs -n1 git blame --line-porcelain | grep "^author " | sort | uniq -c | sort -rg
# Output:
# 12456 John Doe # ← If John quits — project dies
# 3421 Alice Smith
# 987 Bob Johnson

Interpretation:
- Bus Factor = 1: Critical. One person knows all the code.
- Bus Factor = 2-3: Risk. Dependent on a few people.
- Bus Factor ≥ 5: Healthy team.
Real-life example:
Senior went on vacation for 2 weeks. Production crashed. Nobody knew how authentication worked. Lost 3 days reverse-engineering our own code.
Solution:
- Code review all code
- Pair programming
- Architecture documentation
- Rotation (everyone should work with each module)
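To turn the raw blame counts above into a single number, here's a minimal sketch. It calls the bus factor the smallest set of authors who together own a given share of the lines (80% by default; both the script and the threshold are my own convention, not a standard):

```python
import subprocess
from collections import Counter

def bus_factor(threshold=0.8):
    """Smallest number of authors who cover `threshold` of all blamed lines."""
    # Slow on large repos; run it once a quarter, not on every CI build.
    files = subprocess.run(["git", "ls-files"], capture_output=True, text=True).stdout.split()
    authors = Counter()
    for path in files:
        blame = subprocess.run(["git", "blame", "--line-porcelain", path],
                               capture_output=True, text=True).stdout
        authors.update(line[7:] for line in blame.splitlines() if line.startswith("author "))
    total, covered = sum(authors.values()), 0
    for i, (_, lines) in enumerate(authors.most_common(), start=1):
        covered += lines
        if covered / total >= threshold:
            return i
    return len(authors)

print(bus_factor())  # 1 or 2 on far too many projects
```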
6. Debt Ratio
What it measures: Cost of fixing debt / Cost of writing code from scratch.
Formula (SonarQube):
Debt Ratio = (Remediation Cost / Development Cost) × 100%
Example:
Remediation Cost: 120 days (refactoring time)
Development Cost: 400 days (time spent on development)
Debt Ratio = (120 / 400) × 100% = 30%
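The same arithmetic as a helper, if you want it next to the rest of your metrics scripts:

```python
def debt_ratio(remediation_days, development_days):
    """Debt Ratio as a percentage (SonarQube-style)."""
    return remediation_days / development_days * 100

print(f"{debt_ratio(120, 400):.0f}%")  # 30%
```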
Interpretation:
- < 5%: Excellent code
- 5-10%: Good
- 10-20%: Medium debt
- > 20%: Critical, refactoring or rewrite
Tool:
# SonarQube (free version)
docker run -d --name sonarqube -p 9000:9000 sonarqube:latest
# Run project analysis

7. Lead Time
What it measures: How long it takes to add a new feature.
Formula:
Lead Time = Time from commit to production
How to measure:
# JIRA/GitHub: average time from "In Progress" to "Done"
# Rough git-only approximation: compare commit dates with release/deploy tag dates
git log --all --format="%h %ai" | head -100

Interpretation:
- Lead Time growing over time: Debt accumulating
- Lead Time stable: Debt controlled
- Lead Time decreasing: You're doing great (or features getting simpler)
Example:
| Period | Average Lead Time | Conclusion |
|---|---|---|
| Q1 2024 | 3 days | Normal |
| Q2 2024 | 5 days | Debt growing |
| Q3 2024 | 8 days | Critical |
| Q4 2024 | 14 days | Code turned into swamp |
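A table like that is easy to generate if you export ticket timestamps. The sketch below assumes a CSV with started_at and done_at columns in ISO format; the file name and columns are made up, so adapt them to whatever your tracker exports.

```python
import csv
from collections import defaultdict
from datetime import datetime

def lead_time_by_quarter(path="tickets.csv"):
    """Average days from 'In Progress' to 'Done', grouped by quarter."""
    durations = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            started = datetime.fromisoformat(row["started_at"])
            done = datetime.fromisoformat(row["done_at"])
            quarter = f"Q{(done.month - 1) // 3 + 1} {done.year}"
            durations[quarter].append((done - started).days)
    return {q: sum(days) / len(days) for q, days in durations.items()}

# Example output: {'Q1 2024': 3.1, 'Q2 2024': 5.4, ...}
```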
Growth reasons:
- Code became more complex (Cyclomatic Complexity increased)
- More regressions (Test Coverage dropped)
- More merge conflicts (Weak architecture)
Decision Framework (to refactor or not?)
Now you have metrics. What to do with them? Here's a 4-step decision framework.
Step 1: Estimate Cost of Debt
Formula:
Cost of Debt = Time on workarounds × Developer hourly rate
Example:
# Your code has:
def get_user_orders(user_id):
# Workaround: no index on user_id, query is slow
# Always add LIMIT 100 to avoid crash
return db.query("SELECT * FROM orders WHERE user_id = ? LIMIT 100", user_id)
# Cost of debt:
# - Each new feature with orders requires workaround
# - Developers spend 30 minutes per workaround
# - 10 features per quarter = 5 hours
# - Developer hourly rate: $50
# Cost of Debt = 5 hours × $50 = $250/quarter

If cost > $1000/year — consider refactoring.
Step 2: Estimate Cost of Refactoring
Formula:
Cost of Refactoring = Refactoring time × Hourly rate + Regression risk
Example:
# Refactoring: add index on orders.user_id
# Time: 2 hours (migration + tests)
# Risk: low (index doesn't break logic)
# Cost of Refactoring = 2 hours × $50 + $0 (risk) = $100

Conclusion: a $100 investment vs $250/quarter in savings = pays for itself in about five weeks.
Step 3: Estimate Opportunity Cost
Question: What will you NOT do while refactoring?
Example:
You plan 2 weeks for payment architecture refactoring. During this time:
- Competitor will launch a new feature
- You will NOT release planned feature
- Customers will wait for bug fix
Formula:
Opportunity Cost = Potential gain from new features × Success probability
Example:
New feature: Subscription billing
Potential gain: $10k MRR
Success probability: 70%
Opportunity Cost = $10k × 0.7 = $7k
Payment refactoring:
Savings: $2k/year on maintenance
Opportunity Cost: $7k lost opportunity
Conclusion: DON'T refactor now. Feature first.
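The same comparison in code, with the numbers from above. Treat it as a tiebreaker, not an oracle; every input is an estimate.

```python
def expected_value(potential_gain, success_probability):
    return potential_gain * success_probability

feature = expected_value(10_000, 0.7)  # $7k expected from subscription billing
refactoring = 2_000                    # $2k/year maintenance savings

print("Feature first" if feature > refactoring else "Refactor first")  # -> Feature first
```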
Step 4: Prioritization by Impact/Effort Matrix
Matrix:
 High Impact │ REFACTOR NOW    │ SCHEDULE        │
             │ (quick wins)    │ (important)     │
 ────────────┼─────────────────┼─────────────────┤
 Low Impact  │ MAYBE           │ IGNORE          │
             │ (if free time)  │ (tech vanity)   │
 ────────────┴─────────────────┴─────────────────┘
               Low Effort        High Effort
Examples:
| Task | Impact | Effort | Decision |
|---|---|---|---|
| Add index on user_id | High | Low | ✅ REFACTOR NOW |
| Rewrite monolith to microservices | High | High | 📅 SCHEDULE |
| Rename variable x → user | Low | Low | 🤷 MAYBE |
| Rewrite in TypeScript | Low | High | ❌ IGNORE |
Full Decision Algorithm
def should_refactor(debt_item):
# Step 1: Cost of debt
cost_of_debt = estimate_debt_cost(debt_item)
# Step 2: Cost of refactoring
cost_of_refactoring = estimate_refactoring_cost(debt_item)
# Step 3: Opportunity cost
opportunity_cost = estimate_opportunity_cost(debt_item)
# Step 4: Refactoring ROI
roi = (cost_of_debt - cost_of_refactoring - opportunity_cost) / cost_of_refactoring
# Decision rules
if roi > 2.0:
return "REFACTOR NOW" # Pays off 2x
elif roi > 1.0:
return "SCHEDULE" # Pays off, but not urgent
elif roi > 0:
return "MAYBE" # Pays off, but better options exist
else:
return "IGNORE" # Won't pay offUsage example:
debt_item = {
"name": "Add index on orders.user_id",
"cost_of_debt": 250, # $/quarter
"cost_of_refactoring": 100, # $
"opportunity_cost": 0 # Doesn't block features
}
roi = (250 - 100 - 0) / 100  # = 1.5

# Conclusion: SCHEDULE (ROI 1.5: pays off, but not urgent)

Real Cases: When to Refactor and When to Skip
Case 1: "How we DIDN'T refactor payment gateway and lost $500k"
Situation:
E-commerce platform. Payment gateway written in 2018. Code worked, but:
- No tests (coverage 0%)
- Hardcode everywhere
- 1 developer (Bus Factor = 1)
Debt Metrics:
- Cyclomatic Complexity: 68 (critical)
- Code Churn: 180 changes per year
- Lead Time for payment features: 2 weeks (instead of 2 days)
Team decision: "Don't touch it. Works — don't fix."
What happened:
Black Friday 2023. Traffic grew 10x. Payment gateway crashed. Developer on vacation. Nobody knows how to fix.
Result:
- 8 hours downtime
- $500k lost sales
- 2 weeks for emergency fix
- 1 month for complete rewrite
Lesson: If Bus Factor = 1 on critical module — refactoring MANDATORY.
What should have been done:
- Add tests (1 week)
- Simplify code (1 week)
- Documentation (2 days)
Total: 2.5 weeks vs 1 month emergency + $500k losses.
Case 2: "How we wasted 3 months refactoring admin panel"
Situation:
SaaS startup. CTO decided to "clean up the code". Started with admin panel.
Debt Metrics:
- Cyclomatic Complexity: 12 (normal)
- Test Coverage: 75% (good)
- Code Churn: 15 (low)
Decision: Rewrite admin in React instead of Django admin.
What happened:
- 3 months of development
- Competitors released 2 new features
- Customers unhappy (no updates)
- Refactoring brought NO business value
Result:
- $400k opportunity cost
- 2 key clients left for competitors
- Team burnout
Lesson: Don't refactor what works and doesn't block business.
Red flags:
- "Let's rewrite in trendy framework"
- "This code is ugly" (but works)
- "I don't like the architecture" (no business justification)
Case 3: "How 2 days of refactoring saved $50k/year"
Situation:
API service. Each endpoint duplicates authentication logic.
Debt Metrics:
- Code Duplication: 32% (critical)
- 150 lines of identical code in 45 files
Problem:
Bug in authentication logic → need to fix in 45 places → 2 days work → risk missing a file.
Solution:
2 days refactoring:
- Extract authentication to middleware
- Cover middleware with tests
- Remove duplicated code
Result:
- Bug fix now: 15 minutes (instead of 2 days)
- Savings: 20 bug fixes/year × 2 days × $300/day = $12k/year
- Reduced risk of security errors
ROI:
Cost: $600 (2 days × $300)
Savings: $12k/year
ROI: 20x in first year
Lesson: Code duplication in critical areas — first refactoring candidate.
Tools for Monitoring Technical Debt
1. SonarQube (best all-in-one)
Features:
- Cyclomatic Complexity
- Code Duplication
- Test Coverage
- Security vulnerabilities
- Debt Ratio
Installation:
docker run -d --name sonarqube -p 9000:9000 sonarqube:latest

CI/CD Integration:
# .gitlab-ci.yml
sonarqube:
script:
- sonar-scanner -Dsonar.projectKey=my-project
only:
    - main

2. CodeClimate (for GitHub)
Features:
- Automatic pull request analysis
- Debt trends
- GitHub Actions integration
Price: $0 (open source), $199/month (private repos)
3. Radon (Python, free)
Installation:
pip install radon

Commands:
# Cyclomatic Complexity
radon cc -a src/
# Maintainability Index
radon mi src/
# Raw metrics (LOC, LLOC, SLOC)
radon raw src/

4. Git analytics (free)
Code Churn:
git log --since="6 months ago" --pretty=format: --name-only \
  | sort | uniq -c | sort -rg | head -10

Bus Factor:
git ls-files | xargs -n1 git blame --line-porcelain \
  | grep "^author " | sort | uniq -c | sort -rg

5. Codecov (Test Coverage)
Integration:
# .gitlab-ci.yml
test:
script:
- pytest --cov=src --cov-report=xml
    - bash <(curl -s https://codecov.io/bash)

Features:
- Coverage trends
- Pull request comments
- Badges for README
Checklist: To Refactor or Not?
✅ REFACTOR NOW if:
- Bus Factor = 1 on critical module
- Cyclomatic Complexity > 20
- Test Coverage < 50% on critical code
- Security vulnerabilities (CVSS > 7.0)
- Lead Time growing each quarter
- Code Duplication > 30%
- Production incidents related to this code
📅 SCHEDULE REFACTORING if:
- Debt Ratio > 10%
- Code Churn > 100 in 6 months
- Blocks new features (but not critical)
- Team growing (newcomers drowning in code)
- Onboarding takes > 1 month
❌ DON'T REFACTOR if:
- Code works and doesn't block business
- ROI < 1.0 (won't pay off)
- Opportunity cost high (important features exist)
- "Just ugly" (no business pain)
- Module planned for deletion in next 6 months
How to Implement Technical Debt Management in Team
1. Debt Sprint (20% time on refactoring)
Rule: Every 5th sprint is a debt sprint.
What we do:
- Fix top-10 debt items (by metrics)
- Add tests to critical modules
- Update documentation
Example:
Sprint 1-4: Features
Sprint 5: Debt Sprint
- Task 1: Cover payment gateway with tests (coverage 0% → 80%)
- Task 2: Refactor UserModel (complexity 45 → 12)
- Task 3: Add indexes to top-10 slow queries
2. Boy Scout Rule
Rule: Leave code cleaner than you found it.
How to apply:
# You're fixing a bug in process_payment()
# See that complexity is 25
# Spend +30 minutes on refactoring
# Complexity becomes 10
# Commit: "Fix payment bug + refactor for clarity"

Limitations:
- No more than 30% time on refactoring
- Don't change architecture (only local improvements)
3. Debt Dashboard
Metrics on dashboard:
- Debt Ratio (SonarQube)
- Test Coverage (Codecov)
- Lead Time (Jira/GitLab)
- Code Churn (Git)
Example:
┌─────────────────────────────────────────┐
│ Technical Debt Dashboard Q4 2025 │
├─────────────────────────────────────────┤
│ Debt Ratio: 12% ⚠️ (was 8%) │
│ Test Coverage: 78% ✅ (was 75%) │
│ Lead Time: 6 days 📈 (was 5) │
│ Critical Issues: 3 🚨 │
└─────────────────────────────────────────┘
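A dashboard like that doesn't need Grafana on day one. Here's a minimal sketch that renders it from a dict; the values are hard-coded placeholders, and in real life they would come from the SonarQube, Codecov, and git commands above.

```python
# Placeholder values; wire these up to your SonarQube / Codecov / Jira exports.
metrics = {
    "Debt Ratio":    {"value": 12, "previous": 8,  "bad_if_above": 10},
    "Test Coverage": {"value": 78, "previous": 75, "bad_if_below": 70},
    "Lead Time":     {"value": 6,  "previous": 5,  "bad_if_above": 7},
}

def render(name, m):
    bad = (m["value"] > m.get("bad_if_above", float("inf"))
           or m["value"] < m.get("bad_if_below", float("-inf")))
    return f"{name}: {m['value']} {'⚠️' if bad else '✅'} (was {m['previous']})"

for name, m in metrics.items():
    print(render(name, m))
# Debt Ratio: 12 ⚠️ (was 8)
# Test Coverage: 78 ✅ (was 75)
# Lead Time: 6 ✅ (was 5)
```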
Review: Quarterly with team. Discuss trends.
4. Debt Tag in Pull Requests
Rule: If PR increases debt — add [DEBT] tag.
Example:
PR #123: [DEBT] Quick fix for payment bug
Debt:
- Added hardcode for Stripe API key (should be in env)
- Skipped tests (time constraint)
- Complexity increased from 12 to 18
Action item:
- Created task DEBT-456 to refactor in next sprint
Benefits:
- Conscious debt (not accidental)
- Tracking (know where debt accumulates)
- Planning (DEBT tasks in backlog)
Final Advice: Technical Debt is Not Evil
Technical debt is a tool.
A bank loan isn't evil. A loan lets you buy an apartment now, not in 20 years. But if you take loans uncontrollably — bankruptcy.
Same with technical debt:
- ✅ Taking on debt for an MVP — correct
- ✅ Taking on debt for a deadline — sometimes justified
- ❌ Not paying the debt back for years — suicide
- ❌ Taking on debt "for speed" with no repayment plan — foolishness
Main rule:
"Every technical debt must have a repayment plan. Otherwise it's not debt, it's default."
Questions for yourself:
- Do we know where our debt is? (metrics)
- Do we know how much it costs? (Cost of Debt)
- Is there a repayment plan? (backlog tasks)
- When will we pay? (debt sprint)
If at least 2 answers are "no" — you have problems.
What to Do Right Now
Step 1: Measure Debt (30 minutes)
# 1. Code Churn
git log --since="6 months ago" --pretty=format: --name-only \
| sort | uniq -c | sort -rg | head -10
# 2. Cyclomatic Complexity (Python)
pip install radon
radon cc -a src/
# 3. Test Coverage
pytest --cov=src --cov-report=term-missing
# 4. Bus Factor
git ls-files | xargs -n1 git blame --line-porcelain \
  | grep "^author " | sort | uniq -c | sort -rg

Step 2: Prioritize (1 hour)
Create a table:
| Debt Item | Impact | Effort | ROI | Action |
|---|---|---|---|---|
| Payment gateway | High | Medium | 5.0 | REFACTOR NOW |
| Admin panel | Low | High | 0.2 | IGNORE |
| User model | Medium | Low | 2.5 | SCHEDULE |
Step 3: Schedule (10 minutes)
- Create tasks in Jira/GitLab for top-3 debt items
- Add 1-2 tasks to next sprint
- Schedule Debt Sprint in 4 weeks
Step 4: Automate (1 hour)
# .gitlab-ci.yml
debt-analysis:
stage: test
script:
- pip install radon
- radon cc -a src/ --total-average
- pytest --cov=src --cov-report=term
only:
    - main

Useful Links
- SonarQube Documentation — technical debt monitoring
- Martin Fowler: Technical Debt — classic article
- Managing Technical Debt (book) — Philippe Kruchten
- Radon Documentation — metrics for Python
- CodeClimate — automatic code analysis
- Codecov — test coverage monitoring
Share Your Experience!
I shared my metrics and framework. Now it's your turn:
- What was your most expensive technical debt?
- How do you make refactoring decisions?
- Were there cases when NOT refactoring was the right decision?
Write in the comments or on Telegram. Let's discuss metrics and share cases.
Need help with a technical debt audit? Email me via the contact page — I'll analyze your project and give recommendations on refactoring priorities. First consultation is free.
Liked the article? Share with a colleague who says "refactoring is a waste of time" or "we need to rewrite everything". You'll save their project (and career).
Subscribe for updates on Telegram — I write about architecture, metrics, and development management. No fluff, only practice.



