I have deleted more microservices than I have built. Startups incinerate their runway on AWS bills and Kubernetes consultants, chasing an architecture designed for companies with GDP-sized budgets. A 2024 DZone study confirms it: teams spend 35% more time debugging distributed systems compared to modular monoliths.
Start with a well-structured monolith. Extract services only when you have proven scale problems and team coordination issues. Most companies never need microservices.
I understand why microservices appeal to architects. Independent deployment. Technology diversity. Team autonomy. Scaling individual components. The theory is compelling, and at certain scales, it's correct.
After 30 years watching architectural trends come and go—mainframes to minis to micros, two-tier to n-tier—I recognize the pattern immediately. And now, the great microservices migration. Netflix convinced everyone that their architecture was the future.
Here's the uncomfortable truth: you're not Netflix. You don't have their scale. You don't have their engineering team. You probably don't need their architecture. This is one of those architecture decisions that can kill startups if you get it wrong early.
The Netflix Cargo Cult
Netflix made microservices famous. Their engineering blog became required reading. Conference talks drew standing-room crowds. Suddenly, every startup with 10 employees decided they needed to "scale like Netflix."
But Netflix didn't start with microservices. They evolved into them out of necessity. By the time they adopted the architecture, they had:
- Over 100 million subscribers
- Thousands of engineers
- A genuine need to deploy different components independently
- The resources to build and maintain the tooling
Most companies adopting microservices have none of these. I've consulted for dozens of startups that fell into this trap—solving problems they don't have with complexity they can't afford.
Here's the dirty secret: microservices are often a solution to a political problem, not a technical one. When you have 2,000 engineers, you can't fit them in a room. You can't even fit them in a repo. You break the app so you can break the organization. Netflix didn't adopt microservices because Java couldn't handle the traffic; they adopted them so they could hire 500 more engineers without them killing each other in merge conflicts. If you're a startup with 20 people, you're adopting the organizational overhead of a Fortune 500 company without the revenue to pay for it.
The Illusion of Decoupling
Here's what nobody tells you about microservices: the decoupling is often a mirage. You trade compile-time dependencies for runtime dependencies. You trade compile-time guarantees for runtime hope. And don't talk to me about gRPC or Protobufs: yes, you get a schema, but it doesn't save you. You still traded a CPU instruction for a network packet. You traded a stack pointer for a TCP handshake. Even with the tightest binary protocol, you're introducing the fallacies of distributed computing into a loop that used to take three clock cycles. That isn't optimization; it's physics denial. You trade stack traces for distributed tracing dashboards.
The promise is that teams can move independently. The reality? Your User Service depends on the Auth Service which depends on the Config Service which depends on the Database Service. Change anything upstream, and you're still coordinating deployments. You haven't removed the coupling—you've just made it invisible until 3 AM when the pager goes off.
Real decoupling requires discipline: clear API contracts, versioning strategies, and the willingness to let services fail gracefully. Most teams don't have this discipline with a monolith. Splitting into services won't magically create it.
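Here's what that discipline looks like in code, as a minimal sketch (the AuthClient and its /v1/ path are hypothetical): pin a version in the contract, fail fast with a short timeout, and degrade to cached data instead of cascading the failure.

```python
import json
import time
import urllib.error
import urllib.request

class AuthClient:
    """Client for a hypothetical auth service that degrades gracefully."""

    def __init__(self, base_url: str, timeout: float = 0.2, cache_ttl: float = 60.0):
        self.base_url = base_url
        self.timeout = timeout      # fail fast instead of inheriting upstream latency
        self.cache_ttl = cache_ttl
        self._cache: dict[str, tuple[float, dict]] = {}

    def get_user(self, user_id: str) -> dict:
        now = time.monotonic()
        try:
            # Version pinned in the path: breaking changes go to /v2/,
            # and old callers keep working.
            with urllib.request.urlopen(
                f"{self.base_url}/v1/users/{user_id}", timeout=self.timeout
            ) as resp:
                user = json.load(resp)
                self._cache[user_id] = (now, user)
                return user
        except (urllib.error.URLError, TimeoutError):
            cached = self._cache.get(user_id)
            if cached and now - cached[0] < self.cache_ttl:
                return cached[1]    # degrade: serve slightly stale data
            raise                   # no fallback left: surface the failure
```

Notice that none of this is free: the cache, the timeout, and the versioning policy are all code you never had to write in a monolith.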
What You Actually Get With Microservices
Let me be specific about what microservices give you:
Network Calls Instead of Function Calls
That function that used to take 1ms now takes 50-200ms over HTTP. You've introduced latency into operations that didn't need it. Async calls help, but you've added complexity to get back to where you started. This is the layer tax in action.
The Network Tax Formula
If (Latency of Network Call) > (Benefit of Independent Scaling), you are losing money.
- A function call: ~0.0001ms
- A network call (same datacenter): ~1-5ms
- A network call (cross-region): ~50-200ms
That's a 10,000x to 2,000,000x penalty per call. The benefit of independent scaling has to be enormous before it pays back a tax that large.
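You can measure the tax yourself. A rough micro-benchmark sketch, assuming you have any HTTP endpoint running locally (the URL below is a placeholder); expect order-of-magnitude numbers, not precision:

```python
import time
import urllib.request

def local_lookup(user_id: int) -> dict:
    # Stand-in for the in-process function the network call replaces.
    return {"id": user_id, "plan": "pro"}

def per_call_ms(fn, iterations: int) -> float:
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations * 1000

in_proc = per_call_ms(lambda: local_lookup(42), 1_000_000)

# Point this at any HTTP endpoint you run locally; the handler is
# irrelevant, the round trip is what's being measured.
URL = "http://localhost:8080/users/42"
over_http = per_call_ms(lambda: urllib.request.urlopen(URL, timeout=2).read(), 100)

print(f"in-process: {in_proc:.6f} ms/call")
print(f"over HTTP:  {over_http:.3f} ms/call")
print(f"penalty:    {over_http / in_proc:,.0f}x")
```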
Distributed Debugging Nightmares
A bug in a monolith: open the debugger, set a breakpoint, step through. A bug in microservices: correlate logs across 12 services, trace request IDs through Kafka. Wonder if the bug is in service A, service B, or the network.
I've spent days tracking down issues that would have been 10-minute fixes in a monolith. Distributed tracing helps, but it's another system to maintain.
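Distributed tracing starts with plumbing like this: a correlation ID minted at the edge, stamped on every log line, and forwarded on every outbound call. A minimal standard-library sketch (the header name and service are illustrative):

```python
import logging
import uuid
from contextvars import ContextVar

# One correlation ID per request, stamped on every log line and carried
# in the headers you forward downstream.
request_id: ContextVar[str] = ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()
        return True

logging.basicConfig(format="%(asctime)s [%(request_id)s] %(message)s", level=logging.INFO)
log = logging.getLogger("svc")
log.addFilter(RequestIdFilter())

def handle_request(incoming_headers: dict) -> dict:
    # Reuse the caller's ID if present; mint one at the edge otherwise.
    rid = incoming_headers.get("X-Request-Id", str(uuid.uuid4()))
    request_id.set(rid)
    log.info("handling request")
    # Every outbound call must carry the ID forward, or the trace breaks.
    return {"X-Request-Id": rid}

handle_request({})                           # edge: mints a new ID
handle_request({"X-Request-Id": "abc-123"})  # internal hop: reuses caller's ID
```

Every service in the chain has to get this right, forever. One service that drops the header breaks the trace for everything downstream.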
Deployment Complexity
Monolith deployment: build one artifact, deploy it. Microservices deployment: coordinate versions across dozens of services, manage dependency graphs. Hope nothing breaks during the rolling update.
You'll need Kubernetes or something similar. According to CNCF's 2024 Annual Survey, Kubernetes production use hit 80%, but that's across companies that can afford the operational overhead. You'll need a service mesh. Circuit breakers. Distributed configuration management. Each is another system to learn, operate, and debug.
Data Consistency Challenges
In a monolith, a transaction is a transaction. In microservices, you have eventual consistency, sagas, compensating transactions. You must design for partial failures. You must handle the case where service A succeeded but service B failed.
This isn't insurmountable, but it's complexity you're choosing to take on.
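For concreteness, here's what a compensating transaction actually looks like, as a toy saga sketch (the order functions are hypothetical stand-ins, and the stock reservation fails on purpose to show the unwind):

```python
# Each step pairs an action with its compensation. If a later step
# fails, the completed steps are undone in reverse order.

def charge_card(order):    print(f"charged {order}")
def refund_card(order):    print(f"refunded {order}")
def reserve_stock(order):  raise RuntimeError("out of stock")

def run_saga(steps):
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()   # best effort: compensations can fail too
        raise

order = "order-123"
try:
    run_saga([
        (lambda: charge_card(order), lambda: refund_card(order)),
        (lambda: reserve_stock(order), lambda: print(f"released {order}")),
    ])
except RuntimeError as e:
    print(f"saga aborted: {e}")
# Prints "charged", then "refunded", then "saga aborted: out of stock".
```

Every compensation is code you must write, test, and keep correct as the happy path evolves. In a monolith, your database's ROLLBACK did all of this for free.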
The Monolith Isn't the Enemy
The monolith got a bad reputation. "Monolith" became synonymous with "legacy" and "technical debt." But a well-structured monolith is a beautiful thing.
- Simple debugging - one process, one log file, standard profiling tools
- Easy refactoring - IDE support, static analysis, no API versioning
- Fast deployments - one artifact, one target, done
- Transactional integrity - your database handles it
- Lower latency - in-process calls, no serialization overhead
DHH has been vocal about this: Basecamp and the rest of 37signals run on monoliths. Shopify still runs a modular monolith at a scale most microservices shops will never see. As Martin Fowler recommends, start with a monolith and only break it up when you've proven the need.
When Microservices Actually Make Sense
I'm not saying microservices are never appropriate. They make sense when:
You have independent scaling requirements. One part of your system needs 100x the resources of another. They genuinely can't share infrastructure.
You have autonomous teams. Different teams own different services and deploy on their own schedules. This is organizational, not technical.
You have proven the need. You've hit actual limits of your monolith. Not theoretical limits—measured bottlenecks that can't be solved with better code.
You can afford the overhead. You have engineers to build and maintain the tooling. You have budget for additional infrastructure. You have time for added complexity.
When My Advice Is Wrong
The "start with a monolith" recommendation fails when:
- You're acqui-hiring teams with existing services. If you're integrating acquired codebases, forcing monolith consolidation destroys value. Keep the services, improve the interfaces.
- Regulatory boundaries mandate separation. PCI compliance, data residency, or HIPAA may require genuine isolation. Compliance trumps architectural preference.
- You have genuinely different scaling profiles. If your ML inference needs GPUs and your API needs cheap CPU, a monolith creates waste. Extract what's genuinely different.
- Your team already has microservices expertise. If you're staffed with engineers who've operated distributed systems at scale, the learning curve cost disappears. Use what your team knows.
The goal isn't monolith purity. It's refusing to take on complexity before you've proven you need it.
The "Majestic Monolith" Alternative
There's a middle path I recommend to every team I advise: the well-structured monolith. Clear module boundaries. Domain-driven design within a single codebase. The ability to extract services later when you need to.
This gives you:
- The simplicity of a monolith for development and debugging
- The organization of microservices through module boundaries
- The option to extract services when you've proven the need
You can always go from monolith to microservices. Going the other direction is much harder.
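Here's the shape of that middle path as a single-file sketch (the billing module is hypothetical): one narrow public surface, internals nobody else touches.

```python
# billing.py -- one module, one narrow public surface. Everything not in
# __all__ is internal; the rest of the monolith calls only these names.
import uuid
from dataclasses import dataclass

__all__ = ["Invoice", "create_invoice"]

@dataclass(frozen=True)
class Invoice:
    id: str
    customer_id: str
    amount_cents: int

_ledger: list[Invoice] = []   # internal state: no other module touches this

def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
    # The only way in. If billing ever becomes a service, this signature
    # becomes the API contract and callers don't change.
    invoice = Invoice(str(uuid.uuid4()), customer_id, amount_cents)
    _ledger.append(invoice)
    return invoice
```

Enforce the boundary in code review or with an import linter, because nothing in the language stops a colleague from reaching for `_ledger` directly.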
The Extraction Protocol
When you've proven you need to extract a service (not guessed—proven), follow this sequence:
- Measure the bottleneck. Which module is actually hitting limits? CPU? Memory? Independent deploy frequency? Get numbers, not feelings.
- Draw the boundary in code first. Create a clear interface within the monolith. All communication goes through that interface. No reaching into internals.
- Run in "shadow mode." Deploy the service but keep the monolith path active. Compare results. Find the bugs before they're in production.
- Extract data last. The service can call the monolith's database initially. Only split the data when you've proven the service works.
- Kill the old path. Once stable, remove the monolith implementation. Don't leave dead code.
Most teams do this backwards: they extract everything at once, split the database on day one, and spend six months debugging distributed transactions. Don't be most teams.
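The shadow-mode step is where extractions live or die, so here it is as a minimal sketch (the pricing functions are hypothetical stand-ins): serve the monolith's answer, call the new service on the side, and log every disagreement.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def get_price_monolith(sku: str) -> int:
    return 1999 if sku == "WIDGET" else 999    # existing, trusted path

def get_price_service(sku: str) -> int:
    # Stand-in for a call to the newly extracted pricing service.
    return 1999 if sku == "WIDGET" else 995    # subtly wrong, on purpose

def get_price(sku: str) -> int:
    """Serve the monolith's answer; shadow-call the service and compare."""
    old = get_price_monolith(sku)
    try:
        new = get_price_service(sku)
        if new != old:
            log.warning("shadow mismatch sku=%s old=%s new=%s", sku, old, new)
    except Exception:
        log.exception("shadow call failed for sku=%s", sku)   # never hurt the user
    return old   # the monolith remains the source of truth

print(get_price("GADGET"))   # logs a mismatch, still returns 999
```

When the mismatch log stays quiet for a few weeks of real traffic, you've earned the cutover.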
Signs You've Made a Mistake
How do you know if microservices were wrong for your organization?
- Most of your "bugs" are integration issues: Service A changed, service B broke
- Developers can't run the whole system locally: Too many services, too much setup
- You spend more time on infrastructure than features: Kubernetes configs, service mesh tuning, deployment pipelines
- Nobody understands the whole system anymore: Each team knows their services, nobody sees the big picture
- Simple changes require coordinated deployments: What should be one PR is five PRs across five repos
If this sounds familiar, you might have adopted microservices before you needed them.
The Distributed Monolith Litmus Test
Want to know if you've built a distributed monolith? Don't look at your code. Look at your database.
If Service A and Service B both reach into the same PostgreSQL instance (or worse, the same tables), you have failed. You've accepted the latency of distributed systems while retaining the tight coupling of a monolith. You took function calls that used to execute in 0.0001ms and wrapped them in latency, serialization, and failure probability. That isn't architecture. It's vandalism.
Try the "Blast Radius" test: deploy a breaking change to your User Service. How many other services start throwing 500 errors? In a true microservices architecture, the system degrades gracefully. In a distributed monolith, the lights go out. If you have to coordinate a deployment across three teams to avoid an outage, you don't have microservices. You have a monolith that's been blown apart by dynamite, held together with HTTP requests.
Here's the Conway's Law diagnostic: show me your org chart. If you have 30 engineers and 15 "microservices," you're violating physics. You need one team per service, minimum. If you don't have the headcount, you can't sustain the architecture.
Conway's Law Calculator
The number of services you can sustain is a function of headcount, not ambition. Architecture doesn't scale on wishful thinking.
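The arithmetic is simple enough to sketch, assuming the "one dedicated team per service" rule above with a team of roughly six (tune team_size to your org):

```python
def sustainable_services(engineers: int, team_size: int = 6) -> int:
    """Rough ceiling on services you can own, not just deploy.

    Heuristic, not physics: assumes one dedicated team of ~6 engineers
    per service, per the Conway's Law diagnostic above.
    """
    return max(1, engineers // team_size)

engineers, services = 30, 15
ceiling = sustainable_services(engineers)
print(f"{engineers} engineers can sustain ~{ceiling} services; you run {services}.")
if services > ceiling:
    print("You're violating physics. Merge services or hire more teams.")
```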
What I'd Actually Recommend
If you're starting a new project:
Start with a monolith. Build it well. Use good module boundaries. Keep your dependencies clean. Write tests. You can always extract services later.
Measure before you optimize. Don't adopt microservices because you might need scale. Adopt them when you've proven you need scale.
Consider your team size. If you have 5 engineers, microservices are probably overkill. If you have 500, they might make sense.
Be honest about your motivations. Are you adopting microservices because you need them? Or because they look good on a resume? Because you're bored with "boring" technology?
Career Looting
Let's call it what it is. An architect comes in, mandates a complex mesh of 40 services for a CRUD app, puts "Cloud Native Expert" on their LinkedIn, and leaves for a higher salary at Big Tech before the system collapses under its own operational weight. The business is left holding the bag: unmaintainable YAML and a cloud bill that scales linearly with frustration. You're paying for their education with your equity.
A monolith doesn't carry the same cachet. "I maintained a Rails app" doesn't open doors like "I architected a microservices platform on Kubernetes." The incentives are broken: engineers benefit from complexity the business pays for.
The pattern shows up in every architecture review. Proposed solutions are architecturally interesting but operationally burdensome. The engineer learns new technology. The company maintains it for five years after that engineer leaves. This isn't an accident. It's a feature of how our industry rewards complexity over outcomes.
Organizations need to recognize this dynamic. When someone proposes splitting your 50,000-line monolith into 30 services, ask them: "Will you be here in three years to maintain this?" Boring technology that works is often the right choice, even if it doesn't make for impressive conference talks.
Should You Use Microservices? Quick Assessment
| If you have... | Choose... | Why |
|---|---|---|
| <20 engineers, single product | Modular monolith | Communication overhead exceeds benefit |
| 20-100 engineers, multiple product teams | Evaluate carefully | Depends on team autonomy needs |
| 100+ engineers, distinct bounded contexts | Microservices likely appropriate | Organizational scaling requires it |
| Components with 10x+ different scaling needs | Extract those specific services | Targeted extraction, not full rewrite |
| Regulatory requirements for isolation | Service boundaries at compliance lines | External constraint, not preference |
The Bottom Line
I've been in this industry long enough to watch trends cycle. Two-tier was going to change everything. Then three-tier. Then n-tier. Then SOA. Then microservices. Each time, we eventually found the right balance, but only after the hype burned off.
We're starting to see the correction on microservices. "Modular monolith" is becoming a thing. Engineers who spent years building microservices now write blog posts about moving back to monoliths. The pendulum is swinging.
The pendulum always swings back. The right answer is usually somewhere in the middle. For when microservices do make sense, see When Microservices Make Sense. For a complete decision framework, see the Microservices Decision Guide.
"You can always go from monolith to microservices. Going the other direction is much harder."
Sources
- DZone 2024 Study — 35% more debugging time in distributed systems vs modular monoliths
- CNCF Annual Survey 2024 — Kubernetes adoption data and operational requirements
- Martin Fowler: Monolith First — The case for starting with a monolith
- DHH: The Majestic Monolith — Basecamp's defense of monolithic architecture
- AWS: Monolithic vs Microservices — Official documentation on when each pattern makes sense