When investors ask me to evaluate a startup's technology before they write a check, they want to know one thing: is this real? Here's the checklist I use to find out.
Before acquiring or investing: review technical debt, check bus factor, verify deployment practices, assess security posture. Code audits reveal what pitch decks hide.
Technical due diligence isn't about understanding every line of code. It's about pattern recognition - spotting the signals that separate companies with solid foundations from those running on duct tape and optimism. After 30 years building and evaluating startups across stages and sectors, I've seen the same patterns emerge again and again.
The Five-Minute Smell Test
Before diving deep, I look for immediate red flags that suggest deeper problems:
Can they explain their architecture in plain English? If the CTO can't clearly explain how the system works to a non-technical investor, that's a warning sign. Either they don't fully understand it themselves, or there's something they're hiding behind jargon.
How old is their oldest code? A three-year-old company with no code older than six months has rewritten everything at least once. That's not always bad, but it warrants questions. Why the rewrites? What was wrong with the original approach?
What's their deployment frequency? Teams that deploy daily or weekly have working CI/CD and reasonable test coverage. Teams that deploy monthly or quarterly are either overly cautious or terrified of their own codebase. (This and the code-age question can often be answered straight from git; see the sketch after this list.)
How do they handle incidents? Ask about their last outage. Good teams have clear answers: what happened, how they found it, how they fixed it, what they changed to prevent recurrence. Evasive answers suggest either no process or incidents they'd rather not discuss.
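For the code-age and deploy-frequency questions, the repository itself can answer before anyone opens a slide deck. Here's a minimal sketch in Python, assuming a local clone and that the team tags its releases - and if they don't tag releases, that's a finding in itself:

```python
import subprocess

def git(args, repo="."):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", "-C", repo] + args,
                          capture_output=True, text=True, check=True).stdout

# Oldest commit: a three-year-old company whose history starts six
# months ago has rewritten (or re-imported) everything at least once.
first_commit = git(["log", "--reverse", "--format=%ad", "--date=short"])
print("History starts:", first_commit.splitlines()[0])

# Deploy cadence, assuming releases are tagged.
tags = git(["tag", "--sort=creatordate",
            "--format=%(creatordate:short) %(refname:short)"]).splitlines()
print(f"{len(tags)} release tags; the last five:")
print("\n".join(tags[-5:]))
```

Ten lines of output here routinely generates an hour of useful interview questions.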
Architecture Deep Dive
The Monolith vs Microservices Question
Whenever I do due diligence, I ask why they chose their architecture. The right answer depends entirely on context:
Early-stage monolith: Usually correct. Fast iteration, simple deployment, easy debugging. If a seed-stage startup has microservices, I want to know why they needed that complexity.
Growth-stage services: Makes sense when specific components need independent scaling or deployment. The question is whether the boundaries are clean or arbitrary.
Microservices everywhere: Often a red flag at any stage. It usually means they copied Netflix's architecture without Netflix's problems or Netflix's engineering team. The overhead of distributed systems rarely pays off below a certain scale, as I detailed in why microservices are a mistake for most companies, and I've repeatedly seen this kind of architecture decision kill startups before they reach product-market fit. CohnReznick's analysis likewise treats technology stack assessment as the cornerstone of rigorous due diligence.
Database Choices
The database tells you a lot about technical decision-making:
Postgres for everything: Usually a good sign. Boring technology that works. Shows restraint.
MongoDB "because it's flexible": Often means they didn't want to think about schema design upfront. Ask how they handle data consistency and what happens when requirements change.
Multiple specialized databases: Can be appropriate (Redis for caching, Elasticsearch for search) or a symptom of resume-driven development. The question is whether each database solves a real problem.
Custom database or data layer: Unless they're a database company, this is almost always a mistake. Ask why existing solutions didn't work.
The Third-Party Dependency Audit
Every external dependency is a risk. I look at:
How many dependencies? A Node.js project with 1,500 npm packages is carrying a lot of hidden risk. Each dependency can break, have security vulnerabilities, or be abandoned. (A quick way to get this count is sketched after this list.)
How critical are they? Using Stripe for payments is sensible - that's their core competency. Using an obscure library for core business logic is dangerous.
What's the fallback plan? If their critical vendor disappears or raises prices 10x, what happens? Good teams have thought about this.
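For the dependency headcount, the lockfile is the ground truth. A minimal sketch, assuming an npm project with a lockfileVersion 2+ package-lock.json (yarn and pnpm lockfiles need different parsing, and `npm audit` remains the right tool for the vulnerability side):

```python
import json

# Count direct vs. installed (mostly transitive) npm packages.
with open("package-lock.json") as f:
    lock = json.load(f)

root = lock["packages"][""]  # lockfileVersion 2+ keys packages by path
direct = len(root.get("dependencies", {})) + len(root.get("devDependencies", {}))
installed = sum(1 for path in lock["packages"] if path)  # "" is the root itself

print(f"{direct} direct dependencies pull in {installed} installed packages")
print(f"~{installed / max(direct, 1):.0f}x transitive surface per direct choice")
```

The ratio is the interesting number: it tells you how much code the team is trusting that nobody on the team has ever read.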
Code Quality Indicators
I don't read every line of code, but I look for signals:
Test Coverage
No tests: Common in early startups. Not automatically disqualifying, but it means the codebase is held together by manual testing and hope.
High coverage numbers: Can be misleading. 90% coverage of trivial code is less valuable than 50% coverage of critical paths. I ask what's tested, not just how much (see the sketch after this list).
Integration tests that actually run: This matters more than unit test counts. Can they spin up the system and verify it works end-to-end?
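One way to ask "what's tested" with data is to break coverage out by path instead of reading the headline number. A sketch, assuming a Cobertura-style coverage.xml (what coverage.py's `coverage xml` emits) and with hypothetical app/billing and app/auth paths standing in for whatever actually carries the business:

```python
import xml.etree.ElementTree as ET

# Hypothetical critical paths -- substitute whatever carries the business.
CRITICAL = ("app/billing", "app/auth")

# Cobertura-style report, e.g. from coverage.py's `coverage xml`.
tree = ET.parse("coverage.xml")
for cls in tree.iter("class"):
    filename = cls.get("filename", "")
    if filename.startswith(CRITICAL):
        rate = float(cls.get("line-rate", "0"))
        flag = "  <-- thin coverage on a critical path" if rate < 0.5 else ""
        print(f"{filename}: {rate:.0%}{flag}")
```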
Documentation
No documentation: Means only current engineers can work on the system. Knowledge is trapped in heads.
Outdated documentation: Often worse than none. It actively misleads.
Architecture decision records: A great sign. Shows the team thinks about decisions and records why they made them.
Code Age and Churn
Git history reveals a lot (a churn sketch follows this list):
Files that change constantly: Either actively developed or fundamentally broken and repeatedly patched.
Files nobody touches: Either stable and working or scary and avoided.
Recent rewrites of old code: Worth asking about. Sometimes necessary, sometimes a sign of thrashing.
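Churn is cheap to measure. A minimal sketch that ranks the most-touched files over the past year; every file at the top of this list deserves a "why?" in the interview:

```python
import subprocess
from collections import Counter

# Rank the most-touched files over the past year; the top of this
# list is either the active core or a chronic trouble spot.
log = subprocess.run(
    ["git", "log", "--since=1 year ago", "--format=", "--name-only"],
    capture_output=True, text=True, check=True).stdout

churn = Counter(line for line in log.splitlines() if line)
for path, touches in churn.most_common(10):
    print(f"{touches:4d}  {path}")
```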
Infrastructure and Operations
The "What If" Questions
These reveal operational maturity:
- What happens if your primary database goes down?
- How long to recover from a complete data loss?
- Can you roll back a bad deployment? How long does it take?
- What's your process when an engineer leaves?
- How do you handle a security vulnerability in a dependency?
Good teams have clear, practiced answers. Uncertain teams reveal that they've never thought about these scenarios. According to Gartner research, using a structured technology due diligence checklist increases the likelihood of identifying critical issues by over 60%.
Cloud Spend
Cloud bills tell stories:
Surprisingly low: Either very efficient or not actually running much in production.
Surprisingly high: Either scaling well or wasting money on over-provisioned resources.
Growing faster than revenue: A unit economics problem that gets worse with success.
I ask what their cost per user or per transaction is. Good teams know. Struggling teams have never calculated it.
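The math itself is trivial; the finding is whether anyone has done it. A sketch with hypothetical placeholder numbers:

```python
# Hypothetical placeholder numbers -- the point is that the team
# should be able to fill these in without looking anything up.
monthly_cloud_bill = 42_000       # USD
monthly_active_users = 120_000
monthly_revenue = 95_000          # USD

cost_per_user = monthly_cloud_bill / monthly_active_users
infra_share = monthly_cloud_bill / monthly_revenue

print(f"Infra cost per MAU: ${cost_per_user:.2f}")        # ~$0.35
print(f"Infra as a share of revenue: {infra_share:.0%}")  # ~44%
# If that share grows month over month, scaling is making the
# business worse, not better.
```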
Team and Process
Bus Factor
How many people need to get hit by a bus before the company can't function?
Bus factor of 1: Extremely common in early startups. The solo technical founder who built everything. High risk if that person leaves or burns out.
Knowledge silos: "Only Sarah knows the billing system" is a variant of bus factor 1, distributed across multiple people.
Documented, shared knowledge: The goal, rarely achieved in startups but worth asking about. (A rough way to estimate bus factor from git history is sketched below.)
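Commit history gives a crude but fast first approximation: count, per file, how concentrated the commits are in a single author. A sketch - `git blame` line ownership is more accurate but far slower:

```python
import subprocess
from collections import Counter, defaultdict

# Approximate bus factor from commit history: files where one author
# accounts for >90% of commits are the knowledge silos to ask about.
# The "@" prefix is just a marker to tell author lines from file paths.
log = subprocess.run(
    ["git", "log", "--format=@%an", "--name-only"],
    capture_output=True, text=True, check=True).stdout

owners = defaultdict(Counter)
author = None
for line in log.splitlines():
    if line.startswith("@"):
        author = line[1:]
    elif line:
        owners[line][author] += 1

solo = [path for path, counts in owners.items()
        if counts.most_common(1)[0][1] / sum(counts.values()) > 0.9]
print(f"{len(solo)} of {len(owners)} files are >90% single-author")
```

If most of the repository is single-author, "only Sarah knows the billing system" stops being an anecdote and becomes a measurable risk.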
Hiring and Onboarding
How long until a new engineer is productive? Two weeks is good. Two months suggests a codebase that's hard to understand. "We've never onboarded anyone" means they don't know.
Technical Debt Awareness
Every startup has technical debt. The question is whether they know where it is:
Denial: "Our codebase is clean" - either delusional or lying.
Awareness: "Here are the three areas that will bite us at scale" - honest and prepared.
Paralysis: "Everything is technical debt" - may have lost control of the codebase.
Security Basics
I don't do penetration testing, but I check for obvious issues (a quick secrets scan is sketched after this list):
- Are secrets in environment variables or (worse) committed to the repo?
- Is there any authentication/authorization logic, or is everything open?
- Are they running known-vulnerable versions of major dependencies?
- Has anyone ever done a security review?
- Do they have a way to rotate credentials if compromised?
The goal isn't a security audit - it's assessing whether security is on their radar at all.
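A first-pass scan for committed secrets takes ten minutes. A minimal sketch with a few well-known patterns; a real engagement should use a dedicated scanner like gitleaks or trufflehog, and note this checks the working tree only, not history:

```python
import re
import subprocess

# Quick smell test for committed secrets. Working tree only -- secrets
# deleted but still in history need gitleaks/trufflehog or `git log -p`.
PATTERNS = {
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "private key": r"-----BEGIN (RSA |EC )?PRIVATE KEY-----",
    "hardcoded credential": r"(?i)(api_key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}",
}

tracked = subprocess.run(["git", "ls-files"], capture_output=True,
                         text=True, check=True).stdout.splitlines()
for path in tracked:
    try:
        with open(path, errors="ignore") as f:
            text = f.read()
    except OSError:
        continue
    for label, pattern in PATTERNS.items():
        if re.search(pattern, text):
            print(f"{path}: possible {label}")
```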
Red Flags That Kill Deals
Some findings are serious enough to recommend against investment:
- Fundamental scaling limitations: Architecture that can't grow without a complete rewrite, and growth is the business plan.
- Security disasters waiting to happen: Plaintext passwords, public S3 buckets with customer data, no access controls.
- Key person dependency with no mitigation: One person who won't document anything and threatens to leave.
- Misrepresentation: Claims about technology that don't match reality. If they're lying about tech, what else are they lying about?
- Vendor lock-in with unfavorable terms: Built entirely on a platform that could change pricing or terms at any time.
Any one of these is grounds for a hard pass or significant restructuring of terms.
Yellow Flags That Need Discussion
Some issues are common and manageable:
Technical debt: Universal. The question is whether it's under control and the team knows where the bodies are buried. Unmanaged, technical debt becomes rot that compounds until it's unfixable.
Missing tests: Common in early stages. Fixable with time and discipline.
Junior team: Not automatically bad, but requires appropriate expectations about velocity and mentorship needs.
Unusual technology choices: Sometimes innovative, sometimes problematic. Warrants deeper questions about why.
The Final Report
After evaluation, I provide investors with:
- Executive summary: Can this technology support the business plan? Yes, no, or with caveats.
- Risk assessment: What could go wrong technically, and how likely is each scenario?
- Team evaluation: Is this team capable of building what they're proposing?
- Recommendations: If investing, what should be addressed in the first 90 days?
The goal isn't to find perfect companies - they don't exist. It's to understand the risks clearly so investors can price them appropriately and founders can address them proactively.
The Bottom Line
Technical due diligence isn't a test to pass - it's a conversation about risk. The best outcomes happen when both sides are honest about what they're looking at.
For founders preparing for due diligence: know your weaknesses, have your answers ready, clean up the obvious stuff, and be honest about trade-offs. Don't hide technical debt - acknowledge it and explain your plan. Evaluators will find it anyway; honesty builds trust.
"Technical due diligence isn't a test to pass - it's a conversation about risk."
Need Technical Due Diligence?
Evaluating a startup investment or acquisition? Get an honest technical assessment from someone who's done dozens of them.