SQLite: Software Done Right

One trillion devices. Zero dependencies. Twenty-five years of stability. This is what software can be.


One trillion active databases. 92 million lines of test code. Zero licensing fees. SQLite is the most deployed software in history, running on every smartphone, browser, and major operating system. After three decades watching technology disappoint, I count on one hand the projects that delivered on their promises. SQLite is one of them.

TL;DR

SQLite proves software can be simple, stable, and generous. One trillion devices trust it because the testing is exhaustive, the design is disciplined, and the philosophy is public domain.

I've used SQLite in production systems since the mid-2000s: mobile apps, embedded devices, desktop software. Most software disappoints eventually. The marketing overpromises, the complexity accumulates, the maintenance burden grows. SQLite does the opposite. It gets better. It gets faster. It stays simple. And it asks for nothing in return.

This isn't skepticism or contrarianism. Having been burned by countless technologies that promised simplicity and delivered complexity, this is genuine appreciation. Some technology actually works.

The Numbers Are Absurd

According to SQLite's own documentation, it's the most widely deployed database in the world, by a margin that makes the comparison meaningless. Every smartphone runs it. Every major browser embeds it. It ships with macOS, Windows, and most Linux distributions. The conservative estimate is over one trillion active databases.

Airbnb, Dropbox, and parts of Netflix rely on it in production. The list of well-known users reads like a who's who of technology: Adobe, Apple, Facebook, Google, Microsoft, Mozilla.

But raw adoption isn't what makes SQLite special. MySQL is popular. MongoDB is popular. What distinguishes SQLite is the quality of that adoption: engineers who could choose anything keep choosing it, decade after decade, for applications where failure isn't an option. I've seen this firsthand: when reliability matters, experienced engineers reach for SQLite.

The Philosophy That Made It Work

D. Richard Hipp created SQLite in 2000 while working on software for a Navy destroyer. The ship's existing database, Informix, worked fine when it was running. The problem was when it stopped. Imagine being in a combat situation and seeing: "Cannot connect to database server."

That frustration shaped everything. As Hipp put it in interviews about SQLite's origins: the goal was to eliminate the server entirely. No network. No configuration. No administrator. Just a library that reads and writes files.

Three design principles have stayed constant for 25 years:

  • Serverless. SQLite embeds directly into your application. There's no separate process to manage, no socket connections, no authentication handshakes. The database is a file.
  • Zero configuration. No setup. No tuning. No DBA. It just works.
  • Self-contained. A single C file. No dependencies. As Hipp has said: "I don't like dependencies. I really like to statically link things."
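All three principles are visible in how little it takes to use SQLite from, say, Python's standard library: no server process, no connection string, no setup beyond a filename. A minimal sketch (the `app.db` filename and `notes` table are illustrative):

```python
import sqlite3

# No server, no configuration: the database is just a file.
# "app.db" is an arbitrary name; ":memory:" would skip the file entirely.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
conn.commit()

# Queries are plain function calls into the linked library.
rows = conn.execute("SELECT body FROM notes").fetchall()
print(rows[0][0])
conn.close()
```

That's the whole deployment story: `import`, `connect`, query. Nothing to install, start, or administer.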

This philosophy runs counter to almost everything in modern software development, where the trend is toward more moving parts, more services, more abstraction layers. In my experience, SQLite went the other direction and won. Decisively.

The Testing That Proves It

SQLite has 156,000 lines of source code. It has 92 million lines of test code. That's not a typo. The test suite is 590 times larger than the codebase.

Hipp adopted aviation-grade testing standards (specifically DO-178B, used for flight-critical software). The result is 100% modified condition/decision coverage (MC/DC), meaning every branch is taken in both directions and every condition is shown to independently affect its decision's outcome. Over 2 million tests run before every release.

This level of rigor is why SQLite shows up in airplanes, medical devices, and weapons systems. It's why companies trust it for data they cannot afford to lose. The testing isn't marketing. It's engineering.

Public Domain: The Ultimate Simplicity

SQLite isn't open source. It's public domain. There's no license to comply with. No attribution requirements. No copyleft concerns. You can use it, modify it, sell it, embed it: anything. The code belongs to humanity.

Hipp has explained his reasoning simply: "I wrote SQLite because it was useful to me and I released it into the public domain with the hope that it would be useful to others as well."

This decision eliminated an entire category of friction. Companies that can't use GPL software can use SQLite. Projects that need to keep their modifications proprietary can use SQLite. The legal department has nothing to review. This isn't a feature that shows up on benchmarks, but it's part of why adoption spread so completely.

The irony: by giving up all control, Hipp ensured SQLite's influence would be maximized. Every dependency is debt, except when the dependency is so simple and so stable that it subtracts complexity instead of adding it.

The Loopback Latency Gap

Here's the physics that makes SQLite faster than client-server databases for most use cases:

PostgreSQL is a server. SQLite is a library. This difference isn't architectural preference. It's physics.

PostgreSQL Query Path: App → Serialize → Network (Localhost) → Deserialize → Execute → Serialize → Network → Deserialize → App. Cost: ~0.5ms minimum.

SQLite Query Path: App → Function Call → Execute. Cost: ~0.005ms.

SQLite is 100x faster per query because it removes the network entirely. Even localhost networking has overhead: TCP handshakes, serialization, context switches. SQLite bypasses all of it.

Try It Yourself: The Latency Test

Run this on your machine to see the difference (it assumes psycopg2 is installed and a local PostgreSQL instance with a database named test is running):

import sqlite3, time, psycopg2

# SQLite: direct function call
conn_sqlite = sqlite3.connect(':memory:')
conn_sqlite.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)')
conn_sqlite.execute("INSERT INTO t VALUES (1, 'test')")

start = time.perf_counter()
for _ in range(1000):
    conn_sqlite.execute('SELECT * FROM t WHERE id = 1').fetchone()
sqlite_time = time.perf_counter() - start

# PostgreSQL: localhost network round-trip
conn_pg = psycopg2.connect(host='localhost', dbname='test')
cur = conn_pg.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS t (id SERIAL PRIMARY KEY, val TEXT)')
cur.execute('INSERT INTO t (val) VALUES (%s) ON CONFLICT DO NOTHING', ('test',))
conn_pg.commit()

start = time.perf_counter()
for _ in range(1000):
    cur.execute('SELECT * FROM t WHERE id = 1')
    cur.fetchone()
pg_time = time.perf_counter() - start

print(f"SQLite: {sqlite_time*1000:.1f}ms for 1000 queries")
print(f"PostgreSQL: {pg_time*1000:.1f}ms for 1000 queries")
print(f"SQLite is {pg_time/sqlite_time:.0f}x faster per query")

Typical result: SQLite ~5ms, PostgreSQL ~500ms for 1000 simple queries. The gap is physics.

The "N+1 problem" that plagues ORMs doesn't exist in SQLite, because N+1 function calls are essentially free. A query loop that would tank a PostgreSQL application barely registers with SQLite: there is no network round trip to pay per query.
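You can see this directly by running the classic N+1 anti-pattern against an in-memory SQLite database. This sketch (the `authors`/`books` schema is invented for illustration) issues one query for a list and then one query per row, the exact shape ORMs are warned about:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT)")
conn.executemany("INSERT INTO authors VALUES (?, ?)",
                 [(i, f"author{i}") for i in range(1000)])
conn.executemany("INSERT INTO books VALUES (?, ?, ?)",
                 [(i, i, f"book{i}") for i in range(1000)])

# The N+1 anti-pattern: one query for the list, then one query per row.
start = time.perf_counter()
books = conn.execute("SELECT id, author_id, title FROM books").fetchall()
for _, author_id, _ in books:
    conn.execute("SELECT name FROM authors WHERE id = ?", (author_id,)).fetchone()
elapsed = time.perf_counter() - start
print(f"1 + {len(books)} queries in {elapsed*1000:.1f}ms")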

The Renaissance Nobody Expected

For years, the conventional wisdom was that SQLite was "just for mobile" or "just for testing." Serious applications needed serious databases: PostgreSQL, MySQL, something with a server.

That's changing. Tools like Turso, Litestream, and Cloudflare D1 have made SQLite viable for distributed, edge, and local-first applications. The SQLite-at-the-edge pattern is now a legitimate architecture choice.

What drove this shift:

  • Latency requirements tightened. Sub-5ms response times are hard when your database is across the ocean. SQLite on the edge delivers single-digit milliseconds.
  • Serverless exposed database pain. Cold starts and connection pooling make traditional databases awkward in serverless environments. SQLite is just a file, with no connections to manage.
  • Offline-first became important. Mobile apps, IoT devices, and edge computing all need databases that work without network connectivity.

Turso's embedded replicas let you sync a local SQLite file with a remote database, getting zero-latency reads while maintaining durability. It's the best of both worlds, and it only works because SQLite's core is so solid that you can build on it with confidence.
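The replica idea itself can be sketched with nothing but the standard library. The following is NOT Turso's sync protocol, just an illustration of the pattern using `sqlite3.Connection.backup` (the `primary.db`/`replica.db` filenames are hypothetical): writes go to one database, a sync step copies it into a local file, and all reads become local function calls.

```python
import sqlite3

# Writes land in the "primary" database.
primary = sqlite3.connect("primary.db")
primary.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
primary.execute("INSERT OR REPLACE INTO kv VALUES ('greeting', 'hi')")
primary.commit()

# "Sync": copy the primary into a local replica file.
replica = sqlite3.connect("replica.db")
primary.backup(replica)

# All reads are now zero-latency local function calls against the replica.
row = replica.execute("SELECT v FROM kv WHERE k = 'greeting'").fetchone()
print(row[0])
```

Turso's actual implementation streams changes rather than copying whole files, but the payoff is the same: reads never leave the machine.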

What SQLite Teaches Us

SQLite succeeds for reasons that run counter to most of what the industry celebrates:

Boring is underrated. SQLite doesn't have impressive benchmarks. It's not distributed. It doesn't scale horizontally. It just solves real problems reliably, year after year. As I've argued about PostgreSQL and database architecture generally, the boring choice is often the right choice.

Constraints enable innovation. By refusing to add a server, SQLite forced itself to solve problems in creative ways. Limits drive design.

Testing is the feature. 92 million lines of tests is an investment most projects would never make. But that investment is why SQLite can be trusted where trust matters.

Simplicity compounds. Every feature SQLite didn't add is maintenance it doesn't pay. Every dependency it avoided is an upgrade it doesn't need. After 25 years, this discipline shows.

When SQLite Isn't The Answer

I'm not saying SQLite is always right. It's not:

  • High-write concurrency. SQLite uses file-level locking. If you have many processes writing simultaneously, you'll hit contention. (Turso's libSQL fork is addressing this with MVCC.)
  • Multi-server access. If multiple servers need to hit the same database file, SQLite's not designed for that. Use PostgreSQL.
  • Massive datasets. SQLite handles gigabytes fine. Terabytes get awkward. Data warehouses need different tools.
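For write contention on a single host (as opposed to multi-server access), SQLite's built-in WAL journal mode often softens the problem: readers proceed concurrently with one writer, and a busy timeout makes writers queue briefly instead of failing immediately. A minimal sketch (the `shared.db` filename and `events` table are illustrative):

```python
import sqlite3

# timeout= makes this connection wait up to 5s on a locked database.
conn = sqlite3.connect("shared.db", timeout=5.0)

# WAL lets readers run concurrently with a single writer;
# busy_timeout makes contending writers wait rather than error out.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA busy_timeout=5000")

conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO events (msg) VALUES (?)", ("hello",))
conn.commit()

mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # "wal"
```

WAL doesn't give you multiple simultaneous writers, but for many single-server web apps it moves the contention ceiling far enough that it never matters.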

But for embedded applications, mobile apps, desktop software, edge computing, IoT, prototyping, testing, and single-server web apps? I've shipped products using SQLite in all these contexts. It's simpler than whatever you're using, and likely just as capable.

The Bottom Line

In an industry obsessed with complexity, scale, and the next big thing, SQLite is a reminder that software can just work. It can be simple. It can be stable. It can serve a trillion devices without demanding attention.

Richard Hipp built something useful and gave it away. Twenty-five years later, it runs on more devices than any other database in history. The testing is exhaustive. The design is disciplined. The philosophy is generous.

If you're building something and wondering whether to add another dependency, another service, another layer: consider whether a single file might be enough. The answer might surprise you.

