C Was the Last Good Language

Modern languages prioritize developer experience over runtime efficiency.

I wrote my first C program in 1978 on a TRS-80. Nearly 50 years later, it remains the only language where I feel like I'm getting everything the machine can give. Every language since C has made trade-offs I disagree with - prioritizing developer convenience over runtime performance, safety over control, abstraction over understanding.

TL;DR

Accept that C won the systems programming war. Learn it, understand it, appreciate it. New languages are for new problems, not C's problems.

Here's the truth: modern software is slow not because computers are slow - computers are unimaginably fast - but because we've chosen convenience over performance at every level. C represents values about programming that we've lost, and that loss has costs we don't acknowledge.

What C Got Right

C is a thin layer over the machine. When you write C, you know what the computer is actually doing. There's no hidden allocation, no garbage collector in the background, no virtual dispatch, no runtime reflection.

No hidden costs. In C, every operation has a predictable cost. A function call is a function call. A memory access is a memory access. The correspondence between what you write and what executes is direct.

// What you write is what runs. No surprises.
#include <stdint.h>

typedef struct {
    float x, y, z;    // 12 bytes, contiguous
    uint32_t flags;   // 4 bytes; 16 bytes total, naturally aligned
} Point;

Point points[1024];   // 16KB, cache-friendly, predictable

void scale_points(float scale) {
    // This loop does exactly what it looks like
    for (int i = 0; i < 1024; i++) {
        points[i].x *= scale;  // One load, one multiply, one store
    }
}

In C, that struct is exactly 16 bytes. The array is exactly 16KB. The loop runs exactly 1024 iterations, each one a load, a multiply, and a store. No allocator decides to fragment your data. No runtime reorganizes your memory layout. No garbage collector pauses to scan your heap.
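
You don't have to take the layout on faith - the compiler can enforce it. A minimal sketch using C11's _Static_assert (the Point definition is repeated so the snippet stands alone); if the layout ever changes, the build fails:

#include <stddef.h>   // offsetof
#include <stdint.h>

typedef struct {
    float x, y, z;
    uint32_t flags;
} Point;

// Fail the build the moment the layout changes
_Static_assert(sizeof(Point) == 16, "Point must stay 16 bytes");
_Static_assert(offsetof(Point, flags) == 12, "flags must follow x, y, z");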

Full control. You decide when memory is allocated and freed. You decide how data is laid out. You decide what happens at every step. Nothing is automatic unless you make it automatic.
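
One illustration of that control: allocators are just code you can write. Here's a minimal bump-allocator sketch - Arena and arena_alloc are my own names for illustration, not any standard API:

#include <stdint.h>
#include <stdlib.h>

// One malloc up front, one free at the end; every allocation in
// between is a pointer bump with no hidden bookkeeping.
typedef struct {
    uint8_t *base;
    size_t used;
    size_t cap;
} Arena;

void *arena_alloc(Arena *a, size_t n) {
    size_t start = (a->used + 15) & ~(size_t)15;  // round up to 16-byte alignment
    if (start + n > a->cap) return NULL;          // full: you decide the policy
    a->used = start + n;
    return a->base + start;
}

// Usage: Arena a = { malloc(1 << 20), 0, 1 << 20 };
// carve pieces off with arena_alloc(&a, size), then free(a.base)
// at exactly the moment you choose.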

Minimal runtime. A C program needs almost nothing to run. No interpreter, no virtual machine, no massive runtime library. Just the OS (or not even that, if you're writing bare metal). According to Fortune Business Insights, the embedded systems market is projected to grow from $116 billion in 2024 to $177 billion by 2032 - and most embedded firmware is still written in C.
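
If "minimal runtime" sounds abstract, here's how minimal it can get: a sketch of a complete program with no libc at all. This one is Linux x86-64 specific (write is syscall 1, exit is syscall 60) and assumes a build along the lines of cc -nostdlib -static:

// No libc, no startup code: the kernel calls _start, we call the kernel.
static long sys_write(int fd, const void *buf, unsigned long len) {
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1L), "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void) {
    sys_write(1, "hello\n", 6);
    __asm__ volatile ("syscall" : : "a"(60L), "D"(0L));  // exit(0)
}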

Portable assembly. C is often called "portable assembly language," and that's accurate. It gives you something close to machine-level control while remaining portable across architectures. The IEEE Spectrum 2025 rankings show C remains in the top tier precisely because this low-level control is non-negotiable for OS kernels and performance-critical systems. As I've written before, assembly never really left - it's still there when you need it.

The Developer Experience Era

Everything since C has prioritized "developer experience" over these properties.

Garbage collection means you don't have to think about memory. But you lose control over when memory is freed. You pay for GC pauses you can't predict.

Object-oriented programming means you can model domains naturally. But you pay for virtual dispatch (made visible in the sketch after this list), for hidden allocations when you create objects, for data scattered across the heap.

Dynamic typing means faster iteration - but you lose compile-time guarantees and pay for type checking at runtime.

High-level abstractions mean cleaner code - but you lose visibility into what's actually happening, and often pay in performance.

Each of these trade-offs is reasonable for some use cases. But the trend has been relentlessly in one direction: make things easier for developers, accept the runtime costs. It's the layer tax compounding at every level of abstraction.
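
What does paying for virtual dispatch look like? In C, dynamic dispatch is something you build by hand, so the cost sits in plain sight - a minimal sketch:

#include <stdio.h>

// Dynamic dispatch, spelled out: a function pointer you can see.
typedef struct Shape {
    double (*area)(const struct Shape *self);  // a one-slot "vtable"
    double w, h;
} Shape;

static double rect_area(const Shape *s) { return s->w * s->h; }

int main(void) {
    Shape r = { rect_area, 3.0, 4.0 };
    printf("%f\n", r.area(&r));     // indirect call: pointer load + indirect branch
    printf("%f\n", rect_area(&r));  // direct call: the compiler can inline it away
    return 0;
}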

Here's another concrete example. Building an array of numbers in JavaScript:

// JavaScript: Simple, but what's actually happening?
let numbers = [];
for (let i = 0; i < 10000; i++) {
    numbers.push(i * 2);  // Hidden: reallocation, GC pressure
}

Behind that innocent push(), the engine is reallocating memory as the array grows, copying data to new locations, and scheduling garbage collection for the abandoned memory. According to research on JavaScript memory management, "JavaScript will automatically allocate memory when values are initially declared" - and deallocate it sometime later, when the garbage collector decides to run.

The same operation in C:

// C: Explicit control, predictable behavior
#include <stdio.h>
#include <stdlib.h>

int fill_numbers(void) {
    int *numbers = malloc(10000 * sizeof(int));
    if (!numbers) {
        perror("malloc");
        return -1;  // Handle failure explicitly
    }
    for (int i = 0; i < 10000; i++) {
        numbers[i] = i * 2;  // Direct memory write
    }
    free(numbers);  // Freed exactly when you say
    return 0;
}

One allocation, 10,000 direct memory writes, one deallocation. No hidden copies. No GC pause waiting to happen. No mystery about when memory is freed. The C version is more dangerous - forget that free() and you leak memory. But it's also transparent. You see exactly what the machine is doing.

What Actually Happens

Line by line, the "simple" JavaScript loop above expands into something like this:

// Setup:
//   allocate a heap object for the array
//   create a hidden class for the empty array
//   set up GC tracking for the allocation
// Each of the 10,000 iterations:
//   bounds-check i
//   check the array's capacity
//   if full: allocate 2x the current size, copy every existing
//     element to the new memory, mark the old block for GC
//   write i * 2 into the new slot
//   update the array's length property
//   maybe trigger a GC pause here...

Add it up: roughly 20 hidden allocations, around 200KB of memory churn, and GC pauses at moments nobody chose.
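
None of that machinery is exotic - you could write it yourself in a dozen lines of C, with every cost on the page. Here's a sketch of the grow-by-doubling strategy (illustrative; real engines differ in the details):

#include <stdlib.h>

// Roughly what push() does for you: double the capacity when full.
typedef struct {
    int *data;
    size_t len, cap;
} IntVec;

int vec_push(IntVec *v, int value) {
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 8;
        int *p = realloc(v->data, new_cap * sizeof(int));  // may move every element
        if (!p) return -1;
        v->data = p;  // the old block is reusable immediately; a GC'd
                      // runtime holds it until the next collection
        v->cap = new_cap;
    }
    v->data[v->len++] = value;
    return 0;
}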

The Performance Costs We Ignore

Modern software is slow, and it's worth spelling out where the slowness comes from - because it isn't the hardware.

Web apps that take seconds to load - not because the network is slow. We're shipping megabytes of JavaScript that needs to be parsed and executed.

Mobile apps that drain batteries - not because phones are underpowered. Apps are doing unnecessary work in inefficient languages.

Backend services that need horizontal scaling - not because the load is inherently too high. Each request does 10x more work than necessary.

We've papered over these inefficiencies with faster hardware and more servers. But that's not free. It costs money, energy, and user experience.

The Memory Safety Argument

The strongest argument against C is memory safety. Buffer overflows, use-after-free, null pointer dereferences - C lets you shoot yourself in the foot in ways modern languages prevent.
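
To make "shoot yourself in the foot" concrete, here's a deliberately broken sketch. On most compilers it builds cleanly with default flags, and it corrupts memory twice:

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(8);
    if (!name) return 1;
    strcpy(name, "far more than eight bytes");  // buffer overflow: writes past the block
    free(name);
    name[0] = 'x';  // use-after-free: scribbles on freed memory
    return 0;
}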

This is real. C programs have bugs that can't exist in memory-safe languages. Security vulnerabilities in C code have cost billions and compromised millions of systems.

But the solution hasn't been to make C better. It's been to accept massive performance costs for safety. A garbage-collected language trades predictable performance for safety. A runtime with bounds checking trades speed for safety.

Is that the right trade-off? Sometimes. For a web app that's IO-bound anyway, sure. For a database kernel processing millions of queries per second, probably not.

Where Rust Fits

Rust is interesting because it tries to have both: memory safety without garbage collection, high-level ergonomics with low-level control.

It partially succeeds. Rust code can be as fast as C while being memory-safe. That's a genuine achievement.

But Rust isn't C. The borrow checker adds cognitive overhead. The type system is more complex. Compile times are longer. The language surface area is vast.

I use Rust, and I appreciate it. But I don't find myself writing Rust and thinking "this is exactly what programming should be." I find myself fighting the borrow checker, reasoning about lifetimes, sometimes wishing for C's simplicity. When I was building high-performance systems at MSNBC, we didn't have these guardrails - we had discipline and understanding instead.

Rust's safety comes at a cost - not runtime cost, but complexity cost. For certain domains, that cost is worth it. For others, I'm not sure.

What We Lost

The languages that came after C mostly ignored what C got right:

Predictability. In C, you can reason about what the code does by reading it. In languages with garbage collection, runtime dispatch, or implicit allocation, you can't. There's always magic happening you can't see.

Simplicity. C is a small language. You can hold all of it in your head. Modern languages have massive surface areas - features, libraries, idioms, best practices. There's always more to learn.

Closeness to the machine. C programmers understand computers. They know about cache lines and memory layout and branch prediction. I learned this debugging assembly in the 1980s. Programmers in higher-level languages often don't. The abstraction hides the machine.

Performance as default. C programs are fast unless you make them slow. Programs in most other languages are slow unless you make them fast. Defaults matter.
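
Those last two points are really one point. The canonical demonstration is two loops doing identical arithmetic on identical data, differing only in traversal order - a sketch:

#include <stddef.h>

#define N 4096

static float grid[N][N];  // 64MB; each row is 16KB

// Walks memory sequentially: the cache and prefetcher do their job.
float sum_row_major(void) {
    float s = 0.0f;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += grid[i][j];  // stride: 4 bytes
    return s;
}

// Jumps 16KB between accesses: a cache miss on nearly every load.
float sum_col_major(void) {
    float s = 0.0f;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += grid[i][j];  // stride: 16KB
    return s;
}

Same instructions, same bytes, and the second version typically runs several times slower. A C programmer sees why immediately; in most high-level languages the layout isn't even visible in the source.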

The Languages I Actually Use

Despite this essay, I don't write everything in C. That would be impractical.

For performance-critical paths: C or Rust, depending on the safety requirements.

For systems programming: Rust, for the safety guarantees in complex code.

For tooling and scripts: Python, for development speed.

For web services: Go, for the balance of simplicity and performance.

I pick tools based on the job. But I always know what I'm giving up. When I write Python, I accept it will be slow. When I write Go, I accept garbage collector pauses. When I write Rust, I accept borrow checker battles. When we built ECHO at ZettaZing to handle 30 million concurrent connections, understanding these trade-offs wasn't optional - it was survival.

Only when I write C do I feel like I'm getting everything the machine can give.

The Bottom Line

My point isn't that everyone needs to write C. It's that we've traded away things worth acknowledging.

Developer experience has value. Safety has value. Abstraction has value. But so do performance, simplicity, and understanding.

C represents a philosophy: the programmer is smart, the compiler should be transparent, the machine should be respected. Modern languages represent a different philosophy: the programmer is fallible, the runtime should protect them.

Both philosophies have merit. But I think we've swung too far toward the second one. We've created generations of programmers who don't understand computers. They accept that software is slow. They think performance is something you buy with more servers.

C was the last language that assumed you knew what you were doing and got out of your way. Everything since has assumed you need protection from yourself.

Sometimes you do. But sometimes you just need a thin layer over the machine that does exactly what you tell it.

That's what C gives you. Nothing more, nothing less. And I still think that's right.

"Only when I write C do I feel like I'm getting everything the machine can give."

Sources

Performance Architecture

Sometimes you need to go back to basics. Systems design from first principles.

Discuss

Simpler Than I Think?

If there's a straightforward solution I'm overcomplicating, I genuinely want to hear it.

Send a Reply →