The Test That Never Ran

Let me be clear upfront: this isn’t about “Rust is better than C.” It’s about recognizing patterns in how we ensure safety. Some languages require discipline and process. Others provide guarantees through their type system. Understanding the difference helps us make better choices about where to invest our effort.

I was working on a linear algebra library in Rust (yes, the 1000th one) when I wrote a test for matrix addition with mismatched dimensions. cargo build compiled fine, but cargo test surprised me: the test never ran. Instead:

error[E0308]: mismatched types
   --> src/matrix.rs:122:29
    |
122 |         let result = mat1 + mat2;
    |                             ^^^^ expected `2`, found `3`

My first thought was: “Okay, I need to add a check in the library…” Then I stopped. Rust had already caught the error. At compile time. I didn’t need to check anything.
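To see why, here is a minimal sketch of the mechanism (the names are illustrative, not my actual library): the matrix dimensions are const generic parameters, so a 2x2 and a 2x3 matrix are entirely different types, and addition is only defined when the dimensions agree.

use std::ops::Add;

struct Matrix<const R: usize, const C: usize> {
    data: [[f64; C]; R],
}

// Addition is only defined between matrices of identical dimensions.
impl<const R: usize, const C: usize> Add for Matrix<R, C> {
    type Output = Matrix<R, C>;

    fn add(self, rhs: Self) -> Self::Output {
        let mut data = [[0.0; C]; R];
        for r in 0..R {
            for c in 0..C {
                data[r][c] = self.data[r][c] + rhs.data[r][c];
            }
        }
        Matrix { data }
    }
}

// A Matrix<2, 2> plus a Matrix<2, 3> isn't a runtime error.
// It's a type mismatch - exactly the E0308 above.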

This is Rust’s famous safety, I thought. In C or C++, I would need error handling to catch this case. But then I realized: is that really true? In C, definitely. But in C++, maybe not? This made me think about how different languages handle errors, and more importantly, when those errors are caught. The answer reveals fundamental design philosophies.

The Three Philosophies of Error Handling

Before diving into specific languages, let’s establish what we’re actually talking about. Error handling isn’t just about preventing crashes. It’s about where and how we detect problems.

Compile-Time Detection means errors are caught by the compiler before the program runs. Type mismatches, invalid operations, memory safety violations: these never make it into production.

Runtime Detection means errors are detected during execution. Invalid inputs, resource failures, unexpected states: these require explicit handling in code.

No Detection is the most dangerous category. Undefined behavior, silent corruption, race conditions: errors that slip through and cause unpredictable failures.
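To make the first two concrete, here is the same bug, an out-of-bounds read, in both forms in Rust. (In C, the unchecked version would land in the third category: undefined behavior.)

let data = [10u8, 20, 30];

// Runtime detection: .get() turns an invalid index into a value
// we must handle explicitly instead of corrupting memory.
let i = 5;
match data.get(i) {
    Some(value) => println!("data[{i}] = {value}"),
    None => println!("index {i} is out of bounds"),
}

// Compile-time detection: with a constant index, rustc rejects
// the access before the program ever runs.
// let value = data[5]; // error: this operation will panic at runtime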

Each language chooses a different balance. That choice defines not just how we write code, but how we think about safety, performance, and maintainability. Let me show you what this means in practice.

The C Reality Check

Professionally, I write a lot of C for automotive ECUs. C's approach to error handling is straightforward: it gives you nothing. No exceptions, no type-system enforcement, no safety nets. Just return codes and discipline. So how would I solve the same matrix problem there?

#include <stddef.h>

/* Error codes, defined here so the example is self-contained */
enum {
    SUCCESS = 0,
    ERR_NULL_POINTER,
    ERR_INVALID_DATA,
    ERR_DIMENSION_MISMATCH,
    ERR_NO_MEMORY
};

typedef struct {
    int* data;
    size_t rows;
    size_t cols;
} Matrix;

int matrix_add(Matrix* result, const Matrix* a, const Matrix* b) {
    // Check 1: Null pointer?
    if (!result || !a || !b) {
        return ERR_NULL_POINTER;
    }
    
    // Check 2: Null data?
    if (!a->data || !b->data) {
        return ERR_INVALID_DATA;
    }
    
    // Check 3: Dimension mismatch?
    if (a->rows != b->rows || a->cols != b->cols) {
        return ERR_DIMENSION_MISMATCH;
    }
    
    // Check 4: Result buffer allocated?
    if (!result->data) {
        return ERR_NO_MEMORY;
    }
    
    // Finally: The actual addition
    for (size_t i = 0; i < a->rows * a->cols; i++) {
        result->data[i] = a->data[i] + b->data[i];
    }
    
    return SUCCESS;
}

Four defensive checks for a simple operation. And this is just the function itself. Every caller must also check:

Matrix mat1, mat2, result;
// ... initialization ...

int err = matrix_add(&result, &mat1, &mat2);
if (err != SUCCESS) {
    // Error handling
    switch (err) {
        case ERR_NULL_POINTER:       /* ... */ break;
        case ERR_DIMENSION_MISMATCH: /* ... */ break;
        // ...
        default:                     /* ... */ break;
    }
}

This is defensive programming in its purest form. I must anticipate every conceivable error and catch it. Every function is half error handling, half logic.

What this means in practice is straightforward but demanding. Every function that can fail returns an error code. Every caller must check that code. Forgetting a check? Undefined behavior. Passing wrong types? Undefined behavior. Race condition? Undefined behavior.

In the field of automotive software, this becomes critical. MISRA C exists precisely because C requires perfect discipline. We compensate with strict coding guidelines, static analysis tools, and extensive code reviews. But these are processes trying to replace what the language doesn’t provide.

The Automotive Reality: AUTOSAR, MISRA, and ISO 26262

Let me be clear about something: I’m simplifying. The automotive software world is more structured than “just use C everywhere.”

We have AUTOSAR Classic for traditional ECUs, built on C with strict architectural patterns. We have AUTOSAR Adaptive for high-performance computing platforms, which allows modern C++. These aren’t just coding standards. They’re complete software architectures that define how components communicate, how resources are managed, and how safety is achieved.

The defensive checks I described aren’t random. They’re systematically required by these architectures. AUTOSAR’s Runtime Environment expects certain error codes. Safety standards like ISO 26262 require certain failure detection mechanisms. MISRA C and MISRA C++ define exactly which language features we can use.

So when I talk about defensive programming versus type-driven design, I’m not suggesting we abandon these standards tomorrow. I’m asking: within these frameworks, where could better language features reduce the burden?

Could AUTOSAR runnables benefit from safer type systems? Could communication between software components use types instead of runtime validation? Could we use modern C++ features where AUTOSAR Adaptive allows them? Features like std::optional, std::variant, and constexpr validation could reduce defensive checks while staying within the standard, rather than retreating to the C-style comfort zone.

The Daily Cost of Defensive Code

This pattern appears everywhere in typical C codebases. Whether I look at open source projects, tutorial code, or professional embedded software, the ratio of validation code to actual logic is similar.

Let me show you a more automotive-specific example. Consider CAN message handling, something every ECU developer knows:

#include <stdint.h>

/* Additional error codes for this example (values are illustrative) */
enum {
    ERR_INVALID_ID = 100,
    ERR_INVALID_LENGTH,
    ERR_ID_FORMAT_MISMATCH
};

typedef struct {
    uint32_t id;       /* may carry the extended-frame flag bit */
    uint8_t data[8];
    uint8_t length;
} CANMessage;

/* Provided by the CAN driver layer */
extern int can_driver_send(const CANMessage* msg);

/* Extended-frame flag in the ID word (as in SocketCAN's CAN_EFF_FLAG) */
#define CAN_EFF_FLAG 0x80000000u

int validate_and_send_can_message(const CANMessage* msg) {
    // Check 1: Message exists?
    if (!msg) return ERR_NULL_POINTER;
    
    // Check 2: Valid CAN ID? (ignore the flag bit when range-checking)
    if ((msg->id & ~CAN_EFF_FLAG) > 0x1FFFFFFF) return ERR_INVALID_ID;
    
    // Check 3: Valid length?
    if (msg->length > 8) return ERR_INVALID_LENGTH;
    
    // Check 4: Standard vs Extended ID consistency?
    // IDs above the 11-bit standard range must carry the extended flag.
    if ((msg->id & ~CAN_EFF_FLAG) > 0x7FF && !(msg->id & CAN_EFF_FLAG)) {
        return ERR_ID_FORMAT_MISMATCH;
    }
    
    // Finally: Send the message
    return can_driver_send(msg);
}

Every CAN stack has similar functions. Most of the code is validation, not communication. And every project implements these checks slightly differently, leading to subtle bugs across teams.

In my experience with typical embedded C codebases, defensive checks and cleanup logic often make up more than half of the code. The actual business logic gets buried under layers of safety validation. And here’s the dangerous part: forget a single check or a cleanup path, and you have a bug. Every code review must verify these paths. Every test must cover them. This is the cost of safety in C.

Now imagine if the type system could enforce this:

enum CANError {
    InvalidStandardId,
    Driver(i32),  // illustrative: raw status code from the driver
}

struct StandardCANId(u16);  // 11-bit, 0x000-0x7FF
struct ExtendedCANId(u32);  // 29-bit

enum CANId {
    Standard(StandardCANId),
    Extended(ExtendedCANId),
}

struct CANMessage<const N: usize> {
    id: CANId,
    data: [u8; N],  // Length is part of the type (classic CAN: N <= 8)
}

impl<const N: usize> CANMessage<N> {
    fn send(self) -> Result<(), CANError> {
        // No validation needed - invalid messages can't be constructed
        can_driver_send(self)
    }
}

// This compiles:
let msg: CANMessage<8> = CANMessage {
    id: CANId::Standard(StandardCANId(0x123)),
    data: [0u8; 8]
};

// This doesn't - the array length contradicts the declared type:
let msg: CANMessage<8> = CANMessage {
    id: CANId::Standard(StandardCANId(0x123)),
    data: [0u8; 9]  // Compiler error: expected an array with 8 elements
};

The validation isn’t gone. It’s moved to the constructor of StandardCANId and ExtendedCANId:

impl StandardCANId {
    fn new(id: u16) -> Result<Self, CANError> {
        if id > 0x7FF {
            return Err(CANError::InvalidStandardId);
        }
        Ok(StandardCANId(id))
    }
}

Once you have a valid StandardCANId, the compiler guarantees it stays valid. (In a real crate, the inner field would be private, so new() is the only way to construct one and the struct-literal shortcuts above wouldn't compile outside the module.) The type system enforces CAN protocol constraints.
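A short usage sketch, with build_message as an invented helper name: the check happens exactly once, at the boundary, and everything downstream works with already-proven values.

fn build_message(raw_id: u16) -> Result<CANMessage<8>, CANError> {
    let id = StandardCANId::new(raw_id)?;  // checked exactly once
    Ok(CANMessage {
        id: CANId::Standard(id),
        data: [0u8; 8],
    })
}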

This is conceptual. In practice, you’d still need unsafe code to interface with the CAN hardware driver. But the type safety lives in the application layer, where most bugs actually happen.
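For completeness, here is roughly what that boundary could look like; hal_can_transmit is an invented FFI symbol, not a real driver API. The unsafe block stays small and documented, and everything above it remains in safe, typed code.

extern "C" {
    fn hal_can_transmit(id: u32, data: *const u8, len: u8) -> i32;
}

fn can_driver_send<const N: usize>(msg: CANMessage<N>) -> Result<(), CANError> {
    let raw_id = match msg.id {
        CANId::Standard(StandardCANId(id)) => id as u32,
        CANId::Extended(ExtendedCANId(id)) => id | 0x8000_0000,
    };
    // SAFETY: msg.data points to N valid bytes for the duration of the call.
    let status = unsafe { hal_can_transmit(raw_id, msg.data.as_ptr(), N as u8) };
    if status == 0 {
        Ok(())
    } else {
        Err(CANError::Driver(status))
    }
}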

Now, I need to be realistic here. Rust in automotive production is still early days. Qualified Rust toolchains are only just emerging (Ferrocene's ISO 26262 qualification is the notable exception), and tool qualification is expensive and time-consuming. The automotive industry moves slowly, and for good reasons.

Rust also has unsafe blocks where you explicitly opt out of safety guarantees. You still need discipline there. And Rust doesn’t prevent logic errors or deadlocks. It’s not magic.

But here’s what matters: Rust demonstrates that compile-time safety and zero-cost abstractions are possible. Even if we can’t use Rust in production tomorrow, it shows us what modern C++ could be if we used it more fully within AUTOSAR Adaptive contexts.

The Shift in Thinking

This is the moment that hit me. In C, I think: “What can go wrong? What checks do I need?” In Rust, I think: “What do I want to express? What type describes that?”

This is not just less code. It's a fundamental difference in mental model. With defensive programming in C, I catch errors at runtime: every developer must think about every edge case, I write a lot of code for safety, and testing proves correctness.

With type-driven design in Rust and modern C++, I make certain classes of errors impossible at compile time. The compiler prevents type mismatches, null pointer dereferences, and memory safety violations. In defensive programming, safety is a property I must actively maintain through discipline and process; in type-driven design, it's a property the language guarantees through its type system. Types don't catch logic errors or algorithmic mistakes, but they eliminate entire categories of bugs that plague C codebases.

The Automotive Perspective

In typical automotive ECU code, we have thousands of defensive checks scattered throughout the codebase. Everywhere you look, you see the same patterns. Null pointer checks, bounds checks, range checks, state validation checks. All necessary, all manually written, all manually verified in code reviews.

With MISRA C and MISRA C++, code reviews, and static analysis, we try to ensure that no check is forgotten. This is process as a replacement for language features. We build elaborate systems to catch what the language cannot.

In Rust or modern C++, many defensive checks would take different forms:

Null pointer checks could become Option<T> (explicitly marking that a value might be absent) or references (guaranteeing a value exists).

Bounds checks could become compile-time const sizes or explicit .get() calls that return Option.

State machines could use the type-state pattern, where each state is a different type and transitions are enforced by the compiler.

Value ranges could use newtype patterns, where a type like Speed0to100(u8) can only be constructed with valid values.

These aren’t just different syntax. They shift validation from runtime checks scattered throughout the code to compile-time constraints or explicit construction points.
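Here is a minimal sketch of the last two patterns; all names are invented for illustration.

// Newtype: a speed that can only exist in the valid range.
struct Speed0to100(u8);

impl Speed0to100 {
    fn new(kmh: u8) -> Option<Speed0to100> {
        if kmh <= 100 { Some(Speed0to100(kmh)) } else { None }
    }
}

// Type-state: each state is its own type, so an invalid
// transition is a missing method, not a runtime error.
use std::marker::PhantomData;

struct Initializing;
struct Running;

struct Ecu<State> {
    _state: PhantomData<State>,
}

impl Ecu<Initializing> {
    fn new() -> Self {
        Ecu { _state: PhantomData }
    }
    // The only way to obtain an Ecu<Running>:
    fn start(self) -> Ecu<Running> {
        Ecu { _state: PhantomData }
    }
}

impl Ecu<Running> {
    fn process(&self) { /* only callable once started */ }
}

// let ecu = Ecu::new();
// ecu.process();          // Compile error: no such method on Ecu<Initializing>
// let ecu = ecu.start();
// ecu.process();          // OK

Every transition the compiler rejects is one runtime check, one test case, and one review comment that no longer has to exist.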

The Practical Takeaway

My realization is not “Rust is better than C.” It is this: I am spending time writing defensive code that would be unnecessary in modern languages.

This doesn’t mean I should rewrite all our C code tomorrow. But it changes how I think about new components. For the hardware layer, C remains essential for direct register access. There’s no choice there. For the safety layer, I must invest in defensive checks because the language requires it. But for business logic? There, Rust or modern C++ could replace defensive checks with types.

This distinction matters. It means I can choose where to invest effort. Where I must use C, I accept the cost of defensive programming. Where I have flexibility, I can leverage type systems to reduce that cost.

The Elephant in the Room: Why Not Switch Tomorrow?

Let me address what I’m not saying. I’m not suggesting we rewrite our ECU software in Rust tomorrow. That would be naive for several reasons:

First, there’s certification. Our tools are qualified according to ISO 26262. Our processes are assessed against ASPICE. Introducing a new language means re-qualifying everything. That’s years of work and significant investment.

Second, there’s legacy. We have millions of lines of proven C code. It works. It’s tested. It’s certified. Rewriting it would be expensive and risky.

Third, there’s the team. We have engineers trained in C and AUTOSAR. Learning Rust or modern C++ takes time. Not everyone will embrace it.

Fourth, there’s the ecosystem. Our suppliers deliver AUTOSAR components in C. Our customers expect C interfaces. The entire automotive software supply chain is built around C.

So why am I writing this article? Because I believe we should ask different questions. Not “should we switch to Rust” but “where are we paying unnecessary costs with current tools?”

Are there new components where we could use AUTOSAR Adaptive with modern C++ features? Are there application layers where type safety would prevent entire bug categories? Are there areas where the process overhead of defensive C programming outweighs the migration cost?

These are the questions this compiler error made me ask.

The Question I’m Left With

How much of my daily work is actually just manual type checking? How many bugs have I fixed that wouldn’t have compiled in Rust? How much code review time do we spend verifying that all defensive checks are present?

These are the real questions this compiler error raised. Not “which language is better,” but “how much of my work could be automated?”

For new projects, I will think more about this: Can I model the problem so that erroneous states are excluded at compile time? For legacy code: Can I at least move the defensive checks into clearly bounded layers instead of scattering them everywhere?

Looking back, this small moment with a matrix library taught me something fundamental. The goal is not to eliminate error handling. The goal is to move it to where it’s most effective. Sometimes that’s runtime checks with discipline and process. Sometimes that’s compile-time guarantees with types and compilers.

Knowing the difference and choosing deliberately makes all the difference.

What I’m Taking Away

This compiler error taught me to recognize a pattern in my daily work. When I write defensive checks, I should ask: is this a runtime constraint that could fail (like “file not found”), or is it a type constraint that should never occur (like “wrong matrix dimension”)?

Runtime constraints need runtime checks. That’s fine. But type constraints masquerading as runtime checks? That’s where we’re wasting effort.
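Sketched in code, with load_config and scale as placeholder names, the distinction looks like this. NonZeroU32 is a standard-library type whose division is guaranteed panic-free.

use std::fs;
use std::io;
use std::num::NonZeroU32;

// Runtime constraint: the file may genuinely be absent at runtime.
// A Result is the right tool; this failure is real and must be handled.
fn load_config(path: &str) -> Result<String, io::Error> {
    fs::read_to_string(path)
}

// Type constraint: "divisor is never zero" is encoded in the type,
// so the division needs no defensive check and cannot panic.
fn scale(total: u32, parts: NonZeroU32) -> u32 {
    total / parts
}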

In new code, I’m trying to push more constraints into types. Not because it’s fashionable, but because every type constraint I encode is one less thing I need to check, test, review, and debug.

In existing code, I’m thinking more about boundaries. Can I at least isolate defensive checks at system boundaries, rather than scattering them throughout? Can I create a safe inner core where certain classes of errors are impossible by construction?