
#9 – Verification and Validation Planning

  • vigneshvenkatesh10
  • Nov 24

Introduction: Moving From Design to Proof

Up to this point in the series, most of our work has been conceptual. We defined the world the system lives in, set its boundary, decomposed its functions, and shaped its architecture. We then stabilised interfaces and structured subsystem responsibilities. These are foundational steps, but they only describe what the system should be.

Verification and Validation (V&V) is where we begin proving what the system actually is.

In practice, V&V is not a late-stage activity, and it certainly isn’t a test phase tacked on at the end. It is a planning discipline that starts as soon as requirements stabilise. It is how we ensure the system we are about to build can produce evidence of correctness, safety, and fitness for purpose. More importantly, it ensures that when we stand in front of a client with a completed system, nothing relies on interpretation or persuasion. Everything is anchored in measurable, traceable fact.

In this dispatch, I want to walk through how I approach V&V planning on real systems, and how the same thinking applies to our working example, the OTP Locker System.


1. Verification and Validation - Two Questions, One Purpose

Although they’re often spoken about together, verification and validation serve different purposes and answer different questions. Understanding this distinction is crucial because teams that blur the two inevitably produce tests that measure activity rather than confidence.


1.1 Verification - Did we build the system correctly?

Verification deals with conformance. It is anchored in requirements, designs, ICDs, and measurable specifications. When we verify, we compare what we built against what we committed to in our system definition.

In practical terms, verification is evidence-driven. It produces results you can point to, repeat, archive, and defend. This is where we tie requirements to clear acceptance criteria and decide how each requirement will be proven by inspection, analysis, demonstration, test, or sometimes a combination.


1.2 Validation - Did we build the correct system?

Validation deals with intent. It is anchored in stakeholder needs, operational usage, and expectations that often exist outside formal requirements. Validation is where we ask whether the system behaves meaningfully in its real environment.

This is why validation is not confined to engineered artefacts. It includes the experience of interacting with the system, how it performs in real workflows, whether it supports operational tempo, and whether it eliminates, rather than introduces, burdens for its users.

When both verification and validation come together, they answer the two questions that govern any engineering delivery:

Is it correct? And is it useful?


2. Why V&V Planning Happens Before Anything Is Built

One of the early mistakes I made in my career was assuming V&V was something that followed design. Experience corrected that quickly. The more mature the project, the earlier V&V planning begins.

Here’s why:

A requirement that cannot be verified is not a requirement; it is an opinion.

An interface that cannot be observed or instrumented cannot be integrated confidently.

A behaviour that cannot be validated against context will surface late, when it is expensive to fix.

Planning V&V early forces clarity. It exposes ambiguous requirements, unjustified assumptions, missing constraints, and boundaries that have been defined too loosely. I’ve had projects where writing the V&V plan uncovered more weaknesses than any design review.

In our locker example, the moment we define that “Lock shall actuate within 300 ms”, the next question is:

How will we measure 300 ms? Where is the test point? What resets the timer? Who owns the evidence?

This pushes the architecture, the ICD, and the subsystem design to reveal their maturity long before anything is built.
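Those four questions can be answered in code before any hardware exists. The sketch below is a minimal, hypothetical test harness for the 300 ms actuation requirement: `actuate_lock` stands in for the real driver call (here it just simulates a 50 ms delay), and the names, trial count, and threshold are illustrative assumptions, not the project's actual test procedure.

```python
import time

ACTUATION_LIMIT_MS = 300  # from the requirement "Lock shall actuate within 300 ms"

def actuate_lock():
    """Stand-in for the real driver call; simulated here with a 50 ms delay."""
    time.sleep(0.05)

def measure_actuation_ms(trials=5):
    """Time each actuation from command issue to return, in milliseconds.
    The timer starts at the command and resets on return: the test point
    and reset rule are written down, not left to interpretation."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        actuate_lock()
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

def verdict(samples, limit_ms=ACTUATION_LIMIT_MS):
    """Judge the worst-case sample and keep every sample as evidence."""
    return {"samples_ms": samples, "worst_ms": max(samples),
            "pass": max(samples) <= limit_ms}

if __name__ == "__main__":
    print(verdict(measure_actuation_ms()))
```

Even as a sketch, writing this forces the questions above to be answered: the measurement method, the reset rule, and the evidence format are now explicit.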


3. How Verification Connects Back to Requirements, Architecture, and Interfaces

One of the best indicators of a healthy engineering effort is whether the verification approach naturally “falls out” of earlier decisions.


3.1 From Requirements (Dispatch #6)

Every FR and SR is written with the assumption that it will eventually be verified. This means it must be:

measurable, unambiguous, bounded by specific conditions, and suitable for one or more verification methods.

When I write or review requirements, I always ask:

“What would the evidence look like?”

If I cannot answer, the requirement is not ready.
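That readiness check can be made mechanical. Here is a small sketch, with hypothetical requirement IDs and fields, of a lint that flags any requirement lacking a named verification method or a measurable acceptance criterion:

```python
from dataclasses import dataclass

METHODS = {"inspection", "analysis", "demonstration", "test"}

@dataclass
class Requirement:
    req_id: str
    text: str
    method: str      # one of METHODS
    acceptance: str  # measurable criterion; empty means "not ready"

def is_verifiable(req):
    """Ready only if it names a method and states measurable evidence."""
    return req.method in METHODS and bool(req.acceptance.strip())

# Illustrative entries, not the locker project's real requirement set:
reqs = [
    Requirement("FR-012", "Lock shall actuate within 300 ms",
                "test", "t_actuate <= 300 ms over 5 trials"),
    Requirement("FR-019", "System shall be easy to use",
                "demonstration", ""),  # an opinion, not yet a requirement
]

not_ready = [r.req_id for r in reqs if not is_verifiable(r)]
print(not_ready)  # FR-019 is flagged
```

The point is not the tooling; it is that "what would the evidence look like?" becomes a field that cannot be left blank.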


3.2 From Architecture and Allocation (Dispatch #7)

Architecture determines where verification happens. Allocation determines who is responsible. The structure gives us natural verification points: each subsystem becomes a verification domain, and each interface becomes a measurable boundary.


3.3 From ICDs and Interface Control (Dispatch #8)

Interfaces define the observable signals, data structures, timing expectations, behaviours, and physical access points that make verification possible.

  • If an ICD is vague, verification becomes vague.

  • If an ICD is precise, verification becomes predictable.

For example, verifying the LTE retry logic in the locker demands a stable, behavioural definition in the ICD, not just a sentence that says "System shall retry SMS three times".

Verification is a mirror. It reflects the clarity or ambiguity of everything that came before it.
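To make the contrast concrete, here is a hedged sketch of what a behavioural retry definition buys you. The clause, function names, and test double are all hypothetical; the point is that a precisely stated behaviour can be exercised against an observable boundary, while "retry three times" alone cannot.

```python
def send_with_retries(send, max_attempts=3):
    """Behaviour per a hypothetical ICD clause: attempt delivery up to
    max_attempts times, stopping at the first success."""
    for attempt in range(1, max_attempts + 1):
        if send():
            return True, attempt
    return False, max_attempts

class FlakyTransport:
    """Test double standing in for the SMS link: fail the first
    `failures` sends, then succeed, counting every call."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0
    def __call__(self):
        self.calls += 1
        return self.calls > self.failures

# Verify the observable behaviour, not the sentence:
link = FlakyTransport(failures=2)
ok, attempts = send_with_retries(link)
print(ok, attempts)  # delivered on the final permitted attempt
```

Because the ICD states the boundary (attempt count, stop condition), the test can count calls at that boundary and produce defensible evidence.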


4. The Four Verification Methods - And When They Actually Make Sense

Verification methods are often described in textbooks as if they are interchangeable. They are not. Their strength varies with the type of requirement and the maturity of the system.


4.1 Inspection

Inspection is about confirming that something exists, is present, or is correctly constructed without invoking function. In practice, inspection is the method that catches manufacturing and documentation issues early.

In the locker system, this includes checking enclosure gaskets, connector types, or the presence of mandated labels. These seem small, but I have seen acceptance delayed over a missing label or an incorrectly oriented connector.

4.2 Analysis

Analysis is where engineering judgement combines with modelling. Reliability predictions, thermal analysis, power budgets, and timing calculations all belong here. Analysis is especially valuable because it often reveals defects before integration.

Our locker battery-life requirement (“Backup ≥ 6 hours”) is a classic example. The evidence comes from a combination of power modelling and eventual physical testing, but the first level of confidence comes from analysis.

4.3 Demonstration

A demonstration is a simple statement of behaviour: when I do this, the system does that. It supports intuitive behaviours and stakeholder-facing expectations.

For the locker, a demonstration might show that a user can retrieve a parcel, or that an admin can export logs. It's not about numbers; it's about confirming the behaviour exists.


4.4 Test

This is the most rigorous method, and the one most engineers imagine when discussing verification. It requires instrumentation, repeatability, controlled conditions, and precise measurement.

Verifying solenoid actuation time, HTTPS handshake behaviour, or SMS retry logic all fall here.

A test is not simply “try it and see”. A test is a designed experiment with conditions, measurements, and evidence.


5. The V&V Matrix - How We Build Traceability and Confidence

If earlier dispatches translate ideas into structure, the V&V Matrix translates structure into proof. It is the single artefact that links:

  • Requirements

  • Verification methods

  • Test descriptions

  • Acceptance criteria

  • Evidence repositories

  • Responsible teams

  • Relevant ICDs

When a V&V matrix is well-crafted, a reviewer can understand the entire verification strategy by reading down a single row.

In the locker system, “Lock shall actuate within 300 ms” maps directly to a test that specifies: the measurement method, the conditions, the acceptable threshold, the subsystem responsible, and where evidence will be stored.

A good V&V matrix does not hide ambiguity; it exposes it.
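A minimal way to see that exposure in action: represent each row as a record and lint it for empty cells. Everything below (IDs, paths, owners, ICD names) is hypothetical; the locker's real matrix would live in a requirements tool, but the completeness check is the same idea.

```python
# One row per requirement; every field must be filled before review.
FIELDS = ("req_id", "method", "test_ref", "acceptance",
          "evidence", "owner", "icd")

matrix = [
    {"req_id": "FR-012", "method": "test", "test_ref": "TP-04",
     "acceptance": "t_actuate <= 300 ms", "evidence": "evidence/tp04/",
     "owner": "Locking subsystem", "icd": "ICD-LCK-01"},
    {"req_id": "FR-021", "method": "analysis", "test_ref": "AN-02",
     "acceptance": "backup >= 6 h", "evidence": "",  # gap left visible
     "owner": "Power subsystem", "icd": "ICD-PWR-01"},
]

def gaps(matrix):
    """Return (req_id, field) pairs for every empty cell in the matrix."""
    return [(row["req_id"], f)
            for row in matrix
            for f in FIELDS if not row.get(f)]

print(gaps(matrix))  # [('FR-021', 'evidence')]
```

A reviewer reading down a row sees the whole strategy for that requirement, and the lint guarantees no cell quietly stays blank.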


6. Mapping Test Plans to Contractual and Regulatory Needs

Tests are not written in isolation. They must demonstrate conformance not only to requirements but to contractual obligations, referenced standards, and operational expectations.

In real projects, I've seen test plans succeed technically but fail contractually because they didn't show the specific measurements the client cares about. This is why V&V Planning includes a mapping between:

  • Contract clauses

  • Standards (e.g., environmental, security)

  • Regulatory expectations

  • Safety constraints

  • Operational conditions

For our locker system, this means aligning tests with clauses such as:

“Operate in semi-sheltered conditions” → environmental testing

“Installer shall complete installation in 2 hours” → timed installation trial

“Secure admin access” → TLS verification

This alignment prevents the “we thought you meant X, you actually meant Y” disagreements that often appear late in delivery.
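The same alignment can be kept honest with a simple coverage check: map each clause to its planned tests and flag any clause left uncovered. The clause wording follows the examples above; the test identifiers are invented for illustration.

```python
# Hypothetical clause-to-test mapping; an empty list is a contractual gap.
clause_to_tests = {
    "Operate in semi-sheltered conditions":
        ["ENV-01 water-spray exposure", "ENV-02 thermal cycling"],
    "Installer shall complete installation in 2 hours":
        ["TRIAL-01 timed installation"],
    "Secure admin access":
        [],  # no TLS verification test planned yet: exposed early, not at delivery
}

uncovered = [clause for clause, tests in clause_to_tests.items() if not tests]
print(uncovered)
```

Running this at planning time surfaces the "you actually meant Y" conversation months before acceptance, when it is still cheap to have.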


7. Safety, Failure Handling, and High-Risk Scenarios

Even a modest system like our locker includes safety-relevant behaviour: the lock, the actuator, the fallback modes during outage, the handling of edge conditions.

In V&V Planning, safety is not handled generically. It is handled by walking through each hazard identified earlier in the lifecycle and ensuring there is:

a mitigation, a corresponding requirement, a verification method, and a defined piece of evidence.

Failure modes matter just as much. LTE outages, controller resets, solenoid faults, low battery, all must be described, tested, and demonstrated as controlled behaviours.

In my experience, this is where V&V often prevents field failures before they occur. It allows us to test “what happens when things go wrong” in a controlled environment, not after deployment.


8. How V&V Planning Integrates With Dispatch #7 and #8

Dispatch #7 provided the structure: subsystems, boundaries, internal responsibilities. Dispatch #8 provided the interaction model: interfaces, sequencing, integration dependencies.

V&V Planning draws from both.

Subsystem requirements dictate what must be verified at each layer.

ICDs dictate how tests are executed and where signals can be observed.

Interface maturity dictates when integration testing is permitted.

This is why V&V cannot be isolated from architecture or interface definition. It is an extension of both, not a parallel activity.


Closing Reflection

Verification and Validation Planning is where engineering shifts from “what we intend to build” to “what we can prove we’ve built”. It is a balancing act between technical rigour and practical deliverability. The strongest V&V plans are not the ones with the most tests; they are the ones with the clearest reasoning about what matters, why it matters, and how it will be measured.

A good V&V plan transforms integration from a risky phase into a controlled progression of evidence. And when done correctly, it ensures that acceptance later in the project is not a negotiation but a demonstration.

