Use of trusted/verified software elements

2025-08-31

What is it?

This technique enables safe reuse of pre-existing software elements—commercial, open-source, or legacy in-house—without repeating full life-cycle verification. Reuse is justified either by operational evidence that the element behaves correctly under a representative demand profile (proven-in-use) or by assessing existing verification artifacts such as test reports, reviews, and a Safety Manual for compliant items. The justification is specific to the intended application, configuration, and environment. The aim is to leverage prior investment and field experience while retaining confidence that the element will not undermine the safety function.

When to use

  • Reusing an in-house library, RTOS, comms stack, or control block deployed across multiple products and environments.
  • Integrating supplier software that comes with a Safety Manual (or equivalent) and maintained verification evidence.
  • When project timelines or SIL targets make “build from scratch” impractical but adequate evidence exists for the intended use.
  • When only a subset of a complex element (e.g., OS services) is needed and can be strictly bounded/configured.
  • During product refreshes or migrations where the element version is unchanged and field history remains applicable.

Inputs & Outputs

Inputs

  • Exact identification of the software element: name, version/build, configuration, interfaces, options (one machine-checkable form is sketched after this list).
  • Evidence set: operational history (exposure, failures, demand profile) and/or verification artifacts (test results, reviews, coverage, issue logs).
  • Intended-use definition: safety requirements, environment, demand rates, interfaces, and constraints.
  • Supplier Safety Manual (or created equivalent) describing capabilities, assumptions, and limits of use.
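
For illustration, the identification can be compiled into the image as a read-only record, so audits can confirm exactly which element, version, and configuration are running. This is a minimal C sketch; every name and value (legacylib, version 3.7.2, the dossier number) is a hypothetical assumption, not taken from any real library.

    /* Hypothetical baseline record compiled into the image so the exact
       element identification stays auditable. All values are illustrative. */
    #include <stdint.h>

    typedef struct {
        const char *name;        /* element name */
        const char *version;     /* exact release/build */
        const char *build_hash;  /* hash of the qualified binary or source */
        const char *config;      /* frozen configuration identifier */
        uint32_t    dossier_id;  /* link to the reuse justification dossier */
    } element_baseline_t;

    static const element_baseline_t kLegacyLibBaseline = {
        .name       = "legacylib",   /* hypothetical element */
        .version    = "3.7.2",
        .build_hash = "sha256:<filled in by the release process>",
        .config     = "CFG-SAFE-01",
        .dossier_id = 4711u,
    };

    /* Accessor so the record is referenced and retained by the linker. */
    const element_baseline_t *element_baseline(void) { return &kLegacyLibBaseline; }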

Outputs

  • Reuse justification dossier (proven-in-use argument and/or verification-evidence assessment).
  • Documented limits of use, required configurations, disabled/isolated features, and integration constraints.
  • Assurance activities for identified gaps (additional tests, reviews, defensive measures).
  • Safety-case contributions and configuration-management records (version freeze, patch policy).

Procedure

  1. Scope the intended use. Define the safety function(s), SIL target, operational profile (rates, inputs, environment), interfaces, and constraints for the reused element.
  2. Identify and baseline the element. Capture exact version, build options, dependencies, platform, and any dormant/optional features.
  3. Collect evidence. a) Proven-in-use: field hours/demands, failure records, application diversity, stability of specification; b) Verification evidence: development/verification records, test coverage, defect history, Safety Manual.
  4. Match evidence to context. Confirm the demand profile, environment, and usage mode in the new system are within the demonstrated/verified envelope; document any deltas.
  5. Bound the element. Disable or isolate unused functions; fix configurations; add wrappers, monitors, or plausibility checks to prevent undefined behavior from propagating into the safety function (a minimal wrapper sketch follows this list).
  6. Close gaps. Where evidence is incomplete, perform targeted tests, static/dynamic analysis, or reviews focused on safety-relevant behavior and identified failure mechanisms.
  7. Produce/validate the Safety Manual. Ensure assumptions of use, constraints, required diagnostics, failure modes, and integration requirements are explicit and auditable.
  8. Integrate with change control. Freeze the qualified baseline; define patch/update assessment criteria and re-qualification triggers.
  9. Argue in the safety case. Present the justification, limits of use, and results of gap-closing activities; obtain independent assessment as required.
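
To make steps 4 and 5 concrete, here is a minimal C sketch of a bounding wrapper. The reused function legacy_filter_step, the timer and safe-state hooks, and all limits (FILTER_IN_MIN, STEP_BUDGET_US, and so on) are invented for illustration; in practice the envelope and the reaction would come from the reuse dossier and the safety analysis.

    /* Bounding wrapper (steps 4-5): rejects demands outside the demonstrated
       envelope, checks result plausibility, and monitors execution time.
       All names and limits are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    extern int32_t  legacy_filter_step(int32_t sample);  /* hypothetical reused API */
    extern uint32_t platform_time_us(void);              /* hypothetical timer */
    extern void     enter_safe_state(void);              /* system-specific reaction */

    #define FILTER_IN_MIN  (-20000)  /* demonstrated input envelope */
    #define FILTER_IN_MAX  ( 20000)
    #define FILTER_OUT_MIN (-25000)  /* plausibility band for results */
    #define FILTER_OUT_MAX ( 25000)
    #define STEP_BUDGET_US (200u)    /* timing budget from the safety analysis */

    bool bounded_filter_step(int32_t sample, int32_t *out)
    {
        if (sample < FILTER_IN_MIN || sample > FILTER_IN_MAX) {
            return false;  /* demand outside the qualified envelope */
        }

        const uint32_t t0      = platform_time_us();
        const int32_t  result  = legacy_filter_step(sample);
        const uint32_t elapsed = platform_time_us() - t0;

        if (elapsed > STEP_BUDGET_US ||
            result < FILTER_OUT_MIN || result > FILTER_OUT_MAX) {
            enter_safe_state();  /* assumed behavior violated: fail safe */
            return false;
        }

        *out = result;
        return true;
    }

The wrapper turns the dossier's assumptions into runtime checks, so any drift outside the demonstrated envelope is detected rather than silently propagated into the safety function.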

Worked Example

High-level

Scenario, reusing a legacy library: An engineering team maintains a legacy numerical library used for control calculations in ten product families over 15 years. The library's specification and API have remained stable across versions. Operational logs show millions of service hours in multiple industries with no safety-related failures attributed to the library. For a new SIL-rated controller, the team compiles a proven-in-use dossier: precise version and configuration mapping, exposure data (hours and demand counts), failure collection methods, and application diversity.

They check that the new demand profile (calculation rate, input ranges, timing) and environment match the historical envelope. Unused functions (e.g., advanced math modes) are disabled; a wrapper enforces argument ranges and monitors timing. Supplier/maintainer records and internal test reports are reviewed to confirm edge-case handling. Gaps (e.g., behavior under power-up transients) are addressed with targeted stress tests.

A concise Safety Manual for the library is produced, documenting assumptions, limits of use, and required configuration. The library is baselined under change control with a defined patch assessment process.

Result: The project reuses a well-understood element with bounded behavior, reducing re-verification effort while maintaining confidence that systematic faults are controlled for the specific safety function and operating conditions.
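
The version-freeze part of this baseline can be partly mechanized. As a sketch, assuming the library header exports version macros (LEGACYLIB_VERSION_MAJOR and the other names here are invented), a build-time guard refuses to compile against anything other than the qualified baseline:

    /* Build-time freeze of the qualified baseline (procedure steps 2 and 8).
       The LEGACYLIB_* macros and qualified values are illustrative. */
    #include "legacylib_version.h"  /* hypothetical header of the reused library */

    #define QUALIFIED_MAJOR 3
    #define QUALIFIED_MINOR 7
    #define QUALIFIED_PATCH 2

    #if (LEGACYLIB_VERSION_MAJOR != QUALIFIED_MAJOR) || \
        (LEGACYLIB_VERSION_MINOR != QUALIFIED_MINOR) || \
        (LEGACYLIB_VERSION_PATCH != QUALIFIED_PATCH)
    #error "Linked library differs from the qualified baseline: re-qualification required"
    #endif

    /* Lock out dormant features at compile time as well. */
    #ifdef LEGACYLIB_ADVANCED_MATH
    #error "Advanced math modes must remain disabled in this configuration"
    #endif

A guard like this makes an unreviewed library update fail loudly at build time instead of silently invalidating the proven-in-use argument.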

Quality criteria

  • Traceable identification: Exact element version/configuration and dependency chain are recorded and frozen.
  • Evidence relevance: Operational history or verification results demonstrably match the intended demand profile, environment, and interfaces.
  • Limits of use defined: Safety Manual (or equivalent) states capabilities, assumptions, disabled features, diagnostics, and integration requirements.
  • Gap closure: Any differences from historical/verified conditions are addressed by targeted analysis/tests with recorded outcomes.
  • Change control: Clear criteria for when updates require re-qualification; patch assessments are auditable.

Common pitfalls

  • “Popular = safe” fallacy. Mitigation: accept only evidence tied to the intended use; demonstrate comparable demand/environment.
  • Hidden or dormant features causing side effects. Mitigation: disable/lock down unused functionality; sandbox or wrap interfaces.
  • Specification drift across versions. Mitigation: freeze qualified versions; re-justify any change via impact analysis and targeted re-tests.
  • Incomplete failure collection in the field. Mitigation: require documented detection/registration processes; supplement with focused testing.
  • Over-claiming SIL based on generic history. Mitigation: scale claims to evidence strength; narrow the operating envelope; add defensive measures.

FAQ

Is proven-in-use alone sufficient for complex elements like an OS?

Rarely. For multi-function elements, only specific, well-bounded services may be sufficiently demonstrated. Expect additional verification and strict configuration to limit behavior to what the evidence supports.

What if the supplier cannot provide a Safety Manual?

Create an equivalent document from available evidence and your own qualification activities. If key assumptions or failure mechanisms remain unclear, adopt conservative limits or avoid reuse for safety-relevant functions.

How do updates and patches affect the justification?

Any change can invalidate assumptions. Define re-qualification triggers (e.g., API change, compiler/toolchain change, timing alteration, defect fixes touching safety-relevant code) and reassess against the Safety Manual and evidence.

Can open-source software be reused under this technique?

Yes, provided you can assemble adequate evidence (test history, defect tracking, field usage) and enforce configuration and version control matching your intended use and SIL claim.

This article explains Use of trusted/verified software elements (IEC 61508-3 C.2.10) in general functional-safety practice. Always consult applicable standards for normative requirements.
