Use of Trusted and Verified Software Elements — IEC 61508

31 August 2025 · Dr. Michel Houtermans · 7 min read

Use of trusted and verified software elements is a structured approach to reusing pre-existing software — COTS, SOUP, or in-house legacy — by demonstrating either sufficient proven-in-use history or a robust body of verification evidence for the intended use. It controls systematic failures by preventing unverified behaviour in reused elements from compromising the safety function.

What is it?

This technique enables safe reuse of pre-existing software elements — commercial, open-source, or legacy in-house — without repeating full lifecycle verification.

Reuse is justified either by operational evidence that the element behaves correctly under a representative demand profile (proven-in-use) or by assessing existing verification artifacts such as test reports, reviews, and a Safety Manual for compliant items.

The justification is specific to the intended application, configuration, and environment. The aim is to leverage prior investment and field experience while retaining confidence that the element will not undermine the safety function.

How it supports functional safety

The measure addresses systematic failures by requiring credible evidence that design-related faults have been controlled in prior use or by verification activities. It also surfaces constraints — disabled features, required diagnostics, configuration limits — so integration does not introduce new systematic errors.

While not a hardware diagnostic per se, extensive field history can reveal manifestations of random or common-cause hardware faults at the software boundary (e.g. timing, memory stress), reducing the chance that the safety function silently acts on corrupted behaviour.

The key question is: does your evidence actually cover the intended use — or are you assuming that "popular" equals "safe"?

When to use

  • Reusing an in-house library, RTOS, comms stack, or control block deployed across multiple products and environments
  • Integrating supplier software that comes with a Safety Manual and maintained verification evidence
  • When project timelines or SIL targets make "build from scratch" impractical but adequate evidence exists for the intended use
  • When only a subset of a complex element (e.g. OS services) is needed and can be strictly bounded and configured
  • During product refreshes or migrations where the element version is unchanged and field history remains applicable

Inputs and outputs

Inputs

  • Exact identification of the software element: name, version/build, configuration, interfaces, options
  • Evidence set: operational history (exposure, failures, demand profile) and/or verification artifacts (test results, reviews, coverage, issue logs)
  • Intended-use definition: safety requirements, environment, demand rates, interfaces, and constraints
  • Supplier Safety Manual (or created equivalent) describing capabilities, assumptions, and limits of use

Outputs

  • Reuse justification dossier (proven-in-use argument and/or verification-evidence assessment)
  • Documented limits of use, required configurations, disabled or isolated features, and integration constraints
  • Assurance activities for identified gaps (additional tests, reviews, defensive measures)
  • Safety-case contributions and configuration-management records (version freeze, patch policy)

Procedure

  1. Scope the intended use. Define the safety function(s), SIL target, operational profile (rates, inputs, environment), interfaces, and constraints for the reused element.
  2. Identify and baseline the element. Capture exact version, build options, dependencies, platform, and any dormant or optional features.
  3. Collect evidence. For proven-in-use: field hours/demands, failure records, application diversity, stability of specification. For verification evidence: development and verification records, test coverage, defect history, Safety Manual.
  4. Match evidence to context. Confirm the demand profile, environment, and usage mode in the new system are within the demonstrated or verified envelope. Document any deltas.
  5. Bound the element. Disable or isolate unused functions. Fix configurations. Add wrappers, monitors, or plausibility checks to prevent undefined behaviour propagating into the safety function.
  6. Close gaps. Where evidence is incomplete, perform targeted tests, static/dynamic analysis, or reviews focused on safety-relevant behaviour and identified failure mechanisms.
  7. Produce or validate the Safety Manual. Ensure assumptions of use, constraints, required diagnostics, failure modes, and integration requirements are explicit and auditable.
  8. Integrate with change control. Freeze the qualified baseline. Define patch/update assessment criteria and re-qualification triggers.
  9. Argue in the safety case. Present the justification, limits of use, and results of gap-closing activities. Obtain independent assessment as required.

Align the reuse argument with the SIL claim. The higher the SIL, the narrower your allowed operating envelope and the stronger the tie between field or verification evidence and the intended demand profile — document these bounds explicitly in the Safety Manual.
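Steps 2 and 8 can be partly automated with a baseline integrity check. A minimal sketch, assuming the qualified element ships as a binary whose SHA-256 digest was recorded at qualification time (the element name, version, and digest below are hypothetical):

```python
import hashlib

# Hypothetical frozen baseline recorded at qualification:
# element name -> (qualified version, SHA-256 of the delivered artifact).
QUALIFIED_BASELINE = {
    "numlib.a": ("3.2.1",
                 "d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35"),
}

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_baseline(name: str, path: str) -> bool:
    """True only if the on-disk artifact matches the frozen, qualified digest.

    A mismatch means the element has changed and the reuse justification
    must be reassessed (re-qualification trigger, step 8)."""
    version, expected_digest = QUALIFIED_BASELINE[name]
    if sha256_of(path) != expected_digest:
        print(f"{name} ({version}): digest mismatch -> re-qualification required")
        return False
    return True
```

Running such a check in the build pipeline makes any silent substitution of the qualified element fail loudly instead of invalidating the dossier unnoticed.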

Worked example — legacy numerical library

An engineering team maintains a legacy numerical library used for control calculations in ten product families over 15 years. The library's specification and API have remained stable across versions. Operations logs show millions of service hours in multiple industries with no safety-related failures attributed to the library.

For a new SIL-rated controller, the team compiles a proven-in-use dossier: precise version and configuration mapping, exposure data (hours and demand counts), failure collection methods, and application diversity. They check that the new demand profile (calculation rate, input ranges, timing) and environment match the historical envelope.
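The exposure data can be turned into a quantitative bound. Under the standard zero-failure approximation for a constant-rate failure model (the statistical "rule of three" — a general reliability result, not a formula prescribed by IEC 61508), T hours of failure-free operation support a one-sided upper bound on the failure rate of λ ≤ −ln(1 − C)/T at confidence level C. A sketch:

```python
import math

def failure_rate_upper_bound(hours: float, confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on the failure rate (per hour)
    after `hours` of failure-free operation, assuming a constant-rate
    (exponential) failure model: lambda_ub = -ln(1 - C) / T."""
    return -math.log(1.0 - confidence) / hours

# Example: 10 million failure-free service hours at 95% confidence
# gives an upper bound of roughly 3e-7 failures per hour.
lam = failure_rate_upper_bound(1e7)
```

The bound alone does not establish a SIL claim — the evidence still has to match the intended demand profile and environment, and the failure-collection process must be credible — but it makes explicit how much exposure a given claim actually requires.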

Unused functions (e.g. advanced math modes) are disabled. A wrapper enforces argument ranges and monitors timing. Supplier and maintainer records and internal test reports are reviewed to confirm edge-case handling. Gaps (e.g. behaviour under power-up transients) are addressed with targeted stress tests. A concise Safety Manual for the library is produced, documenting assumptions, limits of use, and required configuration. The library is baselined under change control with a defined patch assessment process.
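The wrapper described above can be a thin guard layer. A minimal sketch (the wrapped routine `legacy_calc`, its input range, and the timing budget are hypothetical stand-ins), which rejects out-of-range arguments and flags timing-budget overruns instead of letting them propagate into the safety function:

```python
import time

class RangeError(ValueError):
    """Raised when an argument falls outside the qualified input envelope."""

def legacy_calc(x: float) -> float:
    """Hypothetical stand-in for the legacy library routine being wrapped."""
    return x * x

def safe_calc(x: float, lo: float = -1e3, hi: float = 1e3,
              budget_s: float = 0.010) -> float:
    """Guard layer: enforce the qualified argument range and monitor the
    timing budget before the result reaches the safety function."""
    if not (lo <= x <= hi):
        raise RangeError(f"argument {x} outside qualified range [{lo}, {hi}]")
    start = time.monotonic()
    result = legacy_calc(x)
    if time.monotonic() - start > budget_s:
        raise TimeoutError("legacy calculation exceeded timing budget")
    return result
```

Keeping the guard separate from the library preserves the qualified baseline: the wrapper can be verified to the project's own lifecycle requirements while the reused element stays untouched.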

Result: The project reuses a well-understood element with bounded behaviour, reducing re-verification effort while maintaining confidence that systematic faults are controlled for the specific safety function and operating conditions.

Quality criteria

  • Traceable identification: Exact element version, configuration, and dependency chain are recorded and frozen.
  • Evidence relevance: Operational history or verification results demonstrably match the intended demand profile, environment, and interfaces.
  • Limits of use defined: Safety Manual states capabilities, assumptions, disabled features, diagnostics, and integration requirements.
  • Gap closure: Any differences from historical or verified conditions are addressed by targeted analysis or tests with recorded outcomes.
  • Change control: Clear criteria for when updates require re-qualification; patch assessments are auditable.

Common pitfalls

"Popular = safe" fallacy

Wide adoption is cited as evidence without tying it to the intended use, demand profile, or environment.

Mitigation: Accept only evidence tied to the intended use. Demonstrate comparable demand and environment.

Hidden or dormant features causing side effects

Unused functionality remains active and introduces unexpected behaviour.

Mitigation: Disable and lock down unused functionality. Sandbox or wrap interfaces.

Specification drift across versions

Updates silently change behaviour, invalidating the original justification.

Mitigation: Freeze qualified versions. Re-justify any change via impact analysis and targeted re-tests.

Incomplete failure collection in the field

Field history exists but failure detection and registration processes are undocumented or unreliable.

Mitigation: Require documented detection and registration processes. Supplement with focused testing.

Over-claiming SIL based on generic history

Broad field data is used to justify a specific SIL claim without narrowing the operating envelope.

Mitigation: Scale claims to evidence strength. Narrow the operating envelope. Add defensive measures.

Frequently asked questions

Is proven-in-use alone sufficient for complex elements like an OS?

Rarely. For multi-function elements, only specific, well-bounded services may be sufficiently demonstrated. Expect additional verification and strict configuration to limit behaviour to what the evidence supports.

What if the supplier cannot provide a Safety Manual?

Create an equivalent document from available evidence and your own qualification activities. If key assumptions or failure mechanisms remain unclear, adopt conservative limits or avoid reuse for safety-relevant functions.

How do updates and patches affect the justification?

Any change can invalidate assumptions. Define re-qualification triggers (e.g. API change, compiler/toolchain change, timing alteration, defect fixes touching safety-relevant code) and reassess against the Safety Manual and evidence.

Can open-source software be reused under this technique?

Yes, provided you can assemble adequate evidence (test history, defect tracking, field usage) and enforce configuration and version control matching your intended use and SIL claim.

Related techniques

  • Proven-in-use argumentation — one route to justify reuse based on operational exposure and observed reliability
  • Safety Manual (compliant items) — defines capabilities, assumptions, and limits of use for reusable elements

References

  • IEC 61508-3 — Functional safety of E/E/PE safety-related systems — Part 3: Software requirements
  • Bishop, Clement, Guerra (2003) — "Software criticality analysis of COTS/SOUP," Reliability Engineering & System Safety
  • Lau (2004) — Component-Based Software Development: Case Studies, World Scientific

Go deeper — IEC 61508 Certification Course

Our IEC 61508 course covers software reuse, proven-in-use argumentation, safety case preparation, and the full safety lifecycle — for engineers who need to get it right.
