Playbook: Verifying WCET in CI for Safety-Critical Embedded Software
A practical 2026 playbook: stepwise checklist, CI pipeline templates, and auditor-ready reporting to verify WCET for safety-critical systems.
Stop late surprises from timing violations — bake WCET verification into CI
Missing a worst-case execution time (WCET) requirement late in the release cycle is expensive and dangerous for safety-critical systems. Teams building automotive ECUs, avionics flight stacks, or industrial controllers need reproducible, auditable evidence that code meets timing budgets — and they need it in their CI pipeline, not as a manual gate at release.
Why WCET in CI matters in 2026
In 2026 the landscape for timing verification has shifted: modern toolchains, acquisitions, and regulatory scrutiny elevate timing analysis from an optional check to a core part of software verification. Vector Informatik's January 2026 acquisition of StatInf's RocqStat and its planned integration with VectorCAST is a high-profile example of this trend — vendors are building unified toolchains to combine static timing analysis and software testing.
"Vector will integrate RocqStat into its VectorCAST toolchain to unify timing analysis and software verification." — Automotive World, Jan 16, 2026
At the same time, systems are increasing in complexity: multicore CPUs, aggressive caching, mixed-criticality workloads, and RISC-V adoption all push WCET analysis beyond simple heuristics. CI-based WCET verification enables continuous feedback, earlier mitigation, and auditable traceability for ISO 26262, DO-178C, and IEC 61508 evidence packages.
High-level strategy
The goal: integrate WCET verification into CI so every merge creates reproducible, auditable timing proofs or measurement artifacts. A robust approach combines:
- Static analysis (RocqStat-style / abstraction-based WCET estimators) for conservative upper bounds.
- Measurement-based verification using VectorCAST-driven test harnesses to validate assumptions.
- Hybrid verification that uses static data + measured worst-case traces to tighten bounds where safe.
- Automated reporting & retention so auditors can review results and recreate analyses.
Playbook: Stepwise checklist to verify WCET in CI
Use this checklist to design a CI workflow for timing verification. Treat each step as a gate you can automate and audit.
1. Define timing contracts
- List functions/tasks with WCET budgets (e.g., control_loop <= 2.0 ms).
- Document the target platform configuration: CPU model, frequency, cache settings, compiler flags, RTOS configuration, preemption model.
- Set acceptance criteria: pass/fail thresholds, margin of safety, and allowed evidence types (static vs. measured). See the contract-file sketch after this checklist.
2. Standardize a reproducible build and environment
- Use containerized builders (Docker) or immutable build VMs matched to the target toolchain.
- Pin compiler versions and optimization flags; commit a toolchain manifest to the repo.
3. Instrument and isolate test harnesses
- Create small, focused VectorCAST test cases covering the worst-case control paths.
- For measurement-based tests, enable hardware trace capture (ETM, PTM) or instruction counters (PMU).
- Store trace-to-time conversion artifacts (clock mapping) alongside test outputs.
4. Run static WCET analysis
- Execute RocqStat or an equivalent tool with the exact binary and map file; record configuration files and assumptions.
- Store exported proof artifacts (XML/JSON) that list analyzed paths, infeasible paths, and final WCET estimates.
5. Run measurement campaigns
- Execute VectorCAST tests on hardware or on a cycle-accurate simulator/virtual platform; collect traces for candidate worst-case inputs.
- Automate runs with stress patterns designed to provoke cache misses and pipeline stalls (e.g., pointer chasing, context switches).
6. Reconcile static and measured results (hybrid)
- Compare the measured maximum execution time to the static WCET bound. A sound static bound must never be below any measurement; if a measurement exceeds it, treat that as a critical finding: the toolchain configuration or the static model's assumptions are wrong.
- Use measured traces to refine the WCET bound by proving path infeasibility or tightening bounds where your safety standard allows it.
7. Generate reports automatically
- Emit machine-readable artifacts: wcet-report.json, wcet-proof.xml, trace-archive.tar.gz, build-manifest.txt.
- Render human-readable auditor reports (PDF/HTML) summarizing assumptions, tooling versions, and pass/fail status.
8. Retention, signing, and reproducibility
- Upload artifacts to an immutable artifact store (e.g., S3 with Object Lock or an enterprise Artifactory) and sign them with CI signing keys.
- Record reproducibility metadata: job ID, git SHA, tool versions, container digest.
9. Package evidence for auditors
- Bundle all artifacts into a single evidence package with an index: evidence-package.zip, index.json.
- Include human-readable instructions for re-running the analysis and a minimal environment (e.g., Dockerfile or VM manifest).
10. Gate CI and notify
- Fail the merge when WCET exceeds its threshold or when artifacts are missing.
- Notify responsible engineers and attach artifacts to the failed CI job for triage.
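To make step 1 concrete, here is a minimal sketch of a machine-readable timing contract and a loader that other CI scripts can share. The file name timing-contracts.json and every field in it are illustrative assumptions, not the input or output format of any specific tool:

# tools/load_contracts.py -- minimal sketch; the contract file format is
# an assumption for this playbook, not a RocqStat or VectorCAST artifact.
import json
from dataclasses import dataclass

@dataclass
class TimingContract:
    task: str           # function or task name, e.g. "control_loop"
    budget_us: float    # hard WCET budget in microseconds
    margin_us: float    # safety margin subtracted from the budget
    evidence: str       # allowed evidence type: "static", "measured", or "hybrid"

    @property
    def effective_budget_us(self) -> float:
        return self.budget_us - self.margin_us

def load_contracts(path: str = "tools/timing-contracts.json") -> list[TimingContract]:
    """Read contracts committed to the repo so every CI job checks the same budgets."""
    with open(path) as f:
        return [TimingContract(**entry) for entry in json.load(f)]

if __name__ == "__main__":
    for c in load_contracts():
        print(f"{c.task}: budget {c.budget_us} us, effective {c.effective_budget_us} us ({c.evidence})")

Committing the contract file to the repo keeps the budgets under the same review and traceability rules as the code they constrain.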
CI pipeline templates
Below are pragmatic pipeline templates you can adapt. Replace placeholders with your tool and environment specifics. These examples use VectorCAST and RocqStat concepts; vendors integrating RocqStat and VectorCAST in 2026 will ship tighter command-line integrations, but the templates remain valid as reference architecture.
GitHub Actions: wcet.yml (example)
name: WCET Verification
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build-and-analyze:
    runs-on: ubuntu-22.04
    env:
      TOOLCHAIN_VERSION: 'gcc-11.3'
    steps:
      - uses: actions/checkout@v4
      - name: Restore build container
        run: docker pull myrepo/wcet-builder:${{ env.TOOLCHAIN_VERSION }}
      - name: Build firmware
        run: |
          docker run --rm -v ${{ github.workspace }}:/work myrepo/wcet-builder:${{ env.TOOLCHAIN_VERSION }} \
            /bin/bash -lc "make clean && make all"
      - name: Run VectorCAST tests (measurement)
        run: |
          # artifacts: traces/*.etm -> ./artifacts/traces
          docker run --rm -v ${{ github.workspace }}:/work myrepo/vectorcast-runner \
            /bin/bash -lc "vectorcast-cli run --project=ControlLoop --target=hw --collect-traces"
      - name: Run RocqStat WCET analysis
        run: |
          docker run --rm -v ${{ github.workspace }}:/work myrepo/rocqstat \
            /bin/bash -lc "rocqstat-cli analyze --binary=build/target.elf --map=build/target.map --config=tools/wcet-config.yaml --output=artifacts/wcet-report.json"
      - name: Compare and publish report
        run: |
          python3 tools/compare_wcet.py --static artifacts/wcet-report.json --measured artifacts/traces/max_time.json --out artifacts/audit_report.html
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: wcet-evidence
          path: artifacts/**
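The compare_wcet.py helper invoked above is a repo-local script, not a vendor tool. A minimal sketch of what it might do, assuming each report carries a microsecond value under the keys named in the comments:

# tools/compare_wcet.py -- minimal sketch; the report field names
# ("wcet_us", "max_observed_us") are assumptions for this playbook,
# not a documented RocqStat or VectorCAST schema.
import argparse
import json
import sys

def main() -> int:
    p = argparse.ArgumentParser()
    p.add_argument("--static", required=True)    # static analysis report (JSON)
    p.add_argument("--measured", required=True)  # measured worst case (JSON)
    p.add_argument("--out", required=True)       # HTML summary for auditors
    args = p.parse_args()

    static_us = json.load(open(args.static))["wcet_us"]
    measured_us = json.load(open(args.measured))["max_observed_us"]

    # A sound static bound must dominate every measurement; anything else
    # means the model or the binary under test is wrong.
    sound = measured_us <= static_us
    verdict = "PASS" if sound else "FAIL: measured exceeds static bound"

    with open(args.out, "w") as f:
        f.write(f"<html><body><h1>WCET comparison: {verdict}</h1>"
                f"<p>Static bound: {static_us} us</p>"
                f"<p>Measured max: {measured_us} us</p></body></html>")
    return 0 if sound else 1

if __name__ == "__main__":
    sys.exit(main())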
GitLab CI: .gitlab-ci.yml (snippet)
stages:
  - build
  - wcet

build:
  stage: build
  image: myrepo/wcet-builder:latest
  script:
    - make clean && make all
  artifacts:
    paths:
      - build/target.elf
      - build/target.map

wcet_analysis:
  stage: wcet
  image: myrepo/rocqstat:latest
  dependencies:
    - build
  script:
    - rocqstat-cli analyze --binary build/target.elf --map build/target.map --config tools/wcet-config.yaml --output wcet-report.json
    - python3 tools/generate_audit_package.py --wcet wcet-report.json --traces traces/ --out evidence-package.zip
  artifacts:
    paths:
      - wcet-report.json
      - evidence-package.zip
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
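generate_audit_package.py is likewise a repo-local helper. A minimal sketch, under the assumption that the traces directory holds raw trace files and the index only needs paths plus content hashes:

# tools/generate_audit_package.py -- minimal sketch of a repo-local
# packager; paths and index fields are assumptions matching this playbook.
import argparse
import hashlib
import json
import pathlib
import zipfile

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def main() -> None:
    p = argparse.ArgumentParser()
    p.add_argument("--wcet", required=True)     # wcet-report.json from the analyzer
    p.add_argument("--traces", required=True)   # directory of raw trace files
    p.add_argument("--out", required=True)      # evidence-package.zip
    args = p.parse_args()

    files = [pathlib.Path(args.wcet)] + sorted(pathlib.Path(args.traces).rglob("*"))
    files = [f for f in files if f.is_file()]

    # index.json records every artifact and its hash so auditors can
    # verify integrity without re-running the pipeline.
    index = {"artifacts": [{"path": str(f), "sha256": sha256(f)} for f in files]}

    with zipfile.ZipFile(args.out, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("index.json", json.dumps(index, indent=2))
        for f in files:
            z.write(f, arcname=str(f))

if __name__ == "__main__":
    main()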
Jenkins Declarative Pipeline (snippet)
pipeline {
    agent any
    environment {
        DOCKER_IMAGE = 'myrepo/wcet-builder:latest'
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker run --rm -v $PWD:/work $DOCKER_IMAGE /bin/bash -lc "make clean && make all"'
                stash includes: 'build/**', name: 'binaries'
            }
        }
        stage('WCET Analysis') {
            steps {
                unstash 'binaries'
                sh 'docker run --rm -v $PWD:/work myrepo/rocqstat:latest /bin/bash -lc "rocqstat-cli analyze --binary build/target.elf --map build/target.map --output artifacts/wcet.json"'
                sh 'python3 tools/check_wcet_threshold.py artifacts/wcet.json 2000 || exit 1'
                archiveArtifacts artifacts: 'artifacts/**', fingerprint: true
            }
        }
    }
}
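check_wcet_threshold.py receives the report path and a budget in microseconds (2000 us = 2 ms above) and fails the stage via its exit code. A minimal sketch, assuming the report stores its bound under a wcet_us key:

# tools/check_wcet_threshold.py -- minimal sketch; "wcet_us" is an
# assumed report key for this playbook, not a documented RocqStat field.
import json
import sys

def main() -> int:
    report_path, threshold_us = sys.argv[1], float(sys.argv[2])
    wcet_us = json.load(open(report_path))["wcet_us"]
    if wcet_us > threshold_us:
        print(f"FAIL: WCET {wcet_us} us exceeds budget {threshold_us} us")
        return 1  # non-zero exit fails the Jenkins stage
    print(f"PASS: WCET {wcet_us} us within budget {threshold_us} us")
    return 0

if __name__ == "__main__":
    sys.exit(main())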
Automated reporting for auditors: content and format
Auditors need clear, reproducible evidence. Include both machine-readable artifacts and concise human summaries.
- Index file (index.json) — lists artifact filenames, git commit, tool versions, container digests, and timestamps.
- WCET proof (wcet-proof.xml / wcet-report.json) — static analysis output with assumptions, call-graph, and final bound.
- Measured traces (trace-archive.tar.gz) — raw ETM/trace data with clock mapping file.
- Build manifest — compilers, flags, linker map, and binary hashes.
- Human-readable report — summary page with pass/fail status, key figures, and reproduction steps (PDF/HTML).
- Signature & evidence package — sign the final evidence package with CI signing key and record signature metadata.
Sample index.json (minimal)
{
  "git_sha": "abc123...",
  "pipeline_id": 4567,
  "tool_versions": {
    "rocqstat": "1.2.0",
    "vectorcast": "2026.01",
    "gcc": "11.3"
  },
  "artifacts": {
    "wcet_report": "artifacts/wcet-report.json",
    "trace_archive": "artifacts/traces.tar.gz",
    "evidence_package": "artifacts/evidence-package.zip"
  }
}
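For the signing step, here is a sketch that hashes the evidence package and signs the digest with an Ed25519 key held by the CI system, using the cryptography package. The key handling and metadata layout are assumptions to adapt to your PKI:

# tools/sign_evidence.py -- sketch of evidence signing with an Ed25519
# CI key; assumes the "cryptography" package and a PEM key injected by
# the CI system as a secret file.
import hashlib
import json
import sys
from cryptography.hazmat.primitives import serialization

def main() -> None:
    package_path, key_path = sys.argv[1], sys.argv[2]
    digest = hashlib.sha256(open(package_path, "rb").read()).hexdigest()

    key = serialization.load_pem_private_key(open(key_path, "rb").read(), password=None)
    signature = key.sign(bytes.fromhex(digest))  # Ed25519: sign the package digest

    # Store signature metadata next to the package so auditors can verify it.
    meta = {"package": package_path, "sha256": digest, "signature": signature.hex()}
    with open(package_path + ".sig.json", "w") as f:
        json.dump(meta, f, indent=2)

if __name__ == "__main__":
    main()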
Troubleshooting: common failure modes and fixes
When WCET verification fails in CI, follow this triage flow:
1. Measured > static
- Check for a toolchain mismatch: compiler flags or CPU frequency drift. Reproduce locally with the same container image.
- Inspect traces: find the path where execution time peaked; add a test vector to capture it in CI.
2. Static tool reports infeasible assumptions
- Verify model inputs: memory layout, linker map, interrupt model. Update the RocqStat configuration to match the platform.
3. Non-deterministic CI measurements
- Use dedicated hardware rigs or cycle-accurate simulators. Disable non-essential background services and pin the CPU frequency.
4. Artifacts missing
- Ensure test jobs upload artifacts and fail the pipeline if artifacts are not present. Use retry logic for intermittent storage errors (see the guard sketch below).
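The guard for missing artifacts can be a few lines of Python run as the last step of each job; the required paths and retry policy below are illustrative assumptions:

# tools/require_artifacts.py -- sketch of an artifact-presence gate with
# simple retries for intermittent storage errors; paths are illustrative.
import pathlib
import sys
import time

REQUIRED = ["artifacts/wcet-report.json", "artifacts/traces"]

def main() -> int:
    missing = REQUIRED
    for attempt in range(3):  # retry to tolerate slow artifact sync
        missing = [p for p in REQUIRED if not pathlib.Path(p).exists()]
        if not missing:
            print("All required artifacts present.")
            return 0
        time.sleep(5 * (attempt + 1))
    print(f"FAIL: missing artifacts: {missing}")  # non-zero exit fails the job
    return 1

if __name__ == "__main__":
    sys.exit(main())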
Practical example: control loop WCET pipeline (2 ms budget)
Scenario: A control task must complete within 2 ms on an ARM Cortex-R running at 400 MHz.
- Define the contract: control_loop WCET <= 2.0 ms (hard budget), with a 0.2 ms safety margin treated as a warning threshold (results above 1.8 ms are flagged for review).
- Static analysis: configure RocqStat with the CPU pipeline model, cache model, and map file. Generate wcet-report.json.
- Measurement: run VectorCAST test vector set A (worst-case input) on hardware, collect ETM traces, and convert to time using cycle-to-time mapping (clock 400 MHz).
- Compare: if the static WCET is 1.85 ms and the measured max is 1.78 ms, the run passes the hard budget; attach both artifacts (and note that 1.85 ms trips the 1.8 ms warning margin). If the measured max is 2.12 ms, the static bound was unsound: investigate the trace and either correct the static model or fix the code.
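The trace-to-time conversion in this example is plain arithmetic: cycles divided by clock frequency. At 400 MHz the 2 ms budget corresponds to 800,000 cycles. A small sketch, assuming the trace decoder reports raw cycle counts:

# tools/cycles_to_time.py -- sketch of the clock-mapping step for ETM traces.
# Assumes the trace decoder emits a raw cycle count per test run.
CLOCK_HZ = 400_000_000  # Cortex-R clock from the timing contract

def cycles_to_us(cycles: int, clock_hz: int = CLOCK_HZ) -> float:
    """Convert a cycle count to microseconds at the configured clock."""
    return cycles / clock_hz * 1_000_000

if __name__ == "__main__":
    print(cycles_to_us(800_000))  # 2000.0 us: the 2 ms budget
    print(cycles_to_us(712_000))  # 1780.0 us: the measured max above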
Advanced strategies and 2026 trends
Expect these practices to mature in 2026 and beyond:
- Toolchain consolidation: Vendors (e.g., Vector + RocqStat) will deliver integrated flows that reduce manual artifact translation.
- Cloud-assisted timing analysis: static analyses running at cloud scale will shorten turnaround for whole-system analysis (multicore interference models).
- AI-assisted path selection: Machine learning that suggests candidate worst-case inputs to measurement campaigns, reducing test combinatorics.
- Standardized evidence formats: Industry moves toward common machine-readable evidence packages for timing, easing audits across OEMs and certifying authorities.
Checklist recap — what to automate first
- Reproducible builds (containerized toolchain).
- Automated static WCET analysis per merge.
- Automated measurement test harness execution and trace capture.
- Artifact packaging, signing, and retention.
- Pipeline gating and meaningful notifications on failure.
Actionable takeaways
- Start small: Add WCET static analysis to CI for one critical function before scaling to the full codebase.
- Make evidence reproducible: store container digests, tool versions, and exact commands used in CI job logs.
- Combine static and measured evidence: auditors favor conservative static proofs complemented by measurement traces.
- Automate packaging for auditors: index.json + signed evidence ZIP should be generated by CI without human intervention.
- Use gating strategically: fail merges when WCET budgets are exceeded, but provide quick triage links to artifacts for developers.
Final notes on compliance and risk
Integrating WCET verification into CI aligns technical practice with compliance requirements for safety standards. Conservative static analysis plus well-documented measurement campaigns create defensible evidence. With the increasing maturity of tools — highlighted by 2026 vendor consolidation — teams that adopt CI-first WCET verification will reduce costly rework, speed audits, and ship safer releases.
Call to action
Ready to make WCET verification part of your CI pipeline? Start with a 2-week spike: containerize your toolchain, add a static WCET job, and produce the first evidence package. For a templated starter kit (Dockerfiles, VectorCAST harness examples, RocqStat config templates, and CI YAMLs), download our WCET-in-CI starter repo and evidence-packager scripts — or contact our team for a hands-on workshop to integrate VectorCAST + RocqStat into your CI workflow.