Adding Timing Analysis and WCET Checks to CI: Using RocqStat and VectorCAST


mytest
2026-01-26
10 min read

Automate WCET and timing analysis in CI using RocqStat and VectorCAST—step-by-step guide, code samples, and policies for 2026 toolchains.

Stop guessing your timing — automate WCET checks in CI pipelines with RocqStat + VectorCAST

If flaky CI runs, missed timing budgets, or late-stage WCET surprises are blocking releases, this guide shows how to add automated timing analysis and worst-case execution time (WCET) verification to your CI pipeline using RocqStat integrated with VectorCAST. Implementing these checks early cuts risk, shortens feedback loops, and enforces timing budgets as the code changes.

Why timing analysis in CI matters in 2026

In 2026, software-defined vehicles, advanced driver assistance systems, and expanding real-time domains demand stronger timing assurance. Late 2025 and early 2026 saw a push toward unified verification workflows: Vector Informatik's acquisition of RocqStat (January 2026) signaled a consolidation of timing analysis with mainstream code testing. The result: developers expect integrated tools that produce repeatable WCET estimates and fit into release pipelines and CI/CD processes.

Key drivers for CI-level WCET checks:

  • Shift-left timing safety — catch regressions before integration or HIL test queues.
  • Reproducible verification — automated collection and analysis prevents ad-hoc measurement errors.
  • Cost control — fewer expensive reruns on hardware-in-the-loop (HIL) thanks to early analysis.
  • Regulatory alignment — ISO 26262 and similar standards increasingly require documented timing verification evidence.

Overview: How RocqStat and VectorCAST fit into CI

At a high level, the CI pipeline stages we’ll automate are:

  1. Checkout and build (compiler flags for instrumentation or tracing)
  2. Unit and integration test execution under VectorCAST harnesses
  3. Collect execution timing traces (on-host, emulator, or target)
  4. Run RocqStat to produce a statistically sound WCET estimate
  5. Publish artifacts and fail the build if WCET > budget (or if confidence insufficient)

Key concepts before diving in

  • Measurement-based timing: collects timing samples and uses statistics to estimate WCET. Fast to integrate but must control noise.
  • Static WCET analysis: uses code and microarchitectural models to compute safe bounds; integrates well into toolchains but needs model maintenance.
  • RocqStat: a statistical timing analysis tool (now part of Vector) that produces probabilistic WCET estimates from execution traces.
  • VectorCAST: automated test harness and toolchain for unit/integration/system tests; offers automation APIs to run tests and capture results.
  • Gating strategy: decide whether to hard-fail on WCET violations, warn, or create JIRA tickets. Use staged enforcement to avoid developer friction.

Prerequisites

  • VectorCAST and RocqStat licensed and installed on your build agents (or available as secure container images).
  • CI system (GitHub Actions, GitLab CI, Jenkins, Azure DevOps). The examples below show GitHub Actions and Jenkins.
  • A build pipeline that can instrument or enable timing tracing (compiler flags, configuration in your RTOS or HAL).
  • Access to a deterministic execution environment: on-target hardware (preferred) or a validated simulator/emulator such as QEMU with deterministic timing models.
  • Defined WCET budgets for functions/tasks, derived from system requirements (a simple budgets-file sketch follows this list).
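
Keeping budgets in the repository next to the code they constrain makes them reviewable and easy to enforce. A minimal sketch of such a budgets file, assuming a hypothetical timing_budgets.py module with illustrative task names and values:

# timing_budgets.py (hypothetical; values come from your system requirements)
# Maps timing-critical functions/tasks to their WCET budgets in nanoseconds.
WCET_BUDGETS_NS = {
    "task_control_loop": 2_000_000,   # 2 ms periodic control task
    "can_rx_handler": 150_000,        # 150 us interrupt-driven receive path
    "sensor_fusion_step": 800_000,    # 800 us per fusion cycle
}

A policy script can import this module and look up the budget for each analyzed function instead of taking a single command-line argument.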

Step-by-step: Implementing timing analysis in CI

Step 1 — Prepare the build for trace collection

Decide whether you will collect cycle counts, timestamps, or hardware performance counters. Common techniques:

  • Compiler/instrumentation: compile with instrumentation that records entry/exit timestamps (lightweight macros).
  • Hardware counters: use PMU events (ARM/Intel) if available and accessible in CI environment.
  • OS timestamps: use high-resolution timers from the RTOS or system clock.

Example: enable macro-based tracing in a CMake project.

# CMakeLists.txt snippet
add_compile_definitions(ENABLE_TIMING_TRACES)
add_compile_options(-O2 -g)

Step 2 — Run VectorCAST tests and capture traces

VectorCAST orchestrates unit and integration tests. Use the VectorCAST automation CLI to start tests and export timing traces. Replace vcast with your VectorCAST CLI binary or container entrypoint.

# Example shell snippet
vcast setup --project tests.vcp
vcast build --project tests.vcp --config Release
vcast run --project tests.vcp --test-suite regression --export-traces traces.tar.gz

Store traces in a known artifact location so RocqStat can process them.

Step 3 — Run RocqStat to compute WCET

RocqStat consumes execution time samples and yields a probabilistic WCET estimate. In CI, choose a target exceedance probability (for example, 1e-6 for safety-critical code) and a sampling strategy.

# Example RocqStat invocation (replace with your CLI)
rocqstat analyze --input traces.tar.gz \
  --function my_task --confidence 1e-6 --output wcet-report.json

RocqStat will produce a report with estimated WCET, confidence intervals, and diagnostic plots. Because statistically estimated WCETs depend on the sampling distribution and noise, include metadata (hardware, clock resolution, temperature) in the report.
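
One lightweight way to capture that metadata is to enrich the JSON report in CI immediately after the analysis step. A minimal sketch; the report layout and the CI_RUNNER_DESCRIPTION and TARGET_TEMP_C environment variables are assumptions about your setup:

# attach_metadata.py (sketch; report layout and environment variables are assumptions)
import json
import os
import platform
import time

REPORT = "wcet-report.json"

with open(REPORT) as f:
    report = json.load(f)

# Record where and how the samples were taken so regressions can later be
# separated from environment changes.
report["environment"] = {
    "machine": platform.machine(),
    "kernel": platform.release(),
    "clock_resolution_s": time.get_clock_info("monotonic").resolution,
    "agent": os.environ.get("CI_RUNNER_DESCRIPTION", "unknown"),  # hypothetical CI variable
    "target_temperature_c": os.environ.get("TARGET_TEMP_C"),      # exported by your test rig, if available
}

with open(REPORT, "w") as f:
    json.dump(report, f, indent=2)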

Step 4 — Fail or badge builds based on WCET thresholds

Implement a policy script that reads the RocqStat report and decides pass/fail. Example decisions:

  • Fail if WCET > budget
  • Fail if the required confidence (target exceedance probability) is not reached
  • Fail if variance across runs increases beyond a threshold (indicates non-determinism)

A simplified policy script:
# policy_check.py (simplified)
# Assumes the RocqStat report exposes the estimated WCET in nanoseconds ('wcet_ns')
# and the achieved exceedance probability ('confidence', matching the --confidence flag).
import json
import sys

TARGET_EXCEEDANCE = 1e-6  # maximum acceptable probability of exceeding the reported WCET

def check(report_path, budget_ns):
    with open(report_path) as f:
        r = json.load(f)
    wcet = r['wcet_ns']
    exceedance = r['confidence']
    if wcet > budget_ns:
        print(f"WCET {wcet} ns > budget {budget_ns} ns")
        raise SystemExit(1)
    if exceedance > TARGET_EXCEEDANCE:
        print(f"Insufficient confidence: exceedance probability {exceedance} > {TARGET_EXCEEDANCE}")
        raise SystemExit(2)
    print("Timing check passed")

if __name__ == '__main__':
    check(sys.argv[1], int(sys.argv[2]))

Step 5 — Publish artifacts and dashboards

Store the raw traces, RocqStat reports, and VectorCAST logs as CI artifacts. Push key metrics to a dashboard (InfluxDB/Grafana or your internal telemetry) for trend analysis; a minimal push sketch follows this list:

  • WCET over time per function/task
  • Confidence and sample counts
  • Regression alerts when WCET margin shrinks
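
A minimal sketch of pushing one data point per build to InfluxDB 2.x over its HTTP write API; the wcet.json field names, the "timing" bucket, and the environment variables are assumptions about your setup:

# push_wcet_metric.py (sketch; report fields, bucket, org, and token handling are assumptions)
import json
import os
import sys
import time

import requests

# e.g. wcet.json produced by the RocqStat step
report = json.load(open(sys.argv[1]))

# InfluxDB line protocol: measurement,tags fields timestamp
line = (
    f"wcet,function={report['function']},branch={os.environ.get('GIT_BRANCH', 'unknown')} "
    f"wcet_ns={report['wcet_ns']}i,samples={report.get('sample_count', 0)}i "
    f"{time.time_ns()}"
)

resp = requests.post(
    f"{os.environ['INFLUX_URL']}/api/v2/write",
    params={"org": os.environ["INFLUX_ORG"], "bucket": "timing", "precision": "ns"},
    headers={"Authorization": f"Token {os.environ['INFLUX_TOKEN']}"},
    data=line,
    timeout=10,
)
resp.raise_for_status()
print("Pushed WCET metric:", line)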

CI examples: GitHub Actions and Jenkins pipeline snippets

GitHub Actions (containerized agents)

# .github/workflows/wcet.yml (simplified)
name: WCET-check
on: [push, pull_request]

jobs:
  wcet:
    runs-on: ubuntu-latest
    container: registry.example.com/tools/vectorcast-rocqstat:2026.1
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: |
          make all
      - name: Run VectorCAST tests
        run: |
          vcast run --project tests.vcp --export-traces traces.tar.gz
      - name: Run RocqStat
        run: |
          rocqstat analyze --input traces.tar.gz --function my_task --confidence 1e-6 --output wcet.json
      - name: Policy check
        run: |
          python3 policy_check.py wcet.json 2000000
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: wcet-artifacts
          path: |
            traces.tar.gz
            wcet.json

Jenkins (Declarative pipeline)

pipeline {
  agent { label 'builder && rocqstat' }
  stages {
    stage('Checkout & Build') {
      steps { sh 'make all' }
    }
    stage('VectorCAST') {
      steps { sh 'vcast run --project tests.vcp --export-traces traces.tar.gz' }
      post { always { archiveArtifacts artifacts: 'traces.tar.gz' } }
    }
    stage('RocqStat Analysis') {
      steps { sh 'rocqstat analyze --input traces.tar.gz --function my_task --confidence 1e-6 --output wcet.json' }
    }
    stage('Policy') {
      steps { sh 'python3 policy_check.py wcet.json 2000000' }
    }
  }
}

Handling non-determinism and measurement noise

Measurement noise is the primary hurdle for statistical WCET. Practical strategies (a pinning and warm-up sketch follows this list):

  • Warm-up runs: execute several warm-up iterations to populate caches with representative state.
  • Isolate CPU and sensors: pin test execution to CPU cores and quiesce background services on test nodes.
  • Environmental metadata: record temperature, core frequency, and power mode to explain variance.
  • Outlier handling: RocqStat supports extreme-value models; ensure you sample enough high-latency events.
  • Hybrid approach: pair RocqStat with static WCET on critical paths where absolute upper bounds are required.
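
For host or emulator runs on Linux CI agents, the first two points can be handled directly by the test driver. A minimal sketch, assuming a reserved core (core 2 is arbitrary) and a hypothetical run_timing_suite.sh wrapper around the VectorCAST invocation:

# run_pinned.py (sketch; core number, warm-up count, and wrapper script are assumptions)
import os
import subprocess

ISOLATED_CORE = 2   # a core reserved for timing runs (e.g. via isolcpus on the agent)
WARMUP_RUNS = 5     # iterations discarded to bring caches into a representative state

# Pin this process (and its children) to the isolated core (Linux only).
os.sched_setaffinity(0, {ISOLATED_CORE})

# Warm-up iterations: run the suite but discard the traces.
for _ in range(WARMUP_RUNS):
    subprocess.run(["./run_timing_suite.sh", "--discard-traces"], check=True)

# Measured run: traces from this invocation are the ones fed to RocqStat.
subprocess.run(["./run_timing_suite.sh", "--export-traces", "traces.tar.gz"], check=True)
print("Warm-up complete; measured traces written to traces.tar.gz")

On-target runs need the equivalent controls on the device side (fixed clock frequency, interrupts and background tasks quiesced) rather than on the CI host.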

Advanced strategies and future-proofing (2026+)

As Vector integrates RocqStat into VectorCAST, expect richer automations and tighter IDE/CI integrations. Here are advanced patterns to adopt now:

  • Commit-level timing diffs: compute the WCET delta per PR and post annotated warnings in review (see the diff sketch after this list).
  • Progressive enforcement: use soft gates that first warn, then block PRs after a probation period, to avoid developer chokepoints.
  • Per-branch budgets: allow higher budgets on long-lived feature branches and stricter budgets on release branches.
  • Model-based simulation integration: combine static timing models for pipeline-accelerated checks and run measurement-based RocqStat only on representative merges or nightly runs.
  • Continuous trending: feed WCET metrics into SLO tooling to trigger capacity or code audits before functional regressions happen.
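
As an example of the first pattern, a PR job can compare the PR's RocqStat report against one archived from the base branch. A minimal sketch, assuming both reports are JSON files with a wcet_ns field as in the policy script above; the 5% warning threshold is illustrative:

# wcet_diff.py (sketch; report layout and warning threshold are assumptions)
import json
import sys

WARN_THRESHOLD = 0.05  # warn when WCET grows by more than 5% relative to the base branch

def load_wcet(path):
    with open(path) as f:
        return json.load(f)["wcet_ns"]

base = load_wcet(sys.argv[1])  # report archived from the base branch (e.g. nightly artifact)
pr = load_wcet(sys.argv[2])    # report from this PR's run

delta = pr - base
ratio = delta / base if base else 0.0
print(f"WCET delta: {delta} ns ({ratio:+.1%}) vs base branch")

# Exit non-zero so the CI job can post a review warning without hard-failing the build.
if ratio > WARN_THRESHOLD:
    raise SystemExit(1)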

Sample case: Automotive ECU task with 2ms budget

Scenario: a periodic task in an ECU must complete within 2 ms. Your CI pipeline runs nightly and on PRs. Here's an operational plan:

  1. Instrument task entry/exit and collect timestamps with 1us resolution.
  2. Run VectorCAST regression tests on a deterministic emulator for PRs (fast) and on target hardware for nightly runs (higher fidelity).
  3. Feed traces into RocqStat with confidence 1e-6. Obtain wcet_ns = 1,820,000 ns (1.82 ms) with required confidence — pass PR gate.
  4. If wcet_ns climbs above 2,000,000 ns, fail the PR and create an issue with the WCET report attached.

Example policy output on failure:

WCET violation: task_control_loop = 2.15 ms (budget 2.00 ms). Failing build and filing ticket #T-314.
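
Filing that ticket can be scripted in the same policy step. A minimal sketch using the GitHub REST API; the OWNER/REPO placeholder, token handling, and message format are assumptions about your setup:

# file_wcet_issue.py (sketch; repository, token handling, and message format are assumptions)
import json
import os
import sys

import requests

report_path = sys.argv[1]  # e.g. wcet.json from the failing run
report = json.load(open(report_path))

body = (
    "WCET violation detected by CI.\n\n"
    f"Estimated WCET: {report['wcet_ns']} ns\n"
    f"Report artifact: {report_path} (see the build's uploaded artifacts)\n"
)

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/issues",  # replace OWNER/REPO
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"title": "WCET budget exceeded: task_control_loop", "body": body},
    timeout=10,
)
resp.raise_for_status()
print("Filed issue:", resp.json()["html_url"])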

Common pitfalls and how to avoid them

  • Trusting single-run maxima: don't rely on a single max sample. Use RocqStat's statistical estimation to avoid under- or over-estimating risks.
  • Attributing noise to code changes: tag environments and rerun tests to confirm regressions.
  • Ignoring sampling sufficiency: ensure enough traces (samples) to estimate rare events — configure CI to increase runs for less frequent worst-cases.
  • Inadequate artifact retention: keep traces and reports for audits and root-cause analysis.

Evidence & experience (real-world tips)

Teams that adopt automated timing checks typically see:

  • Reduced late-stage timing defects by 60–80% when WCET checks are enforced at PR time.
  • Fewer expensive HIL reruns because regressions are caught earlier.
  • Shorter mean time to resolution for timing bugs due to retained traces and immediate feedback in CI systems.

One embedded team we worked with avoided a release-cycle timing incident by adding a nightly RocqStat full-target run that caught a compiler-flag regression introduced weeks earlier, saving weeks of debugging and avoiding recall risk.

Checklist to roll this out in your org

  1. Inventory timing-critical functions and define budgets.
  2. Provision VectorCAST + RocqStat in CI agents or container registry.
  3. Implement trace instrumentation and ensure deterministic test execution.
  4. Create CI jobs for PR-level quick checks and nightly full-fidelity runs.
  5. Define policy scripts to gate builds with clear developer feedback.
  6. Store artifacts and build dashboards for trending and auditing.

Regulatory and compliance notes

For safety standards like ISO 26262 (automotive) and IEC 61508, timing evidence must be traceable and repeatable. Saving traces, WCET reports, environment metadata, and the CI job definitions satisfies much of the reproducibility requirement. As of 2026, toolchain integrations (VectorCAST + RocqStat) simplify creating audit trails that conform to these standards.

Wrapping up: practical takeaways

  • Start small: instrument a few critical functions and gate PRs with a soft-fail policy to build trust.
  • Use combined strategies: pair RocqStat’s statistical estimations with static WCET on the most critical paths.
  • Automate artifacts: keep traces, reports, and environment metadata for reproducibility and audits.
  • Measure trends: surface WCET regressions early with dashboards and review-time diffing.

Next steps & call to action

Vector's Jan 2026 acquisition of RocqStat signals tighter integration and richer automation in VectorCAST — make timing analysis a first-class CI concern this year. Start by adding a single timing check in your PR pipeline and schedule a nightly full-target run. If you want a tailored rollout plan (agent sizing, instrumentation templates, and gating policies), our team can help convert your timing requirements into enforceable CI checks that scale.

Ready to add WCET checks to CI? Export your first trace and run RocqStat in a containerized agent this week — or contact us to get a custom integration playbook and sample pipeline for your stack.


Related Topics

#embedded #CI/CD #verification

mytest

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
