Hi everyone,
I’ve been exploring RIOT’s test infrastructure recently and noticed there’s no automated code coverage reporting in CI. I found the 2020 forum thread on this topic which raised some valid concerns — aggregate coverage numbers are misleading, optimized builds complicate line mapping, and RIOT’s modular compilation means a single percentage hides a lot. I think those concerns are right, and they point toward a more focused scope than “measure everything.”
What I’m proposing
Rather than a codebase-wide coverage number, generate per-module HTML/LCOV reports scoped specifically to tests/unittests on BOARD=native. The approach would be:
- Compile with `-fprofile-arcs -ftest-coverage -O0 -fno-inline`
- Run the unit tests as a Linux process (no hardware needed)
- Generate per-module reports with lcov/genhtml
- Publish as a Murdock artifact or nightly job output
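The first two steps might look like this — a sketch only, assuming a RIOT checkout; how RIOT merges user-supplied `CFLAGS`/`LINKFLAGS` into the build may differ in detail:

```shell
# Sketch: build tests/unittests with gcov instrumentation, then run natively.
# Exporting CFLAGS/LINKFLAGS like this assumes RIOT appends them to its own
# flags; --coverage makes the linker pull in libgcov.
export CFLAGS="-fprofile-arcs -ftest-coverage -O0 -fno-inline"
export LINKFLAGS="--coverage"

make -C tests/unittests BOARD=native all

# On native, `make term` runs the test binary as an ordinary Linux process;
# on exit, gcov writes .gcda counter files next to the object files.
make -C tests/unittests BOARD=native term
```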
Scoping to tests/unittests and native sidesteps the hard problems — no cross-compilation, no hardware, no misleading aggregate. Per-module reports answer a concrete question: “is this function ever called by any test?” That alone would give contributors clear targets for improving test coverage in core/ and sys/.
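Per-module scoping could then be a post-processing step over a single capture, rather than separate instrumented builds — a sketch, with the module list and directory paths as illustrative assumptions:

```shell
# Capture all counters once, then cut per-module tracefiles out of it.
lcov --capture --directory tests/unittests/bin --output-file all.info

for mod in core sys; do
    # --extract keeps only records whose source path matches the pattern,
    # so each report answers "what did the tests touch in this module?"
    lcov --extract all.info "*/${mod}/*" --output-file "${mod}.info"
    genhtml "${mod}.info" --output-directory "coverage/${mod}"
done
```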
I am not proposing:
- A coverage gate on PRs
- Coverage for `drivers/`, `periph/`, or `cpu/` (hardware-dependent, not useful without emulation)
- A single percentage target to chase
Before I put together a proof-of-concept PR, I have a few questions:
- Is there a known issue with gcov on the native port specifically — for example, interaction with the `ucontext`/signal-based threading that might produce misleading results?
- Would a separate `make coverage` target be the right place for this, or does Murdock have a better integration point?
- Is there a preference on where reports get published — Coveralls, Codecov, or self-hosted HTML artifacts?
- Has anyone tried this before and run into a specific wall that killed the effort?
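On the second question, my current mental model of a standalone target is roughly the fragment below — purely illustrative; whether target-specific flag overrides compose with RIOT's build system this way, and variables like `$(BINDIR)`/`$(APPLICATION)`, are assumptions to be checked:

```make
# Hypothetical `make coverage` target; RIOT integration details are guesses.
coverage: CFLAGS += -fprofile-arcs -ftest-coverage -O0 -fno-inline
coverage: LINKFLAGS += --coverage
coverage: all
	$(BINDIR)/$(APPLICATION).elf  # run the tests, producing .gcda files
	lcov --capture --directory $(BINDIR) --output-file $(BINDIR)/coverage.info
	genhtml $(BINDIR)/coverage.info --output-directory $(BINDIR)/coverage-html
```

If Murdock has a natural artifact-collection hook, the `genhtml` output directory would be the thing to publish.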
Would love to hear if this is worth pursuing or if there are blockers I’m not aware of.