RIOT CI megathread

heyya, how do we make RIOT CI better?

(collecting up here, dependencies in parens)

  1. flag a commit / PR as “confirm it doesn’t change any compiled code at all” (5.)
  2. unified rendered output
  3. XML/JUnit output for Murdock
  4. provide proper size comparison (5.)
  5. have latest master built as reference
  6. expand on benchmark collection
    • monitor for regressions (5.)
    • add option to re-run single benchmarks without reflash
  7. code coverage
  8. prevent semantic conflicts

Spontaneous idea: I’d like to flag a commit / PR as “doesn’t change any compiled code at all”.

I could set that in commits simplifying macros and such, but also when const-clarifying code (where I don’t expect changes, but changes might happen because the compiler can now be smarter, and once detected I’d just remove the flag on the PR).

(should have come through mail, going through web while it’s not working)

Should this then check that the binary doesn’t change, and error if it does?

Yes. (And then skip any further tests because the binary is the same.)

This could be hacked into the test result cache, which already provides the infrastructure to store previous/other build info.
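A minimal sketch of the “assert identity after compilation” check: compare the PR’s build artifact byte-for-byte against the cached master reference. The function name and paths are hypothetical; for the comparison to be meaningful, the builds also need to be reproducible (no embedded timestamps or absolute paths).

```python
import hashlib
from pathlib import Path


def binary_unchanged(reference_elf: Path, pr_elf: Path) -> bool:
    """Return True if both build artifacts are byte-identical.

    Hypothetical helper sketching the 'doesn't change any compiled
    code at all' check from this thread: hash the reference build
    (e.g. from the test result cache) and the PR build, and compare.
    Assumes reproducible builds, otherwise the hashes always differ.
    """
    ref = hashlib.sha256(reference_elf.read_bytes()).hexdigest()
    new = hashlib.sha256(pr_elf.read_bytes()).hexdigest()
    return ref == new
```

If the hashes match, all later tests (especially HIL) could be skipped, as suggested below in the thread.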

I would like it to have a common, rendered output, but as far as I am aware, someone is already working on that. However, what about JUnit XML output for tools such as Murdock? Maybe https://github.com/kyrus/python-junit-xml can be used for that. I used it to add XML support to compile_and_test_for_board.
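For illustration, here is what a minimal JUnit-style report looks like, built with only the standard library (python-junit-xml wraps the same format in a nicer TestSuite/TestCase API). The suite name and result tuples are made up:

```python
import xml.etree.ElementTree as ET


def make_junit_report(suite_name, results):
    """Build a minimal JUnit-style XML document as a string.

    `results` is a list of (test_name, error_message_or_None) tuples.
    This is a stdlib sketch of the output format; the python-junit-xml
    library mentioned above produces the same kind of document.
    """
    failures = sum(1 for _, err in results if err)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")
```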

See the artifacts in e.g. https://github.com/RIOT-OS/RIOT/actions/runs/252967164 for example output, and compile_and_test_for_board.py: add optional JUnit XML support · RIOT-OS/RIOT@4cc6963 · GitHub for the actual code change.

Using JUnit, I had trouble representing actual node output due to control characters in there. Is that solved somehow?

Special characters are escaped by that library.

For some stuff, like ping6 -f in the release specs (not using that library but the built-in JUnit XML support in pytest), I also had problems, because the sheer amount of output basically made the XML huge. I solved it by filtering out the characters in question.

It would also be nice to expand on the benchmark collection feature. E.g. if we notice that performance for some benchmark dropped recently, it would be nice to let the CI run a specific application on a specific commit with a given number of repetitions (no re-flash between repetitions, just re-run the test).
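Detecting such a drop could be as simple as comparing medians over the repeated runs against the master reference. Everything here (function name, 5% threshold) is invented for illustration:

```python
from statistics import median


def is_regression(reference_runs, current_runs, threshold=0.05):
    """Flag a benchmark regression (hypothetical sketch).

    `reference_runs` and `current_runs` are raw timings from repeated
    runs (lower is better). Using the median makes the check robust
    against a single outlier run; the 5% default threshold is made up.
    """
    return median(current_runs) > median(reference_runs) * (1 + threshold)
```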

PRs can already be flagged as such using the “CI: skip compile tests” label.

Yeah, we have to rephrase: @chrysn meant “assert that the PR does not change code”.

Ah sorry, should have read the whole discussion. I think “assert” is also the wrong word (I was thinking of assert(code does not change) first). “Check that PR does not change code”?

Isn’t that equivalent? :slight_smile:

Not really. An assertion is a statement made beforehand about some pre-condition. A check is a test of a certain condition. What you described is a check, not an assertion (which is more what the “CI: skip compile tests” label does: it asserts there is no need to recompile).

8 posts were split to a new topic: CI: code coverage

So to rephrase my suggestion in light of the discussion: I’d like to have a “CI: assert identity after compilation” flag.

(This entails that any later test, especially HIL, can be skipped).

I got another one: prevent semantic conflicts.

@aabadie suggested collecting code coverage.

I’ve moved the resulting discussion to a new topic:

Another one for the wishlist: I would like Murdock to comment on PRs with its results. The message content should be helpful to both regular contributors and newcomers. Among other things, it could contain the warnings from the static tests, such as the codespell warnings.

I think this would make it easier for the regulars not to forget the static test warnings. For newcomers, it pulls their attention to the CI, and the message content can explain or hint at what should be fixed.
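Posting such a comment would go through GitHub’s REST endpoint `POST /repos/{owner}/{repo}/issues/{pr}/comments`; the interesting part is formatting a message that works for both audiences. A hypothetical sketch of just the formatting step:

```python
def format_ci_comment(build_ok, static_warnings):
    """Render a markdown comment body for a PR (hypothetical helper).

    `static_warnings` is a list of strings, e.g. codespell findings.
    Only the message formatting is sketched here; actually posting it
    would use GitHub's 'create an issue comment' REST endpoint.
    """
    lines = ["### CI results", ""]
    lines.append(":white_check_mark: build passed" if build_ok
                 else ":x: build failed")
    if static_warnings:
        lines.append("")
        lines.append("Static test warnings (please fix before merging):")
        lines.extend("- " + w for w in static_warnings)
    return "\n".join(lines)
```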
