RIOT & not invented here

@benpicco wrote:

RIOT often leans towards NIH but the result is that things break in unexpected ways once you try to do something other than the one use case they have been tested with.

I see myself as someone who often faces NIH opposition to solutions I propose. My impression is that, contrary to “RIOT often leans towards NIH”, RIOT has a strong and healthy opposition to solutions that are developed “here”.

@benpicco where do you feel we went NIH and it was a bad choice?

A (non-exhaustive) list of examples off the top of my head:

  • sock
  • nanocoap
  • nanocbor
  • GNRC
  • Murdock

you think all of those were a bad choice?

I did not say that. Some have a good reason to be there, some, in hindsight, I personally would have approached differently, and some I probably would nowadays not have done at all, and rather put my expertise into fixing the drawbacks of other projects to use them with RIOT.

On the latter I can only say from my experience that some standard solutions that look unfit at first can sometimes be made fit, with less than or equal effort compared to a complete rewrite from scratch. And oftentimes you cannot make this judgement call on effort at first. The advantage of making it fit: standard solutions benefit from our work. The disadvantage: it is often less fun, and more a task of reading, trying to understand, and adapting to other people’s work than actually doing hands-on work.

In engineering there often is a “yes their solution is good, but I can make it better”-trap, that we all need to internalize as a potential trap. Of course it is not always a trap, but we should be aware that it can be and learn from past mistakes (and successes). And learn why they were mistakes (or successes beyond the “it is just better”-paradigm).

(the question was specifically where we think the choice was bad, because otherwise, there wouldn’t be an issue)

Yeah, you pretty much summed up the theory. Can you be more concrete?

For the given examples, do we remember the reasons for a home-grown solution? What’s the hindsight position on those?


sock

Still think it was a good idea. Maybe we should have gone for a more generic version from the start, though.


nanocoap

Given that its direct competitor microcoap basically died, I think it was a good idea. I personally would have taken better care to abstract gcoap’s access to it, though.


nanocbor

Don’t have enough information on its inception story to say something of value on that.


GNRC

Nowadays I would probably have gone with lwIP and put most of my effort into improving their 6LoWPAN layer.


Murdock

Maybe not with Jenkins (still the de facto standard CI), but I still believe that with the right deployment configuration any other CI could yield similar performance.


IIRC, code size and clumsy API with TinyCBOR were the main motivators at the time. Not something that can be easily “made fit” unless either upstream breaks compatibility or the project is forked.

Wouldn’t that have been a choice that gcoap’s devs should have made? Initially, gcoap was a case of in-tree NIH.

if you replace “any” with “some”, I agree. But somehow that was not a choice back then? The CI world was a lot smaller.

I think gcoap and nanocoap were designed with two very different goals in mind. For nanocoap (like microcoap) it was minimalism; for gcoap it was user accessibility. We need both IMHO: a very lean CoAP implementation for the smallest of devices, and a very dev-friendly implementation for bigger ones, so that anyone can deploy their CoAP apps with ease (side note: I don’t think gcoap is fully there yet).

Sure, instead of gcoap we could have also gone for libcoap, but that has (or maybe by now had) its own drawbacks, such as the use of malloc and the dependence on POSIX sockets, that do not really make it fit for our use cases. Sure, we could have made it fit, but then again, we now have two CoAP implementations: a very lean one and a more user-friendly one that builds on top of the lean one :slightly_smiling_face:. So all in all, in this case it was in my opinion a good decision to go NIH.

The RIOT community is small and mostly composed of people from academia. That means home-grown solutions don’t receive as much testing as established libraries, and the original author will often graduate and move on to other things after a couple of years.

code size and clumsy API with TinyCBOR were the main motivators at the time.

The problem was that NanoCBOR only implemented what was needed to get SUIT running. If some unsuspecting user then thought to use the library for something else so they don’t need to pull in two CBOR parser / encoders, they were in for a surprise.

It’s kind of an affliction of youth: specifically, a lack of knowledge of (and respect for) history, and a lack of perspective.

It’s why people keep re-inventing IPsec, and each time, they learn why it wasn’t so trivial. But it takes them a decade to find all the corner cases.

This is ironic, since an argument for using CBOR in SUIT was that the CBOR code was “free”, i.e. already present, ported, tested and paid for.

wasn’t the main argument that it’s “free” in terms of flash/RAM costs, when already used elsewhere in the stack?

Yes, that’s the point. If the CBOR being used in SUIT is not being used elsewhere, then it’s no longer free.

Yeah, so there were bugs in “our” CBOR implementation. Your reasoning to avoid those bugs would be to not have “our” implementation but re-use what’s out there.

IMO there are strong arguments against always re-using:

  • upstream code is not bug free either
  • upstream code might just not hit the right tradeoffs

Try to imagine RIOT with all of “our” solutions replaced with the closest library equivalent. IMO, it would be a mess, at least in terms of APIs. And we’d have to make the list quite a bit longer.

True. I smirked at that whenever the discussion about the “heaviness” of CBOR for SUIT turned into “I managed to do SUIT/CBOR in only (little number of lines or bytes)”.

Given this initial statement, and the non-conclusive survey, it seems like @benpicco was not happy with NanoCBOR having bugs for a use case it was not tested for. But some (most) of our home-grown solutions were, in hindsight, not a bad choice.

Maybe we can, in the future, add some expectation management for homegrown solutions? As in, more clearly document what something was tested for?

That means home-grown solutions don’t receive as much testing as established libraries

Hm, except for the example of Murdock, I don’t see a lot of established libraries when it comes to low-power IoT.

As for the general discussion, I often had the feeling of a pretty strong NIH syndrome in RIOT - and to be honest, RIOT itself could have probably been a fork of FreeRTOS… So actually I’m pretty much a fan of rather using, integrating, and improving existing solutions than reinventing the wheel. However, experience showed that the code we were trying to re-use was often of very poor quality, and integration and fixing took more time than doing it ourselves - or sometimes at least that’s what we thought (GNRC <-> LWIP).

To me it seems it is more a NIT (not invented there) syndrome. Anything (libraries, non-standard solutions, ideas) that comes from “here” (our community) is usually biased towards “inferior” from the start. Not very welcoming for new ideas, innovation, or just better software.