I'm not very experienced on the driver development side, but enough as a user to see some issues.
- Should a driver be as complete as possible, which of course produces
more code, or should it be kept simple to produce small code? One option
would be to use the pseudomodule approach to enable additional features.
Part of keeping it small is omitting conversion code (see answer below).
How often does it happen that one runs out of flash space? I'm asking because I honestly don't know. I do know that it's probably easier for the user to remove stuff if they run out of flash than to read the device manual and add the missing functions if the driver is incomplete.
We have to differentiate two dimensions here:
1. code size, as in "How often does it happen that one runs out of flash space?" -> IMHO we should not even be asking this question, but always expect that people do. Having a low ROM footprint is one of the most important differentiators of RIOT!
2. feature richness (or poorness): in an ideal world, all drivers would support everything. But as manpower is limited, there is a good reason for having only basic implementations for many drivers, as we cannot force contributors to only provide drivers that are feature-complete... So I think the current approach works fine: merge 'baseline' drivers and 'complete' them on demand.
On some platforms unused code is not linked into the binary.
Unused functions are removed where the linker can determine the function is not used. If you have a big function for configuring device modes, but you never call it with certain parameters and a big chunk of it goes unused, it may not be optimized away (I'm not sure if LTO changes this).
Yes, that's why my preferred method would be to use submodules for certain things. But as always, use common sense: if you have e.g. a read_raw() and a read_converted() function that uses the former, then there is no need to 'sub-module' the read_converted(), as it will only be compiled in if it is used. But for other features, there are often blocks of code in functions that are used in any case (e.g. initialization), which are easily made configurable using submodules.
- Should a driver support at least data-ready interrupts (if possible at
all) to realize event-driven data retrieval?
Yes. Totally yes. Polling is dumb:
I'd say 'dumb' depends very much on the use case. In general I agree that IRQ-based approaches definitely are to be preferred, but there are also very valid use cases where polling is desirable... So not having interrupts in an initial driver is no blocker!
* Goes against low power goals.
* The data is not polled with a clock that is synchronized with the sensor clock (if the sensor has an ADC), meaning unpredictable jitter.
- Should a driver always return normalized/converted data, or rather
return the raw data, leaving the application to convert it? The
conversion is sometimes quite complex. I have seen both approaches.
* Conversion usually results in loss of precision, especially if one limits the word length to something like 16 bits (see answer below).
* Doing conversion "right" (in an unbiased way) is non trivial. You cannot just go around truncating digits.
* It is beyond the scope of the driver, which should handle device communication/configuration only.
* If the raw value is needed after all, the conversion cannot be undone.
* In SAUL, conversion to and from the base-10 floating point format used is really painful.
I think the measurement should be raw, and there should be a way to query the conversion constant. This way the user can choose, and there are not unnecessary computations done.
In control applications, for example, the conversion is totally not necessary, as the conversion constants can be folded into the control system constants.
Again, not black and white, but very much depending on the use case. If the conversion is short and its overhead is negligible, I see no need for additional _raw() functions. For all other cases we already provide differentiated APIs (or at least we should).
The design rules that are clear to me are:
- Drivers have to provide an interface for polling with init and read
that is compatible with SAUL.
Yes. It makes all interfaces consistent. That being said, it is sad that there is no unified way of configuring devices or of doing interrupt-driven measurements.
I think our 'dual' approach (device-specific driver interface with SAUL on top of that) is the way to go. We know how hard it is to map all the tiny device-specific modes into a generic interface (see e.g. periph_), so having something device-specific as the base is very powerful. SAUL was never meant to map all these specific modes etc., but rather to provide something slim but generic for the most obvious device functions.
Further, SAUL is still more or less a 'proof-of-concept' and by no means complete. Extending it with something that is able to handle events has been on the TODO list for a long time, but nobody ever got to propose something...
- Output are always 16 bit integers.
I think it is a bad idea to limit output to 16 bits. ADCs meant for scales, for example, usually have 24 bits. Other applications also demand more than 16 bits. Keep in mind that 16 bits are equivalent to 4.8 decimal digits; take 1 bit for the sign and you are left with 4.5.
We don't in general - just SAUL does. And SAUL is also not designed to handle raw values, but converted values with an accuracy sufficient for 90+% of typical use cases. If one needs more accuracy, use the driver interface directly.
Maybe off topic, but I think we need an IO layer (think SAUL, but more complete) so that the user does not have to directly interact with drivers. It would answer many of your questions, as in that case there would be a well-defined interface that device drivers would have to expose. It is an OS, after all.
Feel free to propose something