Question about expected timer semantics

Hi everyone,

I think everyone agrees that if a relative timer is set, it is expected to run *at least* the amount of time that is specified as interval to timer_set().

That means if now() is e.g. 0 ticks and the timer is set to wait 1 tick, the timer must trigger at 2 ticks: even if now()==0, the actual ("real") time is anywhere between tick 0 and tick 1. If ticks were 1ms each and now_us() is actually 999 (one us before now()==1), triggering at now()==1 would trigger only 1us later and not 1ms. Thus it needs to trigger at 2 ticks (2ms) in order to have the timer wait "at least 1ms". Correct so far?
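
For illustration, here is a minimal sketch of that rule in C (now_ticks()
and relative_target() are made-up names for the sake of the example, not
an existing API):

    #include <stdint.h>

    extern uint32_t now_ticks(void);  /* hypothetical read of the current tick counter */

    /* Compute the absolute target so the timer waits *at least* `interval`
     * ticks: now_ticks() only tells us that real time is somewhere in
     * [now, now + 1), so one extra tick is added as a safety margin.
     * Example: interval == 1 set while now_ticks() == 0 -> target == 2. */
    static uint32_t relative_target(uint32_t interval)
    {
        return now_ticks() + interval + 1;
    }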

Now for the case where a 1ms timer is using a 1us timer as backend. If a 1ms timer is set at now_ms()==0 and now_us()==500 to trigger after 1ms, it would be possible to trigger the timer at now_ms()==1 and now_us()==1500, thus at a "half tick" of now_ms(). Should that be done, or should the conversion be implemented in a way that the 1ms timer behaves the same regardless of whether the underlying timer has 1ms or 1us precision?
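
To make the two options concrete, here is a rough sketch in C (all names
are invented for illustration, not an existing backend API):

    #include <stdint.h>

    extern uint32_t now_us(void);           /* hypothetical us counter */
    extern void set_us_target(uint32_t t);  /* hypothetical backend compare set */

    /* Variant A: use the backend's full resolution. Set at now_us()==500,
     * this fires at now_us()==1500, i.e. at a "half tick" of the ms clock,
     * roughly 1ms after the call (with up to 1us of quantisation error). */
    static void sleep_1ms_fine(void)
    {
        set_us_target(now_us() + 1000U);
    }

    /* Variant B: emulate a real 1ms tick clock. Round up to the next full
     * ms boundary and add one whole tick, so the timer behaves exactly as
     * it would on a 1ms backend (set at now_us()==500, it fires at 2000). */
    static void sleep_1ms_coarse(void)
    {
        uint32_t next_ms_boundary = (now_us() / 1000U + 1U) * 1000U;
        set_us_target(next_ms_boundary + 1000U);
    }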

Kaspar

Hi Kaspar,

As a user/developer I would expect that sleep_ms(1) sleeps at least 1 ms and at most 2 ms. But I would also expect it to be on the lower end. With the first implementation (rounding up to whole ms ticks) it would - on average - sleep for 1.5 ms, which already adds up to 3 ms over two invocations of sleep_ms(1). Therefore I think the lowest granularity should be used wherever possible. If I really cared about exact timers (or was in doubt) I would use nanosecond timers anyway. It should also be consistent: any system should either always use the granularity of the function or always rely on the lowest granularity available on a given architecture. IMHO the latter.
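
Just to put numbers on that (back-of-the-envelope arithmetic only, not
measured data, with sleep_ms() taken as in Kaspar's example):

    #include <stdio.h>

    /* Expected totals for repeated sleep_ms(1) calls:
     * coarse = rounded up to whole ms ticks (sleeps 1..2 ms, 1.5 ms on average),
     * fine   = us-resolution backend (~1 ms each, ignoring overhead). */
    int main(void)
    {
        const int calls = 1000;
        double coarse_ms = calls * 1.5;
        double fine_ms   = calls * 1.0;

        printf("%d x sleep_ms(1): ~%.0f ms (coarse) vs ~%.0f ms (fine)\n",
               calls, coarse_ms, fine_ms);
        return 0;
    }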

I hope this was helpful.

Regards, Robin

Hi all,

Thank you, Kaspar, for bringing up this question. I think there are many more things that should be discussed about timers at this abstract, conceptual level, and there is a need to somehow document what we agreed on (maybe an RDM at some point?).

> I think everyone agrees that if a relative timer is set, it is expected
> to run *at least* the amount of time that is specified as interval to
> timer_set().

Yes. I think it should be clear, but it is worth explicitly noting that this is also the only thing we can actually guarantee at all. E.g. just think of the case where you get descheduled while calling timer_set(), or the case where the timer fires while another interrupt is being served.

> That means if now() is e.g. 0 ticks and the timer is set to wait 1
> tick, the timer must trigger at 2 ticks: even if now()==0, the actual
> ("real") time is anywhere between tick 0 and tick 1. If ticks were 1ms
> each and now_us() is actually 999 (one us before now()==1), triggering
> at now()==1 would trigger only 1us later and not 1ms. Thus it needs to
> trigger at 2 ticks (2ms) in order to have the timer wait "at least
> 1ms". Correct so far?

I agree, but I have some side notes:

1) To be clear: this is only valid if your actual hardware timer is clocked at these discrete intervals (ms in your example). If the hardware is actually counting in µs steps, I wouldn't expect the timer to sleep for ~2ms just because the API takes ms ticks.

2) While this statement doesn't really apply to timers with ms ticks, it is also important to take the overhead into account here, e.g. the minimal time that passes between a timer reaching its target value and the execution of the first instruction that was meant to be delayed by the timer. In the (more common) scenario of a faster-running timer, incrementing the target by 1 would be a rather academic solution, considering it takes e.g. 10 timer ticks to "jump" to the instruction after the timer expires.
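
One way to act on that observation is to compensate for the (roughly
constant) overhead when computing the target; the following is a sketch
with assumed names and numbers, not what any existing implementation
necessarily does:

    #include <stdint.h>

    /* assumed, board-specific estimate of ISR entry + dispatch, in timer ticks */
    #define TIMER_OVERHEAD_TICKS   10U

    extern uint32_t now_fast_ticks(void);     /* hypothetical fast tick counter */
    extern void set_fast_target(uint32_t t);  /* hypothetical compare register set */

    static void timer_set_compensated(uint32_t interval)
    {
        /* start from the "at least" target (+1 safety tick)... */
        uint32_t target = now_fast_ticks() + interval + 1U;

        /* ...then pull it forward by the known overhead: the ISR and dispatch
         * time fills that gap, so the delayed instruction still runs no
         * earlier than `interval` ticks after the call (provided the estimate
         * does not exceed the real overhead). */
        if (interval > TIMER_OVERHEAD_TICKS) {
            target -= TIMER_OVERHEAD_TICKS;
        }
        set_fast_target(target);
    }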

> Now for the case where a 1ms timer is using a 1us timer as backend.
> If a 1ms timer is set at now_ms()==0 and now_us()==500 to trigger after
> 1ms, it would be possible to trigger the timer at now_ms()==1 and
> now_us()==1500, thus at a "half tick" of now_ms(). Should that be done,
> or should the conversion be implemented in a way that the 1ms timer
> behaves the same regardless of whether the underlying timer has 1ms or
> 1us precision?

As already indicated above: yes, that should definitely be done. Even if the API takes ms values I'd still expect the implementation to match the exact value as closely as possible while not firing too early. But adding to that, an API taking a time value should IMO be semantically different from an API that sets an explicit target value in discrete clock ticks. Or, to be more precise: the information describing a timeout duration should be decoupled from the information describing the (required, or wished for) resolution and accuracy of that timeout.
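
A rough sketch of what such a decoupled API could look like (all types
and names are invented for the sake of the example):

    #include <stdint.h>
    #include <stddef.h>

    /* duration says *how long* to wait; accuracy says how much lateness
     * the caller can tolerate; neither is tied to a tick size */
    typedef struct {
        uint64_t duration_ns;
        uint64_t accuracy_ns;
    } timeout_spec_t;

    /* hypothetical setter: picks a backend clock that satisfies the
     * requested accuracy and converts duration_ns into that clock's
     * ticks internally */
    extern void timeout_set(const timeout_spec_t *spec,
                            void (*cb)(void *arg), void *arg);

    static void on_timeout(void *arg)
    {
        (void)arg;
        /* delayed work goes here */
    }

    static void example(void)
    {
        /* "wait 1 ms, anything up to 100 us late is fine" */
        timeout_spec_t spec = {
            .duration_ns = 1000U * 1000U,
            .accuracy_ns = 100U * 1000U,
        };
        timeout_set(&spec, on_timeout, NULL);
    }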

cheers,

Michel