ztimer - a new high-level timer for RIOT

Hi Michel,

Hi Kaspar,

Would it make sense to make a micro conference? Get everyone interested in improving timers in a room and lock it until solutions are presented?

Not convinced about the "lock in a room" :wink: - but otherwise: absolutely yes!

What do you think about an RDM PR? We could just use your design document as a starting point.

Let me propose the following: we make merging ztimer *as a distinct and optional module* independent of changing RIOT's default timer implementation. The latter can be done within the RDM.

IMO, ztimer is quite usable already, and even if only used as multiplexer for the RTT, it provides a lot of benefit. I don't see a reason not to merge it (when it's been reviewed properly), as an optional module.

We can, in parallel, work on the RDM. If it turns out there's some better option than ztimer, no harm was done, I'll of course happily accept that. I already have a basic xtimer benchmark application (for timing set()/remove()/now() in pathetic cases), which can provide at least some numbers. I'll PR that today.

Regarding "fixing" xtimer vs a rewrite from scratch, I'd like to point out that #9503 alone changes (well, removes) ~450 lines of *logic code*. That is three quarters of xtimer's total code, including definitions and prototypes. IMO, we need to acknowledge that changing that amount of code does not result in the same code base. We should call it "ytimer". The amount of reviewing, validation and testing should be the same as for a rewrite. Or maybe just be measured in "amount of lines changed".

Regarding whether a "clock" parameter makes sense, this is something we should explore within the RDM. I think you need to prototype a function that chooses a suitable frequency from multiple options (without relying on an explicit parameter for that). (I'd actually suggest you use ztimer as basis, as there you have multiple, multiplexed backends using the same API. :slight_smile: ). You might even be successful. At that point, an RDM can decide if that functionality should move down the layers.

More details following:

periph_timer IMO should be the slimmest layer of hardware abstraction that makes sense, so users that don't want to write direct, non-portable, register-based applications get the next "closest to the metal".

Agree, but there are some things that we should add to the periph_timer. E.g. adding support for dedicated overflow interrupts together with an API to read the corresponding IRQ status bit. The high level timer would benefit from that on many platforms. E.g. ztimer wouldn't require the code for the time partitioning mechanism then. But that's yet another part of the story...

Yes. Also, do all platforms support that overflow interrupt? I don't think so, in which case this cannot be relied upon to be available.

Also the term "frequency conversion" is a bit misleading I think. With a discrete clock you won't be able to just precisely convert a frequency to any other frequency in software. Especially if you want to increase the frequency - it will just be a calculation.

Yup. Frequency conversion makes sense if an application wants to sleep 12345ms, but the timer is clocked at e.g., 1024Hz.

That is one of the main issues with an API that doesn't have the clock parameter, but a fixed (probably high) frequency, as xtimer has.

Of course there is a difference. Here I just wanted to point out that the quality defect of xtimer not mapping to multiple peripherals is not directly tied to its API.

Further, adding a convention to the xtimer API would allow for automatic selection of an appropriate low-level timer. E.g. think of something like "will always use the lowest-power timer that still ensures x.xx% precision".

That's what ztimer does.

Again, this is just a simple example to explain what I think we should also consider as part of the solution. Forcing the application / developer to select a specific instance also has its downsides.

With convention ("ZTIMER_MSEC provides ~1ms accuracy"), the application developer chooses the intended precision. Without that, and with a fixed API time base, a request for 1s (i.e. 1000000us) cannot be distinguished from one for 1000ms or 1000000us, so the intended precision is lost. Maybe it can, this is where you can maybe come up with a prototype.

I mostly agree. But as I tried to clarify before: ztimer is mixing "relevant and valid fixes" and "introducing new design concepts". We should strive for being able to tell what is done because it fixes something and what is done because the concept is "considered better". Next to that the "considered better" should then be put to the test.

Ok. That might be necessary to choose one implementation over xtimer. -> RDM

For timeouts that are calculated at runtime, do we really want to always add some code to decide which instance to use?

If there are multiple instances, there is code that selects them. The question would be, do we want

a) to provide an xtimer style API that is fixed on a high level, combined with logic below that chooses a suitable backend timer

or

b) add a "clock" parameter that explicitly handles this distinction.

Yeah, that is one key thing. I think that (a) would in most cases be preferable.

To elaborate, think about this: some low-level instances may not be present on all platforms -> an application that uses a specific instance then just doesn't work?

Compile-time error.

-> ztimer then always adds a conversion instance that just maps to another one?

... if configured to do at compile time, and if necessary.

- handling of dynamic time values that can be a few ms to minutes (e.g. protocol backoffs) -> you always need "wrapping code" to decide for a ztimer instance
- i.e. sleeping a few ms needs to be done by the HF backend to be precise
- sleeping minutes would optimally be done by an LF backend

Both can be handled by a millisecond timer.

-> wouldn't it make sense to move this (probably repeated) code down, from the app, to the high level timer

If the range needed exceeds e.g., 32 bits of milliseconds, which can represent more than 49 *days*, such code might make sense.

It may be better to not tell the API "use this instance", but instead something like "please try to schedule this timeout with xx precision".

That "precision" is exactly what the instance provides, with convention. For specifying that, alternatives are "xxtimer_set_usec(t, value)", or "xxtimer_set(t, value, 1000000LU)", .... Which actually map nicely on ztimer's api.

If no instance is available that can do that, the timer just "does its best".

That is what a compile-time configured ZTIMER_USEC on 32kHz would do, if that is desirable.

If it is available, it uses the most-low-power instance available that covers the requirement.

In ztimer's design, the most low power timer capable of doing e.g., millisecond precision is provided as ZTIMER_MSEC. No need for runtime logic.

You maybe already got that from the above statements, but that's not what I meant. I'm referring to "runtime requirements of one specific timeout" that may differ based on the actual value. Example: A protocol backoff that is 200ms probably requires some HF timer. Then, for whatever reason, this may increase to 10 seconds and using an LF timer becomes practical.

milliseconds can usually be set on low-power timers (e.g., a 32kHz RTT). 32bit range at 1kHz means 49 days. No need to switch to a different clock.

Other examples would be a 200 - 2000us timer. That cannot be set on a millisecond clock, as now=500us + 2000us is 2500us, whereas now=0.5ms (==0ms) + 2ms is somewhere between 2 and 3 ms. With an "at least" semantic, the timer would choose 3ms as target time, which is up to 1ms (1000us) off. This *could* be solved by synchronizing twice, e.g., set a timer to the next ms tick, sleep 1ms, then sleep the remainder of the timeout in us precision. Doable, but tricky. Especially while keeping accuracy guarantees over two clock domains.
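As a minimal sketch of that "at least" rounding (hypothetical helper, assuming a 1ms clock whose tick phase is unknown):

```c
#include <stdint.h>

/* Round a microsecond timeout up to ms ticks such that it can never
 * fire early. Because the current position inside the running ms tick
 * is unknown, one extra tick has to be added. 2000us thus becomes
 * 3 ticks: up to 1000us later than requested, but never earlier. */
static uint32_t us_to_ms_ticks_at_least(uint32_t timeout_us)
{
    uint32_t ticks = (timeout_us + 999) / 1000; /* round up to full ms */
    return ticks + 1;                           /* unknown tick phase  */
}
```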

If an application schedules <1000us...>2**32us, well yeah, it must be using 64bit values already. In that case, we might need a 64bit API, or *really* let the application handle that.

Wouldn't it be nice if ztimer then automatically allows going into power-down because it is practical? (all that without adding the wrapping code to decide on the instance in the application)

Sure. But honestly, that is *one* function, which would in any sane architecture be completely on top of something like ztimer.

"let our high level timer do it's high level stuff to automatically map it to what is at hand" is maybe possible.

Now we are talking!

Please prototype!

I think that could, if successful, be the heart of a ztimer 64bit extension.

Please don't tell us this is not-fixable by design. If so, what is it that makes these unfixable?

What does *fix* mean? If I rename ztimer to xtimer, would that count as a "fix"?

If the API wouldn't change and the provided functionality stays the same, we could come to an agreement :stuck_out_tongue:

Now that's easy. :wink:

The IoT hardware? Our requirements? Your opinion? Can we write that down? What are the assumed conditions and scenarios for this to be true? What are the measurable ups and downs here?

We are talking about the implementation, right? How many us is one 32768 Hz tick? Something around 30.517578125. When used as an internal normalization base, this is weird.

I don't understand this.

Nevermind, I was doing the math backwards.

If now the same thing happens with ztimer, we didn't learn from the past.

If what happens? If in 5 years, we have learned where ztimer doesn't cut it and come up with ztimer+ (for lack of letters) that improves the situation substantially, again?

No, I mean if "having a non functional timer for many years" happens again. I think the way xtimer did its job over these years is not something we want to repeat.

So an RDM would have prevented this? I doubt it. xtimer's ISR bugginess was known *for years* even without an RDM. We were just inexperienced (or ignorant, or incompetent, your pick).

If ztimer solves all the problems, we didn't learn either: We weren't capable of isolating, fixing and documenting the core problems. We weren't agile. We didn't improve and evolve our code, we just replaced it without really knowing if and why this is required. "Because xtimer didn't work" is obviously not what I'm searching for here, and the "64 bit ain't nothin' for the IoT" discussion is independent of "making a high level timer functional".

We don't improve and evolve code, we improve and evolve a system.

A system that is made of code...

Sure, but in the end, we should prioritize the system, not its components. If the system consists of a component that has two alternative implementations, blah blah blah ... :wink:

At some point, we need to be pragmatic.

Yes, but at some point we should also take a step away and recap instead of only implementing.

That is valid. Please acknowledge that while recapping, I can already write applications that have both HF and LF timers, while the recappers might be stuck, literally, because xtimer is broken.

Kaspar

Hi,

here are my thoughts on the discussion.

# Not Getting Lost in Requirement Analysis and Problem Specifications

A good requirement analysis is a valuable tool for both development and evaluation of a software component. But once a solid understanding of the problem to solve is reached, additional effort put into a requirement analysis doesn't yield enough additional benefit to justify the work. And every requirement analysis is flawed: Assembling a complete list of all current requirements in RIOT's code base is hard. Predicting how future developments influence the requirements we have is impossible. There has to be a point when we just stop collecting more requirements and consider the current list as good enough; a perfect, complete and definite result cannot be reached.

High level timer APIs are no new concept and the basic goals and requirements are well understood. On top of these basic requirements, benchmarks could be a good tool to quantify how well specific timer implementations perform. To me, writing a set of benchmarks would be more useful than additional requirements collection. Not only would it allow us to see how well ztimer/xtimer are performing. They will also be useful for development and reviews of future PRs targeting RIOT's timer framework.

In the end, RIOT will be judged upon the features it provides. Not on the features on RIOT's to-do lists. Not on how tough and rigorous the requirements are that we have formulated for some yet-to-be-implemented feature. And not on how many documents and emails have been written about a yet-to-be-implemented feature.

# Complete Rewrites can be the Best Option

@Michel: You seem to dismiss the development approach of a complete rewrite fundamentally, apparently for ideological reasons. While most of the time a complete rewrite is not the best option, there are good reasons for doing so in some cases: When fundamental architectural changes are needed, the effort of a rewrite can be less than that of iteratively transforming the architecture. In such cases, a rewrite is the better option.

I don't think that there is a reason to see a complete rewrite as some kind of failure that we should try very hard to prevent for the future. And ultimately, any attempt to prevent rewrites can fail. Any design decision is based on the current experience, the available tools, and the current requirements. But with time, more experience is gained, better tools become available, and completely new use cases with potentially fully distinct requirements pop up. Perfectly solid design decisions can be rendered obsolete by this. And sometimes, fundamental architecture changes are needed in response. In such cases, a rewrite can be the best option. And this could happen very well without any mistakes being made in the original implementation.

# Suggestion of a Path Forward

There seems to be agreement that an additional parameter is needed in the API. How about we specify an implementation independent high level timer API based on xtimer with an additional parameter? The type of the first argument could just be some `foo_t` that is provided by the implementation of that API. The implementation could be freely chosen as a module, and xtimer could be the default backend when the user has not specified one.

I bet it is trivial to extend xtimer in a way that complies with that API by just adding an ignored parameter. Everyone with the motivation, knowledge and dedication to fix the discussed issues in xtimer still has the opportunity to do so. And providing ztimer as an alternative implementation of the same API would not hurt anyone.

Not defining the semantics and contents of the first parameter sounds a bit crazy. But I bet that 90% of all use cases will only use two different values there: one low power, low resolution setting for long sleeps and one high power, high precision setting, e.g. in drivers. Just providing these two settings as global variables (or preprocessor macros) under defined names would be enough to cater to 90% of the use cases in an implementation agnostic way. (And the other use cases could provide their own global variables for the additional settings, so that only the assignment of those variables depends on the used implementation.) And if there is indeed enough interest in fixing xtimer, this would allow them to freely decide if they want this parameter to be a pointer to a clock struct as in ztimer, or maybe a bitmask holding a set of flags, or something completely different.
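To illustrate, a sketch of what such a header could look like (every name in here is invented; the real names, types and semantics would be up to the implementations):

```c
/* hltimer.h - hypothetical implementation-independent timer API */
#include <stdint.h>

/* opaque clock selector: a struct pointer in a ztimer-backed build,
 * maybe a flag bitmask in an xtimer-backed one */
typedef struct hltimer_clock hltimer_clock_t;

/* the two settings expected to cover ~90% of the use cases */
extern hltimer_clock_t *const HLTIMER_LOW_POWER;  /* long sleeps      */
extern hltimer_clock_t *const HLTIMER_HIGH_PREC;  /* e.g. in drivers  */

typedef struct hltimer hltimer_t;

void hltimer_set(hltimer_clock_t *clk, hltimer_t *t, uint32_t timeout);
void hltimer_remove(hltimer_clock_t *clk, hltimer_t *t);
void hltimer_sleep(hltimer_clock_t *clk, uint32_t duration);
```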

It may very well be possible that the future will bring use cases with mutually exclusive requirements. An implementation independent API would allow us to just let the users choose what they need.

Kind regards, Marian

Hi Kaspar, Marian,

thanks for responding.

Let me propose the following: we make merging ztimer *as a distinct and optional module* independent of changing RIOT's default timer implementation. The latter can be done within the RDM.

IMO, ztimer is quite usable already, and even if only used as multiplexer for the RTT, it provides a lot of benefit. I don't see a reason not to merge it (when it's been reviewed properly), as an optional module.

We can, in parallel, work on the RDM. If it turns out there's some better option than ztimer, no harm was done, I'll of course happily accept that.

I'd favor not merging it before we've worked on the RDM, as it may show that there are things that should be done differently. But with our current state of xtimer there is not much harm to be done anyway :wink: Also ztimer is providing solutions to problems we currently have no alternatives for - so I won't be blocking this :wink:

I already have a basic xtimer benchmark application (for timing set()/remove()/now() in pathetic cases), which can provide at least some numbers. I'll PR that today.

perfect!

Regarding "fixing" xtimer vs a rewrite from scratch, I'd like to point out that #9503 alone changes (well, removes) ~450 lines of *logic code*. That is three quarters of xtimer's total code, including definitions and prototypes. IMO, we need to acknowledge that changing that amount of code does not result in the same code base. We should call it "ytimer". The amount of reviewing, validation and testing should be the same as for a rewrite. Or maybe just be measured in "amount of lines changed".

Agree. (Just to be sure: I understand "calling it ytimer" in a metaphorical way)

Regarding whether a "clock" parameter makes sense, this is something we should explore within the RDM. I think you need to prototype a function that chooses a suitable frequency from multiple options (without relying on an explicit parameter for that). (I'd actually suggest you use ztimer as basis, as there you have multiple, multiplexed backends using the same API. :slight_smile: ). You might even be successful. At that point, an RDM can decide if that functionality should move down the layers.

Sounds reasonable. To clarify a bit: I'm not saying we shouldn't have an additional parameter. I'm just saying that it may be more flexible to decouple the precision parameter from the instance parameter. Explicit values instead of conventions might also help usability.

periph_timer IMO should be the slimmest layer of hardware abstraction that makes sense, so users that don't want to write direct, non-portable, register-based applications get the next "closest to the metal".

Agree, but there are some things that we should add to the periph_timer. E.g. adding support for dedicated overflow interrupts together with an API to read the corresponding IRQ status bit. The high level timer would benefit from that on many platforms. E.g. ztimer wouldn't require the code for the time partitioning mechanism then. But that's yet another part of the story...

Yes. Also, do all platforms support that overflow interrupt? I don't think so, in which case this cannot be relied upon to be available.

We are currently working on transforming "I don't think so" into numbers. It will still take some time to work through all of those, *cough*, lovingly written data sheets. I'll provide the collected information to the RDM. Currently it looks like *most* platforms have this option. It would absolutely make sense to use that feature if available and only work around it if needed. Especially if this can be decided at compile time like with the ztimer instances.

Also the term "frequency conversion" is a bit misleading I think. With a discrete clock you won't be able to just precisely convert a frequency to any other frequency in software. Especially if you want to increase the frequency - it will just be a calculation.

Yup. Frequency conversion makes sense if an application wants to sleep 12345ms, but the timer is clocked at e.g., 1024Hz.

Why? This won't change the fact that the timer is clocked at 1024 Hz so there is no frequency conversion. It's a conversion from period to ticks. You won't suddenly get accurate discrete ms ticks because of this calculation.
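In code, that calculation could look like this (hypothetical helper; note the result is a tick count that only approximates the requested period):

```c
#include <stdint.h>

/* Convert a ms timeout to ticks of a 1024 Hz timer, rounding up so
 * the sleep is never shorter than requested. For 12345 ms:
 * ceil(12345 * 1024 / 1000) = 12642 ticks ~= 12345.7 ms, i.e. the
 * period is approximated, not "converted" to an exact ms clock. */
static uint32_t ms_to_ticks_1024hz(uint32_t ms)
{
    return (uint32_t)(((uint64_t)ms * 1024 + 999) / 1000);
}
```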

Again, this is just a simple example to explain what I think we should also consider as part of the solution. Forcing the application / developer to select a specific instance also has its downsides.

With convention ("ZTIMER_MSEC provides ~1ms accuracy"), the application developer chooses the intended precision. Without that, and with a fixed API time base, a request for 1s (i.e. 1000000us) cannot be distinguished from one for 1000ms or 1000000us, so the intended precision is lost. Maybe it can, this is where you can maybe come up with a prototype.

I'm not saying we should stay with a fixed time base API. Regarding "1s cannot be distinguished from 1000000 µs": that's not required if you specify the required accuracy as a percentage. Coincidentally, that's also how the accuracy of the clocks feeding the timer peripherals is described.

-> wouldn't it make sense to move this (probably repeated) code down, from the app, to the high level timer

If the range needed exceeds e.g., 32 bits of milliseconds, which can represent more than 49 *days*, such code might make sense.

Yeah agree, the example with a µs timeout that may move to an ms range is more likely to actually happen.

If no instance is available that can do that, the timer just "does its best".

That is what a compile-time configured ZTIMER_USEC on 32kHz would do, if that is desirable.

Understood, though this somehow becomes a problem with convention-based guarantees then. But as this is done at compile time, we could at least print a warning. Not sure if there are use cases for also accessing this information at run time (?).

You maybe already got that from the above statements, but that's not what I meant. I'm referring to "runtime requirements of one specific timeout" that may differ based on the actual value. Example: A protocol backoff that is 200ms probably requires some HF timer. Then, for whatever reason, this may increase to 10 seconds and using an LF timer becomes practical.

milliseconds can usually be set on low-power timers (e.g., a 32kHz RTT). 32bit range at 1kHz means 49 days. No need to switch to a different clock.

If your alarm can be set with that precision, yes. Also consider that using a higher resolution for the sub-second counting on the RTC increases power consumption.

Other examples would be a 200 - 2000us timer. That cannot be set on a millisecond clock, as now=500us + 2000us is 2500us, whereas now=0.5ms (==0ms) + 2ms is somewhere between 2 and 3 ms. With an "at least" semantic, the timer would choose 3ms as target time, which is up to 1ms (1000us) off. This *could* be solved by synchronizing twice, e.g., set a timer to the next ms tick, sleep 1ms, then sleep the remainder of the timeout in us precision. Doable, but tricky. Especially while keeping accuracy guarantees over two clock domains.

Yes, that's tricky. We should add some thoughts on this to the RDM.

If an application schedules <1000us...>2**32us, well yeah, it must be using 64bit values already. In that case, we might need a 64bit API, or *really* let the application handle that.

Agree.

Wouldn't it be nice if ztimer then automatically allows going into power-down because it is practical? (all that without adding the wrapping code to decide on the instance in the application)

Sure. But honestly, that is *one* function, which would in any sane architecture be completely on top of something like ztimer.

Not sure on this; after all it is a high level timer, and deciding whether to go to low power can be very different on different hardware.

"let our high level timer do it's high level stuff to automatically map it to what is at hand" is maybe possible.

Now we are talking!

Please prototype!

I think that could, if successful, be the heart of a ztimer 64bit extension.

Good, I might play around with this after the more important stuff is resolved.

If now the same thing happens with ztimer, we didn't learn from the past.

If what happens? If in 5 years, we have learned where ztimer doesn't cut it and come up with ztimer+ (for lack of letters) that improves the situation substantially, again?

No, I mean if "having a non functional timer for many years" happens again. I think the way xtimer did its job over these years is not something we want to repeat.

So an RDM would have prevented this? I doubt it. xtimer's ISR bugginess was known *for years* even without an RDM. We were just inexperienced (or ignorant, or incompetent, your pick).

Yes, I honestly think it could have helped on that end to thoroughly discuss the key problems with a broader audience of interested and competent people. That's why I think it is worth doing a bit more of this thinking upfront.

At some point, we need to be pragmatic.

Yes, but at some point we should also take a step away and recap instead of only implementing.

That is valid. Please acknowledge that while recapping, I can already write applications that have both HF and LF timers, while the recappers might be stuck, literally, because xtimer is broken.

Yes, I totally acknowledge that and I'm really grateful that you spent your time on that. I also never meant to say the full replacement or the overall idea is flawed. I'm just saying it would be awesome to not repeat the process of introducing a new implementation and then trying to fix it for years till we start over from scratch again^^

# Not Getting Lost in Requirement Analysis and Problem Specifications

A good requirement analysis is a valuable tool for both development and evaluation of a software component. But once a solid understanding of the problem to solve is reached, additional effort put into a requirement analysis doesn't yield enough additional benefit to justify the work. And every requirement analysis is flawed: Assembling a complete list of all current requirements in RIOT's code base is hard. Predicting how future developments influence the requirements we have is impossible. There has to be a point when we just stop collecting more requirements and consider the current list as good enough; a perfect, complete and definite result cannot be reached.

High level timer APIs are no new concept and the basic goals and requirements are well understood. On top of these basic requirements, benchmarks could be a good tool to quantify how well specific timer implementations perform. To me, writing a set of benchmarks would be more useful than additional requirements collection. Not only would it allow us to see how well ztimer/xtimer are performing. They will also be useful for development and reviews of future PRs targeting RIOT's timer framework.

In the end, RIOT will be judged upon the features it provides. Not on the features on RIOT's to-do lists. Not on how tough and rigorous the requirements are that we have formulated for some yet-to-be-implemented feature. And not on how many documents and emails have been written about a yet-to-be-implemented feature.

(..) But once a solid understanding of the problem to solve is reached, additional effort put into a requirement analysis doesn't yield enough additional benefit to justify the work.

Yes, *once*...

High level timer APIs are no new concept and the basic goals and requirements are well understood

Yeah, that's why now we have the 4th (?) high level timer implementation and the previous three failed!? This time everything is going to be different *fingers crossed*...

In the end, RIOT will be judged upon the features it provides. Not on the features on RIOT's to-do lists. Not on how tough and rigorous the requirements are that we have formulated for some yet-to-be-implemented feature. And not on how many documents and emails have been written about a yet-to-be-implemented feature.

I never said we should do this RDM discussion forever and add things that are infeasible or we can't implement.

# Complete Rewrites can be the Best Option

I agree. As said above I never meant to question this because of "ideological" reasons.

I don't think that there is a reason to see a complete rewrite as some kind of failure that we should try very hard to prevent for the future.

I even agree here. Maybe I didn't express myself well. I guess anyone who ever wrote code knows rewrites are sometimes the absolute best thing to do :wink: The failure is not the rewrite itself. More the process: that we didn't focus on fixing the problems. We lived with a broken implementation for ages and now we mix required fixes with introducing a new architecture.

But I bet that 90% of all use cases will (...)

It would be good if - for the RDM - we can work together on converting such statements into at least some quantifiable data on how (we expect) timers are commonly used.

Regarding adding an implementation independent API on top of xtimer: spending work on just making xtimer as-is functional already feels mostly wasted; let's not make it worse^^ Wrapping a non-functional timer in another layer of abstraction won't really help us now. Making it functional would be way more important.

Marian: would you also be interested in attending a timer meeting?

I probably won't really have time to work on any of those things till the beginning of next year. So how about scheduling a meeting in January? Feel free to contact me off-list for that.

cheers

Michel

As far as I understand, the new timer implementation would not use 64 bit for the timer and the user is responsible for not overrunning the timer? Note that I haven't looked at the implementation yet, so forgive my ignorance.

Over the years my experience is that it's not a good idea to burden the user with the knowledge of timer overflow. The latest example was a bunch of HPE SSDs that stop working after 32,768 hours (a little less than 4 years). Bad when you have several of them in a RAID for redundancy :slight_smile: https://www.techradar.com/news/hpe-ssd-drives-could-fail-at-this-critical-moment

So I'd vote for the (small) additional overhead, even on 8-bit µCs due to safety reasons. Unless the implementation can produce the correct timer representation with, say, C-preprocessor magic at compile-time.

Ralf

Hi Ralf,

As far as I understand, the new timer implementation would not use 64 bit for the timer and the user is responsible for not overrunning the timer? Note that I haven't looked at the implementation yet, so forgive my ignorance.

I think you got it right. In ztimer, the clocks' now() value will overflow after 32 bits.

E.g., `t1 = now(); t2 = now(); assert(t1 < t2)` might fail.

Setting timer values is always relative (target = now() + X), and X can be any 32bit value, the user does not need to handle any overflow there.
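A small sketch of why the relative scheme survives the wrap (illustrative only; relies on unsigned 32-bit arithmetic being modular):

```c
#include <stdint.h>

/* "now - start" is computed modulo 2^32, so the elapsed time is
 * correct even if now() overflowed after the timer was set. */
static int timeout_expired(uint32_t start, uint32_t now, uint32_t timeout)
{
    return (uint32_t)(now - start) >= timeout;
}
/* e.g. start = 0xFFFFFF00, now = 0x00000100 (wrapped):
 * now - start == 0x200, so a 0x200-tick timeout expires right here. */
```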

Over the years my experience is that it's not a good idea to burden the user with the knowledge of timer overflow. The latest example was a bunch of HPE SSDs that stop working after 32,768 hours (a little less than 4 years). Bad when you have several of them in a RAID for redundancy :slight_smile:

Failing after 32768h strongly points to using a signed 16bit variable for the hours count. That's an odd choice to begin with.

So I'd vote for the (small) additional overhead, even on 8-bit µCs due to safety reasons. Unless the implementation can produce the correct timer representation with, say, C-preprocessor magic at compile-time.

You mean, you'd vote for all timer values to be always 64bit? I don't think the overhead is small enough to justify that, but that needs to be evaluated. The increased CPU usage needs benchmarks, in RAM we can estimate easily.

IMO, users should just not use these timer values as timestamps. That maybe needs to be stated at a prominent place (or multiple) in the documentation. For that, we can (should?) re-introduce a 64bit API in ztimer.

It might make sense to add a (very large or possibly randomized) constant offset to all 32bit timers, maybe limited to development builds, that causes the timer value to overflow early(er), so applications would break early in tests.
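Something like this sketch, perhaps (DEVELHELP is RIOT's existing debug flag; the offset macro and helper are made up):

```c
#include <stdint.h>

/* Shift now() so that the 32-bit wrap happens within the first minute
 * of a test run instead of after ~71 minutes at 1MHz. */
#ifdef DEVELHELP
#define ZTIMER_NOW_OFFSET   (UINT32_MAX - 60UL * 1000000UL)
#else
#define ZTIMER_NOW_OFFSET   (0UL)
#endif

static inline uint32_t debug_now(uint32_t hw_now)
{
    return hw_now + ZTIMER_NOW_OFFSET;  /* modular add: wraps early */
}
```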

Kaspar

Hi,

in order to push this forward I just opened a PR for an RDM: RIOT-OS/RIOT#12970 ("[WIP, RFC] doc/memos: Added RDM on high level timer API requirements and common features"). This PR is in an early state and both feedback and help will be greatly appreciated.

IMO, users should just not use these timer values as timestamps. That maybe needs to be stated at a prominent place (or multiple) in the documentation. For that, we can (should?) re-introduce a 64bit API in ztimer.

I don't think that 64 bit is needed for every feature of a high-level timer API; especially for delaying the execution of the calling thread or for software timers. A 64 bit system clock would however be quite nice in some use cases.

E.g. adding support for dedicated overflow interrupts together with an API to read the corresponding IRQ status bit. The high level timer would benefit from that on many platforms. E.g. ztimer wouldn't require the code for the time partitioning mechanism then. But that's yet another part of the story...

Yes. Also, do all platforms support that overflow interrupt? I don't think so, in which case this cannot be relied upon to be available.

This would not strictly be needed. In case either an overflow interrupt or an alarm interrupt is supported, the other could be implemented by the timer driver. (If only an overflow interrupt is supported, the driver could set the current timer value so that the overflow interrupt happens exactly when the alarm should happen. And if only an alarm is supported, the driver could multiplex two alarms, with one being the overflow.) It would, however, need to be determined if this would indeed be better.
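A rough sketch of both emulations (all driver hooks here are hypothetical; a free-running 32-bit up-counter is assumed):

```c
#include <stdint.h>

/* assumed hardware/driver hooks (hypothetical): */
extern void hw_counter_write(uint32_t value);
extern void handle_overflow(void);
extern void handle_alarm(void);

/* Case 1: only an overflow IRQ exists. Preload the counter so that it
 * overflows (reaches 0) exactly when the alarm is due. */
void set_alarm_via_overflow(uint32_t alarm_in_ticks)
{
    hw_counter_write((uint32_t)0 - alarm_in_ticks);
}

/* Case 2: only a compare/alarm IRQ exists. Multiplex two alarms, with
 * a permanent one at counter value 0 playing the overflow interrupt. */
void on_compare_irq(uint32_t compare_value)
{
    if (compare_value == 0) {
        handle_overflow();   /* emulated overflow event */
    } else {
        handle_alarm();      /* the actual user alarm   */
    }
}
```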

Marian: would you also be interested in attending a timer meeting?

If I can find the time, sure! Some weekend in the first half of January might work for me. How about Kaspar or Michel organize a room and use one of the many online scheduling services to find a date most interested developers can join?

Kind regards, Marian