Scheduler: Supporting Cooperative Threading

Hello Rioters,

I'm trying to figure out how cooperative multithreading / fibers / coroutines could be used in RIOT. There is already something in the scheduler, but right now it looks more like an artifact of the implementation than a proper feature.

Why

Hi Juan,

The current RIOT scheduler will only switch between threads of the same priority if there is an explicit yield, or if a higher priority thread preempts the running one.

RIOT's scheduler used to work like that: any preemption by a higher priority thread would advance the current priority's circular runqueue. That was fixed years ago. Nowadays, a thread's "friends" will only be scheduled if it explicitly calls thread_yield() or blocks.
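To make that concrete, here is a minimal sketch (made-up names, assuming nothing beyond the standard thread.h API) of two equal-priority threads handing the CPU back and forth; neither ever runs while the other is between yields:

```c
#include <stdio.h>
#include "thread.h"

static char t2_stack[THREAD_STACKSIZE_DEFAULT];

/* Runs at the same priority as main: it only ever gets the CPU
 * when main explicitly yields (or blocks). */
static void *t2_handler(void *arg)
{
    (void)arg;
    while (1) {
        puts("t2: my turn, yielding back");
        thread_yield();        /* advance the priority's runqueue again */
    }
    return NULL;
}

int main(void)
{
    thread_create(t2_stack, sizeof(t2_stack),
                  THREAD_PRIORITY_MAIN,   /* same priority as main */
                  0, t2_handler, NULL, "t2");

    while (1) {
        puts("main: my turn, yielding");
        thread_yield();        /* the only way t2 gets scheduled */
    }
    return 0;
}
```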

Coroutines make it possible to program asynchronous code in a blocking style - see "await". This is more natural and easier than using callbacks.

How does it compare to sending / receiving messages?

This is almost cooperative multithreading (within the same "priority group"), except there is no guarantee that after a thread in a group is preempted, it will be that thread and not another with the same priority that gets resumed. Maybe that is the current behavior, but I'm having some trouble understanding the scheduler code.

It *should* be the current behaviour, but I'm sure there are both platforms and modules that call "thread_yield()" instead of "thread_yield_higher()".

Assigning the same priority to two or more threads is usually not a good idea.

Can't really argue with that, can you? :wink: More seriously, that note can definitely be improved.

A good starting point would be to guarantee that threads with the same priority get cooperatively scheduled 100% of the time. This means that if one thread is preempted by a higher priority task, no other thread but that one will get resumed. In other words, the only way to switch between threads with the same priority is to explicitly yield from one.

As said, that guarantee *should* be in place, if calling a blocking function can be considered "yielding".

We might consider making even the blocking of a thread not advance that priority's run queue, but I'd be reluctant to change the semantics that much.

Coroutines / Fibers do not have to be full-fledged threads. The TCB can be simpler and some objects can be shared with other fibers.

I count a 12-byte minimum for the TCB on 32-bit architectures. There's not much more that can be shaved off, and it would not have a significant impact: the stack space needed to store a thread's registers on a context switch, on top of the stack itself, already makes up the largest part of a thread's overhead.
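Just to illustrate where a number like that comes from - this is not RIOT's actual thread_t, only a hypothetical layout of what a minimal 32-bit control block might need:

```c
#include <stdint.h>

/* Illustrative only -- NOT RIOT's thread_t. A guess at what a minimal
 * control block on a 32-bit architecture could get away with. */
typedef struct minimal_tcb {
    void *sp;                     /* saved stack pointer             (4 bytes) */
    struct minimal_tcb *rq_next;  /* link in the priority's runqueue (4 bytes) */
    uint8_t status;               /* running / pending / blocked     (1 byte)  */
    uint8_t priority;             /* index into the runqueues        (1 byte)  */
    int16_t pid;                  /* thread identifier               (2 bytes) */
} minimal_tcb_t;                  /* 12 bytes with natural alignment */
```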

See the PR on a thread-safe implementation of newlib (newlib: add thread safe implementation by vincent-d · Pull Request #8619 · RIOT-OS/RIOT · GitHub) for an idea of the overhead that thread-safety imposes.

This is not the overhead of thread-safety per se, but of C library functions that were designed to save state in global variables, back when there was no multi-threading. The C library provides thread-safe alternatives to each of them that do not impose that overhead.
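strtok() vs. strtok_r() is one illustrative pair (not taken from the PR, just the general pattern): the former hides its position in a global, the latter makes the caller carry that state, so it costs nothing extra per thread:

```c
#include <stdio.h>
#include <string.h>

/* strtok() stores its scan position in a hidden global, so concurrent
 * callers corrupt each other. strtok_r() keeps that state in a
 * caller-provided pointer, so each thread tokenizes independently. */
static void print_words(char *line)
{
    char *saveptr;
    for (char *word = strtok_r(line, " ", &saveptr);
         word != NULL;
         word = strtok_r(NULL, " ", &saveptr)) {
        puts(word);
    }
}

int main(void)
{
    char line[] = "one two three";
    print_words(line);
    return 0;
}
```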

Kaspar

Hi Kaspar,

RIOT's scheduler used to work like that: any preemption by a higher priority thread would advance the current priority's circular runqueue. That was fixed years ago. Nowadays, a thread's "friends" will only be scheduled if it explicitly calls thread_yield() or blocks.

Excellent!

Coroutines make it possible to program asynchronous code in a blocking style - see "await". This is more natural and easier than using callbacks.

How does it compare to sending / receiving messages?

Using messages is OK, but not everything is built that way. Even code that uses a separate thread to process messages must use locks.

Messages require some decoding logic if one is expecting messages from many senders. I'm thinking of a loop in which one waits for messages and acts according to the type: what would conceptually be multiple threads gets squashed into a single chunk of code.
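Roughly the kind of loop I have in mind - just a sketch, the message types are made up:

```c
#include "msg.h"

/* Hypothetical message types for this sketch */
#define MSG_TYPE_SENSOR   (0x0001)
#define MSG_TYPE_NETWORK  (0x0002)

void *dispatcher(void *arg)
{
    (void)arg;
    msg_t m;
    while (1) {
        msg_receive(&m);        /* block until any sender posts a message */
        switch (m.type) {       /* decode: which "logical thread" is this? */
            case MSG_TYPE_SENSOR:
                /* ... handle a sensor reading ... */
                break;
            case MSG_TYPE_NETWORK:
                /* ... handle a network event ... */
                break;
            default:
                break;
        }
    }
    return NULL;
}
```

Every conceptually independent activity ends up as yet another case in that switch.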

Assigning the same priority to two or more threads is usually not a good idea.

Can't really argue with that, can you? :wink: More seriously, that note can definitely be improved.

I think there's no reason to discourage people from assigning many threads the same priority. I would do it if it means I can avoid managing locks on my threads.

In fact, I don't see why an application (user-code) can't run entirely in one priority group.

We might consider making even the blocking of a thread not advance that priority's run queue, but I'd be reluctant to change the semantics that much.

No, that's fine, blocking of a thread should advance the queue. That's a core feature: a thread can block on something that depends on another thread in the group.
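For example (a rough sketch using the mutex API, names made up): main blocks on a mutex that only its equal-priority companion can release, so blocking has to hand over the CPU.

```c
#include <stdio.h>
#include "mutex.h"
#include "thread.h"

static mutex_t lock = MUTEX_INIT;
static char worker_stack[THREAD_STACKSIZE_DEFAULT];

/* Runs at the same priority as main. */
static void *worker(void *arg)
{
    (void)arg;
    /* ... produce whatever main is waiting for ... */
    mutex_unlock(&lock);              /* lets main continue */
    return NULL;
}

int main(void)
{
    mutex_lock(&lock);                /* taken, so the next lock blocks */

    thread_create(worker_stack, sizeof(worker_stack),
                  THREAD_PRIORITY_MAIN, 0, worker, NULL, "worker");

    /* main blocks here; if blocking did not advance the runqueue, the
     * equal-priority worker would never run and never unlock us. */
    mutex_lock(&lock);
    puts("main: worker is done");
    return 0;
}
```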

Regards,

Juan.

Hi Juan,