Thanks @Kaspar @emmanuelsearch for sharing the latest overview/details on the Rust integration efforts for RIOT:
Because we, the IETF ANIMA Minerva Project (@mcr @j-devel), have also been working on a custom “cross-board oriented” async Rust runtime for RIOT, we took particular interest in the “Prototype RIOT over async Rust framework” part, which assumes the Embassy ecosystem [1].
Related to this topic, we would like to share some additional insights that we gained through implementing our own version [2] of an async Rust runtime for RIOT.
In our case, we didn’t start with the existing 3rd-party Embedded Rust components (which would have meant grappling with the complexity of adapting/integrating them); rather, we took a minimalistic/bottom-up approach (a rough sketch of the steps follows the list):

- port the (bare-metal, minimalistic) blog_os async Rust executor [3] to a no_std, RIOT-compatible crate [4],
- bind existing RIOT C APIs (e.g. timer, gcoap) via Rust-C FFI,
- adapt them as Rust Future/Stream, and
- expose them as a Rust async function API.
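To make steps 2–4 concrete, here is a rough sketch of the pattern (illustrative only, not the actual xbd code: `riot_timer_set_cb` and its signature are hypothetical stand-ins for a callback-based RIOT timer binding, and `spin::Mutex` stands in for whatever no_std lock is used):

```rust
#![no_std]
extern crate alloc;

use alloc::sync::Arc;
use core::future::Future;
use core::pin::Pin;
use core::sync::atomic::{AtomicBool, Ordering};
use core::task::{Context, Poll, Waker};
use spin::Mutex; // assumption: some no_std lock is available

// Hypothetical FFI binding to a callback-based RIOT timer API (stand-in only).
extern "C" {
    fn riot_timer_set_cb(
        ms: u32,
        cb: extern "C" fn(*mut core::ffi::c_void),
        arg: *mut core::ffi::c_void,
    );
}

// State shared between the C callback and the Future.
struct Shared {
    fired: AtomicBool,
    waker: Mutex<Option<Waker>>,
}

// Called from RIOT's timer context: mark the operation done and wake the task.
extern "C" fn on_timeout(arg: *mut core::ffi::c_void) {
    let shared = unsafe { &*(arg as *const Shared) };
    shared.fired.store(true, Ordering::SeqCst);
    if let Some(w) = shared.waker.lock().take() {
        w.wake();
    }
}

pub struct TimeoutFuture {
    shared: Arc<Shared>,
}

impl TimeoutFuture {
    pub fn new(ms: u32) -> Self {
        let shared = Arc::new(Shared {
            fired: AtomicBool::new(false),
            waker: Mutex::new(None),
        });
        // NB: a real implementation must ensure the callback cannot outlive
        // `shared` (e.g. by cancelling the timer when the Future is dropped).
        let arg = Arc::as_ptr(&shared) as *mut core::ffi::c_void;
        unsafe { riot_timer_set_cb(ms, on_timeout, arg) };
        TimeoutFuture { shared }
    }
}

impl Future for TimeoutFuture {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.shared.fired.load(Ordering::SeqCst) {
            Poll::Ready(())
        } else {
            *self.shared.waker.lock() = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// Step 4: the C API ends up exposed as a plain async function.
pub async fn sleep_ms(ms: u32) {
    TimeoutFuture::new(ms).await
}
```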
What we got working (thus far):

- a single-threaded async Rust runtime (akin to a JavaScript runtime; see the usage sketch after this list), capable of
  - spawning Rust Future<Output = ()> instances,
  - spawning Rust Stream processors (useful for implementing e.g. Rust-based CoAP servers),
  - executing async versions of some RIOT timer/gcoap APIs, e.g.
- wrapping/adapting them as Rust Future/Stream works well, hence
- CROSS-BOARD (xbd) async abstraction of the RIOT API is highly feasible/practical (as opposed to Embassy going “board-specific”)
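For a feel of the usage model, an illustrative sketch (not the exact xbd API): `Executor`/`Task` follow the blog_os-style executor ported in [3][4], while `sleep_ms`, `coap_requests`, and `handle` stand in for the async wrappers over the RIOT timer/gcoap bindings:

```rust
use futures_util::StreamExt; // for Stream::next()

// A plain Future<Output = ()>: periodic work driven by the async timer wrapper.
async fn heartbeat() {
    loop {
        sleep_ms(1000).await;
        // ... do periodic work ...
    }
}

// A Stream processor: each incoming gcoap request arrives as one Stream item,
// which is the shape a Rust-based CoAP server takes in this model.
async fn coap_server() {
    let mut requests = coap_requests();
    while let Some(req) = requests.next().await {
        handle(req);
    }
}

fn main() {
    let mut executor = Executor::new(); // single-threaded, blog_os style
    executor.spawn(Task::new(heartbeat()));
    executor.spawn(Task::new(coap_server()));
    executor.run(); // never returns
}
```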
I just skimmed the code, but it looks like there are quite a few heap allocations.
E.g. in xbd_ztimer_set() (which is just for testing IIUC), or in Xbd::set_timeout(). We can’t have heap allocations on non-MMU systems. I’d assume that getting rid of those is actually the hard part … What do you think?
Interesting. I did grapple with integration for a long while, then took the minimalistic/bottom-up approach of starting with the existing 3rd-party Embedded Rust components and dropping most of RIOT-c for now.
But looking at the approach you’re presenting here, I think it should be really simple to hook embassy’s executor(s) into RIOT-c without much integration hassle, especially if embassy-hal is not being used.
Maybe worth exploring if you start hitting the limits of the blog_os executor.
We can’t have heap allocations on non-MMU systems. I’d assume that getting rid of those is actually the hard part … What do you think?
I should have made clear that non-MMU systems are currently out of our scope. Our Rust code assumes heap allocations (via no_std with alloc feature enabled). no_std Rust without alloc in conjunction with RIOT is unexplored, and also non-trivial IMO.
But looking at the approach you’re presenting here, I think it should be really simple to hook embassy’s executor(s) into RIOT-c without much integration hassle, especially if embassy-hal is not being used. Maybe worth exploring if you start hitting the limits of the blog_os executor.
I appreciate your insights about embedding Rust executor(s) into RIOT-c. As you suggest, we would like to consider the Embassy executor(s) once we start hitting limitations, especially those pointed out in the “possible-extensions” section.
That’s tricky, because RIOT doesn’t support any systems with an MMU.
The issue is not that you can’t do heap allocations at all; they are well supported and even used in some rare cases in RIOT code. The issue is that you have to be careful with them, because there is nothing that can save you from heap fragmentation.
So dynamic heap allocations, especially frequent and small ones, are a big issue. You can do heap allocations at init if you are expected to keep the memory for the entire run time of the app (that’s what MTD does), or if you have multiple application states between which the memory is freed.
Doing many small, variable-sized allocations/deallocations can lead to ‘holes’ in the memory, where you can’t satisfy an allocation even if there is, in theory, still enough free heap memory.
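As a generic illustration of the difference (not taken from either code base):

```rust
extern crate alloc;
use alloc::vec::Vec;

// Fine on RIOT: one allocation made at init and kept for the whole run time;
// the heap layout never changes afterwards.
struct RxBuffers {
    bufs: Vec<[u8; 128]>,
}

impl RxBuffers {
    fn init(n: usize) -> Self {
        RxBuffers {
            bufs: (0..n).map(|_| [0u8; 128]).collect(),
        }
    }
}

// Problematic on RIOT: a fresh, variable-sized allocation per event. Many
// short-lived allocations of differing sizes leave "holes" that a simple
// allocator cannot compact.
fn on_packet(len: usize) -> Vec<u8> {
    alloc::vec![0u8; len]
}
```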
I wanted to post about a netdev talk this fall about Rust in the Linux Kernel. The email announcing it was not public in the end.
Ah, I found the link: https://netdevconf.org/0x17/sessions/tutorial/rust-for-linux-networking-tutorial.html
It’s a tutorial. I don’t know much about the Rust integration into the Linux kernel; the talks at LPC last September were in conflict, so I should look it up. I think they did not use cargo, but I am not certain.
I wanted to attend all of last week remotely, but due to reasons I didn’t make much of it.
I did catch the last 5 minutes of @Kaspar’s talk, and I hope to see the video posted.
This is my take on where to go with RIOT-OS + Rust.
We won’t get there overnight.
There is a very big win in having a RIOT crate that just contains all our C code with some reasonable default Kconfig. It’s okay if it uses 80% of the code space on the target platform. The target audience is (new) people writing main() in Rust. I think we had a talk about this from Japan at the 2021 conference. It doesn’t have to be for every platform, just ones that are actually available.
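For reference, this is roughly what writing main() in Rust already looks like on top of the existing riot-wrappers crate (cf. RIOT’s rust-hello-world example); a “batteries-included” RIOT crate with a reasonable default Kconfig would keep this application-side shape:

```rust
#![no_std]

use riot_wrappers::{println, riot_main};

// Everything on the C side (kernel, drivers, shell, network stack, ...) comes
// in via the build system and the default configuration; the application
// author only writes this.
riot_main!(main);

fn main() {
    println!("Hello from Rust on RIOT!");
}
```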
There are many bits of code that would benefit from being written in Rust, even if they call C functions and are called by C functions. They would ideally be no_std, but really it’s when you need to keep track of some allocated memory that Rust wins. The key here (IMHO) is to find a way to just plop .rs files into our tree and have them work.
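A minimal sketch of what such a dropped-in .rs file could look like (the C function name here is hypothetical): a Rust function that both calls into C and is callable from C, with no allocation and no std:

```rust
#![no_std]

// An existing C API we call into (hypothetical name, bound over FFI).
extern "C" {
    fn led_toggle(index: u32);
}

// A Rust function exposed back to the C side: it only needs an FFI-safe
// signature and #[no_mangle]. Ownership/borrow checking covers everything
// except the FFI boundary itself.
#[no_mangle]
pub extern "C" fn blink_n_times(n: u32) {
    for _ in 0..n {
        unsafe { led_toggle(0) };
    }
}
```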
The work that @j-devel has done is probably more advanced than many are ready for, but OTOH, many will get to wanting an async runtime for applications very soon.
The issue is that the allocator can’t move allocated memory around to defragment things.
This is also an issue with many *nix programs. Some systems (Java poorly in my opinion, Ruby much better, Emacs…) have garbage collection that can move allocated memory around. And it’s even possible to return entirely empty pages to the OS (even creating holes). None of this is possible in RIOT-OS, and it doesn’t matter what kind of allocator one uses.
I haven’t had a chance to look at the code, but I suspect that it can be taught to allocate at init time, and then maintain its own heap for async operations.
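One way that could look (an illustrative sketch, not the current xbd code): make a single allocation at init, sized for the maximum number of in-flight async operations, and manage those slots from then on without touching the global heap again:

```rust
extern crate alloc;
use alloc::boxed::Box;

// One slot per in-flight async timer operation (whatever state it needs).
#[derive(Default, Clone, Copy)]
struct TimerSlot {
    in_use: bool,
    deadline_ms: u32,
}

// A pool the runtime allocates exactly once at init and then manages itself.
struct TimerPool {
    slots: Box<[TimerSlot]>,
}

impl TimerPool {
    fn init(capacity: usize) -> Self {
        // The single allocation, made while the heap is still unfragmented.
        TimerPool {
            slots: alloc::vec![TimerSlot::default(); capacity].into_boxed_slice(),
        }
    }

    // Claim a slot for a pending operation; None when the pool is exhausted,
    // a condition one can size for up front, unlike fragmentation.
    fn claim(&mut self, deadline_ms: u32) -> Option<usize> {
        let i = self.slots.iter().position(|s| !s.in_use)?;
        self.slots[i] = TimerSlot { in_use: true, deadline_ms };
        Some(i)
    }

    fn release(&mut self, i: usize) {
        self.slots[i].in_use = false;
    }
}
```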
Thanks for sharing this … should have read this earlier!
So dynamic heap allocations, especially frequent and small ones, are a big issue.
From my own work on the same topic, the pain point is that some structures need to be initialized in place (VFS directory entries, sock_udp_t), and Rust has no easy way to run a constructor into a Pin<&mut MaybeUninit<...>>.
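For concreteness, the pattern in question ends up looking roughly like this (the `sock_udp`/`sock_udp_init_at` names are hypothetical stand-ins for a C type with an in-place constructor); the unsafe plumbing below is exactly what there is no ergonomic shorthand for:

```rust
use core::mem::MaybeUninit;
use core::pin::Pin;

// Stand-in for a C type (e.g. sock_udp_t) that must be initialized at its
// final address by a C constructor; names are hypothetical.
#[allow(non_camel_case_types)]
#[repr(C)]
struct sock_udp {
    _opaque: [u8; 64],
}

extern "C" {
    fn sock_udp_init_at(sock: *mut sock_udp) -> i32;
}

// Run the C constructor into pinned, uninitialized memory, and only then
// treat the value as initialized.
fn init_in_place(slot: Pin<&mut MaybeUninit<sock_udp>>) -> Result<Pin<&mut sock_udp>, i32> {
    unsafe {
        // Safety: the value is never moved out of the pinned slot; the C
        // constructor writes it at its final address.
        let raw = slot.get_unchecked_mut();
        let rc = sock_udp_init_at(raw.as_mut_ptr());
        if rc != 0 {
            return Err(rc);
        }
        Ok(Pin::new_unchecked(raw.assume_init_mut()))
    }
}
```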
My current worst sore point is that I don’t see an easy way to use the mutex or msg synchronization primitives in an async way; did you have any success with that?
Not sure; our approach to async API functions is to wrap RIOT-c callbacks into async Stream processing. Maybe this approach is limited in that we assume binding RIOT-c methods is the way to go.
All works well as long as the C API gives you callbacks; mutex and msg just do not have callbacks, but directly block one thread on the very operation that is pending.
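For reference, the callback-to-Stream shape described above is roughly the following (an illustrative sketch, not the actual xbd code; `spin::Mutex` stands in for whatever no_std lock is used). The gap is then directly visible: mutex and msg never invoke anything like `push()`, they block the calling thread instead, so there is no event to turn into a Stream item:

```rust
extern crate alloc;
use alloc::collections::VecDeque;
use core::pin::Pin;
use core::task::{Context, Poll, Waker};
use futures_core::Stream;
use spin::Mutex; // assumption: some no_std lock is available

// Shared between the C callback and the async side: queued events plus a waker.
struct Inbox<T> {
    queue: Mutex<(VecDeque<T>, Option<Waker>)>,
}

impl<T> Inbox<T> {
    // Called from the C callback (e.g. a gcoap handler): enqueue and wake.
    fn push(&self, item: T) {
        let mut q = self.queue.lock();
        q.0.push_back(item);
        if let Some(w) = q.1.take() {
            w.wake();
        }
    }
}

// The async side: each callback invocation becomes one Stream item.
struct InboxStream<'a, T> {
    inbox: &'a Inbox<T>,
}

impl<T> Stream for InboxStream<'_, T> {
    type Item = T;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<T>> {
        let mut q = self.inbox.queue.lock();
        match q.0.pop_front() {
            Some(item) => Poll::Ready(Some(item)),
            None => {
                q.1 = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}
```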
We have just prepared a new Rust example, “async Rust runtime and applications”, at ‘RIOT/examples/rust-async’. (For details, please refer to the README or the PR draft https://github.com/RIOT-OS/RIOT/pull/20705.)
This example includes a very compact and extendable async Rust runtime, together with an async-Rust-compatible shell module! It should be useful for new and existing Rust programmers who want to quickly start developing and interactively testing async Rust RIOT applications.
Please try it and give us your opinions and feedback.
git clone https://github.com/RIOT-OS/RIOT
cd RIOT
git fetch origin pull/20705/head:rust-async-wip
git checkout rust-async-wip
cd examples/rust-async
make
sudo ip tuntap add dev tap3 mode tap
sudo ip link set tap3 up
./bin/native/rust_async.elf tap3