~15k EUR to spend on RIOT: need ideas

Okay, I see. So, what I can imagine is the following. You have one-hour windows, and it seems like there are enough builds to fill a lot of hours. But maybe it only looks that way because builds take ages. Scaleway has a limit of 1000 containers. Not sure if that also applies to compute instances; I just take it as a number here. A development instance is 0.0088 EUR per hour, so that would be 8.80 EUR per hour for the 1000 instances. Now the question is how long a build actually takes. If you can squeeze 10 builds into that hour, we would be looking at 88 cents per build, which doesn’t sound too bad to me. You could even add some fee on top of it and have a sustainable way to finance RIOT.

What you could also do, just an idea: prioritize certain deployments over others. That’s a discussion you may want to have then. The ones that are not prioritized would be able to pay for their build to be done within an hour. The questions here are just: would people be willing to pay for it? And whom to prioritize and not prioritize? Make it transparent in the GitHub comments, something along the lines of: “Your build is set to be done on DD.MM.YYYY, but you are able to pay sum x to get it done in the next hour by service xy. Read more about it.”
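A quick sanity check of that math; the rate and instance cap are the numbers from above, and builds-per-hour is the unknown that would need measuring:

```python
# Back-of-the-envelope cost model for hourly build slots.
RATE_EUR_PER_HOUR = 0.0088   # one Scaleway development instance
INSTANCES = 1000             # assumed account limit

def cost_per_build(builds_per_hour: int) -> float:
    """EUR per build if all instances run for one hour and
    `builds_per_hour` full builds share that hour."""
    return (RATE_EUR_PER_HOUR * INSTANCES) / builds_per_hour

print(f"{cost_per_build(10):.2f} EUR")  # -> 0.88 EUR per build
```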

That would also be something to consider, having max build times etc. Right now I would be more interested in finding out whether people would actually be willing to pay 2 EUR per build to get it done quickly, or whatever the right number is. We could have a test setup and try out how long a build actually takes. Just some ideas here though. You don’t have to pick it up :smiley:

So, there’s a startup cost to get these instances productive. For example, they need the 13 GB build container; that could be baked into a template instance. I did try Hetzner for this: creating a cloud instance takes time depending on the template size. A full Murdock worker instance took ~5 min before it would start. That’s a lot; we’d want the builds to be done by that mark. Then there’s ccache, which, when cold, takes some builds to warm up. ccache could live in Redis, but that might become a bottleneck at 1000-client scale. TBH, if we’re bound to one-hour slots, I find cloud instances just don’t fit. With per-second billing it’s way more interesting (just fire them up when more than 5 min worth of builds are in the queue). Apart from that there are administrative questions: how much money should/can be spent over what time span, and how do we ensure this? (AWS/GCE could easily go prepaid using some credit-based billing. They don’t want to, leaving customers with the additional burden of setting up safeguards. AFAIK their billing APIs aren’t even real-time…)
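To make the per-second-billing idea concrete, here is a rough sketch of such a scheduler loop. `queued_build_minutes()`, `active_workers()`, `spawn_worker()` and `idle_workers()` are hypothetical stand-ins for the real Murdock queue and cloud-provider APIs, not existing functions:

```python
import time

QUEUE_THRESHOLD_MIN = 5   # spawn once >5 min worth of builds are queued
MAX_WORKERS = 50          # hard cap so a bug can't spawn unbounded instances

def autoscale(queued_build_minutes, active_workers, spawn_worker, idle_workers):
    """Naive scale-out/scale-in loop for per-second-billed instances."""
    while True:
        # Scale out: per-second billing makes eager spawning cheap.
        if queued_build_minutes() > QUEUE_THRESHOLD_MIN and active_workers() < MAX_WORKERS:
            spawn_worker()
        # Scale in: release instances as soon as the queue drains.
        for worker in idle_workers():
            worker.terminate()
        time.sleep(30)
```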

Well, that should be up to the employers of any RIOT contributor. My productivity goes down with too much context switching due to CI wait times. Over the year, that might already amount to 15k. :slight_smile:


I mean, we can also make it 2-hour slots in order to compensate for the setup time. The problem with AWS/GCE is just that they are pretty expensive, so per-second billing doesn’t really matter when you pay 5 times the price, or even more. Hetzner and Scaleway are more affordable, and this sounds like a use case where the reliability of the hardware doesn’t matter too much. Considering all the builds running all day long, I don’t think filling the slots would be too much of a concern. Filling the hours well enough is more of a user-interface issue than anything else.

Apart from that there are administrative questions - how much money should/can be spent over what time span, and how to ensure this?

All of these cloud providers have APIs to manage the instances. It wouldn’t be too hard to write software to manage all of this. That’s why I said: why not go all in and actually make it a paid service with a margin? That margin gets used in other areas of RIOT. People would pay by credit card, PayPal etc., so the money for the next time slot is already there upfront. There wouldn’t be too much risk on that side. You just have to make sure the software is well tested, monitored etc. so that it doesn’t run wild. OVH also has a credit system. Not that I recommend OVH, though: they are affordable, but the service is horrible. I don’t know if Hetzner or Scaleway provide a prepaid system.
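As a sketch of what “making sure it doesn’t run wild” could mean: a prepaid budget guard in front of the instance-management calls, so spending is checked before an instance is spawned rather than reconciled from bills afterwards. Everything here is hypothetical glue code:

```python
class BudgetGuard:
    """Refuse to spawn instances once the prepaid balance is exhausted."""

    def __init__(self, prepaid_balance_eur: float, rate_eur_per_hour: float):
        self.balance = prepaid_balance_eur
        self.rate = rate_eur_per_hour

    def try_reserve(self, hours: float) -> bool:
        """Reserve the cost of one instance slot; return False if it
        would overdraw the balance customers have paid in."""
        cost = self.rate * hours
        if cost > self.balance:
            return False
        self.balance -= cost
        return True

guard = BudgetGuard(prepaid_balance_eur=50.0, rate_eur_per_hour=0.0088)
if guard.try_reserve(hours=1.0):
    ...  # provider.spawn_instance(...) would go here (hypothetical API)
```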

But overall: do you think it’s a realistic service we could spin up? Sure, there are some details to figure out, including whether there are enough people who would be willing to use it. If that’s not the case, it doesn’t make much sense to investigate it any further. What do you think?

I mean, the code can be reused for any cloud service anyway. So there could even be different tiers: more affordable ones with 1- or 2-hour slots, or whatever, just compiled faster than is the case right now, and a tier that runs on AWS/GCP and can be scheduled within minutes. A sketch of such a tier table is below.
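Purely illustrative; the providers, prices and latencies are made-up placeholders, not an actual offer:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    provider: str      # where the workers run
    latency: str       # how quickly a build gets scheduled
    price_eur: float   # per build; placeholder numbers

TIERS = [
    Tier("economy",  "Hetzner/Scaleway", "1-2 hour slots", 1.00),
    Tier("priority", "AWS/GCP",          "within minutes", 5.00),
]
```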

Why I suggested this in the first place: you can buy a bigger machine now, which will probably work for a while, but eventually just leads to the same issues all over again. If you went this route, you would have something that is able to scale with RIOT, something you can use and build upon. Well designed, the same code could run in a Kubernetes cluster if that eventually makes more sense economically. And you’d have another source of income to support the development of RIOT.

Correct me if I’m wrong, but AFAIU you don’t pay for electricity / traffic when co-locating the server at FU (given you don’t abuse it), so that is a competitive advantage compared to any hosting provider.

Mostly only for dedicated servers. The VMs (cloud instances) have limited bandwidth though, plus fair-use agreements. At Scaleway that’s mostly 100, 200 and 500 Mbit/s. I don’t know how much is exactly needed, but I guess that shouldn’t be too much of a concern. You can also upload the git repository to an S3 bucket before the actual instance gets up and running; a sketch of that is below. The internal bandwidth isn’t capped, you can pretty much use a full Gbit/s, sometimes even more. You reminded me that you may be able to save some costs by going IPv6-only; IPv4 addresses have become quite expensive nowadays.
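A minimal sketch of staging the repository in object storage ahead of the instances, assuming boto3 against Scaleway’s S3-compatible Object Storage; the bucket and key names are placeholders:

```python
import subprocess
import boto3  # Scaleway Object Storage speaks the S3 protocol

# Pack the whole repository (all branches) into a single file, then
# stage it in object storage so freshly booted workers fetch it over
# the uncapped internal network instead of cloning from GitHub.
subprocess.run(
    ["git", "bundle", "create", "riot.bundle", "--all"],
    cwd="/path/to/RIOT", check=True,
)

s3 = boto3.client("s3", endpoint_url="https://s3.fr-par.scw.cloud")
s3.upload_file("riot.bundle", "ci-artifacts", "riot.bundle")

# A worker would then run: git clone riot.bundle RIOT
```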

Unfortunately GitHub still does not support IPv6, so that’s not an option on its own. Ah, but you can use a NAT64 gateway.


Yes, I do that regularly for my IPv6-only hosts with a local NAT64. GitHub now provides IPv6 for its hosted content, so there is hope, even if progress is really slow.
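For illustration, NAT64 embeds the IPv4 address in the low bits of an IPv6 prefix per RFC 6052. With the well-known prefix 64:ff9b::/96 the mapping looks like this (in practice a DNS64 resolver does this synthesis for you):

```python
import ipaddress

def nat64_synthesize(ipv4: str, prefix: str = "64:ff9b::") -> str:
    """Embed an IPv4 address in the low 32 bits of a NAT64 /96 prefix
    (RFC 6052 well-known prefix by default)."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix)) | v4))

# Example with an address from GitHub's published 140.82.112.0/20 range:
print(nat64_synthesize("140.82.121.3"))  # -> 64:ff9b::8c52:7903
```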

I think that this is the best plan. Maybe one could find a second empty 2U case and glue a bunch of target systems into it, with a USB hub to power them that can actually do proper power control. (I have one where uhubctl says power control will work, and there are no errors, but the power does not actually change…) Then the CI machine could reach out directly with new builds for actual testing. Or a second machine could exist under a desk, or on Casper’s shelf, and it could do that part.

Speaking of dev boards: I am currently onto something, but for Arduino with an ESP32. Eventually I would like to replace the board with a RIOT OS (pro) variant, based on the nRF52832 or maybe even the nRF52840. If the RIOT community is interested in what I am doing here, I’m happy to give a presentation about it at the community meeting or similar. I’m not sure you could hand them out that easily, though. The retail price will be more in the range of 100-120 bucks. Without the retail margin it’s in the range of 50 bucks, but that’s still a lot of money to just hand out. The board itself is quite cheap; the stuff around it is the biggest cost factor. That said, if you are interested in a presentation, just send me a PM or mail.

As a data point: a ~750 EUR notebook PC with 8 CPU cores (Ryzen 5825U) yields half the job throughput of a beefy 48-core server that would easily exceed the 15 k€ budget. So in terms of bang for the buck with our CI workload, many small systems are much, much better than a single server.

Just an idea: if getting hardware is the main problem, then this could be a solution.

Event Infra has a giveaway of old hardware for non-profits. They normally provide the networking for CCC events and other European hacker events. You can look at the available hardware here. There are some beefy old servers on the list.

The biggest hurdle would be driving to Amersfoort, NL to pick them up, and finding a location to run the servers.

But I don’t know if RIOT is a non-profit or which legal entity it works under.

Another idea.

New stickers and cards for promoting RIOT.
