I went off on a rant about this at the summit and it comes down to the top-down vs bottom-up approach.
I guess you could say we have a middle-out approach (if anyone has watched the HBO show Silicon Valley): we use `depends on` for the peripherals and module features and `select` for the “high level” modules.
The problem with the bottom-up (`depends on`) approach (which is what was initially used and what Zephyr uses) is that everything needs to be resolved somehow.
In the Zephyr code base we see lots of apps and tests sprinkled with CPU/board-specific configuration files, and adding a new app requires defining many different configurations.
This, however, makes a very clean dependency tree and makes modelling quite easy.
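To make the bottom-up style concrete, here is a hedged Kconfig sketch (the symbol names are made up for illustration, not RIOT's actual tree): with `depends on`, a symbol only becomes available once its dependencies have been enabled elsewhere, so each app/board configuration has to enable the whole chain explicitly.

```kconfig
# Illustrative sketch of the bottom-up (depends on) style.
# Symbol names are hypothetical, not RIOT's real tree.
config MODULE_PERIPH_SPI
    bool "SPI peripheral driver"
    depends on HAS_PERIPH_SPI   # feature provided by the board/CPU

config MODULE_SOME_RADIO
    bool "Radio driver"
    depends on MODULE_PERIPH_SPI

# An app wanting the radio must enable the whole chain itself,
# e.g. in a per-board configuration file:
#   CONFIG_MODULE_PERIPH_SPI=y
#   CONFIG_MODULE_SOME_RADIO=y
```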
The top-down (`select`) approach is what we currently implement in `make`, and it allows very simple configuration: for example, if I have an app using some GNRC stack, I just need to bring that in and RIOT resolves the rest, for better or worse.
This is really convenient for the user, but it makes the modelling really complicated, especially with Kconfig rules such as no circular dependencies and no selecting choice options.
It also makes `menuconfig` look silly, as almost all modules end up selectable.
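For contrast, a hedged sketch of the top-down style (symbol names again hypothetical): `select` forces its target on regardless of that target's own `depends on`, which is exactly where Kconfig's no-circular-dependency and no-selecting-choice-options rules start to bite.

```kconfig
# Illustrative sketch of the top-down (select) style.
# Symbol names are hypothetical.
config MODULE_GNRC_APP
    bool "App-level networking"
    select MODULE_SOME_RADIO    # pulled in automatically...
    select MODULE_PERIPH_SPI    # ...along with its transport

# Enabling just CONFIG_MODULE_GNRC_APP=y resolves everything,
# but select ignores the selected symbols' own dependencies,
# so an invalid combination can silently be forced on.
```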
The proper way would be a SAT solver; whether it is Kconfig, laze, or anything else, we need something. Currently the recursive make works by hacks and tuning (which will probably have to exist anywhere, but it is really hard to tell why some modules are being brought in).
I imagine if we are worried about CI time now… probably a SAT solver won’t help with that.
Now that I have my little boilerplate intro done we can get on to constructive conversation.
I will try to structure my response:
The process appears to be stalled for quite a while now and the current situation just causes frustration
Yes it is, especially when having to deal with a different modelling style. For example, the USB stdio stuff took me a month of playing around to find a solution that is what we actually want (not just one that matches what make says).
Many other issues are just annoyances which are not desired.
This causes a lot of friction and hard-to-debug dependency issues.
I have found that debugging with Kconfig is the easier part, since you can look through the whole tree, but maybe that is a tool-usage issue.
There have been some issues from make that took a while to figure out, for example bringing in periph modules that were simply not used.
There have also been some challenging Kconfig issues, occurring mostly around circular dependencies; for example, understanding native’s periph_rtc was not so easy.
It is a tool that must be learned, that is for sure.
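As a contrived example of the circular-dependency rule (not the actual native/periph_rtc tree), the kconfig tooling reports a recursive dependency as soon as a symbol selects something that depends back on it:

```kconfig
# Contrived example: kconfig tools reject this with a
# "recursive dependency detected" error.
config A
    bool "Feature A"
    select B

config B
    bool "Feature B"
    depends on A   # depends back on the symbol that selects it
```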
I’m seriously wondering if this may drive contributors away, since getting stuff past CI almost always gets stuck at Kconfig for any significant new development.
I would like to think that people help others through it, and usually it would only be a problem when changing already existing features; if there is no `app.config.test`, the test probably will not run the Kconfig check.
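For reference, an `app.config.test` is just a .config-style fragment checked into the app/test directory that enables the modules the test needs for the Kconfig build. A minimal hypothetical one might look like (symbol names illustrative):

```kconfig
# app.config.test -- hypothetical sketch; symbol names illustrative
CONFIG_MODULE_GNRC=y
CONFIG_MODULE_GNRC_PKTDUMP=y
```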
What are the goals of the Kconfig migration?
The goal is to have a structured way of declaring the dependencies, to provide tools to better understand them (i.e. `menuconfig`), and to try to use more standard tooling.
To me it seems like the system will be harder to configure as dependencies are not always added automatically - making it easier to create a broken configuration.
If we go bottom-up it is more work, for sure. I don’t think I agree about broken configurations, since Kconfig can do a lot more validation. During the migration, though, I would agree: things may be incomplete, leading to broken configurations, but that should be resolved if the migration is ever complete.
The Kconfig build also appears to be slower than with the make based dependency resolution.
Yup, currently it is by far slower, but things can be done to speed it up.
The incomplete migration state has already led to many hacks being added to the Kconfig-based build resolution, which makes me wonder if we won’t end up with a system at least as messy as the purely make-based approach.
We do need to be careful. However, a lot of the hacks that were introduced to match the make system are clearly labeled and can simply be removed after the migration. We should focus on what behaviour we want rather than on what things happen to resolve to.
So what are the goals and future plans for the Kconfig migration?
Whenever I bring it up, the answer is to get more man-power to push it. It was really nice with @aabadie going strong for those few weeks we had the tracking issue, but usually the last little bit is the hardest, and I don’t really believe all the issues would just get solved.
Is our current approach the right one or should we reconsider this experiment?
I think we should open it up to the community to decide what to do. If nobody wants to put in the work, it is at least easy to switch back to make, but we would be losing some nice modelling and capabilities (e.g. “if I have a hardware-enabled backend, then use that feature”).
Maybe it just needs some smart and dedicated person or persons to pick it up; maybe we switch to all `select`-based; maybe we disable the circular dependency check; maybe we stop comparing modules or binaries and just rely on passing tests; maybe we find an easy-to-integrate SAT solver and go to pure `depends on`; maybe some AI just solves everything for us…
All we can say for now is that nothing is moving and it is just costing us.