Separating module drivers from physical pin configuration

Dear developers,

(This is probably only relevant to Cortex-M or other advanced MCUs; AVR etc. usually do not have much function muxing capability.)

I would like to hear opinions on the following model:

I would like to have a separate driver for setting up the CPU pin mux. That is, separate the CPU logic module drivers (such as SPI, I2C, UART etc.) from the actual hardware ports and pins. This is something we will be adding to our Contiki port soon, and I would love to see something similar in RIOT. By improving the separation/abstraction it may become easier to use the same board directory for multiple variations of the same board, where the on-board peripherals are the same, or almost the same, with only some minor additions.

Because the hardware function muxing capability in advanced MCUs usually sits in a separate module, it is only logical that the driver for a CPU module should not need to know which pins on the IC are connected to its signals; the driver should only control the logic within its own module.
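
To make this a little more concrete, below is a rough sketch of the kind of interface I am thinking of. None of this exists yet; all names (pinmux_set, pinmux_init_board, PINMUX_FUNC_*) are made up for illustration:

  /* pinmux.h -- hypothetical sketch, all names are illustrative only */
  #include <stdint.h>

  /* Abstract identifiers for the muxable signals of the CPU modules.
   * The SPI/I2C/UART drivers would refer to these and never to port or
   * pin numbers. */
  typedef enum {
      PINMUX_FUNC_SPI0_MISO,
      PINMUX_FUNC_SPI0_MOSI,
      PINMUX_FUNC_SPI0_SCK,
      PINMUX_FUNC_UART0_TX,
      PINMUX_FUNC_UART0_RX,
      /* ... */
  } pinmux_func_t;

  /* Route one module signal to a physical pin (the pin value is board
   * specific). Returns 0 on success, <0 if the routing is not possible. */
  int pinmux_set(pinmux_func_t func, uint32_t pin);

  /* Apply the complete pin routing table of the board, typically called
   * once from the board initialisation code before any peripheral driver
   * is initialised. */
  void pinmux_init_board(void);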

Best regards,
Joakim Gebart
Managing Director
Eistec AB

Aurorum 1C
977 75 Luleå
Tel: +46(0)730-65 13 83
joakim.gebart@eistec.se
www.eistec.se

Dear Joakim,

I would like to have a separate driver for setting up the CPU pin mux. That is, separate the CPU logic module drivers (such as SPI, I2C, UART etc.) from the actual hardware ports and pins.

You mean introducing a central point to handle all the PIN initialisation for the other peripherals? “Mux-driver, initialise pins for UART3!”

By improving the separation/abstraction it may become easier to use the same board directory for multiple variations of the same board, where the on-board peripherals are the same, or almost the same, with only some minor additions.

I see your point here, but couldn’t it be realised by using some different configuration in periph_conf.h separated by preprocessor guards? Like:

#ifdef MULLE_v1
  #define SPI0_MISO_PIN PA12
  ...
#elif defined(MULLE_v1_2)
  #define SPI0_MISO_PIN PB09
  ...
#endif

Because the hardware function muxing capability in advanced MCUs usually sits in a separate module, it is only logical that the driver for a CPU module should not need to know which pins on the IC are connected to its signals; the driver should only control the logic within its own module.

The peripheral interface currently tries to exploit the greatest possible common set of functionality while minimising overhead. Since such a mux driver would mainly be used by the other peripheral drivers, it could be optional. It would also need an evaluation of its impact on more constrained platforms (Cortex-M0 etc.) in terms of memory and clock rate.

I like the idea in general, but could you elaborate a little bit more on the concrete use case and implementation, so we can discuss this in more detail?

Best, Thomas

Dear Thomas,

Thank you for your feedback, see my response inline below.

Dear Joakim,

I would like to have a separate driver for setting up the CPU pin mux. That is, separate the CPU logic module drivers (such as SPI, I2C, UART etc.) from the actual hardware ports and pins.

You mean introducing a central point to handle all the PIN initialisation for the other peripherals? “Mux-driver, initialise pins for UART3!”

I would expect that reconfiguring the pin function muxing in the middle of a running application would be a quite uncommon use case, so during board init I would call something like "Mux-driver, initialize all pins according to board config X". The module drivers would then not need to worry about which signal goes to what pin, only about generating the signals.
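
As a sketch of what I mean (again, the table layout, the pin macros and pinmux_set() are hypothetical, this is not an existing API):

  /* Hypothetical board-level pin routing table, applied once at board init. */
  #include <stddef.h>
  #include "pinmux.h"   /* the sketched mux driver interface from my first mail */

  static const struct {
      pinmux_func_t func;   /* abstract module signal            */
      uint32_t pin;         /* board specific physical pin value */
  } mulle_pinmux_table[] = {
      { PINMUX_FUNC_SPI0_MISO, MULLE_PIN_PA12 },  /* placeholder pin names */
      { PINMUX_FUNC_SPI0_MOSI, MULLE_PIN_PA13 },
      { PINMUX_FUNC_UART0_TX,  MULLE_PIN_PB02 },
      { PINMUX_FUNC_UART0_RX,  MULLE_PIN_PB03 },
  };

  void pinmux_init_board(void)
  {
      /* "Initialize all pins according to board config X", in one place,
       * before any module driver runs. */
      for (size_t i = 0; i < sizeof(mulle_pinmux_table) / sizeof(mulle_pinmux_table[0]); i++) {
          pinmux_set(mulle_pinmux_table[i].func, mulle_pinmux_table[i].pin);
      }
  }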

By improving the separation/abstraction it may become easier to use the same board directory for multiple variations of the same board, where the on-board peripherals are the same, or almost the same, with only some minor additions.

I see your point here, but couldn’t it be realised by using some different configuration in periph_conf.h separated by preprocessor guards? Like:

#ifdef MULLE_v1
  #define SPI0_MISO_PIN PA12
  ...
#elif defined(MULLE_v1_2)
  #define SPI0_MISO_PIN PB09
  ...
#endif

Yes, I guess you are right about the preprocessor conditionals, but there are some applications where you may want to configure something only a tiny bit differently, e.g. rerouting the debug UART to another pin in the RPL border router, to let the default UART handle only SLIP traffic, and connecting an extra UART cable to your PC only when you need the debug output. It is of course achievable with the traditional model as well, but if I know that all pin configs are done at a particular moment, and according to the config, then I do not have to worry about the order in which I initialize the drivers.

Because the hardware function muxing capability in advanced MCUs usually sits in a separate module, it is only logical that the driver for a CPU module should not need to know which pins on the IC are connected to its signals; the driver should only control the logic within its own module.

The peripheral interface currently tries to exploit the greatest possible common set of functionality while minimising overhead. Since such a mux driver would mainly be used by the other peripheral drivers, it could be optional. It would also need an evaluation of its impact on more constrained platforms (Cortex-M0 etc.) in terms of memory and clock rate.

This is kind of a board/cpu software architecture design decision, I am merely looking for merits and deficiencies with this model.

The main point of splitting it up is a logical division of responsibilities between software components. Since the pin function mux is its own hardware module, it is only logical that it has its own driver, instead of letting all the other drivers poke around inside it at will. Every driver in the current implementations is designed to handle its own hardware module, except that it also has to touch the pin muxing. Every driver also does basically the same thing during initialization, which means code duplication and a risk of copy-and-paste errors or code rot. The outline of most drivers' init functions is:

1. Enable clock gate to I/O port/pin.
2. Set pin mux to correct choice.
3. Enable clock gate to CPU module.
4. Initialize CPU module configuration registers.

The main change would be that steps 1 and 2 are handled in a central place => possibly less ROM usage.
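
To illustrate the difference, here is a hedged sketch of a typical driver init function before and after such a change; the helper names (port_clock_enable, pin_set_mux, module_clock_enable, ...) are placeholders and not any specific vendor or RIOT API:

  /* Today (sketch): every driver repeats steps 1 and 2 itself. */
  void uart_init_current(void)
  {
      port_clock_enable(PORT_B);              /* 1. clock gate to the I/O port */
      pin_set_mux(PORT_B, 2, PIN_MUX_ALT3);   /* 2. pin mux for UART0 TX       */
      pin_set_mux(PORT_B, 3, PIN_MUX_ALT3);   /*    pin mux for UART0 RX       */
      module_clock_enable(MODULE_UART0);      /* 3. clock gate to the module   */
      uart0_configure_registers();            /* 4. baud rate, frame format... */
  }

  /* Proposed (sketch): steps 1 and 2 were already done by the central mux
   * driver during board init, so the driver only touches its own module. */
  void uart_init_proposed(void)
  {
      module_clock_enable(MODULE_UART0);      /* 3. clock gate to the module   */
      uart0_configure_registers();            /* 4. baud rate, frame format... */
  }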

I like the idea in general, but could you elaborate a little bit more on the concrete use case and implementation, so we can discuss this in more detail?

I don't have an implementation yet; I am still working out the best approach and whether it is a good choice at all, hence this thread.


Best regards,
Joakim Gebart
Eistec AB
www.eistec.se

Hi Joakim,

I also put some thought into this topic when designing the low-level interface. The problem I found most challenging was the heterogeneity of the different platforms. I found that designing a central 'pin-muxing/config' module would be too expensive in terms of resources compared to the actual gain. An important design idea was the creation of device identifiers that are portable across all supported platforms, hence GPIO_x instead of PIN0.x, PORTA.x etc.
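
(Roughly like the following sketch; the exact definitions differ per platform, and this is only meant to illustrate the idea, not copied from the actual headers:)

  /* Portable identifiers used by drivers and applications: */
  typedef enum {
      GPIO_0,
      GPIO_1,
      GPIO_2
      /* ... one entry per GPIO channel the board exposes */
  } gpio_t;

  /* Board/CPU specific mapping, hidden in the board configuration
   * (illustrative values and macro names only): */
  #define GPIO_0_PORT   PORT_A
  #define GPIO_0_PIN    12
  #define GPIO_1_PORT   PORT_B
  #define GPIO_1_PIN    9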

See some more comments inline below.

Dear Thomas,

Thank you for your feedback, see my response inline below.

Dear Joakim,

I would like to have a separate driver for setting up the CPU pin mux. That is, separate the CPU logic module drivers (such as SPI, I2C, UART etc.) from the actual hardware ports and pins.

You mean introducing a central point to handle all the PIN initialisation for the other peripherals? “Mux-driver, initialise pins for UART3!”

I would expect that reconfiguring the pin function muxing in the middle of a running application would be a quite uncommon use case, so during board init I would call something like "Mux-driver, initialize all pins according to board config X". The module drivers would then not need to worry about which signal goes to what pin, only about generating the signals.

Reconfiguration is actually needed in some cases (for example, the CC110x transceiver needs to read a pin before an SPI transfer starts; the same pin is then used as SPI MOSI), but these cases are not very common.
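
Roughly sketched (placeholder names only, not the actual cc110x driver code, and assuming a mux-driver call like the one you proposed), the sequence would be something like:

  /* Sketch of the re-muxing sequence: the pin is first read as a plain GPIO,
   * then handed back to the SPI module before the transfer starts.
   * pinmux_set, gpio_read and the CC110X_*/PINMUX_FUNC_* names are placeholders. */
  static void cc110x_remux_example(void)
  {
      pinmux_set(PINMUX_FUNC_GPIO, CC110X_SPI_DATA_PIN);      /* use the pin as GPIO */
      while (gpio_read(CC110X_SPI_DATA_PIN)) {
          /* wait until the transceiver signals that it is ready */
      }
      pinmux_set(PINMUX_FUNC_SPI0_MOSI, CC110X_SPI_DATA_PIN); /* back to SPI duty */
      /* ... start the SPI transfer here ... */
  }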

By improving the separation/abstraction it may become easier to use the same board directory for multiple variations of the same board, where the on-board peripherals are the same, or almost the same, with only some minor additions.

I see your point here, but couldn’t it be realised by using some different configuration in periph_conf.h separated by preprocessor guards? Like:

#ifdef MULLE_v1
  #define SPI0_MISO_PIN PA12
  ...
#elif defined(MULLE_v1_2)
  #define SPI0_MISO_PIN PB09
  ...
#endif

Yes, I guess you are right about the preprocessor conditionals, but there are some applications where you may want to configure something only a tiny bit differently, e.g. rerouting the debug UART to another pin in the RPL border router, to let the default UART handle only SLIP traffic, and connecting an extra UART cable to your PC only when you need the debug output. It is of course achievable with the traditional model as well, but if I know that all pin configs are done at a particular moment, and according to the config, then I do not have to worry about the order in which I initialize the drivers.

I would agree with Thomas that handling different board configurations using conditionals would be the way to go. What we should look into is making these conditionals controllable from the Makefile, so you can specify the needed configuration at build time without touching the code.

Redefining peripherals, e.g. for testing, could be done in a similar manner: currently the UART to use for stdio, for example, is defined in board.h. Putting some kind of conditional around it that can be steered from the Makefile would enable the use case you stated above.
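
For example (a sketch only; the STDIO_UART define and the DEBUG_UART_ON_UART1 switch are made-up names, and the Makefile would pass the switch via something like CFLAGS += -DDEBUG_UART_ON_UART1):

  /* board.h excerpt (sketch, illustrative names only) */
  #ifdef DEBUG_UART_ON_UART1
  /* debug/stdio output rerouted to the second UART, UART_0 stays free for SLIP */
  #define STDIO_UART      UART_1
  #else
  /* default configuration: stdio on UART_0 */
  #define STDIO_UART      UART_0
  #endif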

Because the hardware function muxing capability in advanced MCUs usually sits in a separate module, it is only logical that the driver for a CPU module should not need to know which pins on the IC are connected to its signals; the driver should only control the logic within its own module.

The peripheral interface currently tries to exploit the greatest possible common set of functionality while minimising overhead. Since such a mux driver would mainly be used by the other peripheral drivers, it could be optional. It would also need an evaluation of its impact on more constrained platforms (Cortex-M0 etc.) in terms of memory and clock rate.

This is kind of a board/cpu software architecture design decision, I am merely looking for merits and deficiencies with this model.

The main point of splitting it up is a logical division of responsibilities between software components. Since the pin function mux is its own hardware module, it is only logical that it has its own driver, instead of letting all the other drivers poke around inside it at will. Every driver in the current implementations is designed to handle its own hardware module, except that it also has to touch the pin muxing. Every driver also does basically the same thing during initialization, which means code duplication and a risk of copy-and-paste errors or code rot. The outline of most drivers' init functions is:

1. Enable clock gate to I/O port/pin.
2. Set pin mux to correct choice.
3. Enable clock gate to CPU module.
4. Initialize CPU module configuration registers.

The main change would be that steps 1 and 2 are handled in a central place => possibly less ROM usage.

I agree that such a central mux module would make sense logically. The pin configuration is included in the peripheral drivers simply for efficiency and simplicity reasons. As stated above, I think it is quite hard to find a clean solution that is at least as efficient as our current one while being portable across all our supported platforms. But if we can find such a solution, I would gladly be the one to press the merge button!

Cheers, Hauke