bitfields

Hi,

What is RIOT's position on using named bitfields for register definitions? I know they are frowned upon since the C standard gives no endianness guarantees, and personally I don't use them, but the PIC32 device files supplied by Microchip do include bitfield structures for most registers, so they could make life easier. (Thankfully there are #defines for most register fields too, if the answer is not to use them.)

Cheers,

Neil

I can't speak for RIOT, but personally I don't have a problem with them when they are used for register definitions.

And I know that SAMD21 uses them too in Atmel's CMSIS include files.

In many cases they make the code much nicer. That doesn't mean the SAMD21 code in RIOT already uses them a lot, though. For example, we currently have code like

    dev(bus)->CTRLA.reg |= SERCOM_SPI_CTRLA_SWRST;
    while ((dev(bus)->CTRLA.reg & SERCOM_SPI_CTRLA_SWRST) ||
           (dev(bus)->SYNCBUSY.reg & SERCOM_SPI_SYNCBUSY_SWRST));
    ...
    while (!(dev(bus)->INTFLAG.reg & SERCOM_SPI_INTFLAG_DRE)) {}
    ...
    dev(bus)->CTRLA.reg &= ~(SERCOM_SPI_CTRLA_ENABLE);

which could use bitfields and be written like this

    dev(bus)->CTRLA.bit.SWRST = 1;
    while ((dev(bus)->CTRLA.bit.SWRST) ||
           (dev(bus)->SYNCBUSY.bit.SWRST)) {}
    ...
    while (!(dev(bus)->INTFLAG.bit.DRE)) {}
    ...
    dev(bus)->CTRLA.bit.ENABLE = 0;

Hi Neil, hi Kees,

though named bitfields are kind of nice when it comes to code readability, they behave very poorly when it comes to code size. This is especially true for register maps, as these are typically volatile. For this reason we don't use them in RIOT, and I strongly advise against using them.

As an example, I was able to save several hundred bytes of ROM when removing the named bitfield use from the samr21's peripheral drivers.

Cheers, Hauke

Are you suggesting the compiler-generated code for accessing the bitfields is less size-efficient than doing it manually? I would be surprised if that were the case.

Hi, chiming in because this discussion came up in one of my (higher level) PRs, too.

(Okay, it's not the case for MSP430, for example.)

Hi Martine!

Hi,

Martine's example does not use `volatile` fields, which in my experience makes it even worse...

Anyhow, it still shows that on Cortex-based platforms the manual approach is superior in terms of ROM usage (which matches my experience). So especially for register maps (which are completely tied not only to a platform but to a specific CPU), I see a negative effect from using named bitfields.

@Neil: yes, I am suggesting (also backed by Martine's example) that the compiler-generated code for accessing the bitfields is less size-efficient on Cortex-Mx based platforms. But please feel free to prove me wrong!

Cheers, Hauke

Sorry, saw Oleg's mail only after I sent mine...

Cheers, Hauke

OK, so they are not outright banned, but not recommended unless you can prove there are no code size penalties? I'm now very interested in testing this on MIPS and will be querying our compiler engineers about whether there is a difference. I wonder whether the fact that volatile acts as a compiler memory barrier is why the compiled code is larger?

Neil

Hi Neil, hi everybody,

Are you suggesting the compiler-generated code for accessing the bitfields is less size-efficient than doing it manually? I would be surprised if that were the case.

Writing to a bitfield translates to a read followed by a write of the updated value. So if you write to multiple bitfields in a register, you get multiple read-write pairs. These can't be combined when the bitfields are volatile. The same applies to multiple reads of a register.

When you use shift and mask you usually do a single access for all fields of a register.

IMHO it is also better to use shift and mask because a write to a bitfield is actually a hidden non-atomic read-update-write, which may become dangerous when you have concurrent access.
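To make this concrete, here is a minimal sketch (the register layout and all names below are made up, not taken from any vendor header; the bit allocation assumes a compiler that places the first member at bit 0, as GCC does on these targets):

    #include <stdint.h>

    /* Hypothetical 32-bit control register with two single-bit fields. */
    typedef union {
        struct {
            uint32_t ENABLE : 1;   /* bit 0 */
            uint32_t SWRST  : 1;   /* bit 1 */
            uint32_t        : 30;
        } bit;
        uint32_t reg;
    } ctrl_reg_t;

    #define CTRL_ENABLE (1u << 0)
    #define CTRL_SWRST  (1u << 1)

    void init_bitfield(volatile ctrl_reg_t *ctrl)
    {
        /* Two volatile bitfield writes: each one is its own hidden
         * read-modify-write of the register, and the compiler must
         * not merge them. */
        ctrl->bit.SWRST  = 1;
        ctrl->bit.ENABLE = 1;
    }

    void init_mask(volatile ctrl_reg_t *ctrl)
    {
        /* Shift and mask: both fields are combined into a single,
         * explicit read-modify-write of the register. */
        ctrl->reg |= (CTRL_SWRST | CTRL_ENABLE);
    }

Comparing the generated code for the two functions (e.g. with `arm-none-eabi-gcc -Os` and `size`) is a quick way to check the effect on a given platform.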

Grüße Jürgen

Thank you Juergen, this was a very comprehensive answer, in my opinion. I have been watching this thread to understand bitfields better :slight_smile:

Best regards, Joakim

Hi,

Hi Neil,

OK, so they are not outright banned, but not recommended unless you can prove there are no code size penalties?

I would say yes. If the code size is the same, I would not mind using named bitfields.

I'm now very interested in testing this on MIPS and will be querying our compiler engineers about whether there is a difference. I wonder whether the fact that volatile acts as a compiler memory barrier is why the compiled code is larger?

Please share what you find out, I would be very interested in their view!

About the volatile: I think Jürgen put it very nicely in his description.

Cheers, Hauke

Most of the time it's not atomic, but the read-update-write will be more explicit.

Grüße Jürgen

Hey Hauke,

I have a hard time believing that a modern compiler makes worse code for bitfields, volatile or otherwise.

When I convert sam21_common's spi.c to use the bitfields, the code becomes 4 bytes smaller.

Of course it is a different story if you want to modify several bits together. Splitting that up just to use the bitfields can increase the code size. -- Kees

When you use shift and mask you usually do a single access for all fields of a register.

Note that you shouldn't do it in two assignments (I'm seeing this in cpu/stm32l1/periph/gpio.c):

    port &= ~mask;
    port |= (new_value << shift);

This will result in bigger code and the first assignment will write a spurious value to the register, which might cause problems.

    port = (port & ~mask) | (new_value << shift);

is better. Or

    port = (port & ~mask)
         | (new_value1 << shift1)
         ...
         | (new_valueN << shiftN);

for multiple fields.

Grüße Jürgen

True.

But this would not be a good example for (named) bitfields, because in this example the field is dynamic (it depends on the runtime value of the pin number).
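For illustration, a minimal sketch of that point (the function and register below are hypothetical, loosely modelled on a 2-bits-per-pin GPIO mode register; none of the names are from the actual STM32 header). The field's position is computed from the pin number at runtime, which shift and mask handles naturally:

    #include <stdint.h>

    /* Hypothetical mode register with one 2-bit field per pin; the field
     * position depends on the runtime pin number, so it cannot be a
     * fixed, named bitfield member. */
    void gpio_set_mode(volatile uint32_t *moder, unsigned pin, uint32_t mode)
    {
        unsigned shift = pin * 2;                 /* runtime field position */
        *moder = (*moder & ~(0x3u << shift))      /* clear the pin's field ... */
               | ((mode & 0x3u) << shift);        /* ... and set it in one write */
    }

With named bitfields, the equivalent would need a switch over all pins (reg.bit.MODE0, reg.bit.MODE1, ...), since there is no way to index a bitfield member by a runtime value.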