https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89251

--- Comment #5 from Kochise <david.koch at libertysurf dot fr> ---
"The pointer I access is volatile, not the uint32_t behind"

Understand this:

((volatile MyRegDef*) ADDR)->enable; <- TYPO in the original message that I
couldn't edit

Not this:

typedef union MyRegDef
{ struct
        { volatile uint32_t     enable:1; <- this is stupid and should be
killed with fire
        };
        volatile uint32_t       raw;
} MyRegDef;

This should be avoided:

typedef union MyRegDef
{ struct
        { uint32_t      enable:1;
          uint32_t      _1:31; <- having to force this, or to use volatile,
shouldn't even be necessary
        };
        uint32_t        raw;
} MyRegDef;

Please understand that the same structure could, and should, be usable to
access either memory or registers. Imagine if network IP header structures
were "optimized"; that would be such a mess.

I'm not even talking about the ((packed)) attribute, that's another story, but
since those bitfields should be treated fairly regardless of their location,
the compiler shouldn't "optimize" them.

And no, definitely no, "compiler flags" shouldn't have to multiply to thwart
those "optimizations"; see how far GCC has already pushed the envelope:
https://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html

If every time you disagree with the "standard" you create a new "GCC
extension" or new "compiler flags", then we're departing from the "standard"
and thus this shouldn't be named C anymore.

That the "standard" never really ruled on bitfield order is already baffling;
common sense (i.e. the current implementation), which says to start from bit 0
and go upward, is a good thing.

But this bitfield datatype size "optimization" and this 'volatile' "trick" are
just mind-blowing. I see nothing "elegant" about circumventing the problem
this way. Perhaps if I had wanted the datatype to adapt, I would have used
'auto' instead of 'uint32_t'.
