https://gcc.gnu.org/bugzilla/show_bug.cgi?id=18487

--- Comment #30 from Federico Kircheis <federico.kircheis at gmail dot com> ---
It seems to me we are not going to agree, as we tend to repeat ourselves; let's
see whether we go around in circles or whether it is more like a spiral ;)



Your view is more about the compiler: how it interprets the attributes and why
a warning would be unneeded. Mine is more about the developers writing and,
most importantly, reading them.


> The only functions GCC can warn about are those that don’t need the
> attributes in the first place. The way any warning would work is to detect
> whether it is pure/const, and then see how the user marked it. So anything
> it can properly detect as right or wrong didn’t need an attribute to begin
> with - the compiler could already tell if it was pure/const


My knowledge of how GCC (or other compilers) works is very limited, but if the
function is implemented in another
  * translation unit
  * library
  * pre-compiled library
  * pre-compiled library created by another compiler
does GCC know it can avoid calling it multiple times?


Whole-program optimization might help in some of those cases (I admit I have no
idea; can the linker remove repeated function calls and replace them with a
variable?), but depending on the project size it might add a lot to compile
times.
So even for simple functions, whose purity GCC can clearly determine, adding
the attribute can be useful.


And even assuming that whole-program optimization helps in most of those cases
(which do not depend on the complexity or length of a function), how does
someone know whether adding those attributes to a pure function makes sense or
not?

Adding pure to `inline int answer_of_life(){return 42;}` might not make any
difference (both for programmers and the compiler, because of its simplicity
and because it is inline), but where should the line be drawn?

Should I mark my functions (with something other than the attribute, since, as
you are suggesting, the attribute itself might do more harm than good), write
dummy tests for all of them, and check in the generated assembly whether GCC
recognizes them as pure and elides the second call?
There must surely be a better way, but I currently know of none.


> Rather than tell the user they got it wrong, you might as well tell the
> user to remove the attribute because it isn’t necessary and won’t be
> necessary.

No, removing it as unnecessary would be wrong.
Then you can no longer tell the difference between functions that are pure by
accident and those that are pure by design.
And you can no longer prevent a pure function from becoming non-pure, except by
reading the code.
The attribute is useful for programmers (yes, they look at the code too), even
for those functions where GCC does not need it.

> Giving a bunch of really contrived examples where users may update things
> wrong doesn’t seem like a good motivation to make a warning that can only
> possibly have a really high false positive rate.

Just adding a "printf" statement for debugging, or incrementing/decrementing a
global counter, invalidates the pure attribute.
Thus, by trying to understand/analyze one bug, another is added.


> It is a tool for experts.

And I see no harm in making it more developer-friendly.
Why, as you claimed previously, would that be a bad idea?

Because it is difficult to implement?
I do not know if it is, but that would not make it a bad idea.

Because of false positives?
Developers can handle them case by case, by documenting and disabling (or
ignoring) the diagnostic, or globally, by not turning the diagnostic on.
Just like any other diagnostic.

Because it adds nothing from a compiler perspective?
I'm still not convinced that it has no added value, especially when interacting
with "extern" code/libraries.

But it definitively has some value for developers.
It's part of the API of a function, just like declaring a member function of a
class const (or a parameter of a function const).
Adding const might even prevent some optimization, and it leads to code
duplication when one needs overloads (like for operator[] in container-like
classes), but from a developer perspective it's great: it helps to catch
errors.
Of course one could simply never use it; for the compiler it would be the same.
And it would not invalidate the attributes' original use case, so it would
still be possible to use them as today; those who want that would not even need
to change a thing.
