Hi Bruno,

On 12/15/2017 12:40 AM, Bruno Haible wrote:
>>> 1) It is not a goal to have absolutely no warnings with GCC or
>>> with clang. It is perfectly OK, IMO, if a compilation with "gcc -Wall"
>>> shows, say, 5 warnings in 10 files. The maintainer will get used to
>>> these warnings and see new warnings when they arise.
>>
>> That is really bad and it makes me sad. You are saying it's a good thing
>> to get used to a bad situation. I hope you don't mean it.
>
> Sorry but why do you call it "really bad", given that these warnings
> - are so few that the maintainer is not hindered in their development,
> - we know that these warnings are false positives?
- People who don't regularly build the code will get upset/alarmed by
  those warnings, either thinking the maintainer is ignorant and/or
  trying to find out what's going on (wasting precious time).

- This might even repel possible contributors, especially when they have
  invested time and created patches, and the upstream answer is "haha,
  sorry, we don't want to fix these warnings - we got used to them".

- The situation might lead to switching off gnulib warnings completely in
  projects that use gnulib. That means one 'line of defense' less, and
  fewer possible contributors.

- When projects use static analysis tools and there is too much noise
  from gnulib, gnulib will simply be excluded from the analysis.

If the maintainer of e.g. libz got used to some warnings... well, not so
many people/devs are affected. Most projects just use the binary library
and never see the warnings at all. But the situation is special for
gnulib, since it is a source code library.

>>> 2) For the problem of uninitialized variables that lead to undefined
>>> behaviour, I don't see a GCC option that would warn about them [1].
>>> Instead, 'valgrind' is the ultimate tool for detecting these kinds
>>> of problems.
>>> So if someone has a habit of looking only at GCC warnings, they should
>>> change their habit and also use valgrind once in a while.
>>
>> ... the quality of Valgrind
>> depends on the code path coverage - that is not the same as code
>> coverage. To get the test data to cover most code paths you need a
>> fuzzer, at least for the more complex functions / functionality. Writing
>> good fuzzers takes time and sometimes need a lot of human time for
>> tuning.
>
> I did not say anything negative about fuzzying. Like you say, efforts on
> valgrind testing and efforts on fuzzying are complementary: With the
> fuzzying you increase the code coverage and code path coverage; with
> valgrind you check against undefined behaviour caused by uninitialized
> variables.

You normally fuzz with the sanitizers (ASAN, UBSAN) switched on. I can
remember only one issue that valgrind found but the sanitizers/fuzzing
did not. Uninitialized variables *should* be found by the compiler.

>> since there are
>> tests in glibc/gnulib for glob() that are also used with Valgrind,
>> aren't there ?
>
> We do have a problem with the valgrind integration into projects that use
> Automake: There's not one clearly best way to use valgrind that is
> documented, therefore every package maintainer spends time on a valgrind
> integration.

I'm not sure how big a part Automake plays here. The way to use valgrind
heavily depends on the test suite. E.g. in wget we only want to test the
wget utility, which gets called by the tests themselves. So we check a
certain environment variable and add valgrind to the system()/popen()
command line (a rough sketch follows in the PS below). How can Automake
know about that?

With Best Regards, Tim
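PS: To illustrate the environment-variable approach mentioned above, here
is a minimal sketch. The variable name (TESTS_VALGRIND) and the helper
function are made up for this mail; the actual wget test code looks
different.

  /* test helper: run a command via system(), optionally under valgrind */
  #include <stdio.h>
  #include <stdlib.h>

  static int run_checked(const char *cmd)
  {
      /* e.g. TESTS_VALGRIND="valgrind --error-exitcode=301 --leak-check=full" */
      const char *vg = getenv("TESTS_VALGRIND");
      char buf[4096];

      if (vg && *vg)
          snprintf(buf, sizeof buf, "%s %s", vg, cmd);
      else
          snprintf(buf, sizeof buf, "%s", cmd);

      return system(buf);
  }

  int main(void)
  {
      /* a test just calls the utility it wants to check */
      return run_checked("./wget --version") != 0;
  }

The point is that this knowledge lives in the test suite, not in Automake:
only the tests know which external commands they spawn and where valgrind
has to be inserted.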
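PPS: Regarding the uninitialized variables, here is a contrived example of
the kind of case where "gcc -Wall" has no chance but valgrind reports the
problem at run time (the function names are invented, this is not code
from gnulib). With the two files in separate translation units, GCC has to
assume that fill_buffer() initializes the buffer, so it stays silent;
valgrind prints "Conditional jump or move depends on uninitialised
value(s)" for the comparison in main().

  /* init.c */
  int fill_buffer(char *buf, int len)
  {
      if (len < 8)
          return -1;        /* error path: buf stays untouched */
      buf[0] = 'x';
      return 0;
  }

  /* main.c */
  #include <stdio.h>

  int fill_buffer(char *buf, int len);

  int main(void)
  {
      char buf[4];

      fill_buffer(buf, sizeof buf);  /* return value ignored by mistake */

      if (buf[0] == 'x')             /* reads uninitialised stack memory */
          puts("filled");
      return 0;
  }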