Hi Akim,

> > * Regarding tests: There is a large amount of algorithmic code. While
> > the parts that Bison uses are surely correct, there is a possibility
> > of bugs in the more rarely used parts. I would find it very useful
> > to have unit tests of all 32 operations.
>
> I definitely agree. But I'd like to be incremental in this
> regard.
Sure. You can push it into gnulib without having extensive tests.

> > An approach that minimizes the code to be written would be to
> > do the same operations on, say, list-bitsets and array-bitsets, and
> > compare the results. This way, you can infer the correctness of the
> > list-bitsets implementation, assuming the correctness of the
> > array-bitsets. (This approach is already used in the test-*list.c
> > tests.)
>
> Sure, but that won't catch errors in the dispatch that would
> accidentally map two operations to the same one. So, eventually
> it would be better to state the real expected results.

Randomized tests provide the best code coverage (i.e. they exercise
all branches of the algorithmic code), but for randomized tests you
don't have expected results. So, how about a combination of the two
approaches?

  - Tests with given inputs and expected results, as a basis and to
    catch errors in the dispatch code,

  - Tests with randomized inputs that compare the different
    implementations against each other, for code coverage.

Bruno
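
PS: To make the combination concrete, here is a rough sketch of such a
test for a single operation, bitset_and. It relies on the bitset.h API
as it exists in Bison today (bitset_alloc, bitset_set, bitset_and,
bitset_test, bitset_free, and the BITSET_ARRAY / BITSET_LIST types);
the file name, the helper names, and the use of rand () are just for
illustration, not a finished test.

/* test-bitset-sketch.c: an illustration only.
   Assumes the API of Bison's lib/bitset.h.  */

#include <stdlib.h>

#include "bitset.h"

/* Deliberately not a multiple of the word size.  */
#define NBITS 136

/* Part 1: fixed inputs, fixed expected results.  This catches dispatch
   errors, e.g. the AND of a list-bitset accidentally wired to OR.  */
static void
check_and_expected (enum bitset_type type)
{
  bitset a = bitset_alloc (NBITS, type);
  bitset b = bitset_alloc (NBITS, type);
  bitset r = bitset_alloc (NBITS, type);

  bitset_set (a, 1); bitset_set (a, 2);
  bitset_set (b, 2); bitset_set (b, 3);
  bitset_and (r, a, b);

  /* Expected result: exactly bit 2 is set.  */
  for (bitset_bindex i = 0; i < NBITS; i++)
    if (bitset_test (r, i) != (i == 2))
      abort ();

  bitset_free (a); bitset_free (b); bitset_free (r);
}

/* Set the same random bits in X and Y.  */
static void
set_randomly (bitset x, bitset y)
{
  for (bitset_bindex i = 0; i < NBITS; i++)
    if (rand () % 2)
      {
        bitset_set (x, i);
        bitset_set (y, i);
      }
}

/* Part 2: randomized inputs; the list implementation must agree with
   the array implementation, bit for bit.  */
static void
check_and_random (void)
{
  bitset a1 = bitset_alloc (NBITS, BITSET_ARRAY);
  bitset b1 = bitset_alloc (NBITS, BITSET_ARRAY);
  bitset r1 = bitset_alloc (NBITS, BITSET_ARRAY);
  bitset a2 = bitset_alloc (NBITS, BITSET_LIST);
  bitset b2 = bitset_alloc (NBITS, BITSET_LIST);
  bitset r2 = bitset_alloc (NBITS, BITSET_LIST);

  set_randomly (a1, a2);
  set_randomly (b1, b2);
  bitset_and (r1, a1, b1);
  bitset_and (r2, a2, b2);

  for (bitset_bindex i = 0; i < NBITS; i++)
    if (bitset_test (r1, i) != bitset_test (r2, i))
      abort ();

  bitset_free (a1); bitset_free (b1); bitset_free (r1);
  bitset_free (a2); bitset_free (b2); bitset_free (r2);
}

int
main (void)
{
  check_and_expected (BITSET_ARRAY);
  check_and_expected (BITSET_LIST);
  for (int round = 0; round < 100; round++)
    check_and_random ();
  return 0;
}

The same pattern would extend to the other operations and the other
bitset types: the fixed part pins down what each operation is supposed
to compute, and the randomized part gives the branches of each
implementation a workout.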