Re: sensitivity vs. specificity in software testing

2023-04-09 Thread G. Branden Robinson
At 2023-04-08T18:26:13+0100, Ralph Corderoy wrote:

> > My personal test procedures, I think, adequately do this for man(7);
> > every time I'm about to push I render all of our man pages (about 60
> > source documents) to text and compare them to my cache of the ones I
> > rendered the last time I pushed. […]

Re: sensitivity vs. specificity in software testing

2023-04-08 Thread Ralph Corderoy
Hi Branden,

> My personal test procedures, I think, adequately do this for man(7);
> every time I'm about to push I render all of our man pages (about 60
> source documents) to text and compare them to my cache of the ones I
> rendered the last time I pushed.

Yes, that's good as a lone developer. […]
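The workflow described above — render every man page to text before pushing and diff the result against a cache from the previous push — could be sketched roughly as below. The function name, directory layout, and the `RENDER` override are illustrative assumptions, not groff's actual test harness; `groff -man -Tutf8` is merely one plausible renderer.

```shell
# check_renders SRCDIR CACHEDIR: render each man page in SRCDIR to
# plain text and compare it with the copy cached on the previous run.
# Returns nonzero if any page's rendering changed.  (Hypothetical
# sketch; names and layout are assumptions for illustration.)
check_renders() {
    srcdir=$1 cachedir=$2 rc=0
    mkdir -p "$cachedir"
    for page in "$srcdir"/*; do
        [ -f "$page" ] || continue
        base=$(basename "$page")
        # RENDER defaults to groff's man macro package; it can be
        # overridden (e.g. for testing this script itself).
        ${RENDER:-groff -man -Tutf8} "$page" > "$cachedir/$base.new"
        if [ -f "$cachedir/$base.txt" ] &&
           ! diff -u "$cachedir/$base.txt" "$cachedir/$base.new"; then
            rc=1    # rendering changed since the last run
        fi
        mv "$cachedir/$base.new" "$cachedir/$base.txt"   # update baseline
    done
    return $rc
}
```

On the first run the cache is merely populated; thereafter any drift in rendered output shows up as a unified diff, exactly the kind of broad "did anything change at all?" signal a lone developer can eyeball before pushing.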

Re: sensitivity vs. specificity in software testing

2023-04-07 Thread G. Branden Robinson
At 2023-04-07T13:38:38+0100, Ralph Corderoy wrote:

> > On the one hand I like the idea of detecting inadvertent changes to
> > vertical spacing (or anything else) in a document, but on the other,
> > I find narrowly scoped regression tests to be advantageous.
>
> Agreed. I assume groff is a long […]

sensitivity vs. specificity in software testing

2023-04-07 Thread Ralph Corderoy
Hi Branden,

> On the one hand I like the idea of detecting inadvertent changes to
> vertical spacing (or anything else) in a document, but on the other,
> I find narrowly scoped regression tests to be advantageous.

Agreed. I assume groff is a long way from a set of tests which give
high code coverage […]

sensitivity vs. specificity in software testing (was: [PATCH] fix for groff Git regression (Savannah #64005))

2023-04-06 Thread G. Branden Robinson
Hi Ralph,

At 2023-04-06T12:59:57+0100, Ralph Corderoy wrote:
[snip]
> Would it be worth testing all of $output is exactly as expected? This
> would widen what's being tested which may catch a future regression
> outside the scope of this test, e.g. with .DS/.DE. The downside is a
> deliberate change […]
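The trade-off raised here — a wide comparison of all of $output versus a narrowly scoped assertion — can be illustrated with a toy sketch. The sample output, file name, and function names below are invented for illustration and are not taken from groff's test suite.

```shell
# Stand-in for some rendered output under test (hypothetical).
output=$(printf '%s\n' 'Heading' '' 'body text here')

# Narrow check: assert only the property under test (here, that the
# heading appears on the first line).  A deliberate change elsewhere
# in the output cannot break this test, but neither can this test
# catch an unrelated regression.
narrow_ok() {
    printf '%s\n' "$output" | sed -n 1p | grep -qx 'Heading'
}

# Wide check: compare the entire output against a stored expectation.
# It catches any regression anywhere in the output, but every
# deliberate change forces an update of expected.txt.
wide_ok() {
    printf '%s\n' "$output" | diff - expected.txt > /dev/null
}
```

The wide check maximizes sensitivity at the cost of specificity: it flags every difference, intended or not, while the narrow check flags only the one behavior it names.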