Denis Vlasenko wrote:
> I still cannot figure out what precision is, so I restricted the new code to the (n == HOST_BITS_PER_WIDE_INT && precision == HOST_BITS_PER_WIDE_INT) case. Need help here.
At the moment, there is probably no one who understands this code as well as you do, so you may not get much help from others.
I looked at this a little, and I think precision is the number of significant bits in the divisor. Note that unsigned divides use N for the precision, while signed divides use N-1. Also, in the pre_shift case, where we shift the divisor right if it has zeros in its low bits, we subtract the shift count from the precision.
This probably has some effect on how many bits we can safely use for the resulting inverse multiplier.
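To make that concrete, here is a sketch of the choose_multiplier() algorithm from the Granlund/Montgomery paper, as I read it. This is not the gcc source; it is a Python paraphrase, and the parameter names (d, N, prec) are mine. It shows how the precision bounds the search for the inverse multiplier, and why a smaller precision can yield a multiplier that fits in a machine word.

```python
# Hedged sketch, NOT the gcc code: my reading of choose_multiplier()
# from the Granlund/Montgomery PLDI'94 paper.  N is the word size;
# prec is the "precision" discussed above (N for unsigned divides,
# N-1 for signed divides, minus any pre_shift count).

def choose_multiplier(d, N, prec):
    """Return (m, sh_post, l) such that, for 0 <= n < 2**prec,
    n // d == (m * n) >> (N + sh_post), provided m < 2**N.
    (When m needs N+1 bits, the paper -- and, I believe, gcc --
    falls back to an extra add/shift fixup sequence, omitted here.)"""
    assert 1 <= d < (1 << prec)
    l = (d - 1).bit_length()                  # ceil(log2(d))
    sh_post = l
    m_low = (1 << (N + l)) // d
    m_high = ((1 << (N + l)) + (1 << (N + l - prec))) // d
    # Shrink the multiplier while the low/high bounds still agree;
    # this is where a smaller prec can buy a multiplier that fits
    # in N bits.
    while (m_low >> 1) < (m_high >> 1) and sh_post > 0:
        m_low >>= 1
        m_high >>= 1
        sh_post -= 1
    return m_high, sh_post, l

# 32-bit unsigned divide by 5: the well-known multiplier 0xCCCCCCCD.
m, sh, _ = choose_multiplier(5, 32, 32)
print(hex(m), sh)                             # 0xcccccccd 2
for n in (0, 1, 4, 5, 6, 12345678, 2**32 - 1):
    assert n // 5 == (m * n) >> (32 + sh)
```

Note how this also explains the pre_shift handling: for an even divisor like 10, shifting n and d right by one first leaves a divide by 5 on a value with one fewer significant bit, so the shift count can safely be subtracted from prec.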
This code is based on a paper written by Torbjorn Granlund and Peter Montgomery, published in the 1994 SIGPLAN PLDI conference proceedings. You should read it if you haven't already done so; there may be clues in there about how the gcc algorithm works. The gcc code was written by one of the co-authors, Torbjorn Granlund.
-- Jim Wilson, GNU Tools Support, http://www.specifix.com