At 05:10 PM 11/11/2007, Michael H. Goldwasser wrote:
>Dick,
>
>Another typical strategy is to use some prescribed special value for
>the precision parameter to designate the desire for full precision.
>For example, since precisions should presumably be positive, one could
>design this function as:
>
>def fact(n, precision=15):
>    """compute n!.
>
>    precision    the minimum desired precision.
>                 If -1 is specified, computed to full precision.
>    """
>    # ...
>    if precision == -1:
>        precision = n * 10   # ensures that for n < 1 billion, ...
>    # ...
>
>
>If you are not happy with the oddity of -1 (or in cases where -1 might
>be a legitimate parameter value), you can pick the flag from a
>different data type.  In this case, perhaps None would be a more
>natural way to say that you do not want any limit on the precision.
>So this could be coded as
>
>def fact(n, precision=15):
>    """compute n!.
>
>    precision    the minimum desired precision.
>                 If None is specified, computed to full precision.
>    """
>    # ...
>    if precision is None:
>        precision = n * 10   # ensures that for n < 1 billion, ...
>    # ...
>
>Looking at your examples, this should (untested) behave as:
>
># 1 (precision default overridden and set to 20)
>>>> print fact(50, 20)
>3.0414093201713378044e+64
>
># 2 (without explicit value, precision defaults to 15)
>>>> print fact(50)
>3.04140932017134e+64
>
># 3 (explicitly says not to limit precision)
>>>> print fact(50, None)
>30414093201713378043612608166064768844377641568960512000000000000

Beautiful!  Thanks!

Dick
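
For reference, here is a minimal, untested sketch of a fact() that honors the
None sentinel.  The thread does not show Dick's actual implementation; the
approach of computing the exact integer factorial and then rounding it to the
requested number of significant digits with the decimal module is my own
assumption, chosen so the outputs match the three examples above.

    from decimal import Context

    def fact(n, precision=15):
        """compute n!.

        precision    the minimum desired precision (significant digits).
                     If None is specified, the exact integer is returned.
        """
        result = 1
        for i in range(2, n + 1):
            result *= i
        if precision is None:        # sentinel: no limit on the precision
            return result
        # round the exact value to the requested number of significant digits
        return Context(prec=precision).create_decimal(result)

    print(fact(50, 20))    # Decimal('3.0414093201713378044E+64')
    print(fact(50))        # Decimal('3.04140932017134E+64')
    print(fact(50, None))  # 30414093201713378043612608166064768844377641568960512000000000000

The only difference from the sample session is that Decimal prints an
uppercase "E" in its exponent notation.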