On Mon, Aug 13, 2001 at 12:37:45PM -0500, Dimitri Maziuk wrote:
> * Craig Dickson ([EMAIL PROTECTED]) spake thusly:
> > Paul Scott wrote:
> >
> > > Well that may date me a little even though I am actively programming at
> > > this moment. I will research this a little more. My logic would be it
> > > would break the rules of the language to assume that conversion.
> >
> > I don't see how. I see it as a legitimate compiler optimization. If you
> > have "double f = 4;", and you compile 4 as a double-precision value
> > rather than as an int (which would then require an immediate
> > conversion), how could that possibly break a program?
>
> Very simple: double f = 4 may be converted to eg. 4.000000000000000001234,
> and any test for (sqrt(f) == 2.0) will fail. Of course if your (generic
> "you", not personal) code is like that, you probably shouldn't be playing
> with floats.
Actually, any 32-bit int will be converted into a double exactly, with no
loss of precision (a double's 53-bit mantissa easily holds any 32-bit
integer)...  As far as the language definition goes, if you say
double f = 4, the language assures you that the '4' will be converted to
double format. Whether that conversion happens at compile time or at
runtime makes no difference.

--
David Roundy
http://civet.berkeley.edu/droundy/
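
[Editor's note: a minimal C sketch of the point above, not from the original
thread. It assumes an IEEE 754 double, where any 32-bit int is represented
exactly and sqrt of an exact square such as 4.0 is exact.]

    /* Hypothetical demo: small ints convert exactly to double, so the
     * equality tests below succeed on an IEEE 754 implementation. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double f = 4;                    /* the int 4 converts exactly */
        int i = 2147483647;              /* even INT_MAX fits in a 53-bit mantissa */
        double g = i;

        printf("%d\n", f == 4.0);        /* 1: no precision lost */
        printf("%d\n", sqrt(f) == 2.0);  /* 1: sqrt of an exact square is exact */
        printf("%d\n", (int)g == i);     /* 1: the round trip is exact */
        return 0;
    }

Compiled with, e.g., "gcc demo.c -lm", this should print three 1s on a
typical IEEE 754 system.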