> dictionary = {3.1000000000000014: value, 2.1000000000000005: value,
> 1.0999999999999999: value}
>
> Why is this happening? The output is telling me 3.1, but the value isn't
It's a quirk of how computers store floating point numbers.

While humans mentally tend to treat everything as characters (and thus "3.1" is three characters: a "3", a ".", and a "1"), the computer internally stores everything as bytes (which are basically numbers), and we have a character set that says that such-and-such number can represent "A" or "B", or even "3". For efficiency, actual numbers can be STORED as numbers. This is the difference between an "integer" value and a "character" value - not what is stored, but how the stored number is interpreted.

Internally it's all represented as binary numbers, i.e. sums of bits that represent powers of two. So 111 = 64+32+8+4+2+1, which is 1101111 in binary (assuming the math I just did in my head is correct, but you get the idea). (Note that the Python virtual machine is another layer of translation above this, but that's irrelevant to the basic point.)

Okay fine, so "1024" stored as a number only requires 11 bits (binary digits) to store, while "1024" as a string is 4 characters, requiring (at least, depending on your character set) 4 bytes to store. None of this explains what you're seeing.

So how is a floating point number stored? Say, 0.5? The short version (you can google for the longer and more accurate version) is that the fractional part is stored as a sum of binary fractions: 1/2, 1/4, 1/8, and so on. So 0.5 would be the 1/2 bit, 0.125 would be the 1/8 bit (because 1/8 = 0.125), and 0.75 would be the 1/2 bit and the 1/4 bit (because 1/2 + 1/4 = 0.75).

That works great for powers of two, but how do you represent something like 0.1? 1/10 isn't easily represented in binary fractions. Answer: you don't. The computer instead stores the best approximation it can. When you deal with most common representations you never notice the difference, but it's still there.
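You can see all of this from the interpreter prompt. This is just a sketch of the general effect - the exact digits in your dictionary keys depend on how those values were computed - but the pattern is the same: 0.1 and 1.1 are stored as nearest-possible approximations, the tiny error accumulates when you add them repeatedly, and formatting for display rounds it away:

```python
from decimal import Decimal

# 1/10 has no finite binary expansion, so the stored value is only
# the nearest representable double - the error shows up immediately:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Repeated addition lets the tiny error accumulate, which is why
# keys built up in a loop print with those long tails:
total = 0.0
for _ in range(3):
    total += 1.1
print(total)             # 3.3000000000000003, not exactly 3.3

# Formatting rounds for display and hides the noise:
print(f"{total:.1f}")    # 3.3

# The decimal module (standard library) stores base-10 fractions
# exactly, at some cost in speed:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```

Note that Decimal should be constructed from strings, not floats - `Decimal(0.1)` would faithfully capture the binary approximation, tail and all.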
Floating point math is an issue for all programs that require high precision, and there are libraries to deal with it (Python's decimal module, for example), but they aren't the default (both in and out of Python) for various reasons.

In your case I suspect you'll just want to use a format specifier when you output the number, and you'll see exactly what you expect. It only becomes a problem in high-precision work, which counting in 0.1 increments tends not to be.

Hope that helps!

-- 
Brett Ritter / SwiftOne
swift...@swiftone.org
_______________________________________________
Tutor maillist  -  Tutor@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor