http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58204
Bug ID: 58204
Summary: Spurious error when using BOZ literal to set an integer
Product: gcc
Version: 4.7.3
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: fortran
Assignee: unassigned at gcc dot gnu.org
Reporter: quantheory at gmail dot com

On an x86_64 system, I find that the following program produces two different compile-time errors:

integer, parameter :: i128 = selected_int_kind(26)
integer(i128), parameter :: foo = int(Z'40000000000000000000000000000000',i128)
integer(i128), parameter :: bar = int(Z'80000000000000000000000000000000',i128)
integer, parameter :: i64 = selected_int_kind(13)
integer(i64), parameter :: foo2 = int(Z'4000000000000000',i64)
integer(i64), parameter :: bar2 = int(Z'8000000000000000',i64)
end

The errors are:

test_boz.F90:3.73:

  integer(i128), parameter :: bar = int(Z'80000000000000000000000000000000',i128)
                                                                           1
Error: Integer too big for integer kind 16 at (1)

test_boz.F90:6.38:

  integer(i64), parameter :: bar2 = int(Z'8000000000000000',i64)
                                        1
Error: Arithmetic overflow converting INTEGER(16) to INTEGER(8) at (1). This check can be disabled with the option -fno-range-check

The problem seems to be that the BOZ literal is being represented as an unsigned INTEGER(16), and converting it either to a signed value or to a smaller integer kind produces an overflow.

In bug 54072, Tobias suggests that this is acceptable behavior because of this section of the standard:

"If A is a boz-literal-constant, the value of the result is the value whose bit sequence according to the model in 13.3 is the same as that of A as modified by padding or truncation according to 13.3.3. The interpretation of a bit sequence whose most significant bit is 1 is processor dependent."
I would disagree, and say that gfortran's behavior should be considered a bug, for three reasons:

1) No truncation or padding is necessary in either of the cases in my example, so the bit patterns for "bar" and "bar2" clearly correspond to actual numbers that are representable in these types (in this case, the smallest representable integer of each kind).

2) Throwing an error when the most significant bit is 1 is not in any meaningful sense an "interpretation" of the bit sequence. In particular, an integer whose most significant bit is 1 is valid in every other context in gfortran, so it is unreasonable to treat it as an invalid bit pattern in this one case.

3) The current behavior makes it impossible to use BOZ literals to set any negative number, unless you use -fno-range-check, or use the BOZ to set an intermediate value and then use ibset to set the sign bit. This complicates using BOZ literals to set constants representing positive and negative infinity, which is one of the few reasons you would ever want to set a real using a BOZ literal in the first place.