Does -flto give gcc access to data addresses?

2014-09-11 Thread Jeff Prothero

Hi, based on the available docs like
https://gcc.gnu.org/onlinedocs/gccint/LTO.html
I'm having trouble understanding just what the gcc LTO
framework is intended to be architecturally capable of.

As a concrete motivating example, I have a 32K embedded
program about 5% of which consists of sequences like

movhi   r2,0
addi    r2,r2,26444
stw     r15,0(r2)

This is on a 32-bit RISC architecture (Nios2) with 16-bit
immediate values in instructions where in general a
sequence like

movhi   r2,high_half_of_address
addi    r2,r2,low_half_of_address

is required to assemble an arbitrary 32-bit address in
registers for use.

However, if the high half of the address happens to be
zero, (which is universally true in this program because
code+data fit in 64KB -- forced by hardware constraints)
we can collapse

movhi   r2,0
addi    r2,r2,26444
stw     r15,0(r2)

to just

stw     r15,26444(r0)

saving two instructions. (On this architecture
R0 is hardwired to zero.)

This seems like a natural peephole optimization at
linktime -- *if* data addresses are resolved in some
(preliminary?) fashion during linktime code generation.

Is this a plausible optimization to implement in gcc
+ binutils with the current -flto support architecture?

If so, what doc/mechanism/approach/sourcefile should
I be studying in order to implement this?

If not, is there some other productive way to tickle
gcc + binutils here?

Thanks in advance,
 -Jeff




Obscure crashes due to gcc 4.9 -O2 => -fisolate-erroneous-paths-dereference

2015-02-18 Thread Jeff Prothero

Starting with gcc 4.9, -O2 implicitly invokes

-fisolate-erroneous-paths-dereference:

which

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

documents as

Detect paths that trigger erroneous or undefined behavior due to
dereferencing a null pointer. Isolate those paths from the main control
flow and turn the statement with erroneous or undefined behavior into a
trap. This flag is enabled by default at -O2 and higher.

This results in a sizable number of previously working embedded programs
mysteriously crashing when recompiled under gcc 4.9.  The problem is that
embedded programs often have RAM starting at address zero (think
hardware-defined interrupt vectors, say) which gets initialized by code that
the -fisolate-erroneous-paths-dereference logic can recognize as reading
and/or writing address zero.

What happens then is that the previously working program compiles without
any warnings, but then typically locks up mysteriously (often disabling the
remote debug link) because the trap is not gracefully handled by the
embedded runtime.

Granted, such code is out of spec with respect to the C standard.

Nonetheless, the problem is unexpected and quite painful to
track down.

Is there any good reason the

-fisolate-erroneous-paths-dereference

logic could not issue a compile-time warning or error, instead of just
silently generating code virtually certain to crash at runtime?

Such a warning/error would save a lot of engineers significant amounts
of time, energy and frustration tracking down this problem.

I would like to think that the spirit of gcc is about helping engineers
efficiently correct nonstandard pain, rather than inflicting maximal
pain upon engineers violating C standards.  :-)

-Jeff

BTW, I'd also be curious to know what is regarded as engineering best
practice for writing a value to address zero when this is architecturally
required by the hardware platform at hand.  Obviously one can do various
things to obscure the process sufficiently that the current gcc implementation
won't detect it and complain, but as gcc gets smarter about optimization
those are at risk of failing in a future release.  It would be nice to have
a guaranteed-to-work future-proof idiom for doing this. Do we have one, short
of retreating to assembly code?


Re: Obscure crashes due to gcc 4.9 -O2 => -fisolate-erroneous-paths-dereference

2015-02-19 Thread Jeff Prothero

(Thanks to everyone for the helpful feedback!)

Daniel Gutson wrote:

> what about then two warnings (disabled by default), one intended to
> tell the user each time the compiler removes a conditional
>   (-fdelete-null-pointer-checks)
> and another intended to tell the user each time the compiler adds a
> trap due to dereference an address 0?
>
> E.g.
>   -Wnull-pointer-check-deleted
>   -Wnull-dereference-considered-erroneous

I very much like the idea of such warnings.

I'm not clear why one would not warn by default when detecting
non-standards-conformant code and producing code guaranteed not
to do what the programmer intended.  But presumably most sane
engineers these days compile with -Wall. :-)

-Jeff