On Mon, Jan 28, 2013 at 09:39:00AM -0700, Jeff Law wrote:
> I'm assuming that we don't need the shallow_copy_rtx call and
> related code because in the PREFETCH case we generate a new MEM and
> the underlying address can be safely shared. Right?

AFAIK cselib_lookup* never modifies the rtx it is passed, so sharing
the address in the newly created MEM should be fine.
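
Roughly, the PREFETCH handling being discussed would look something
like this (just a sketch of the idea, not the committed change; the
exact cselib_lookup_from_insn arguments are from memory):

      case PREFETCH:
        /* x is the PREFETCH rtx being analyzed; XEXP (x, 0) is the
           prefetched address.  */
        if (!deps->readonly)
          {
            /* A fresh MEM is built around the prefetch address; the
               address rtx itself stays shared, which is fine because
               cselib only reads it.  */
            rtx mem = gen_rtx_MEM (Pmode, XEXP (x, 0));
            if (sched_deps_info->use_cselib)
              /* Record a cselib VALUE for the address first, so the
                 later cselib_subst_to_values in add_insn_mem_dependence
                 can find it.  */
              cselib_lookup_from_insn (XEXP (mem, 0), Pmode, 1,
                                       GET_MODE (mem), insn);
            add_insn_mem_dependence (deps, true, insn, mem);
          }
        break;
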
On 01/28/2013 07:14 AM, Jakub Jelinek wrote:
Hi!
We ICE on the following testcase when using cselib, because
cselib_lookup* is never called on the PREFETCH argument, and
add_insn_mem_dependence calls cselib_subst_to_values on it, which
assumes cselib_lookup* has already been performed on it.
For MEMs sched_analyze_2 calls cselib_lookup_from_insn on the
address; this patch does the same for the PREFETCH argument before
handing the new MEM to add_insn_mem_dependence.
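
For reference, the MEM path does roughly this (paraphrased, not the
exact source; t and mem stand for the MEM rtx in the respective
functions):

  /* In sched_analyze_2, for every MEM:  */
  if (sched_deps_info->use_cselib)
    /* Make sure cselib has a VALUE recorded for the address ...  */
    cselib_lookup_from_insn (XEXP (t, 0), get_address_mode (t), 1,
                             GET_MODE (t), insn);

  /* ... because add_insn_mem_dependence later does, roughly:  */
  if (sched_deps_info->use_cselib)
    {
      mem = shallow_copy_rtx (mem);
      /* This assumes every address component already has a VALUE,
         which is exactly what the PREFETCH argument was missing.  */
      XEXP (mem, 0) = cselib_subst_to_values (XEXP (mem, 0),
                                              GET_MODE (mem));
    }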