Re: Escape the unnecessary re-optimization in automatic parallelization.

2009-10-13 Thread Alexey Salmin
Oh, hello Yuri, nice to meet you here :) Alexey On Tue, Oct 13, 2009 at 7:51 PM, Yuri Kashnikoff wrote: >> Therefore, the most effective way to address the issue of running redundant optimization passes in the context is probably to put it in the wider context of the work to allow exter

Re: Escape the unnecessary re-optimization in automatic parallelization.

2009-10-13 Thread Yuri Kashnikoff
> Therefore, the most effective way to address the issue of running redundant optimization passes in the context is probably to put it in the wider context of the work to allow external plugins to influence the pass sequence that is being applied, and to control this with machine learning.
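As a rough illustration of what "external plugins influencing the pass sequence" looks like in practice, here is a minimal, untested sketch against the GCC 4.5-era plugin API (the era of this thread): it registers a do-nothing GIMPLE pass right after the "parloops" pass via PLUGIN_PASS_MANAGER_SETUP. The pass name "my_dummy_pass", the choice of insertion point, and the exact header set are illustrative assumptions, not something proposed in the thread, and the struct opt_pass layout changed in later GCC releases.

/* Minimal plugin sketch: insert a pass after "parloops" (GCC 4.5-era API).  */
#include "gcc-plugin.h"
#include "plugin-version.h"
#include "tree-pass.h"

int plugin_is_GPL_compatible;        /* Required by the plugin loader.  */

/* Do-nothing execute function for the illustrative pass.  */
static unsigned int
my_dummy_execute (void)
{
  return 0;
}

/* Pass descriptor following the GCC 4.5-era struct opt_pass layout.  */
static struct gimple_opt_pass my_dummy_pass =
{
  {
    GIMPLE_PASS,
    "my_dummy_pass",                 /* name */
    NULL,                            /* gate */
    my_dummy_execute,                /* execute */
    NULL, NULL,                      /* sub, next */
    0,                               /* static_pass_number */
    TV_NONE,                         /* tv_id */
    0, 0, 0,                         /* properties required/provided/destroyed */
    0, 0                             /* todo_flags_start, todo_flags_finish */
  }
};

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
  struct register_pass_info pass_info;

  if (!plugin_default_version_check (version, &gcc_version))
    return 1;

  /* Ask the pass manager to run the new pass right after parloops.  */
  pass_info.pass = &my_dummy_pass.pass;
  pass_info.reference_pass_name = "parloops";
  pass_info.ref_pass_instance_number = 1;
  pass_info.pos_op = PASS_POS_INSERT_AFTER;

  register_callback (plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,
                     NULL, &pass_info);
  return 0;
}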

Re: Escape the unnecessary re-optimization in automatic parallelization.

2009-10-11 Thread Li Feng
Hi Joern, On Sat, Oct 10, 2009 at 5:27 PM, Joern Rennecke wrote: > Quoting Li Feng: >> So my question is, 1. Is this necessary/correct if we want to escape the re-optimization for the first few passes before tree-parloop.c and continue the optimization passes after it for th

Re: Escape the unnecessary re-optimization in automatic parallelization.

2009-10-10 Thread Joern Rennecke
Quoting Li Feng: So my question is, 1. Is this necessary/correct if we want to escape the re-optimization for the first few passes before tree-parloop.c and continue the optimization passes after it for the function fun.loop_f0? There must be compile time savings if we do this, in my opinion
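The compile-time saving being asked about here can be pictured outside of GCC. The following self-contained C sketch uses invented names (pass_t, function_t, first_unrun_pass) and is not GCC's pass manager: it simply records how far through the pipeline an outlined function such as fun.loop_f0 has already been optimized and skips those passes when the new function is processed.

#include <stdio.h>

struct function_t;

/* An illustrative optimization pass: just a name and an execute hook.  */
typedef struct pass_t
{
  const char *name;
  void (*execute) (struct function_t *);
} pass_t;

/* An illustrative function being compiled, tagged with how far through the
   pass list its body has already been optimized.  */
typedef struct function_t
{
  const char *name;
  int first_unrun_pass;     /* 0 for ordinary functions.  */
} function_t;

static void
do_nothing (struct function_t *fn)
{
  (void) fn;                /* Placeholder for real pass work.  */
}

static pass_t pass_list[] =
{
  { "early_cleanups", do_nothing },
  { "graphite",       do_nothing },
  { "parloops",       do_nothing },   /* fun.loop_f0 is outlined here.  */
  { "late_passes",    do_nothing },
};

/* Run the pipeline, skipping passes the function has already been through.  */
static void
execute_pass_list (function_t *fn)
{
  int n = (int) (sizeof pass_list / sizeof pass_list[0]);
  int i;

  for (i = 0; i < n; i++)
    {
      if (i < fn->first_unrun_pass)
        {
          printf ("%s: skipping %s (already optimized)\n",
                  fn->name, pass_list[i].name);
          continue;
        }
      printf ("%s: running %s\n", fn->name, pass_list[i].name);
      pass_list[i].execute (fn);
    }
}

int
main (void)
{
  function_t fun = { "fun", 0 };
  /* The outlined loop body has already been through everything up to and
     including parloops, so only the later passes would need to run on it.  */
  function_t loop_f0 = { "fun.loop_f0", 3 };

  execute_pass_list (&fun);
  execute_pass_list (&loop_f0);
  return 0;
}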

Escape the unnecessary re-optimization in automatic parallelization.

2009-10-09 Thread Li Feng
Hi, I'm considering how to escape the unnecessary re-optimization in automatic parallelization. E.g. in a source file, in the function fun(), one loop is considered can_be_parallel by the Graphite pass, and the pass in tree-parloop.c will call the code generation part for this loop and generate
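A hypothetical example of the situation described above: a single loop in fun() with independent iterations that Graphite can mark can_be_parallel and that the parloops code generation then outlines into a new function (named along the lines of fun.loop_f0, depending on the GCC version), which afterwards goes through the optimization pipeline again from the start. The flags shown are the usual auto-parallelization options; the exact spelling of the outlined function's name is version-dependent.

/* Hypothetical reproducer: build with something like
     gcc -O2 -ftree-parallelize-loops=4 -floop-parallelize-all -c fun.c
   so Graphite/parloops can consider the loop and outline its body.  */

#define N 100000

double a[N], b[N], c[N];

void
fun (void)
{
  int i;

  /* Independent iterations, no loop-carried dependence: a candidate for
     can_be_parallel and for the code generation in tree-parloop.c.  */
  for (i = 0; i < N; i++)
    a[i] = b[i] * c[i] + 1.0;
}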