Re: About GSOC.

2019-06-04 Thread Tejas Joshi
Hello.

> NaN, and you should make sure it behaves accordingly.  (If it should never
> be called for them, a gcc_assert would be appropriate.)

I can't find any documentation about how and when to use gcc_assert.
But I used it based on the comment at its definition and on the places
where it is already used. Is this appropriate, or is it supposed to be
used before calling the function? For example:

+bool
+is_even (REAL_VALUE_TYPE *r)
+{
+  /* This function is not supposed to be used for Inf and NaN.  */
+  gcc_assert (r->cl != rvc_inf);
+  gcc_assert (r->cl != rvc_nan);

> So n is the bit position, and w is the word position, of the bit with
> value 1; n-1 is the position of the bit with value 0.5.
> If n is a multiple of HOST_BITS_PER_LONG (that is, the bits with values
> 0.5 and 1 are in different words), this will incorrectly return false when
> the 0.5 bit is set.

I did not understand this. What is the bit with value 1?
But when n is a multiple of HOST_BITS_PER_LONG, the function was
computing the value of w wrongly (e.g. for the number 2^63 + 0.5). In
that case, would the following revision of is_halfway_below be
acceptable?

+bool
+is_halfway_below (const REAL_VALUE_TYPE *r)
+{
+  /* This function is not supposed to be used for Inf and NaN.  */
+  gcc_assert (r->cl != rvc_inf);
+  gcc_assert (r->cl != rvc_nan);
+  int i;
+
+  /* For numbers in (-0.5,0) and (0,0.5).  */
+  if (REAL_EXP (r) < 0)
+    return false;
+
+  else if (REAL_EXP (r) <= SIGNIFICAND_BITS)
+    {
+      unsigned int n = SIGNIFICAND_BITS - REAL_EXP (r);
+      int w = n / HOST_BITS_PER_LONG;
+
+      if (n % HOST_BITS_PER_LONG == 0)
+        {
+          if (w > 0)
+            w = w - 1;
+          else
+            w = 0;
+        }
+
+      for (i = 0; i < w; ++i)
+        {
+          if (r->sig[i] != 0)
+            return false;
+        }

Thanks.


On Mon, 3 Jun 2019 at 22:08, Joseph Myers  wrote:
>
> On Fri, 31 May 2019, Tejas Joshi wrote:
>
> > +/* Return true if integer part of R is even, else return false. */
> > +
> > +bool
> > +is_even (REAL_VALUE_TYPE *r)
> > +{
> > +  if (REAL_EXP (r) <= 0)
> > +return false;
>
> But the integer part (truncation towards 0) of something in the interval
> (-1, 1) is of course even.
>
> > +  else if (REAL_EXP (r) < SIGNIFICAND_BITS)
> > +  {
> > +unsigned int n = SIGNIFICAND_BITS - REAL_EXP (r);
> > +int w = n / HOST_BITS_PER_LONG;
> > +
> > +unsigned long num = ((unsigned long)1 << (n % HOST_BITS_PER_LONG));
> > +
> > +if ((r->sig[w] & num) == 0)
> > +  return true;
> > +  }
> > +  return false;
> > +}
>
> Suppose REAL_EXP (r) == SIGNIFICAND_BITS.  Then you still need to check
> the low bit in that case.
>
> Suppose REAL_EXP (r) > SIGNIFICAND_BITS.  Then the number is definitely
> even, so you should return true, not false.
>
> The comment on this function needs to define what it does for infinity /
> NaN, and you should make sure it behaves accordingly.  (If it should never
> be called for them, a gcc_assert would be appropriate.)
>
> What does this function do for zero?  It should, of course, return that it
> is even.
>
> > +/* Return true if R is halfway between two integers, else return false. */
>
> Again, define what this does for infinity / NaN and make sure it behaves
> accordingly.
>
> > +bool
> > +is_halfway_below (const REAL_VALUE_TYPE *r)
> > +{
> > +  if (REAL_EXP (r) < 0)
> > +return false;
> > +
> > +  if (REAL_EXP (r) == 0)
> > +  {
> > +unsigned long temp = ((unsigned long)1 << 63);
>
> unsigned long might be 32-bit; you can't assume it's 64-bit.  You may mean
> (HOST_BITS_PER_LONG - 1) instead of 63.
>
> > +if (((r->sig[SIGSZ-1] & temp) != 0) && ((r->sig[SIGSZ-1] & (temp-1)) == 0))
> > +  return true;
> > +else
> > +  return false;
>
> This appears only to be checking the high word, not lower bits.
>
> > +  else if (REAL_EXP (r) < SIGNIFICAND_BITS)
> > +  {
> > +unsigned int n = SIGNIFICAND_BITS - REAL_EXP (r);
> > +int i, w = n / HOST_BITS_PER_LONG;
>
> So n is the bit position, and w is the word position, of the bit with
> value 1; n-1 is the position of the bit with value 0.5.
>
> > +for (i = 0; i < w; ++i)
> > +{
> > +  if (r->sig[i] != 0)
> > +return false;
> > +}
>
> If n is a multiple of HOST_BITS_PER_LONG (that is, the bits with values
> 0.5 and 1 are in different words), this will incorrectly return false when
> the 0.5 bit is set.
>
> > +unsigned long num = ((unsigned long)1 << ((n - 1) % HOST_BITS_PER_LONG));
> > +
> > +if (((r->sig[w] & num) != 0) && ((r->sig[w] & (num-1)) == 0))
> > +  return true;
>
> And this also needs updating to handle the case where 0.5 and 1 are in
> different words correctly; currently it's checking bits that are all one
> word too high.  It's possible that for both issues, you want w to be the
> word with the 0.5 bit, not the word with the 1 bit.
>
> For all the above, please add appropriate tests in the testsuite (for
> example, where 0.5 and 1 are in different words and the above would have
> produced incorrect results).

Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Richard Biener
On Fri, May 31, 2019 at 5:40 PM Andrew MacLeod  wrote:
>
> On 5/29/19 7:15 AM, Richard Biener wrote:
> > On Tue, May 28, 2019 at 4:17 PM Andrew MacLeod  wrote:
> >> On 5/27/19 9:02 AM, Richard Biener wrote:
> >>> On Fri, May 24, 2019 at 5:50 PM Andrew MacLeod  
> >>> wrote:
> > The above suggests that iff this is done at all it is not in GORI because
> > those are not conditional stmts or ranges from feeding those.  The
> > machinery doing the use-def walking from stmt context also cannot
> > come along these so I have the suspicion that Ranger cannot handle
> > telling us that for the stmt following above, for example
> >
> > if (_5 != 0)
> >
> > that _5 is not zero?
> >
> > Can you clarify?
> So there are 2 aspects to this.  The range-ops code for DIV_EXPR, if
>  asked for the range of op2 () would return ~[0,0] for _5.
>  But you are also correct in that the walk backwards would not find this.
> 
>  This is similar functionality to how null_derefs are currently handled,
>  and in fact could probably be done simultaneously using the same code
>  base.   I didn't bring null derefs up, but this is a good time :-)
> 
>  There is a separate class used by the gori-cache which tracks the
>  non-nullness property at the block level.  It has a single API:
>  non_null_deref_p (name, bb), which determines whether there is a
>  dereference in any BB for NAME, which indicates whether the range has an
>  implicit ~[0,0] range in that basic block or not.
> >>> So when we then have
> >>>
> >>>_1 = *_2; // after this _2 is non-NULL
> >>>_3 = _1 + 1; // _3 is non-NULL
> >>>_4 = *_3;
> >>> ...
> >>>
> >>> when an on-demand user asks whether _3 is non-NULL at the
> >>> point of _4 = *_3 we don't have this information?  Since the
> >>> per-BB caching will only say _1 is non-NULL after the BB.
> >>> I'm also not sure whether _3 ever gets non-NULL during
> >>> non-NULL processing of the block since walking immediate uses
> >>> doesn't really help here?
> >> presumably _3 is globally non-null due to the definition being (pointer
> >> + x)  ... i.e., _3 has a global range of ~[0,0] ?
> > No, _3 is ~[0, 0] because it is derived from _1 which is ~[0, 0] and
> > you cannot arrive at NULL by pointer arithmetic from a non-NULL pointer.
>
> I'm confused.
>
> _1 was loaded from _2 (thus asserting _2 is non-NULL).  But we have no
> idea what the range of _1 is, so how do you assert _1 is ~[0,0] ?

Bug on my part - it should offset _2:

 _1 = *_2; // after this _2 is non-NULL
 _3 = _2 + 1; // _3 is non-NULL
 _4 = *_3;

> The only way I see to determine _3 is non-NULL  is through the _4 = *_3
> statement.

So how do you derive a ~[0,0] _2 and thus know that *_3 does not trap?

>
> >
> >>> So this seems to be a fundamental limitation [to the caching scheme],
> >>> not sure if it is bad in practice.
> >>>
> >>> Or am I missing something?
> >>>
> >> Not missing anything.  The non-nullness property is maintained globally at
> >> the basic block level.  Both _1 and _3 are flagged as being non-null in
> >> the block.  Upon exit, it's a bit check.  If the global information does
> >> not have the non-nullness property, then when a request is made for
> >> non-nullness and the def and the use are both within the same block,
> >> and it's flagged as being non-null in that block, then the request is
> >> forced back to a quick walk between the def and the use to see if there
> >> is any non-nullness introduced in between.  Yes, that makes it a linear
> >> walk, but it's infrequent, and often short, to the best of our knowledge
> >> at this point anyway :-)
> > So with the clarification above do we ever see that _3 is non-NULL?
> > I suppose the worker processing _3 = _1 + 1 would ask for
> > _1 non-nullness but we do not record any non-NULL-ness of _1 in
> > this basic-block (but only at its end).  Consider stmts
> >
> >   _4 = (uintptr_t) _2;
> >   _5 = _6 / _4;
> >   _1 = *_2;
> > ...
> >
> > here at _1 we know _2 is not NULL.  But if we ask for non-NULLness
> > of _2 at the definition of _4 we may not compute ~[0, 0] and thus
> > conclude that _6 / _4 does not trap.
> EVRP must look backwards to figure this out since the forward walk will
> process _5 = _6 / _4 before it sees the dereference to _2... so how does
> it know that _4 is non-zero without looking backwards at things after it
> sees the dereference??  Does it actually do this?

It actually doesn't do this for divisions (but I have a patch lying somewhere).
During the forward walk when coming along _5 = _6 / _4; it would
in evrp_range_analyzer::record_ranges_from_stmt via infer_value_range
push a new value-range for _4.  To infer the same for _2 it would need to
look backward which I think it doesn't do right now but it easily could
(I'm just trying to make up examples to understand ranger better).
When visiting _1 = *_2 we then know _2 is not NULL.

There may be m

Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Richard Biener
On Tue, Jun 4, 2019 at 5:26 PM Richard Biener
 wrote:
>
> On Fri, May 31, 2019 at 5:40 PM Andrew MacLeod  wrote:
> >
> > On 5/29/19 7:15 AM, Richard Biener wrote:
> > > On Tue, May 28, 2019 at 4:17 PM Andrew MacLeod  
> > > wrote:
> > >> On 5/27/19 9:02 AM, Richard Biener wrote:
> > >>> On Fri, May 24, 2019 at 5:50 PM Andrew MacLeod  
> > >>> wrote:
> > > The above suggests that iff this is done at all it is not in GORI because
> > > those are not conditional stmts or ranges from feeding those.  The
> > > machinery doing the use-def walking from stmt context also cannot
> > > come along these so I have the suspicion that Ranger cannot handle
> > > telling us that for the stmt following above, for example
> > >
> > > if (_5 != 0)
> > >
> > > that _5 is not zero?
> > >
> > > Can you clarify?
> >  So there are 2 aspects to this.  The range-ops code for DIV_EXPR, if
> >  asked for the range of op2 () would return ~[0,0] for _5.
> >  But you are also correct in that the walk backwards would not find this.
> > 
> >  This is similar functionality to how null_derefs are currently handled,
> >  and in fact could probably be done simultaneously using the same code
> >  base.   I didn't bring null derefs up, but this is a good time :-)
> > 
> >  There is a separate class used by the gori-cache which tracks the
> >  non-nullness property at the block level.  It has a single API:
> >  non_null_deref_p (name, bb), which determines whether there is a
> >  dereference in any BB for NAME, which indicates whether the range has an
> >  implicit ~[0,0] range in that basic block or not.
> > >>> So when we then have
> > >>>
> > >>>_1 = *_2; // after this _2 is non-NULL
> > >>>_3 = _1 + 1; // _3 is non-NULL
> > >>>_4 = *_3;
> > >>> ...
> > >>>
> > >>> when an on-demand user asks whether _3 is non-NULL at the
> > >>> point of _4 = *_3 we don't have this information?  Since the
> > >>> per-BB caching will only say _1 is non-NULL after the BB.
> > >>> I'm also not sure whether _3 ever gets non-NULL during
> > >>> non-NULL processing of the block since walking immediate uses
> > >>> doesn't really help here?
> > >> presumably _3 is globally non-null due to the definition being (pointer
> > >> + x)  ... i.e., _3 has a global range of ~[0,0] ?
> > > No, _3 is ~[0, 0] because it is derived from _1 which is ~[0, 0] and
> > > you cannot arrive at NULL by pointer arithmetic from a non-NULL pointer.
> >
> > I'm confused.
> >
> > _1 was loaded from _2 (thus asserting _2 is non-NULL).  But we have no
> > idea what the range of _1 is, so how do you assert _1 is ~[0,0] ?
>
> Bug on my part - it should offset _2:
>
>  _1 = *_2; // after this _2 is non-NULL
>  _3 = _2 + 1; // _3 is non-NULL
>  _4 = *_3;
>
> > The only way I see to determine _3 is non-NULL  is through the _4 = *_3
> > statement.
>
> So how do you derive a ~[0,0] _2 and thus know that *_3 does not trap?
>
> >
> > >
> > >>> So this seems to be a fundamental limitation [to the caching scheme],
> > >>> not sure if it is bad in practice.
> > >>>
> > >>> Or am I missing something?
> > >>>
> > >> Not missing anything.  The non-nullness property is maintained globally at
> > >> the basic block level.  Both _1 and _3 are flagged as being non-null in
> > >> the block.  Upon exit, it's a bit check.  If the global information does
> > >> not have the non-nullness property, then when a request is made for
> > >> non-nullness and the def and the use are both within the same block,
> > >> and it's flagged as being non-null in that block, then the request is
> > >> forced back to a quick walk between the def and the use to see if there
> > >> is any non-nullness introduced in between.  Yes, that makes it a linear
> > >> walk, but it's infrequent, and often short, to the best of our knowledge
> > >> at this point anyway :-)
> > > So with the clarification above do we ever see that _3 is non-NULL?
> > > I suppose the worker processing _3 = _1 + 1 would ask for
> > > _1 non-nullness but we do not record any non-NULL-ness of _1 in
> > > this basic-block (but only at its end).  Consider stmts
> > >
> > >   _4 = (uintptr_t) _2;
> > >   _5 = _6 / _4;
> > >   _1 = *_2;
> > > ...
> > >
> > > here at _1 we know _2 is not NULL.  But if we ask for non-NULLness
> > > of _2 at the definition of _4 we may not compute ~[0, 0] and thus
> > > conclude that _6 / _4 does not trap.
> > EVRP must look backwards to figure this out since the forward walk will
> > process _5 = _6 / _4 before it sees the dereference to _2... so how does
> > it know that _4 is non-zero without looking backwards at things after it
> > sees the dereference??  Does it actually do this?
>
> It actually doesn't do this for divisions (but I have a patch lying somewhere).
> During the forward walk when coming along _5 = _6 / _4; it would
> in evrp_range_analyzer::record_ranges_from_stmt via infer_value_range
> push a new value-range for _4.

Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Andrew MacLeod

On 6/4/19 11:26 AM, Richard Biener wrote:

On Fri, May 31, 2019 at 5:40 PM Andrew MacLeod  wrote:

On 5/29/19 7:15 AM, Richard Biener wrote:

On Tue, May 28, 2019 at 4:17 PM Andrew MacLeod  wrote:

On 5/27/19 9:02 AM, Richard Biener wrote:

On Fri, May 24, 2019 at 5:50 PM Andrew MacLeod  wrote:

The above suggests that iff this is done at all it is not in GORI because
those are not conditional stmts or ranges from feeding those.  The
machinery doing the use-def walking from stmt context also cannot
come along these so I have the suspicion that Ranger cannot handle
telling us that for the stmt following above, for example

 if (_5 != 0)

that _5 is not zero?

Can you clarify?

So there are 2 aspects to this.  The range-ops code for DIV_EXPR, if
asked for the range of op2 () would return ~[0,0] for _5.
But you are also correct in that the walk backwards would not find this.

This is similar functionality to how null_derefs are currently handled,
and in fact could probably be done simultaneously using the same code
base.   I didn't bring null derefs up, but this is a good time :-)

There is a separate class used by the gori-cache which tracks the
non-nullness property at the block level.  It has a single API:
non_null_deref_p (name, bb), which determines whether there is a
dereference in any BB for NAME, which indicates whether the range has an
implicit ~[0,0] range in that basic block or not.

So when we then have

_1 = *_2; // after this _2 is non-NULL
_3 = _1 + 1; // _3 is non-NULL
_4 = *_3;
...

when an on-demand user asks whether _3 is non-NULL at the
point of _4 = *_3 we don't have this information?  Since the
per-BB caching will only say _1 is non-NULL after the BB.
I'm also not sure whether _3 ever gets non-NULL during
non-NULL processing of the block since walking immediate uses
doesn't really help here?

presumably _3 is globally non-null due to the definition being (pointer
+ x)  ... i.e., _3 has a global range of ~[0,0] ?

No, _3 is ~[0, 0] because it is derived from _1 which is ~[0, 0] and
you cannot arrive at NULL by pointer arithmetic from a non-NULL pointer.

I'm confused.

_1 was loaded from _2 (thus asserting _2 is non-NULL).  But we have no
idea what the range of _1 is, so how do you assert _1 is ~[0,0] ?

Bug on my part - it should offset _2:

  _1 = *_2; // after this _2 is non-NULL
  _3 = _2 + 1; // _3 is non-NULL
  _4 = *_3;


The only way I see to determine _3 is non-NULL  is through the _4 = *_3
statement.

So how do you derive a ~[0,0] _2 and thus know that *_3 does not trap?


Well, we know *_2 is non-null on exit from the block.  We wind backwards,
so we know _2 is non-null when we get to _3 = _2 + 1.
But as I will say later in this note, and in the earlier discussion with
Jeff in this thread, the non-null property needs some work, as I can
rewrite this to make it wrong. :-P


There is an interesting difference between what we know about things in 
a  forward walk, what we know in a backward walk, and what we know in a 
def-chain walk  :-)






So this seems to be a fundamental limitation [to the caching scheme],
not sure if it is bad in practice.

Or am I missing something?


Not missing anything.  The non-nullness property is maintained globally at
the basic block level.  Both _1 and _3 are flagged as being non-null in
the block.  Upon exit, it's a bit check.  If the global information does
not have the non-nullness property, then when a request is made for
non-nullness and the def and the use are both within the same block,
and it's flagged as being non-null in that block, then the request is
forced back to a quick walk between the def and the use to see if there
is any non-nullness introduced in between.  Yes, that makes it a linear
walk, but it's infrequent, and often short, to the best of our knowledge
at this point anyway :-)

So with the clarification above do we ever see that _3 is non-NULL?
I suppose the worker processing _3 = _1 + 1 would ask for
_1 non-nullness but we do not record any non-NULL-ness of _1 in
this basic-block (but only at its end).  Consider stmts

   _4 = (uintptr_t) _2;
   _5 = _6 / _4;
   _1 = *_2;
...

here at _1 we know _2 is not NULL.  But if we ask for non-NULLness
of _2 at the definition of _4 we may not compute ~[0, 0] and thus
conclude that _6 / _4 does not trap.

EVRP must look backwards to figure this out since the forward walk will
process _5 = _6 / _4 before it sees the dereference to _2... so how does
it know that _4 is non-zero without looking backwards at things after it
sees the dereference??  Does it actually do this?

It actually doesn't do this for divisions (but I have a patch lying somewhere).
During the forward walk when coming along _5 = _6 / _4; it would
in evrp_range_analyzer::record_ranges_from_stmt via infer_value_range
push a new value-range for _4.  To infer the same for _2 it would need to
look backward which I think it doesn't do right now but it easily could
(I'm just trying to make up examples to understand ranger better).

Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Andrew MacLeod

On 6/4/19 11:37 AM, Richard Biener wrote:

On Tue, Jun 4, 2019 at 5:26 PM Richard Biener
 wrote:


But you still have a reference to the range in every BB dominated by the
definition?

Btw, I was thinking of

unsigned char foo(long);
void baz(unsigned char c)
{
   try{
   long i = c;
   i += foo(i);
   i += foo(i);
   i += foo(i);
   i += foo(i);
... repeat N times ...
  alloca(i); // query range of i
   }
   catch (...)
 {
 }
}


where the single(!) query of the range of i on the alloca call
will populate N BBs caches with the Nth having ranges for
all SSA defs of i running into quadraticness in N.
Well, not really.  It's still linear, since the 'i' used in each foo()
will be dead, so the next block will contain no range for it.



  try{
  long i_2 = _1;
  i_3 += foo(i_2);
  i_4 += foo(i_3);
  i_5 += foo(i_4);
  i_6 += foo(i_5);
... repeat N times ...
 alloca(i_6); // query range of i

Once i_2 is 'dead' after the first call to foo(), it will no longer be
in the on-entry cache of any other block.
The walk back never sees i_2 until the first call to foo(), so none of
the blocks after that need its range, and it will not be in their caches.


The on-entry cache will only get a single range entry for each of those
blocks: the incoming value of i_x, and that's it.  The implicitly known
live-on-exit attribute is handy here.


Now, that's not to say we can't construct cases where there are a lot of
ranges being populated.  If we find the cache is really a problem, we
can start caching ranges, so that if all these i_x's were live somehow,
there would only be one range for each in this case, and the caches
would simply all point to the same range.


So far we haven't really run across anything that appears to trigger
significant concerns.






If you actually try you probably will have to find a persisting
stmt that some user of on-demand query looks for.
You converted some of the warning passes so choose
a stmt that triggers it from there.

Note we've run into "interesting" testcases mostly from
machine generated code which we still want to be able
to optimize at -O1 at least with reasonable use of
time and memory (and quadraticnesses are to be avoided
like a plague - not that we don't have some).


  I think that's the
most common value, so that should reduce a large number of them.  I've
also considered caching ranges like we cache tree constants... but I
haven't delved into that.  I figured if memory turns out to be a
problem, then we'll look at it then.

Andrew





Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Richard Biener
On June 4, 2019 6:50:07 PM GMT+02:00, Andrew MacLeod  
wrote:
>On 6/4/19 11:37 AM, Richard Biener wrote:
>> On Tue, Jun 4, 2019 at 5:26 PM Richard Biener
>>  wrote:
>>>
>>> But you still have a reference to the range in every BB dominated by the
>>> definition?
>> Btw, I was thinking of
>>
>> unsigned char foo(long);
>> void baz(unsigned char c)
>> {
>>try{
>>long i = c;
>>i += foo(i);
>>i += foo(i);
>>i += foo(i);
>>i += foo(i);
>> ... repeat N times ...
>>   alloca(i); // query range of i
>>}
>>catch (...)
>>  {
>>  }
>> }
>>
>>
>> where the single(!) query of the range of i on the alloca call
>> will populate N BBs caches with the Nth having ranges for
>> all SSA defs of i running into quadraticness in N.
>Well, not really.  It's still linear, since the 'i' used in each foo()
>will be dead, so the next block will contain no range for it.
>
>
>   try{
>   long i_2 = _1;
>   i_3 += foo(i_2);
>   i_4 += foo(i_3);
>   i_5 += foo(i_4);
>   i_6 += foo(i_5);
>... repeat N times ...
>  alloca(i_6); // query range of i
>
>once i_2 is 'dead' after the first call to foo(), it will no longer be 
>in the on-entry cache to any other block.
>The walk back never sees i_2 until the first call to foo(), so none of

It's actually

   Utemp_2 = foo(i_1);

Bb:
   i_3 = i_1 + (long) utemp_2;
   Utemp_4 = foo(i_3);

Eliding the separate stmt for the conversion.  From your description of the
cache it sounded like getting incoming values is a take-all operation.
Otherwise you'd need negative entries as well (I thought the cache might not
contain varying ranges, for example).

>blocks after that have a need for its range, so it will not in their
>caches.
>
>The on-entry cache will only get a single range entry for each of those
>
>blocks.. the incoming value of i_x  and thats it. The implicitly known 
>live-on-exit attribute is handy here.
>
>Now, that's not to say we cant construct cases where there is a lot of 
>ranges being populated.. If we find the cache is really a problem, we 
>can start caching ranges.. so that if all these i_x's were live
>somehow, 
>there would only be one range for each in this case, and the cache's 
>would simply all point to the same range.

I suppose using i in the catch block would create something like this, a phi
with a large in-degree and thus the switch case you already know about?

>so far we havent really run across anything that appears to trigger 
>significant concerns.

Sure. It's still worth thinking about worst-case behavior and how you'd deal
with that, given that once ranger is actually used we will eventually run into
these cases.  When a forward EVRP walk is then faster than a single on-demand
query, that would be concerning.

Richard. 

>
>
>>
>> If you actually try you probably will have to find a persisting
>> stmt that some user of on-demand query looks for.
>> You converted some of the warning passes so choose
>> a stmt that triggers it from there.
>>
>> Note we've run into "interesting" testcases mostly from
>> machine generated code which we still want to be able
>> to optimize at -O1 at least with reasonable use of
>> time and memory (and quadraticnesses are to be avoided
>> like a plague - not that we don't have some).
>>
 I think that's the
 most common value, so that should reduce a large number of them.  I've
 also considered caching ranges like we cache tree constants... but I
 haven't delved into that.  I figured if memory turns out to be a
 problem, then we'll look at it then.
 problem, then we'll look at it then.

 Andrew





Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Andrew MacLeod

On 6/4/19 1:07 PM, Richard Biener wrote:

On June 4, 2019 6:50:07 PM GMT+02:00, Andrew MacLeod  
wrote:

On 6/4/19 11:37 AM, Richard Biener wrote:


where the single(!) query of the range of i on the alloca call
will populate N BBs caches with the Nth having ranges for
all SSA defs of i running into quadraticness in N.

Well, not really.  It's still linear, since the 'i' used in each foo()
will be dead, so the next block will contain no range for it.


   try{
   long i_2 = _1;
   i_3 += foo(i_2);
   i_4 += foo(i_3);
   i_5 += foo(i_4);
   i_6 += foo(i_5);
... repeat N times ...
  alloca(i_6); // query range of i

Once i_2 is 'dead' after the first call to foo(), it will no longer be
in the on-entry cache of any other block.
The walk back never sees i_2 until the first call to foo(), so none of

It's actually

Utemp_2 = foo(i_1);

Bb:
i_3 = i_1 + (long) utemp_2;
Utemp_4 = foo(i_3);

Eliding the separate stmt for the conversion.  From your description of the
cache it sounded like getting incoming values is a take-all operation.
Otherwise you'd need negative entries as well (I thought the cache might not
contain varying ranges, for example).


OK, utemp_2 and i_1 will be live, so there are 2 ranges on entry.  The
next block will have only utemp_4 and i_3 live on entry.  So it'll be 2
ranges per block.



It currently does have varying in the table, but the varying value is
cached for the entire ssa_name in the cache.  So 500 BBs with a varying
range will only consume a single range.


By negative entries I assume you mean something like the bottom of a 
diamond?


if (x_3 > 20)
  blah1 (x_3);
else
  blah2 (x_3);
blah3 (x_3);

It does nothing special to come up with the range of x_3 at blah3 ().
It simply unions all the incoming ranges, which is [21,MAX] union
[MIN,20], to produce the value [MIN, MAX], which is then put into the
cache.  We don't use anything special to represent varying.
varying_p () merely checks if the range is [MIN, MAX].  And when putting
a value into the cache, if it is varying_p () the cache simply uses its
unique copy of [MIN, MAX] rather than create a new one every time it is
needed.



I had originally considered not caching ranges spanning blocks in which
the range never changes...  the trade-off is you have an increase in
calculation time as you walk the CFG to find the last block in which it
did change.  First, I figured we'd see if the less error-prone full
cache causes any real problems or not.  It's always an option to
implement later, since the cache is self-contained and can resolve its
queries internally however it wants.





blocks after that need its range, and it will not be in their
caches.

The on-entry cache will only get a single range entry for each of those
blocks: the incoming value of i_x, and that's it.  The implicitly known
live-on-exit attribute is handy here.

Now, that's not to say we can't construct cases where there are a lot of
ranges being populated.  If we find the cache is really a problem, we
can start caching ranges, so that if all these i_x's were live somehow,
there would only be one range for each in this case, and the caches
would simply all point to the same range.

I suppose using i in the catch block would create something like this, a phi
with a large in-degree and thus the switch case you already know about?


It would be a PHI.  None of the ranges coming into the block are
considered live on entry, so they don't get into the cache.  The PHI
calculation asks for the incoming range on the edge for each parameter,
and we get a global range for the PHI definition calculated, and that is
it.


However, that's not actually the switch problem :-)  The switch problem
isn't due to a large number of incoming edges, but rather the difficulty
in calculating case ranges on the outgoing edges from the switch itself.


Stepping back slightly: branches are handled generically the same way
any expression is; the outgoing edge you want information about provides
a constant range for the implicit LHS of the branch.


So for an if:
  c_2 = x_3 > 20
  if (c_2 != 0)

the TRUE edge has a range of bool [1,1], fed back as the implicit LHS
of the branch expression:
[1,1] = (c_2 != 0)   where range-ops for '!=' says c_2 must be [1,1] in
order to satisfy the equation.
[1,1] = x_3 > 20   where range-ops for '>' says x_3 must have a range of
[21, MAX].


If you want the range on the FALSE edge, the edge starts with the range
[0,0] into the equation solver, and out pops [MIN, 20].


switch (i_3)

For a switch, it works the same way, except the edge has an integral
value, not a boolean one.  This range is fed back into the implicit
branch expression:
[edge range] = i_3, so i_3 has whatever value the edge has, since
it amounts to a copy.  Calculating this edge constant value is the problem.


BB2:
switch (i_3) {
BB3:
  case 4:
  case 5:
  case 9:
  case 10:
     foo (i_3)

In order to evaluate the ca

Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Martin Sebor

On 5/31/19 9:40 AM, Andrew MacLeod wrote:

On 5/29/19 7:15 AM, Richard Biener wrote:
On Tue, May 28, 2019 at 4:17 PM Andrew MacLeod  
wrote:

On 5/27/19 9:02 AM, Richard Biener wrote:
On Fri, May 24, 2019 at 5:50 PM Andrew MacLeod  
wrote:
The above suggests that iff this is done at all it is not in GORI 
because

those are not conditional stmts or ranges from feeding those.  The
machinery doing the use-def walking from stmt context also cannot
come along these so I have the suspicion that Ranger cannot handle
telling us that for the stmt following above, for example

    if (_5 != 0)

that _5 is not zero?

Can you clarify?

So there are two aspects to this.  The range-ops code for DIV_EXPR, if
asked for the range of op2 (), would return ~[0,0] for _5.
But you are also correct that the walk backwards would not find 
this.


This is similar functionality to how null_derefs are currently 
handled, and in fact could probably be done simultaneously using the same code
base.   I didn't bring null derefs up, but this is a good time :-)

There is a separate class used by the gori-cache which tracks the
non-nullness property at the block level.    It has a single API:
non_null_deref_p (name, bb), which determines whether there is a
dereference of NAME in a given BB; that indicates whether the range 
has an implicit ~[0,0] in that basic block or not.
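(A minimal model of that block-level property might look like the following. 
The class and method names here are invented for illustration; the real class 
lives alongside the gori-cache:)

```cpp
#include <cassert>
#include <map>
#include <set>

// Per SSA name, remember which basic blocks contain a dereference of it.
// non_null_deref_p then answers the one query described above: does NAME
// pick up an implicit ~[0,0] range in BB?
class non_null_tracker
{
  std::map<int, std::set<int>> m_derefs;  // SSA version -> BBs with a deref
public:
  void record_deref (int name, int bb) { m_derefs[name].insert (bb); }
  bool non_null_deref_p (int name, int bb) const
  {
    auto it = m_derefs.find (name);
    return it != m_derefs.end () && it->second.count (bb) != 0;
  }
};
```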

So when we then have

   _1 = *_2; // after this _2 is non-NULL
   _3 = _1 + 1; // _3 is non-NULL
   _4 = *_3;
...

when an on-demand user asks whether _3 is non-NULL at the
point of _4 = *_3 we don't have this information?  Since the
per-BB caching will only say _1 is non-NULL after the BB.
I'm also not sure whether _3 ever gets non-NULL during
non-NULL processing of the block since walking immediate uses
doesn't really help here?

presumably _3 is globally non-null due to the definition being (pointer
+ x)  ... i.e., _3 has a global range of ~[0,0] ?

No, _3 is ~[0, 0] because it is derived from _1 which is ~[0, 0] and
you cannot arrive at NULL by pointer arithmetic from a non-NULL pointer.


I'm confused.

_1 was loaded from _2 (thus asserting _2 is non-NULL), but we have no 
idea what the range of _1 is, so how do you assert _1 is ~[0,0]?
The only way I see to determine _3 is non-NULL is through the _4 = *_3 
statement.


In the first two statements from the above (where _1 is a pointer):

  _1 = *_2;
  _3 = _1 + 1;

_1 must be non-null because C/C++ define pointer addition only for
non-null pointers, and therefore so must _3.

Or does the middle-end allow arithmetic on null pointers?

Martin


Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Marc Glisse

On Tue, 4 Jun 2019, Martin Sebor wrote:


[...]


In the first two statements from the above (where _1 is a pointer):

 _1 = *_2;
 _3 = _1 + 1;

_1 must be non-null because C/C++ define pointer addition only for
non-null pointers, and therefore so must _3.


(int*)0+0 is well-defined, so this uses the fact that _1 is non-null. This 
is all handled in extract_range_from_binary_expr already, although it 
seems to miss the (dangerous) optimization NULL + unknown == NULL.


Just in case, a quote:

"When an expression J that has integral type is added to or subtracted
from an expression P of pointer type, the result has the type of P.
(4.1) — If P evaluates to a null pointer value and J evaluates to 0, the
result is a null pointer value.
(4.2) — Otherwise, if P points to element x[i] of an array object x with
n elements, 80 the expressions P + J and J + P (where J has the value j)
point to the (possibly-hypothetical) element x[i + j] if 0 ≤ i + j ≤ n
and the expression P - J points to the (possibly-hypothetical) element
x[i − j] if 0 ≤ i − j ≤ n.
(4.3) — Otherwise, the behavior is undefined"
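As a tiny self-check of clause (4.1) — null plus zero is still null, and valid 
(`add_zero` is a hypothetical helper, used only to illustrate the rule):

```cpp
#include <cassert>
#include <cstddef>

// Adding 0 to a null pointer is well-defined per (4.1) above and must
// yield a null pointer; no other arithmetic on a null pointer is defined.
int *add_zero (int *p)
{
  return p + 0;
}
```

So add_zero (nullptr) == nullptr holds, with no undefined behavior involved.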


Or does the middle-end allow arithmetic on null pointers?


When people use -fno-delete-null-pointer-checks because their (embedded) 
platform has important stuff at address 0, they also want to be able to do 
arithmetic there.


--
Marc Glisse


Re: On-Demand range technology [2/5] - Major Components : How it works

2019-06-04 Thread Andrew MacLeod

On 6/4/19 4:21 PM, Marc Glisse wrote:

On Tue, 4 Jun 2019, Martin Sebor wrote:


[...]


In the first two statements from the above (where _1 is a pointer):

 _1 = *_2;
 _3 = _1 + 1;

_1 must be non-null because C/C++ define pointer addition only for
non-null pointers, and therefore so must _3.


(int*)0+0 is well-defined, so this uses the fact that _1 is non-null. 
This is all handled in extract_range_from_binary_expr already, 
although it seems to miss the (dangerous) optimization NULL + unknown 
== NULL.


Just in case, a quote:

"When an expression J that has integral type is added to or subtracted
from an expression P of pointer type, the result has the type of P.
(4.1) — If P evaluates to a null pointer value and J evaluates to 0, the
result is a null pointer value.
(4.2) — Otherwise, if P points to element x[i] of an array object x with
n elements, 80 the expressions P + J and J + P (where J has the value j)
point to the (possibly-hypothetical) element x[i + j] if 0 ≤ i + j ≤ n
and the expression P - J points to the (possibly-hypothetical) element
x[i − j] if 0 ≤ i − j ≤ n.
(4.3) — Otherwise, the behavior is undefined"


Or does the middle-end allow arithmetic on null pointers?


When people use -fno-delete-null-pointer-checks because their 
(embedded) platform has important stuff at address 0, they also want 
to be able to do arithmetic there.


Plus, with the new range-ops code, we want to be able to solve for any 
operand given ranges for the other two, not just for the LHS given op1 and 
op2.  So exact details are good :-)
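As an illustration of what solving for another operand means (a toy interval 
type and invented function names, not the actual range-ops API): for 
lhs = op1 + op2, given the LHS range and op2's range, op1 is recovered by 
interval subtraction.

```cpp
#include <cassert>

// Toy [lo, hi] interval; overflow is ignored for brevity.
struct range { int lo, hi; };

// Forward fold: lhs = op1 + op2.
range fold_plus (range a, range b)
{
  return { a.lo + b.lo, a.hi + b.hi };
}

// Backward solve: given lhs and op2, the range op1 must have had is
// lhs - op2, computed interval-wise (lo - hi, hi - lo).
range solve_op1 (range lhs, range op2)
{
  return { lhs.lo - op2.hi, lhs.hi - op2.lo };
}
```

For example, if lhs is [10,20] and op2 is [2,2], op1 must lie in [8,18].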


Andrew


Magento 1 and Magento 2 eCommerce Store Development

2019-06-04 Thread Ruby Roy via gcc
Hey,



How are you doing?



I'm seeking your attention for 2 minutes please.



My name is Ruby Roy, representing India's best Magento eCommerce company. We
have done 270+ Magento store design and development projects and have
specific expertise in Magento stores. We have excelled in Magento eCommerce
store development, website customisation, theme customisation, extension
integration, 3rd-party API integration, payment gateway integration, module
development & integration, and mobile app development, along with search
engine optimisation, social media marketing, and PPC services.



We are not only a Magento store template designer but are also engaged in
delivering outstanding services for advanced customisation of Magento
stores. In addition, we are known in the industry for generating sales from
Magento stores, as we have developed advanced skills in Magento store SEO.
We can make your website responsive to ensure mobile user-friendliness.



We offer following Services:



-  Magento 1 and Magento 2 eCommerce Store Development

-  PSD to Magento Store Conversion

-  Custom Magento Store designing and Development

-  Magento upgrade from Magento 1.X to Magento 2.X with the Data Migration

-  Magento 3rd Party API Integration and Maintenance

-  Magento Extension/Modules Implementation and Development

-  Payment Gateway Integration



We test your Magento store against the Google guidelines to ensure it is
100% compliant, and we implement all the required on-page SEO stipulations
on your website, as mentioned below:



A.  Meta's Analysis

B.  Header Tags Optimization for all important pages

C.  URL Optimization

D.  Optimization of italics and bold tags

E.   Optimizing HTML Code

F.   Analysis of non indexable attributes

G. Robots Optimization

H.  Image Analysis and Optimization of Alt and Title tags

I.   Hyperlink Analysis and Optimization

J.   Optimization of internal Navigation /linking structure and of
external Links

K.  Analysis of Broken Links

L.   Page Content Optimization

M.Footer Links Optimization

N. Website URL Redirection and Solution for all error suggested by
Google webmaster

O. Optimized XML Site Map Creation for Google



We have a three-tier quality check system, so there is no chance of a
compromise on quality. We assure you that you will get quality work from us,
and we will have a long-term working relationship too.



May I know if you are interested in any of these services? Based on your
response, one of our expert Magento store consultants will be scheduled for
further communication.



Looking forward to starting work with you.



Kind Regards,

Ruby Roy



---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus