On 11/8/24 1:19 PM, Richard Biener wrote:
On Fri, Nov 8, 2024 at 7:30 AM Tejas Belagod <tejas.bela...@arm.com> wrote:

On 11/7/24 5:52 PM, Richard Biener wrote:
On Thu, Nov 7, 2024 at 11:13 AM Tejas Belagod <tejas.bela...@arm.com> wrote:

On 11/7/24 2:36 PM, Richard Biener wrote:
On Thu, Nov 7, 2024 at 8:25 AM Tejas Belagod <tejas.bela...@arm.com> wrote:

On 11/6/24 6:02 PM, Richard Biener wrote:
On Wed, Nov 6, 2024 at 12:49 PM Tejas Belagod <tejas.bela...@arm.com> wrote:

Ensure sizeless types don't end up trying to be canonicalised to BIT_FIELD_REFs.

You mean variable-sized?  But don't we know, when there's a constant
array index, that the size is at least large enough that this indexing
is OK?  So what's wrong with a
fixed position, fixed size BIT_FIELD_REF extraction of a VLA object?

Richard.


Ah! The code and the comment/description don't match, sorry. This change
started out as gating out all canonicalizations of VLA vectors, back when
I had a limited understanding of how this worked, but was eventually
simplified to allow only those offsets that are known_le; I missed
updating the comment/description. So, for example:

int foo (svint32_t v) { return v[3]; }

canonicalises to a BIT_FIELD_REF <v, 32, 96>

but something like:

int foo (svint32_t v) { return v[4]; }

So this is possibly out-of-bounds?

reduces to a VEC_EXTRACT <>

But if out-of-bounds a VEC_EXTRACT isn't any better than a BIT_FIELD_REF, no?

Someone may have code protecting accesses like so:

    /* svcntw () returns num of 32-bit elements in a vec */
    if (svcntw () >= 8)
      return v[4];

So I didn't emit an error or a warning (-Warray-bounds) for this, or for
that matter make it UB, as that would be spurious. Technically, it may
not be an OOB access.

Therefore BIT_FIELD_REFs are generated for anything within the range of
an Advanced SIMD register, and anything beyond that is left to be
vec_extracted with SVE instructions.
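
To make the pattern above concrete, here's a rough, self-contained sketch
(the function names are made up for the example, and subscripting SVE
types like this relies on the support this series adds):

    #include <arm_sve.h>

    int get_lane3 (svint32_t v)
    {
      /* Lane 3 always exists: the minimum VL is 128 bits, i.e. four
         32-bit elements.  */
      return v[3];
    }

    int get_lane4_guarded (svint32_t v)
    {
      /* svcntw () returns the number of 32-bit elements in a vector, so
         lane 4 is only read when the runtime VL is at least 256 bits.  */
      if (svcntw () >= 8)
        return v[4];
      return 0;
    }

With the patch as posted, the first access is canonicalized to a
BIT_FIELD_REF, while the second is the kind of access that currently
takes the .VEC_EXTRACT route.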

You still didn't state the technical reason why BIT_FIELD_REF is worse than
.VEC_EXTRACT (which, btw, is only introduced quite late).

I'm mostly questioning that we have two different canonicalizations that oddly
depend on the constant index.  I'd rather always go .VEC_EXTRACT or
always BIT_FIELD_REF (prefer that one) instead of having a mix for VLA vectors.


Sorry, I misunderstood your question. The choice of canonicalization
based on index range wasn't by design - it just happened to be a
side-effect of my trying to accommodate VLA poly sizes in place of
constants. When I saw that potentially out-of-bounds VLA indices were
taking the VEC_EXTRACT route, I didn't think about using BIT_FIELD_REFs
for them too - frankly, I didn't know we could even do that for accesses
outside the minimum vector size.

When I now try to canonicalize all constant VLA indices to
BIT_FIELD_REFs, I hit an ICE - GIMPLE verification is not happy about
potentially accessing outside the object's size range. If we have to
make BIT_FIELD_REF work for potentially OOB constant VLA indices,
wouldn't that be quite a fundamental assumption to change?

I see BIT_FIELD_REF verification uses maybe_gt; it could as well use
known_gt.  How does .VEC_EXTRACT end up handling "maybe_gt" constant
indices?  I'm not familiar enough with VLA regs handling to decide here.
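
For what it's worth, the difference between the two predicates can be
sketched outside of GCC - this is a standalone illustration that models a
poly size as C + N*x bits with x >= 0 unknown at compile time and a
constant left-hand side, not the actual poly-int.h implementation:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* An SVE size modelled as c + n*x bits, x >= 0 unknown until runtime.  */
    typedef struct { uint64_t c; uint64_t n; } poly;

    /* a <= b for every x >= 0: checking the minimum (x == 0) is enough.  */
    static bool known_le (uint64_t a, poly b) { return a <= b.c; }

    /* a > b for some x >= 0: b is smallest at x == 0, so compare against c.  */
    static bool maybe_gt (uint64_t a, poly b) { return a > b.c; }

    /* a > b for every x >= 0: impossible if b keeps growing with x.  */
    static bool known_gt (uint64_t a, poly b) { return b.n == 0 && a > b.c; }

    int main (void)
    {
      poly svint32_bits = { 128, 128 };   /* svint32_t: 128 + 128*x bits.  */

      /* v[3] ends at bit 128: known to fit at any VL.  */
      printf ("v[3]: known_le %d, maybe_gt %d, known_gt %d\n",
              known_le (128, svint32_bits), maybe_gt (128, svint32_bits),
              known_gt (128, svint32_bits));

      /* v[4] ends at bit 160: fits only for x >= 1.  */
      printf ("v[4]: known_le %d, maybe_gt %d, known_gt %d\n",
              known_le (160, svint32_bits), maybe_gt (160, svint32_bits),
              known_gt (160, svint32_bits));
      return 0;
    }

So for the v[4] case, a maybe_gt-based check rejects the BIT_FIELD_REF,
while a known_gt-based one would let it through - that is the relaxation
being discussed.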

That said, I'd prefer that you either avoid any canonicalization to BIT_FIELD_REF
or make all of them "work".  As said, we introduce .VEC_EXTRACT only very
late, during ISEL, IIRC.


Allowing OOB constants in BIT_FIELD_REF in the GIMPLE verifier seems to
work OK, but the extraction then happens via memory in the generated code,
which isn't optimal - I need to fix things up to expand such BIT_FIELD_REFs
more optimally. Looking into it now...

Thanks,
Tejas.

Richard.


Thanks,
Tejas.

Richard.


Thanks,
Tejas.



I'll fix the comment/description.

Thanks,
Tejas.

gcc/ChangeLog:

            * gimple-fold.cc (maybe_canonicalize_mem_ref_addr): Disallow
            sizeless types in BIT_FIELD_REFs.
---
     gcc/gimple-fold.cc | 3 ++-
     1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
index c19dac0dbfd..dd45d9f7348 100644
--- a/gcc/gimple-fold.cc
+++ b/gcc/gimple-fold.cc
@@ -6281,6 +6281,7 @@ maybe_canonicalize_mem_ref_addr (tree *t, bool is_debug = false)
          && VECTOR_TYPE_P (TREE_TYPE (TREE_OPERAND (TREE_OPERAND (*t, 0), 0))))
         {
           tree vtype = TREE_TYPE (TREE_OPERAND (TREE_OPERAND (*t, 0), 0));
+      /* BIT_FIELD_REF can only happen on constant-size vectors.  */
           if (VECTOR_TYPE_P (vtype))
            {
              tree low = array_ref_low_bound (*t);
@@ -6294,7 +6295,7 @@ maybe_canonicalize_mem_ref_addr (tree *t, bool is_debug = false)
                                             (TYPE_SIZE (TREE_TYPE (*t))));
                      widest_int ext
                       = wi::add (idx, wi::to_widest (TYPE_SIZE (TREE_TYPE (*t))));
-                 if (wi::les_p (ext, wi::to_widest (TYPE_SIZE (vtype))))
+                 if (known_le (ext, wi::to_poly_widest (TYPE_SIZE (vtype))))
                        {
                          *t = build3_loc (EXPR_LOCATION (*t), BIT_FIELD_REF,
                                           TREE_TYPE (*t),
--
2.25.1




