On Fri, Nov 7, 2025 at 7:25 PM Andrew MacLeod <[email protected]> wrote:
>
>
> On 11/7/25 11:15, Andrew MacLeod wrote:
> >
> > On 11/7/25 00:46, Andrew Pinski wrote:
> >> On Thu, Nov 6, 2025 at 6:22 PM Andrew MacLeod <[email protected]>
> >> wrote:
> >>> I can add a check in the inferred range manager for atomic loads to
> >>> resolve this PR.
> >>> The IL sequence tends to look like:
> >>>
> >>>       _1 = &this_2(D)->b;
> >>>       __atomic_load_8 (_1, 0);
> >>>
> >>> Just want to make sure I get this right since memory operations are not
> >>> my strong suit.
> >>>
> >>> The first argument to the atomic load is non-null (so _1), as well as
> >>> the base of the RHS of the address expression that defines _1 are
> >>> non-zero?.  (this_2)
> >>>
> >>> The attached patch scavenges a little code from
> >>> fold_using_range::range_of_address that I think works... but perhaps
> >>> there is a more efficient way to do this?  Is this likely to work OK
> >>> and
> >>> be safe?  or are there additional checks I need to be doing?
> >>>
> >>> And I suppose there is an entire range of atomic operations this
> >>> applies
> >>> to?  certainly atomic_store should qualify...
> >>>
> >>> Bootstraps on x86_64-pc-linux-gnu with no regressions, but I'm not sure
> >>> that is a really extensive test for this..
> >> Isn't the other route on fixing this is by adding the nonnull
> >> attribute to these builtins? I tried looking to see if anyone had
> >> mentioned why these builtins didn't have the nonnull attribute on them
> >> but I didn't find anything.
> >> The nonnull attribute support for builtins has been there since 2002
> >> and atomic builtins were added afterwards (towards 2011).
> >>
> >> DEF_ATTR_TREE_LIST (ATTR_NONNULL_1_NOTHROW_LEAF_LIST,
> >> ATTR_NONNULL,     \
> >>                          ATTR_LIST_1, ATTR_NOTHROW_LEAF_LIST)
> >> DEF_ATTR_TREE_LIST (ATTR_NONNULL_1_LEAF_LIST , ATTR_NONNULL, \
> >>                          ATTR_LIST_1, ATTR_LEAF_LIST )
> >> #define ATTR_NONNULL_1_NOTHROWCALL_LEAF_LIST
> >> (flag_non_call_exceptions ? \
> >>          ATTR_NONNULL_1_LEAF_LIST : ATTR_NONNULL_1_NOTHROW_LEAF_LIST)
> >>
> >> And then use ATTR_NONNULL_1_NOTHROWCALL_LEAF_LIST instead of
> >> ATTR_NOTHROWCALL_LEAF_LIST in DEF_SYNC_BUILTIN for the builtins in
> >> sync-builtins.def that are non null for the 1st argument.
> >>
> >> I can only think that nobody has looked into the code generation
> >> enough here to make a big difference. This would also handle isolate
> >> paths too without any extra coding so I suspect special casing these
> >> builtins everywhere is not a route which we want just for nonnullness.
> >>
> >> Thanks,
> >> Andrew
> >
> >      _1 = &this_2(D)->b;
> >      __atomic_load_8 (_1, 0);
> >
> > That would  tag _1 as non_null, but that won't tag 'this_2' as
> > non-zero.   I think the goal is to treat the use of _1 in the
> > __atomic_load as if it was dereferenced...
> >
> > ie
> >
> > struct d {
> >   long a;
> >   long b;
> > };
> >
> > void call ();
> > void call2 ();
> > void fake (long *) __attribute__((nonnull));
> >
> > void q (d *j)
> > {
> >   long *p = &j->b;
> >   fake (p);
> >   if (j)
> >     call ();
> > }
> >
> > Replicates a similar thing.   we know p is non-null,  but never figure
> > out that j is.  I guess we do not know that p is dereferenced in the
> > call to fake().
> >
> > If however we add a derefernce:
> >
> > void t (d *j)
> > {
> >   long *p = &j->b;
> >   fake (p);
> >   if (*p)
> >     call2 ()
> >   if (j)
> >     call ();
> > }
> >
> > we *DO* know that j is non-null due to the
> > _1 = MEM[(long int *)j_4(D) + 8B];
> >
> > That is introduced by the de-reference, and the 'if (j)' is removed.
> >
> > So I think the goal is to treat the argument to the atomic calls as if
> > they are memory de-references, not just non-null?  I don't suppose we
> > have anything like that already available :-P
> >
> > Which means I probably need to do the same thing that the
> > non_null_loadstore () callback does at the very least.
> >
> > Does any of this make sense?
> >
> > Andrew
> >
> >
> That got me to thinking...   Should this maybe  be an enhancement to
> walk_stmt_load_store_ops() ?  Where if the stmt is an atomic load (or
> whatever operation) and the argument being operated on is an ssa_name,
> that we do equivalent processing of the defining statement?   It is
> technically a load or store operation that isn't being processed.  Do we
> do anything for other builtins? like builtin_strcpy.?  it also does
> impliciit loads and stores like this
>
>    p_5 = &j_4(D)->b;
>    atomic_load_8 (p_5, 0);
>    _1 = MEM[(long int *)j_4(D) + 8B];
>
> The _1 is the result of  '*p'..  we want the equivalent effect for the
> atomic load...  In walk_load_store_ops () can we detect an atomic load,
> expand the defining statement of p_5,  so &j_4(D)-B  into a MEM tree and
> the walk that?  or is that overthinking what needs to be done or wrong
> path completely?

This isn't for walking "semantic operations" but like for walking aggregate
arguments to calls.  So I don't think this is sth for
walk_stmt_load_store_ops (),
that doesn't do what you think.  The tool you might be looking for is
maybe find_data_references_in_stmt, that has support for some of the
function calls.

Richard.

>
> Andrew
>

Reply via email to