/vect/slp-perm-6.c: Likewise.
>
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Thursday, October 22, 2020 9:44 AM
> > To: Tamar Christina
> > Cc: gcc-patches@gcc.gnu.org; nd ; o...@ucw.cz
> > S
Pasted the previous fix too quickly, the following fixes the
correct spot - the memset, not the allocation.
Bootstrapped / tested on x86_64-unknown-linux-gnu, pushed.
2020-11-04 Richard Biener
PR bootstrap/97666
* tree-vect-slp.c (vect_build_slp_tree_2): Revert previous
(raised != (FE_ALL_EXCEPT & ~e))
> +FAIL(raised, FE_ALL_EXCEPT & ~e, e, s, "__builtin_feclearexcept");
> +
> +
> + s = "FE_INEXACT | FE_OVERFLOW";
> + e = FE_INEXACT | FE_OVERFLOW;
> + INFO("test: %s(%x)\n", s, e);
> +
> + fecleare
On Wed, 4 Nov 2020, Tamar Christina wrote:
> Hi Richi,
>
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Wednesday, November 4, 2020 8:07 AM
> > To: Tamar Christina
> > Cc: gcc-patches@gcc
This reinstates the previously removed CSE, fixing the
FAIL of gcc.dg/vect/costmodel/x86_64/costmodel-pr30843.c.
It turns out the previous approach still works.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-04 Richard Biener
* tree-vect-loop.c
, shift));
> +}
>
>/* Insert our new statements at the end of conditional block before the
> COND_STMT. */
> --- gcc/testsuite/gcc.dg/tree-ssa/phi-opt-22.c.jj 2020-11-03
> 18:22:19.756124543 +0100
> +++ gcc/testsuite/gcc.dg/tree-ssa/phi-opt-22.c2020-11-03
> 18:25:12.795176885 +0100
> @@ -0,0 +1,11 @@
> +/* PR tree-optimization/97690 */
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-phiopt2" } */
> +
> +int foo (_Bool d) { return d ? 2 : 0; }
> +int bar (_Bool d) { return d ? 1 : 0; }
> +int baz (_Bool d) { return d ? -__INT_MAX__ - 1 : 0; }
> +int qux (_Bool d) { return d ? 1024 : 0; }
> +
> +/* { dg-final { scan-tree-dump-not "if" "phiopt2" } } */
> +/* { dg-final { scan-tree-dump-times " << " 3 "phiopt2" } } */
>
> Jakub
>
>
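For reference, what the scans expect can be written out by hand (an illustration of mine, not compiler output): a select between a power of two (or -__INT_MAX__ - 1) and zero becomes a shift of the converted bool, while bar needs only the conversion, hence exactly three " << " occurrences:

  int foo_after (_Bool d) { return (int) d << 1;  }  /* 2 == 1 << 1 */
  int bar_after (_Bool d) { return (int) d;       }  /* no shift needed */
  int baz_after (_Bool d) { return (int) d << 31; }  /* -__INT_MAX__ - 1; valid in the IL, shown only as illustration */
  int qux_after (_Bool d) { return (int) d << 10; }  /* 1024 == 1 << 10 */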
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
traits::equal): Move...
> > * tree-vectorizer.h (struct bst_traits, bst_traits::hash,
> > bst_traits::equal): ... to here.
> > (vect_mark_pattern_stmts, vect_free_slp_tree,
> > vect_build_slp_tree): Declare.
> >
> > --
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
This properly sets the abnormal flag when vectorizing live lanes
when the original scalar was live across an abnormal edge.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-04 Richard Biener
PR tree-optimization/97709
* tree-vect-loop.c
s,
> Tamar
>
> gcc/ChangeLog:
>
> * Makefile.in (tree-vect-slp-patterns.o): New.
> * doc/passes.texi: Update documentation.
> * tree-vect-slp.c (vect_print_slp_tree): Add new state.
> (vect_match_slp_patterns_2, vect_match_slp_patterns): New.
>
I forgot to cost vectorized PHIs. Scalar PHIs are just costed
as scalar_stmt so the following costs vector PHIs as vector_stmt.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-04 Richard Biener
* tree-vectorizer.h (vectorizable_phi): Adjust prototype
vect_slp_permute (perms[perm], SLP_TREE_LANE_PERMUTATION
(node),
+ true);
in case you want to split up, independently of the rest of the
patches.
Thanks,
Richard.
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> * tree-vect-slp.c (vect_slp_tree_per
On Wed, 4 Nov 2020, Tamar Christina wrote:
> Hi Richi,
>
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Wednesday, November 4, 2020 1:00 PM
> > To: Tamar Christina
> > Cc: gcc-patches@gcc
oup performing what you do before
any of the vertices/graph stuff is built. That's
probably easiest at this point and it can be done
when then bst_map is still around so you can properly
CSE the new load you build.
Thanks,
Richard.
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> * tree-vect-slp.c (vect_optimize_slp): Promote permutes.
>
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
On Wed, 4 Nov 2020, Tamar Christina wrote:
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Wednesday, November 4, 2020 12:41 PM
> > To: Tamar Christina
> > Cc: Richard Sandiford ; nd ;
> >
On Wed, 4 Nov 2020, Tamar Christina wrote:
> Hi Richi,
>
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Wednesday, November 4, 2020 1:36 PM
> > To: Tamar Christina
> > Cc: gcc-patches@gcc
On Wed, 4 Nov 2020, Tamar Christina wrote:
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Wednesday, November 4, 2020 2:04 PM
> > To: Tamar Christina
> > Cc: Richard Sandiford ; nd ;
> >
* tree-vect-loop.c (vect_analyze_loop_2): Check kind.
> * tree-vect-slp.c (vect_build_slp_instance): New.
> (enum slp_instance_kind): Move to...
> * tree-vectorizer.h (enum slp_instance_kind): .. Here
> (SLP_INSTANCE_KIND): New.
>
>
--
Richard Biener
origin link in the concrete instance.
Bootstrapped and tested on x86_64-unknown-linux-gnu.
2020-11-05 Richard Biener
PR debug/97718
* dwarf2out.c (add_abstract_origin_attribute): Make sure to
point to the abstract instance.
---
gcc/dwarf2out.c | 11 ++-
1 file
wrong pattern/non-pattern
stmts.
Bootstrap & regtest running on x86_64-unknown-linux-gnu.
2020-11-05 Richard Biener
* tree-vect-data-refs.c (vect_slp_analyze_node_dependences):
Use the original stmts.
(vect_slp_analyze_node_alignment): Use the pattern stmt.
*
On Wed, 4 Nov 2020, Tamar Christina wrote:
> Hi Richi,
>
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Wednesday, November 4, 2020 8:07 AM
> > To: Tamar Christina
> > Cc: gcc-patches@gcc
This adds a missing check.
Bootstrap & regtest running on x86_64-unknown-linux-gnu.
2020-11-06 Richard Biener
PR tree-optimization/97733
* tree-vect-slp.c (vect_analyze_slp_instance): If less
than two reductions were relevant or live do nothing.
---
gcc/tree-
n x86_64-unknown-linux-gnu.
2020-11-06 Richard Biener
PR tree-optimization/97732
* tree-vect-loop.c (vectorizable_induction): Convert the
init elements to the vector component type.
* gimple-fold.c (gimple_build_vector): Use CONSTANT_CLASS_P
rather than TREE_
On Thu, 5 Nov 2020, sunil.k.pandey wrote:
> On Linux/x86_64,
>
> 1436ef2a57e79b6b8ce5b03e32a38dd64f46c97c is the first bad commit
> commit 1436ef2a57e79b6b8ce5b03e32a38dd64f46c97c
> Author: Richard Biener
> Date: Thu Nov 5 09:27:28 2020 +0100
>
> debug/977
On Thu, 5 Nov 2020, Jan Hubicka wrote:
> >
> > On 10/27/20 3:01 AM, Richard Biener wrote:
> > > On Tue, 27 Oct 2020, Jan Hubicka wrote:
> > >
> > >>> On Mon, 26 Oct 2020, Jan Hubicka wrote:
> > >>>
> > >>>> Hi,
&
On Fri, 6 Nov 2020, Jiufu Guo wrote:
> On 2020-11-05 21:43, Richard Biener wrote:
>
> Hi Richard,
>
> Thanks for your comments and suggestions!
>
> > On Thu, Nov 5, 2020 at 2:19 PM guojiufu via Gcc-patches
> > wrote:
> >>
> >> In PR87473, th
This computes vect_determine_mask_precision in a RPO forward walk
rather than in a backward walk and using a worklist. It will make
fixing PR97706 easier but for bisecting I wanted it to be separate.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-06 Richard Biener
This passes down the graph entry kind down to vect_analyze_slp_instance
which simplifies it and makes it a shallow wrapper around
vect_build_slp_instance.
Bootstrapped / tested on x86_64-unknown-linux-gnu, pushed.
2020-11-06 Richard Biener
* tree-vect-slp.c (vect_analyze_slp): Pass
This adds handling of PHIs to mask precision compute which is
eventually needed to detect a bool pattern when the def chain
contains such a PHI node.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-06 Richard Biener
PR tree-optimization/97706
* tree-vect
factorings.
Bootstrapped on x86_64-unknown-linux-gnu, testing in progress.
2020-11-06 Richard Biener
* tree-ssa-sccvn.h (get_max_constant_value_id): Declare.
(get_next_constant_value_id): Likewise.
(value_id_constant_p): Inline and simplify.
* tree-ssa-sccvn.c (constant
Turns out its size and time requirements can be stripped down
dramatically.
Bootstrap & regtest running on x86_64-unknown-linux-gnu.
2020-11-06 Richard Biener
* tree-ssa-pre.c (expr_pred_trans_d): Modify so elements
are embedded rather than allocated. Remove hashval me
-11-09 Richard Biener
PR tree-optimization/97765
* tree-ssa-pre.c (bb_bitmap_sets::phi_translate_table): Add.
(PHI_TRANS_TABLE): New macro.
(phi_translate_table): Remove.
(expr_pred_trans_d::pred): Remove.
(expr_pred_trans_d::hash): Simplify
The following CSEs VN_INFO calls which nowadays are hashtable queries.
Bootstrapped / tested on x86_64-unknown-linux-gnu, pushed.
2020-11-09 Richard Biener
* tree-ssa-pre.c (get_representative_for): CSE VN_INFO calls.
(create_expression_by_pieces): Likewise
This fixes updating of the step vectors when filling up to group_size.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-09 Richard Biener
PR tree-optimization/97753
* tree-vect-loop.c (vectorizable_induction): Fill vec_steps
when CSEing inside the
This fixes the order of walking PHIs and stmts for BB mask
precision compute.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-09 Richard Biener
PR tree-optimization/97746
* tree-vect-patterns.c (vect_determine_precisions): First walk PHIs
This removes a premature end of the DFS walk.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-09 Richard Biener
PR tree-optimization/97761
* tree-vect-slp.c (vect_bb_slp_mark_live_stmts): Remove
premature end of DFS walk.
* gfortran.dg
FLAGS); \
> (LOOP); \
> (LOOP) = li.next ())
>
> #define FOR_EACH_LOOP_FN(FN, LOOP, FLAGS) \
> - for (loop_iterator li(FN, &(LOOP), FLAGS); \
> + for (loop_iterator li(FN, NULL, &(LOOP), FLAGS); \
> + (LOOP); \
> + (LOOP) = li.next ())
> +
> +#define FOR_EACH_ENCLOSED_LOOP(TOP, LOOP, FLAGS) \
> + for (loop_iterator li(cfun, TOP, &(LOOP), FLAGS); \
> + (LOOP); \
> + (LOOP) = li.next ())
> +
> +#define FOR_EACH_ENCLOSED_LOOP_FN(FN, TOP, LOOP, FLAGS) \
> + for (loop_iterator li(FN, TOP, &(LOOP), FLAGS);\
> (LOOP); \
> (LOOP) = li.next ())
>
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
On Mon, 9 Nov 2020, Iain Sandoe wrote:
> Hi,
>
> I've been carrying this patch around on my Darwin branches for a very long
> time...
>
> tested across the Darwin patch and on x86_64-linux-gnu,
> OK for master?
> thanks
> Iain
>
> = commit message
>
> If an interface is marked 'deprecated' t
cf_check attribute in hash as it affects code
> - generation. */
> - if (code == GIMPLE_CALL
> - && flag_cf_protection & CF_BRANCH)
> - hstate.add_flag (gimple_call_nocf_check_p (as_a (stmt)));
> + {
> + func_checker::operand_access_type_map map (5);
> + func_checker::classify_operands (stmt, &map);
> +
> + /* All these statements are equivalent if their operands are. */
> + for (unsigned i = 0; i < gimple_num_ops (stmt); ++i)
> + m_checker->hash_operand (gimple_op (stmt, i), hstate, 0,
> +func_checker::get_operand_access_type
> + (&map, gimple_op (stmt, i)));
> + /* Consider nocf_check attribute in hash as it affects code
> +generation. */
> + if (code == GIMPLE_CALL
> + && flag_cf_protection & CF_BRANCH)
> + hstate.add_flag (gimple_call_nocf_check_p (as_a (stmt)));
> + }
> + break;
> default:
>break;
> }
> @@ -1534,7 +1533,8 @@ sem_function::compare_phi_node (basic_block bb1,
> basic_block bb2)
>tree phi_result1 = gimple_phi_result (phi1);
>tree phi_result2 = gimple_phi_result (phi2);
>
> - if (!m_checker->compare_operand (phi_result1, phi_result2))
> + if (!m_checker->compare_operand (phi_result1, phi_result2,
> +func_checker::OP_NORMAL))
> return return_false_with_msg ("PHI results are different");
>
>size1 = gimple_phi_num_args (phi1);
> @@ -1548,7 +1548,7 @@ sem_function::compare_phi_node (basic_block bb1,
> basic_block bb2)
> t1 = gimple_phi_arg (phi1, i)->def;
> t2 = gimple_phi_arg (phi2, i)->def;
>
> - if (!m_checker->compare_operand (t1, t2))
> + if (!m_checker->compare_operand (t1, t2, func_checker::OP_NORMAL))
> return return_false ();
>
> e1 = gimple_phi_arg_edge (phi1, i);
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
This makes sure we reject reduction paths with a live stmt that
is not the last one altering the value. This is because we do not
handle this in the epilogue unless there's a scalar epilogue loop.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-09 Richard B
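To make the rejected shape concrete, a sketch of my own (reflecting my reading of the description, not a testcase from the patch): an intermediate value of the reduction chain is live after the loop, and without a scalar epilogue loop the vectorized epilogue cannot recover it:

  int f (int *a, int *b, int n, int *out)
  {
    int s = 0, mid = 0;
    for (int i = 0; i < n; i++)
      {
        s += a[i];
        mid = s;        /* not the last statement altering the reduction value */
        s += b[i];
      }
    *out = mid;         /* ... yet its result is used (live) after the loop */
    return s;
  }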
this_buffer + (amnt != 0), size / BITS_PER_UNIT);
> --- gcc/testsuite/gcc.c-torture/execute/pr97764.c.jj 2020-11-09
> 14:36:20.974258322 +0100
> +++ gcc/testsuite/gcc.c-torture/execute/pr97764.c 2020-11-09
> 14:36:56.108865088 +0100
> @@ -0,0 +1,14 @@
> +/* PR tree-op
+ for (tree parm = DECL_ARGUMENTS (current_function_decl); parm;
> parm_index++,
> + parm = TREE_CHAIN (parm))
> +{
> + tree name = ssa_default_def (cfun, parm);
> + if (!name)
> + continue;
looks like the vec might be quite sparse ...
> + int
This makes get_expr_value_id cheap and completes the
constant value-id simplification by turning the constant_value_expressions
into a direct map instead of a set of pre_exprs for the value.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-10 Richard Biener
* tree
On Tue, 10 Nov 2020, Iain Sandoe wrote:
> Hi Richard,
>
> Richard Biener wrote:
>
> >On Mon, 9 Nov 2020, Iain Sandoe wrote:
> >
> >>Hi,
> >>
> >>I've been carrying this patch around on my Darwin branches for a very long
> >>time...
>
This deals with basic blocks added by elimination.
Pushed as obvious.
2020-11-10 Richard Biener
PR tree-optimization/97780
* tree-ssa-pre.c (fini_pre): Deal with added basic blocks
when freeing PHI_TRANS_TABLE.
---
gcc/tree-ssa-pre.c | 2 +-
1 file changed, 1 insertion(+), 1
The following removes an assert that cannot easily be adjusted to
cover the additional cases we now handle after the removal of
the same-align DRs vector.
Tested on x86_64-unknown-linux and aarch64, pushed.
2020-11-10 Richard Biener
PR tree-optimization/97769
* tree-vect
t -- i.e. those that are likely to be win regardless of the
> register
> + pressure. Return the pass TODO flags that need to be carried out after
> the
> + transformation. */
>
> -static unsigned int
> -tree_ssa_lim (function *fun)
> +unsigned int
> +loop_invarian
' && str[idx] != 'R'
> + && str[idx] != 'w' && str[idx] != 'W'
> + && str[idx] != 'o' && str[idx] != 'O')
> + err = true;
> + if (str[idx] != 't'
> + /* Size specified is scalar, so it should be described
> +by ". " if specified at all. */
> + && (arg_specified_p (str[idx + 1] - '1')
> + && str[arg_idx (str[idx + 1] - '1')] != '.'))
> + err = true;
> + }
> + else if (str[idx + 1] != ' ')
> + err = true;
> + break;
> + case 'c':
> + case 'C':
> + /* Copied argument should specify the argument being copied,
> +which should be a specified output argument. */
> + if (str[idx + 1] < '1' || str[idx + 1] > '9'
> + || !arg_specified_p (str[idx + 1] - '1')
> + || (str[arg_idx (str[idx + 1] - '1')] != 'W'
> + && str[arg_idx (str[idx + 1] - '1')] != 'w'
> + && str[arg_idx (str[idx + 1] - '1')] != 'O'
> + && str[arg_idx (str[idx + 1] - '1')] != 'o'))
> + err = true;
> break;
> default:
> err = true;
> }
> - if ((str[idx + 1] >= '1' && str[idx + 1] <= '9')
> - || str[idx + 1] == 't')
> - {
> - if (str[idx] != 'r' && str[idx] != 'R'
> - && str[idx] != 'w' && str[idx] != 'W'
> - && str[idx] != 'o' && str[idx] != 'O')
> - err = true;
> - }
> - else if (str[idx + 1] != ' ')
> - err = true;
> + if (err)
> + internal_error ("invalid fn spec attribute \"%s\" arg %i", str, i);
> }
> - if (err)
> -internal_error ("invalid fn spec attribute \"%s\"", str);
> }
>
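As an aside, a standalone sketch (my own helper, not GCC code) of the basic pairing rule visible in the hunk, i.e. the check being moved into the per-kind cases: the second character of each two-character argument entry must be a size-argument digit, 't', or ' ', and a size specifier is only valid after one of the known access-kind characters. The real checker additionally cross-checks the referenced size argument via arg_specified_p/arg_idx:

  #include <stdbool.h>

  /* Sketch only: KIND is the access-kind character of an fn spec argument
     entry, SIZE the character following it.  */
  static bool
  fnspec_arg_entry_ok_p (char kind, char size)
  {
    if ((size >= '1' && size <= '9') || size == 't')
      return (kind == 'r' || kind == 'R' || kind == 'w' || kind == 'W'
              || kind == 'o' || kind == 'O');
    return size == ' ';
  }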
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
s are not considered escape points
> > > + by tree-ssa-structalias. */
> > > + else if (gimple_code (use_stmt) == GIMPLE_COND)
> > > + ;
> > > + else
> > > + {
> > > + if (dump_file)
> > > + fprintf (dump_file, "%*s
ng on phi-translation caching to avoid
doing redundant work.
So this patch drops the use of sorted_array_from_bitmap_set from
phi_translate_set because this function is quite expensive.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-11 Richard Biener
* tree-ssa-
> return true;
>
> - if (maybe_ne (GET_MODE_SIZE (value_mode), GET_MODE_SIZE (cmp_op_mode))
> - || maybe_ne (GET_MODE_NUNITS (value_mode), GET_MODE_NUNITS
> (cmp_op_mode)))
> + if (maybe_ne (GET_MODE_NUNITS (value_mode), GET_MODE_NUNITS (cmp_op_mode)))
> return false;
>
>if (TREE_CODE_CLASS (code) != tcc_comparison)
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
hoist
insertion iteration altogether.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-11 Richard Biener
PR tree-optimization/97623
* params.opt (-param=max-pre-hoist-insert-iterations): Remove
again.
* doc/invoke.texi (max-pre-hoist-insert
Tested on x86_64-unknown-linux-gnu, pushed.
2020-11-11 Richard Biener
PR testsuite/97797
* gcc.dg/torture/ssa-fre-5.c: Use __SIZETYPE__ where
appropriate.
* gcc.dg/torture/ssa-fre-6.c: Likewise.
---
gcc/testsuite/gcc.dg/torture/ssa-fre-5.c | 8
gcc
rking at during the last
patches (PR97623) it is neutral in compile-time cost.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-11 Richard Biener
* tree-ssa-pre.c (pre_expr_DFS): New function.
(sorted_array_from_bitmap_set): Use it to pro
This fixes the postorder compute for the case of multiple
expression leaders for a value.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-12 Richard Biener
PR tree-optimization/97806
* tree-ssa-pre.c (pre_expr_DFS): New overload for visiting
isn't enough to
guarantee we get all opportunities of a block in one iteration.
This avoids costly re-compute of the topologically sorted expression
array (more micro-optimization is possible here).
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-12 Richard Biener
erand (DECL_FIELD_OFFSET (field),
> + hstate, flags & ~OEP_ADDRESS_OF);
> + hash_operand (DECL_FIELD_BIT_OFFSET (field),
> + hstate, flags & ~OEP_ADDRESS_OF);
> + hash_operand (TREE_OPERAND (t,
= 'r' && str[idx] != 'R'
> + && str[idx] != 'w' && str[idx] != 'W'
> + && str[idx] != 'o' && str[idx] != 'O')
> + err = true;
> + if (str[idx] != 't'
> + /* Size specified is scalar, so it should be described
> +by ". " if specified at all. */
> + && (arg_specified_p (str[idx + 1] - '1')
> + && str[arg_idx (str[idx + 1] - '1')] != '.'))
> + err = true;
> + }
> + else if (str[idx + 1] != ' ')
> + err = true;
> break;
> default:
> - err = true;
> + if (str[idx] < '1' || str[idx] > '9')
> + err = true;
> }
> - if ((str[idx + 1] >= '1' && str[idx + 1] <= '9')
> - || str[idx + 1] == 't')
> - {
> - if (str[idx] != 'r' && str[idx] != 'R'
> - && str[idx] != 'w' && str[idx] != 'W'
> - && str[idx] != 'o' && str[idx] != 'O')
> - err = true;
> - }
> - else if (str[idx + 1] != ' ')
> - err = true;
> + if (err)
> + internal_error ("invalid fn spec attribute \"%s\" arg %i", str, i);
> }
> - if (err)
> -internal_error ("invalid fn spec attribute \"%s\"", str);
> }
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
e, flags & ~OEP_ADDRESS_OF);
> + }
> + return;
> + }
> + break;
> case ARRAY_REF:
> case ARRAY_RANGE_REF:
> - case COMPONENT_REF:
> case BIT_FIELD_REF:
> sflags &= ~OEP_ADDRESS_OF;
> break;
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
big difference.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-12 Richard Biener
* bitmap.c (bitmap_list_view): Restore head->current.
* tree-ssa-pre.c (pre_expr_DFS): Elide expr_visited bitmap.
Special-case value expression bitmaps with one e
On Thu, 12 Nov 2020, Martin Jambor wrote:
> Hi,
>
> On Wed, Nov 11 2020, Richard Biener wrote:
> > On Mon, 9 Nov 2020, Martin Jambor wrote:
> >
> >> this patch modifies the loop invariant pass so that it can operate
> >> only on a single requested loo
hash_operand (DECL_FIELD_OFFSET (field),
> + hstate, flags & ~OEP_ADDRESS_OF);
> + hash_operand (DECL_FIELD_BIT_OFFSET (field),
> + hstate, flags & ~OEP_ADDRESS_OF);
> + }
> + return;
> + }
> + break;
> case ARRAY_REF:
> case ARRAY_RANGE_REF:
> - case COMPONENT_REF:
> case BIT_FIELD_REF:
> sflags &= ~OEP_ADDRESS_OF;
> break;
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
(c2))
> + return flags | ACCESS_PATH;
> +
> + /* aliasing_matching_component_refs_p compares
> + offsets within the path. Other properties are ignored.
> + Do not bother to verify offsets in variable accesses. Here we
> + already compared them by operand_equal_p so they are
> + structurally same. */
> + if (!known_eq (ref1->size, ref1->max_size))
> + {
> + poly_int64 offadj1, sztmc1, msztmc1;
> + bool reverse1;
> + get_ref_base_and_extent (c1, &offadj1, &sztmc1, &msztmc1, &reverse1);
> + poly_int64 offadj2, sztmc2, msztmc2;
> + bool reverse2;
> + get_ref_base_and_extent (c2, &offadj2, &sztmc2, &msztmc2, &reverse2);
> + if (!known_eq (offadj1, offadj2))
> + return flags | ACCESS_PATH;
> + }
> + c1 = TREE_OPERAND (c1, 0);
> + c2 = TREE_OPERAND (c2, 0);
> +}
> + /* Finally test the access type. */
> + if (!types_equal_for_same_type_for_tbaa_p (TREE_TYPE (c1),
> + TREE_TYPE (c2),
> + lto_streaming_safe))
> +return flags | ACCESS_PATH;
> + return flags;
> +}
> +
> +/* Hash REF to HSTATE. If LTO_STREAMING_SAFE do not use alias sets
> + and canonical types. */
> +void
> +ao_compare::hash_ao_ref (ao_ref *ref, bool lto_streaming_safe, bool tbaa,
> + inchash::hash &hstate)
> +{
> + tree base = ao_ref_base (ref);
> + tree tbase = base;
> +
> + if (!known_eq (ref->size, ref->max_size))
> +{
> + tree r = ref->ref;
> + if (TREE_CODE (r) == COMPONENT_REF
> + && DECL_BIT_FIELD (TREE_OPERAND (r, 1)))
> + {
> + tree field = TREE_OPERAND (r, 1);
> + hash_operand (DECL_FIELD_OFFSET (field), hstate, 0);
> + hash_operand (DECL_FIELD_BIT_OFFSET (field), hstate, 0);
> + hash_operand (DECL_SIZE (field), hstate, 0);
> + r = TREE_OPERAND (r, 0);
> + }
> + if (TREE_CODE (r) == BIT_FIELD_REF)
> + {
> + hash_operand (TREE_OPERAND (r, 1), hstate, 0);
> + hash_operand (TREE_OPERAND (r, 2), hstate, 0);
> + r = TREE_OPERAND (r, 0);
> + }
> + hash_operand (TYPE_SIZE (TREE_TYPE (ref->ref)), hstate, 0);
> + hash_operand (r, hstate, OEP_ADDRESS_OF);
> +}
> + else
> +{
> + hash_operand (tbase, hstate, OEP_ADDRESS_OF);
> + hstate.add_poly_int (ref->offset);
> + hstate.add_poly_int (ref->size);
> + hstate.add_poly_int (ref->max_size);
> +}
> + if (!lto_streaming_safe && tbaa)
> +{
> + hstate.add_int (ao_ref_alias_set (ref));
> + hstate.add_int (ao_ref_base_alias_set (ref));
> +}
> +}
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
cc.target/aarch64/vect-widen-add.c: New test.
>         * gcc.target/aarch64/vect-widen-sub.c: New test.
>
>
> Ok for trunk?
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
hift_hi/lo
> patterns
>         * tree-vect-stmts.c
>         (vectorizable_conversion): Fix for widen_lshift case
>
> gcc/testsuite/ChangeLog:
>
> 2020-11-12  Joel Hutton
>
>         * gcc.target/aarch64/vect-widen-lshift.c: New test.
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
local mode
> (false) or the IPA mode (true). */
>
> @@ -1174,6 +1531,10 @@ analyze_function (function *f, bool ipa)
> param_modref_max_accesses);
>summary_lto->writes_errno = false;
> }
> +
> + if (!ipa)
> +analyze_parms (summary);
> +
>int ecf_flags = flags_from_decl_or_type (current_function_decl);
>auto_vec recursive_calls;
>
> @@ -1191,8 +1552,9 @@ analyze_function (function *f, bool ipa)
> || ((!summary || !summary->useful_p (ecf_flags))
> && (!summary_lto || !summary_lto->useful_p (ecf_flags
> {
> - remove_summary (lto, nolto, ipa);
> - return;
> + collapse_loads (summary, summary_lto);
> + collapse_stores (summary, summary_lto);
> + break;
> }
> }
> }
> @@ -1957,7 +2319,7 @@ compute_parm_map (cgraph_edge *callee_edge,
> vec *parm_map)
> : callee_edge->caller);
>callee_pi = IPA_NODE_REF (callee);
>
> - (*parm_map).safe_grow_cleared (count);
> + (*parm_map).safe_grow_cleared (count, true);
>
>for (i = 0; i < count; i++)
> {
> diff --git a/gcc/ipa-modref.h b/gcc/ipa-modref.h
> index 31ceffa8d34..59872301cd6 100644
> --- a/gcc/ipa-modref.h
> +++ b/gcc/ipa-modref.h
> @@ -29,6 +29,7 @@ struct GTY(()) modref_summary
>/* Load and stores in function (transitively closed to all callees) */
>modref_records *loads;
>modref_records *stores;
> + auto_vec GTY((skip)) arg_flags;
>
>modref_summary ();
>~modref_summary ();
> diff --git a/gcc/params.opt b/gcc/params.opt
> index a33a371a395..70152bf59bb 100644
> --- a/gcc/params.opt
> +++ b/gcc/params.opt
> @@ -931,6 +931,10 @@ Maximum number of accesses stored in each modref
> reference.
> Common Joined UInteger Var(param_modref_max_tests) Init(64)
> Maximum number of tests performed by modref query.
>
> +-param=modref-max-depth=
> +Common Joined UInteger Var(param_modref_max_depth) Init(256)
> +Maximum depth of DFS walk used by modref escape analysis
> +
> -param=tm-max-aggregate-size=
> Common Joined UInteger Var(param_tm_max_aggregate_size) Init(9) Param
> Optimization
> Size in bytes after which thread-local aggregates should be instrumented
> with the logging functions instead of save/restore pairs.
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
"'\n");
> + }
> +
> + replace_uses_by (lhs, rhs);
> + remove_phi_node (&psi, true);
> + cfg_altered = true;
in the end the return value is unused but I think we should avoid
altering the CFG since doing so requires it to be cleaned up for
unreachable blocks. That means to open-code replace_uses_by as
  imm_use_iterator imm_iter;
  use_operand_p use;
  gimple *stmt;
  FOR_EACH_IMM_USE_STMT (stmt, imm_iter, name)
    {
      FOR_EACH_IMM_USE_ON_STMT (use, imm_iter)
        replace_exp (use, val);
      update_stmt (stmt);
    }
Thanks,
Richard.
> + }
> + else
> + gsi_next (&gsi);
> + }
> +}
> +
> + return cfg_altered;
> +}
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
This replaces the old-school gimple_expr_code with more selective
functions throughout the compiler, in all cases making the code
shorter or more clear.
Bootstrapped / tested on x86_64-unknown-linux-gnu, pushed.
2020-11-13 Richard Biener
* cfgexpand.c (gimple_assign_rhs_to_tree): Use
This makes sure to properly extend the input range before seeing
whether it fits the target.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-13 Richard Biener
PR tree-optimization/97812
* tree-vrp.c (register_edge_assert_for_2): Extend the range
x86_64-unknown-linux-gnu, pushed.
2020-11-13 Richard Biener
* tree-ssa-sccvn.c (vn_phi_compute_hash): Always hash the
number of predecessors. Hash the block number also for
loop header PHIs.
(expressions_equal_p): Short-cut SSA name compares, remove
test
;i" "42" } }
> +l = i; // { dg-final { gdb-test 15 "i" "42" } }
> {
>extern int i;
> - l = i; // { dg-final { gdb-test 17 "i" "24" } }
> + p[0]++;
> + l = i; // { dg-final { gdb-te
rn_pass_by_reference (stmt, wlims);
> + if (gcall *call = dyn_cast (stmt))
> + maybe_warn_pass_by_reference (call, wlims);
> else if (gimple_assign_load_p (stmt)
> && gimple_has_location (stmt))
> {
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
We're stripping conversions off access functions of inductions and
thus the step can be of different sign. Fix bogus step CTORs by
converting the elements rather than the whole vector.
Bootstrap and regtest running on x86_64-unknown-linux-gnu.
2020-11-16 Richard Biener
PR
This properly handles reduction PHI nodes with unrepresented
initial value as leaf in the SLP graph.
Bootstrap & regtest running on x86_64-unknown-linux-gnu.
2020-11-16 Richard Biener
PR tree-optimization/97838
* tree-vect-slp.c (vect_slp_build_vertices): Properly ha
This avoids passing NULL to expressions_equal_p.
Bootstrap & regtest running on x86_64-unknown-linux-gnu.
2020-11-16 Richard Biener
PR tree-optimization/97830
* tree-ssa-sccvn.c (vn_reference_eq): Check for incomplete
types before comparing TYPE_
))
> {
> int f = cur_summary->arg_flags[ee->parm_index];
> diff --git a/gcc/tree-core.h b/gcc/tree-core.h
> index c9280a8d3b1..3e9455a553b 100644
> --- a/gcc/tree-core.h
> +++ b/gcc/tree-core.h
> @@ -110,6 +110,9 @@ struct die_struct;
> /* Nonzero if the argument is not used by the function. */
> #define EAF_UNUSED (1 << 3)
>
> +/* Nonzero if the argument does not escape to return value. */
> +#define EAF_NOT_RETURNED (1 << 4)
> +
> /* Call return flags. */
> /* Mask for the argument number that is returned. Lower two bits of
> the return flags, encodes argument slots zero to three. */
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
ary_after_inlining (cgraph_edge *e);
> +void ipa_modref_analyze (cgraph_node *node);
>
> #endif
> diff --git a/gcc/tree-ssa-alias-compare.h b/gcc/tree-ssa-alias-compare.h
> index 0e8409a7565..13bfa3381b5 100644
> --- a/gcc/tree-ssa-alias-compare.h
> +++ b/gcc/tree-ssa-alias-compare.h
> @@ -37,7 +37,7 @@ class ao_compare : public operand_compare
>int compare_ao_refs (ao_ref *ref1, ao_ref *ref2, bool lto_streaming_safe,
> bool tbaa);
>void hash_ao_ref (ao_ref *ref, bool lto_streaming_safe, bool tbaa,
> - inchash::hash &hstate);
> + bool base_alias_set, inchash::hash &hstate);
> };
>
> #endif
> diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
> index 5ebbb087285..52dd0055905 100644
> --- a/gcc/tree-ssa-alias.c
> +++ b/gcc/tree-ssa-alias.c
> @@ -4169,11 +4169,14 @@ ao_compare::compare_ao_refs (ao_ref *ref1, ao_ref
> *ref2,
>return flags;
> }
>
> -/* Hash REF to HSTATE. If LTO_STREAMING_SAFE do not use alias sets
> +/* Hash REF to HSTATE. If TBAA is false do not hash info relevant
> + for type based alias analysis. If BASE_ALIAS_SET is false
> + do not hash base alias set and the access path.
> + If LTO_STREAMING_SAFE do not use alias sets
> and canonical types. */
> void
> ao_compare::hash_ao_ref (ao_ref *ref, bool lto_streaming_safe, bool tbaa,
> - inchash::hash &hstate)
> + bool base_alias_set, inchash::hash &hstate)
> {
>tree base = ao_ref_base (ref);
>tree tbase = base;
> @@ -4209,6 +4212,7 @@ ao_compare::hash_ao_ref (ao_ref *ref, bool
> lto_streaming_safe, bool tbaa,
>if (!lto_streaming_safe && tbaa)
> {
>hstate.add_int (ao_ref_alias_set (ref));
> - hstate.add_int (ao_ref_base_alias_set (ref));
> + if (base_alias_set)
> + hstate.add_int (ao_ref_base_alias_set (ref));
> }
> }
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
This properly handles reduction PHI nodes with unrepresented
initial value as leaf in the SLP graph.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-16 Richard Biener
PR tree-optimization/97838
* tree-vect-slp.c (vect_slp_build_vertices): Properly handle
> +
> +tree
> +ao_ref_alias_ptr_type (ao_ref *ref)
> +{
> + if (!ref->ref)
> +return NULL_TREE;
> + tree ret = reference_alias_ptr_type (ref->ref);
> + gcc_checking_assert (get_deref_alias_set (ret) == ao_ref_alias_set (ref));
> + return ret;
> +}
> +
> +
> /* Init an alias-oracle reference representation from a gimple pointer
> PTR a range specified by OFFSET, SIZE and MAX_SIZE under the assumption
> that RANGE_KNOWN is set.
> diff --git a/gcc/tree-ssa-alias.h b/gcc/tree-ssa-alias.h
> index 1561ead2941..830ac1bf84d 100644
> --- a/gcc/tree-ssa-alias.h
> +++ b/gcc/tree-ssa-alias.h
> @@ -114,6 +114,8 @@ extern void ao_ref_init_from_ptr_and_size (ao_ref *,
> tree, tree);
> extern tree ao_ref_base (ao_ref *);
> extern alias_set_type ao_ref_alias_set (ao_ref *);
> extern alias_set_type ao_ref_base_alias_set (ao_ref *);
> +extern tree ao_ref_alias_ptr_type (ao_ref *);
> +extern tree ao_ref_base_alias_ptr_type (ao_ref *);
> extern bool ptr_deref_may_alias_global_p (tree);
> extern bool ptr_derefs_may_alias_p (tree, tree);
> extern bool ptrs_compare_unequal (tree, tree);
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
L_FUNCTION_CODE (fndecl) : (built_in_function)BUILT_IN_LAST);
> @@ -523,6 +527,10 @@ maybe_warn_pass_by_reference (gimple *stmt, wlimits
> &wlims)
> (but not definitive) read access. */
> wlims.always_executed = false;
>
> + /* Ignore args we are not going to rea
/* Match but do not perform any additional operations on the SLP tree.
> */
> +virtual bool matches (slp_tree_to_load_perm_map_t *) = 0;
> +
> +/* Match but use for the first operation the supplied COMPLEX_OPERATION.
> No
> + additional operations or modification of the SLP tree are performed.
> */
> +virtual bool matches (enum _complex_operation,
> + slp_tree_to_load_perm_map_t *, vec)
> +{
> + return false;
> +}
> +
> +/* Friendly name of the operation the pattern matches. */
> +virtual const char* get_name () = 0;
> +
> +/* Default destructor. */
> +virtual ~vect_pattern ()
> +{
> + this->m_ops.release ();
> +}
> +
> +/* Check to see if the matched tree is valid for the operation the
> matcher
> + wants. If the operation is valid then the tree is reshaped in the
> final
> + format that build () requires. */
> +virtual bool validate_p (slp_tree_to_load_perm_map_t *)
> +{
> + return true;
> +}
> +
> +/* Return the matched internal function. If no match was done this is
> set
> + to LAST_IFN. */
> +virtual internal_fn get_ifn ()
> +{
> + return this->m_ifn;
> +}
> +};
> +
> +/* Function pointer to create a new pattern matcher from a generic type. */
> +typedef vect_pattern* (*vect_pattern_decl_t) (slp_tree *);
> +
> +/* List of supported pattern matchers. */
> +extern vect_pattern_decl_t slp_patterns[];
> +
> +/* Number of supported pattern matchers. */
> +extern size_t num__slp_patterns;
> +
> #endif /* GCC_TREE_VECTORIZER_H */
> diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
> index
> d81774b242569262a51b7be02815acd6d1a6bfd0..2a6ddd685922f6b60ae1305974335fb863a2af39
> 100644
> --- a/gcc/tree-vectorizer.c
> +++ b/gcc/tree-vectorizer.c
> @@ -535,6 +535,8 @@ vec_info::add_pattern_stmt (gimple *stmt, stmt_vec_info
> stmt_info)
>stmt_vec_info res = new_stmt_vec_info (stmt);
>set_vinfo_for_stmt (stmt, res, false);
>STMT_VINFO_RELATED_STMT (res) = stmt_info;
> + vect_save_relevancy (stmt_info);
> + vect_push_relevancy (res, STMT_VINFO_RELEVANT (stmt_info));
Hmmm, that looks like an odd place to do this. I suspect it's not
the "final" modification of either relevancy?
Can we get rid of this hunk somehow?
Thanks,
Richard.
>return res;
> }
>
>
>
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
On Mon, 16 Nov 2020, Richard Biener wrote:
> On Sat, 14 Nov 2020, Tamar Christina wrote:
>
> > Hi All,
> >
> > This patch adds the pre-requisites and general scaffolding for supporting
> > doing
> > SLP pattern matching.
> >
> > Bootstrapped R
or the aarch64 bits with the testsuite changes above.
> ok?
The gcc/tree-vect-stmts.c parts are OK.
Richard.
> gcc/ChangeLog:
>
> 2020-11-13 Joel Hutton
>
> * config/aarch64/aarch64-simd.md: Add vec_widen_lshift_hi/lo
> patterns.
> * tree-vect
D): Define vectorized widen add, subtracts.
> * tree-cfg.c (verify_gimple_assign_binary): Add case for widening adds,
> subtracts.
> * tree-inline.c (estimate_operator_cost): Add case for widening adds,
> subtracts.
> * tree-vect-generic.c (expand_vector_o
This properly handles reduction PHI nodes with unrepresented
initial value as leaf in the SLP graph.
Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.
2020-11-16 Richard Biener
PR tree-optimization/97838
* tree-vect-slp.c (vect_slp_build_vertices): Properly handle
On Sat, 14 Nov 2020, Tamar Christina wrote:
> Hi All,
>
> This patch series adds support for SLP vectorization of complex instructions
> [1].
>
> These instructions exist only in their vector forms and require you to
> recognize
> two statements in parallel. Complex operations usually require
Status
==
GCC trunk which eventually will become GCC 11 is now in Stage 3
which means open for general bugfixing.
We have accumulated quite a number of regressions, a lot of them
untriaged and eventually stale. Please help in cleaning up.
Quality Data
Priority # C
6.c2020-11-16 22:06:27.021813335
> +0100
> @@ -6,7 +6,7 @@
>
> struct S { float f, g; };
>
> -__attribute__((noinline, noclone)) void
> +__attribute__((noipa)) void
> foo (struct S *p)
> {
>struct S s1, s2; /* { dg-final { gdb-test pr59776.c:17
> "s1.f" "5.0" } } */
>
> Jakub
>
>
--
Richard Biener
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Felix Imend
On Mon, 16 Nov 2020, Tamar Christina wrote:
> Hi Richi, thanks for the review!
>
> Just a quick comment on one of the questions asked:
>
> > -Original Message-
> > From: rguent...@c653.arch.suse.de On
> > Behalf Of Richard Biener
> > Sent: Mond
On Tue, Jan 7, 2020 at 11:33 AM Richard Sandiford
wrote:
>
> Richard Sandiford writes:
> > Richard Biener writes:
> >> On December 14, 2019 11:43:48 AM GMT+01:00, Richard Sandiford
> >> wrote:
> >>>Richard Biener writes:
> >>>> On D
On Wed, Dec 11, 2019 at 5:55 PM Wilco Dijkstra wrote:
>
> Hi Richard,
>
> >> +(match (ctz_table_index @1 @2 @3)
> >> + (rshift (mult (bit_and (negate @1) @1) INTEGER_CST@2) INTEGER_CST@3))
> >
> > You need a :c on the bit_and
>
> Fixed.
>
> > + unsigned HOST_WIDE_INT val = tree_to_uhwi (mulc);
>
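For context, the source-level idiom ctz_table_index is meant to recognize is the classic de Bruijn count-trailing-zeros table lookup; a 32-bit illustration of mine (constants from the well-known de Bruijn sequence, not taken from the patch):

  #include <stdint.h>

  /* (x & -x) isolates the lowest set bit; multiplying by the de Bruijn
     constant and shifting maps each possible bit to a unique table slot.  */
  static const int debruijn_ctz[32] = {
    0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
    31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
  };

  static int
  my_ctz32 (uint32_t x)  /* x must be nonzero */
  {
    return debruijn_ctz[((x & -x) * 0x077CB531u) >> 27];
  }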
fied @var{dumppfx} also undergoes the @var{dumpbase}-
> +transformation above. If neither @option{-dumpdir} nor
> +@option{-dumpbase} are given, the linker output base name, minus the
> +executable suffix, plus a dash is appended to the default @var{dumppfx}
> +instead.
> +
> +When
On Thu, Jan 9, 2020 at 1:38 PM Iain Sandoe wrote:
>
> Hi Richard,
>
> The SVN commit IDs that address changes to this part of the patchset are noted
> in the revised patch header below, for reference.
>
> Richard Biener wrote:
>
> > On Sun, Nov 17, 2019
On Thu, Jan 9, 2020 at 2:35 PM Kwok Cheung Yeung wrote:
>
> Hello
>
> On 09/12/2019 8:01 am, Richard Biener wrote:
> >
> > The stream-in code has
> >
> >/* If we're recompiling LTO objects with debug stmts but
> > we'
Committed.
Richard.
2020-01-10 Richard Biener
PR testsuite/93216
* gcc.dg/optimize-bswaphi-1.c: Split previously added
case into a LE and BE variant.
Index: gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
alyze them.
Bootstrapped and tested on x86_64-unknown-linux-gnu.
This shifts the quadraticness elsewhere (empty LP cleanup,
patch for that testing).
OK?
Thanks,
Richard.
2020-01-10 Richard Biener
PR middle-end/93199
* tree-eh.c (sink_clobbers): Move clobbers to
4GB (it's actually this very patch
that helps here for reasons I have not investigated - maybe we never
shrink some EH data structures, who knows).
Thanks,
Richard.
2020-01-10 Richard Biener
PR middle-end/93199
* tree-eh.c (redirect_eh_edge_1): Avoid some w
This caches alias info avoiding repeated (expensive)
get_ref_base_and_extent. It doesn't address the unlimited quadraticness
in this function the PR93199 testcase runs into.
Bootstrapped on x86_64-unknown-linux-gnu, testing in progress.
Richard.
2020-01-10 Richard B
On Fri, Jan 10, 2020 at 2:23 PM Richard Earnshaw (lists)
wrote:
>
> This patch is intended to help with folks setting up a git work
> environment for use with GCC following the transition to git. It
> currently does a couple of things.
>
> 1) Add an alias 'svn-rev' to git so that you can look up
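One way such an alias can be implemented (a sketch, not necessarily what the script installs) is to search the converted history for the From-SVN trailer that the SVN-to-git conversion left in each commit message:

  git config alias.svn-rev \
    '!f() { git log --all --oneline --grep="^From-SVN: r$1\b"; }; f'
  # usage: git svn-rev <N>, with <N> the revision number from an old SVN reference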
On Fri, Jan 10, 2020 at 1:39 PM Richard Sandiford
wrote:
>
> update_epilogue_loop_vinfo applies SSA renaming to the DR_REF of a
> gather or scatter, so that vect_check_gather_scatter continues to work.
> However, we sometimes also rely on vect_check_gather_scatter when
> using gathers and scatters
On Fri, Jan 10, 2020 at 1:45 PM Richard Sandiford
wrote:
>
> The related_vector_mode series missed this case in
> vect_create_epilog_for_reduction, where we want to create the
> unsigned integer equivalent of another vector. Without it we
> could mix SVE and Advanced SIMD vectors in the same oper