https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116142
--- Comment #4 from Xi Ruoyao <xry111 at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #3)
> (In reply to Xi Ruoyao from comment #2)
> > (In reply to Richard Biener from comment #1)
> > > To make it used by the reduction you'd need to have a dot_product covering
> > > the accumulation as well.
> >
> > I can add that, but what if we slightly alter it to something like
> >
> > short x[8], y[8];
> >
> > int dot() {
> >   int ret = 0;
> >   for (int i = 0; i < 8; i++)
> >     ret ^= x[i] * y[i];
> >   return ret;
> > }
> >
> > ? It's no longer a dot product but shouldn't
> > vec_widen_smult_{even,odd}_v8hi be used anyway?
>
> Sure, you should see
>
> t.c:5:20: note: Analyze phi: ret_13 = PHI <ret_9(5), 0(2)>
> t.c:5:20: note: reduction path: ret_9 ret_13
> t.c:5:20: note: reduction: detected reduction
> t.c:5:20: note: Detected reduction.
> ...
> t.c:5:20: note: vect_recog_widen_mult_pattern: detected: _5 = _2 * _4;
> t.c:5:20: note: widen_mult pattern recognized: patt_24 = _1 w* _3;
>
> and then
>
> # vect_ret_13.11_12 = PHI <vect_ret_9.12_7(5), { 0, 0, 0, 0 }(2)>
> # ivtmp_29 = PHI <ivtmp_30(5), 0(2)>
> vect__1.6_20 = MEM <vector(8) short int> [(short int *)vectp_x.4_22];
> _1 = x[i_15];
> _2 = (int) _1;
> vect__3.9_17 = MEM <vector(8) short int> [(short int *)vectp_y.7_19];
> vect_patt_23.10_16 = WIDEN_MULT_LO_EXPR <vect__1.6_20, vect__3.9_17>;
> vect_patt_23.10_14 = WIDEN_MULT_HI_EXPR <vect__1.6_20, vect__3.9_17>;
> vect_ret_9.12_11 = vect_patt_23.10_16 ^ vect_ret_13.11_12;
> vect_ret_9.12_7 = vect_patt_23.10_14 ^ vect_ret_9.12_11;
>
> at least that's what happens on x86. It should also work with _EVEN/_ODD.
The condition for _EVEN/_ODD is stricter than for _HI/_LO: it requires
STMT_VINFO_RELEVANT (stmt_info) == vect_used_by_reduction, but this condition
does not seem to hold for my test cases.