On Tue, Apr 12, 2011 at 3:59 PM, Richard Sandiford
<richard.sandif...@linaro.org> wrote:
> This patch adds vec_load_lanes and vec_store_lanes optabs for instructions
> like NEON's vldN and vstN.  The optabs are defined this way because the
> vectors must be allocated to a block of consecutive registers.
>
> Tested on x86_64-linux-gnu and arm-linux-gnueabi.  OK to install?
>
> Richard
>
>
> gcc/
>        * doc/md.texi (vec_load_lanes, vec_store_lanes): Document.
>        * optabs.h (COI_vec_load_lanes, COI_vec_store_lanes): New
>        convert_optab_index values.
>        (vec_load_lanes_optab, vec_store_lanes_optab): New convert optabs.
>        * genopinit.c (optabs): Initialize the new optabs.
>        * internal-fn.def (LOAD_LANES, STORE_LANES): New internal functions.
>        * internal-fn.c (get_multi_vector_move, expand_LOAD_LANES)
>        (expand_STORE_LANES): New functions.
>        * tree.h (build_simple_array_type): Declare.
>        * tree.c (build_simple_array_type): New function.
>        * tree-vectorizer.h (vect_model_store_cost): Add a bool argument.
>        (vect_model_load_cost): Likewise.
>        (vect_store_lanes_supported, vect_load_lanes_supported)
>        (vect_record_strided_load_vectors): Declare.
>        * tree-vect-data-refs.c (vect_lanes_optab_supported_p)
>        (vect_store_lanes_supported, vect_load_lanes_supported): New
>        functions.
>        (vect_transform_strided_load): Split out statement recording into...
>        (vect_record_strided_load_vectors): ...this new function.
>        * tree-vect-stmts.c (create_vector_array, read_vector_array)
>        (write_vector_array, create_array_ref): New functions.
>        (vect_model_store_cost): Add store_lanes_p argument.
>        (vect_model_load_cost): Add load_lanes_p argument.
>        (vectorizable_store): Try to use store-lanes functions for
>        interleaved stores.
>        (vectorizable_load): Likewise load-lanes and loads.
>        * tree-vect-slp.c (vect_get_and_check_slp_defs)
>        (vect_build_slp_tree):
>
> Index: gcc/doc/md.texi
> ===================================================================
> --- gcc/doc/md.texi     2011-04-12 12:16:46.000000000 +0100
> +++ gcc/doc/md.texi     2011-04-12 14:48:28.000000000 +0100
> @@ -3846,6 +3846,48 @@ into consecutive memory locations.  Oper
> consecutive memory locations, operand 1 is the first register, and
> operand 2 is a constant: the number of consecutive registers.
>
> +@cindex @code{vec_load_lanes@var{m}@var{n}} instruction pattern
> +@item @samp{vec_load_lanes@var{m}@var{n}}
> +Perform an interleaved load of several vectors from memory operand 1
> +into register operand 0.  Both operands have mode @var{m}.  The register
> +operand is viewed as holding consecutive vectors of mode @var{n},
> +while the memory operand is a flat array that contains the same number
> +of elements.  The operation is equivalent to:
> +
> +@smallexample
> +int c = GET_MODE_SIZE (@var{m}) / GET_MODE_SIZE (@var{n});
> +for (j = 0; j < GET_MODE_NUNITS (@var{n}); j++)
> +  for (i = 0; i < c; i++)
> +    operand0[i][j] = operand1[j * c + i];
> +@end smallexample
> +
> +For example, @samp{vec_load_lanestiv4hi} loads 8 16-bit values
> +from memory into a register of mode @samp{TI}@.  The register
> +contains two consecutive vectors of mode @samp{V4HI}@.
So vec_load_lanestiv2qi would load ... ?  c == 8 here.  Intuitively such
an operation would have adjacent blocks of siv2qi memory.  But maybe you
want to constrain the mode size to GET_MODE_SIZE (@var{n}) *
GET_MODE_NUNITS (@var{n})?  In which case the mode m is redundant?  You
could specify that we load NUNITS adjacent vectors into an integer mode
of appropriate size.

> +This pattern can only be used if:
> +@smallexample
> +TARGET_ARRAY_MODE_SUPPORTED_P (@var{n}, @var{c})
> +@end smallexample
> +is true.  GCC assumes that, if a target supports this kind of
> +instruction for some mode @var{n}, it also supports unaligned
> +loads for vectors of mode @var{n}.
> +
> +@cindex @code{vec_store_lanes@var{m}@var{n}} instruction pattern
> +@item @samp{vec_store_lanes@var{m}@var{n}}
> +Equivalent to @samp{vec_load_lanes@var{m}@var{n}}, with the memory
> +and register operands reversed.  That is, the instruction is
> +equivalent to:
> +
> +@smallexample
> +int c = GET_MODE_SIZE (@var{m}) / GET_MODE_SIZE (@var{n});
> +for (j = 0; j < GET_MODE_NUNITS (@var{n}); j++)
> +  for (i = 0; i < c; i++)
> +    operand0[j * c + i] = operand1[i][j];
> +@end smallexample
> +
> +for a memory operand 0 and register operand 1.
> +
> @cindex @code{vec_set@var{m}} instruction pattern
> @item @samp{vec_set@var{m}}
> Set given field in the vector value.  Operand 0 is the vector to modify,
>
> Index: gcc/optabs.h
> ===================================================================
> --- gcc/optabs.h        2011-04-12 12:16:46.000000000 +0100
> +++ gcc/optabs.h        2011-04-12 14:48:28.000000000 +0100
> @@ -578,6 +578,9 @@ enum convert_optab_index
>   COI_satfract,
>   COI_satfractuns,
>
> +  COI_vec_load_lanes,
> +  COI_vec_store_lanes,
> +

Um, they are not really conversion optabs.  Any reason they can't use
the direct_optab table and path?  What are the two modes usually?
I don't see how you specify the kind of permutation that is performed
on the load - so, why not go the targetm.expand_builtin path instead
(well, targetm.expand_internal_fn, of course - or rather
targetm.expand_gimple_call, which we need anyway for expanding directly
from gimple calls at some point).

>   COI_MAX
> };
>
> @@ -598,6 +601,8 @@ #define fract_optab (&convert_optab_tabl
> #define fractuns_optab (&convert_optab_table[COI_fractuns])
> #define satfract_optab (&convert_optab_table[COI_satfract])
> #define satfractuns_optab (&convert_optab_table[COI_satfractuns])
> +#define vec_load_lanes_optab (&convert_optab_table[COI_vec_load_lanes])
> +#define vec_store_lanes_optab (&convert_optab_table[COI_vec_store_lanes])
>
> /* Contains the optab used for each rtx code.  */
> extern optab code_to_optab[NUM_RTX_CODE + 1];
>
> Index: gcc/genopinit.c
> ===================================================================
> --- gcc/genopinit.c     2011-04-12 12:16:46.000000000 +0100
> +++ gcc/genopinit.c     2011-04-12 14:48:28.000000000 +0100
> @@ -74,6 +74,8 @@ static const char * const optabs[] =
>   "set_convert_optab_handler (fractuns_optab, $B, $A,
> CODE_FOR_$(fractuns$Q$a$I$b2$))",
>   "set_convert_optab_handler (satfract_optab, $B, $A,
> CODE_FOR_$(satfract$a$Q$b2$))",
>   "set_convert_optab_handler (satfractuns_optab, $B, $A,
> CODE_FOR_$(satfractuns$I$a$Q$b2$))",
> +  "set_convert_optab_handler (vec_load_lanes_optab, $A, $B,
> CODE_FOR_$(vec_load_lanes$a$b$))",
> +  "set_convert_optab_handler (vec_store_lanes_optab, $A, $B,
> CODE_FOR_$(vec_store_lanes$a$b$))",
>   "set_optab_handler (add_optab, $A, CODE_FOR_$(add$P$a3$))",
>   "set_optab_handler (addv_optab, $A, CODE_FOR_$(add$F$a3$)),\n\
>    set_optab_handler (add_optab, $A, CODE_FOR_$(add$F$a3$))",
>
> Index: gcc/internal-fn.def
> ===================================================================
> --- gcc/internal-fn.def 2011-04-12 14:10:42.000000000 +0100
> +++ gcc/internal-fn.def 2011-04-12 14:48:28.000000000 +0100
> @@ -32,3 +32,6 @@ along with GCC; see the file COPYING3.
>
>    where NAME is the name of the function and FLAGS is a set of
>    ECF_* flags.  */
> +
> +DEF_INTERNAL_FN (LOAD_LANES, ECF_CONST | ECF_LEAF)
> +DEF_INTERNAL_FN (STORE_LANES, ECF_CONST | ECF_LEAF)
>
> Index: gcc/internal-fn.c
> ===================================================================
> --- gcc/internal-fn.c   2011-04-12 14:10:42.000000000 +0100
> +++ gcc/internal-fn.c   2011-04-12 14:48:28.000000000 +0100
> @@ -41,6 +41,69 @@ #define DEF_INTERNAL_FN(CODE, FLAGS) FLA
>   0
> };
>
> +/* ARRAY_TYPE is an array of vector modes.  Return the associated insn
> +   for load-lanes-style optab OPTAB.  The insn must exist.  */
> +
> +static enum insn_code
> +get_multi_vector_move (tree array_type, convert_optab optab)
> +{
> +  enum insn_code icode;
> +  enum machine_mode imode;
> +  enum machine_mode vmode;
> +
> +  gcc_assert (TREE_CODE (array_type) == ARRAY_TYPE);
> +  imode = TYPE_MODE (array_type);
> +  vmode = TYPE_MODE (TREE_TYPE (array_type));
> +
> +  icode = convert_optab_handler (optab, imode, vmode);
> +  gcc_assert (icode != CODE_FOR_nothing);
> +  return icode;
> +}
> +
> +/* Expand: LHS = LOAD_LANES (ARGS[0]).  */
> +
> +static void
> +expand_LOAD_LANES (tree lhs, tree *args)
> +{
> +  struct expand_operand ops[2];
> +  tree type;
> +  rtx target, mem;
> +
> +  type = TREE_TYPE (lhs);
> +
> +  target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
> +  mem = expand_normal (args[0]);
> +
> +  gcc_assert (MEM_P (mem));
> +  PUT_MODE (mem, TYPE_MODE (type));
> +
> +  create_output_operand (&ops[0], target, TYPE_MODE (type));
> +  create_fixed_operand (&ops[1], mem);
> +  expand_insn (get_multi_vector_move (type, vec_load_lanes_optab), 2, ops);
> +}
> +
> +/* Expand: LHS = STORE_LANES (ARGS[0]).  */
> +
> +static void
> +expand_STORE_LANES (tree lhs, tree *args)
> +{
> +  struct expand_operand ops[2];
> +  tree type;
> +  rtx target, rhs;
> +
> +  type = TREE_TYPE (args[0]);
> +
> +  target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
> +  rhs = expand_normal (args[0]);
> +
> +  gcc_assert (MEM_P (target));
> +  PUT_MODE (target, TYPE_MODE (type));
> +
> +  create_fixed_operand (&ops[0], target);
> +  create_input_operand (&ops[1], rhs, TYPE_MODE (type));
> +  expand_insn (get_multi_vector_move (type, vec_store_lanes_optab), 2, ops);
> +}
> +
> /* Routines to expand each internal function, indexed by function number.
>    Each routine has the prototype:
>
> Index: gcc/tree.h
> ===================================================================
> --- gcc/tree.h  2011-04-12 12:16:46.000000000 +0100
> +++ gcc/tree.h  2011-04-12 14:48:28.000000000 +0100
> @@ -4198,6 +4198,7 @@ extern tree build_type_no_quals (tree);
> extern tree build_index_type (tree);
> extern tree build_array_type (tree, tree);
> extern tree build_nonshared_array_type (tree, tree);
> +extern tree build_simple_array_type (tree, unsigned HOST_WIDE_INT);
> extern tree build_function_type (tree, tree);
> extern tree build_function_type_list (tree, ...);
> extern tree build_function_type_skip_args (tree, bitmap);
>
> Index: gcc/tree.c
> ===================================================================
> --- gcc/tree.c  2011-04-12 12:16:46.000000000 +0100
> +++ gcc/tree.c  2011-04-12 14:48:28.000000000 +0100
> @@ -7385,6 +7385,15 @@ build_nonshared_array_type (tree elt_typ
>   return build_array_type_1 (elt_type, index_type, false);
> }
>
> +/* Return a representation of ELT_TYPE[NELTS], using indices of type
> +   sizetype.  */
> +
> +tree
> +build_simple_array_type (tree elt_type, unsigned HOST_WIDE_INT nelts)

build_array_type_nelts

The rest looks ok to me.

Richard.