Richard Guenther <richard.guent...@gmail.com> writes:
> On Tue, Apr 12, 2011 at 3:59 PM, Richard Sandiford
> <richard.sandif...@linaro.org> wrote:
>> This patch adds vec_load_lanes and vec_store_lanes optabs for instructions
>> like NEON's vldN and vstN.  The optabs are defined this way because the
>> vectors must be allocated to a block of consecutive registers.
>>
>> Tested on x86_64-linux-gnu and arm-linux-gnueabi.  OK to install?
>>
>> Richard
>>
>>
>> gcc/
>>        * doc/md.texi (vec_load_lanes, vec_store_lanes): Document.
>>        * optabs.h (COI_vec_load_lanes, COI_vec_store_lanes): New
>>        convert_optab_index values.
>>        (vec_load_lanes_optab, vec_store_lanes_optab): New convert optabs.
>>        * genopinit.c (optabs): Initialize the new optabs.
>>        * internal-fn.def (LOAD_LANES, STORE_LANES): New internal functions.
>>        * internal-fn.c (get_multi_vector_move, expand_LOAD_LANES)
>>        (expand_STORE_LANES): New functions.
>>        * tree.h (build_simple_array_type): Declare.
>>        * tree.c (build_simple_array_type): New function.
>>        * tree-vectorizer.h (vect_model_store_cost): Add a bool argument.
>>        (vect_model_load_cost): Likewise.
>>        (vect_store_lanes_supported, vect_load_lanes_supported)
>>        (vect_record_strided_load_vectors): Declare.
>>        * tree-vect-data-refs.c (vect_lanes_optab_supported_p)
>>        (vect_store_lanes_supported, vect_load_lanes_supported): New 
>> functions.
>>        (vect_transform_strided_load): Split out statement recording into...
>>        (vect_record_strided_load_vectors): ...this new function.
>>        * tree-vect-stmts.c (create_vector_array, read_vector_array)
>>        (write_vector_array, create_array_ref): New functions.
>>        (vect_model_store_cost): Add store_lanes_p argument.
>>        (vect_model_load_cost): Add load_lanes_p argument.
>>        (vectorizable_store): Try to use store-lanes functions for
>>        interleaved stores.
>>        (vectorizable_load): Likewise load-lanes and loads.
>>        * tree-vect-slp.c (vect_get_and_check_slp_defs)
>>        (vect_build_slp_tree):
>>
>> Index: gcc/doc/md.texi
>> ===================================================================
>> --- gcc/doc/md.texi     2011-04-12 12:16:46.000000000 +0100
>> +++ gcc/doc/md.texi     2011-04-12 14:48:28.000000000 +0100
>> @@ -3846,6 +3846,48 @@ into consecutive memory locations.  Oper
>>  consecutive memory locations, operand 1 is the first register, and
>>  operand 2 is a constant: the number of consecutive registers.
>>
>> +@cindex @code{vec_load_lanes@var{m}@var{n}} instruction pattern
>> +@item @samp{vec_load_lanes@var{m}@var{n}}
>> +Perform an interleaved load of several vectors from memory operand 1
>> +into register operand 0.  Both operands have mode @var{m}.  The register
>> +operand is viewed as holding consecutive vectors of mode @var{n},
>> +while the memory operand is a flat array that contains the same number
>> +of elements.  The operation is equivalent to:
>> +
>> +@smallexample
>> +int c = GET_MODE_SIZE (@var{m}) / GET_MODE_SIZE (@var{n});
>> +for (j = 0; j < GET_MODE_NUNITS (@var{n}); j++)
>> +  for (i = 0; i < c; i++)
>> +    operand0[i][j] = operand1[j * c + i];
>> +@end smallexample
>> +
>> +For example, @samp{vec_load_lanestiv4hi} loads 8 16-bit values
>> +from memory into a register of mode @samp{TI}@.  The register
>> +contains two consecutive vectors of mode @samp{V4HI}@.
>
> So vec_load_lanestiv2qi would load ... ?  c == 8 here.  Intuitively
> such an operation would have adjacent blocks of siv2qi memory.  But
> maybe you want to constrain the mode size to GET_MODE_SIZE (@var{n})
> * GET_MODE_NUNITS (@var{n})?  In which case the mode m is
> redundant?  You could specify that we load NUNITS adjacent vectors into
> an integer mode of appropriate size.

Like you say, vec_load_lanestiv2qi would load 16 QImode elements into
8 consecutive V2QI registers.  The first element from register vector I
would come from operand1[I] and the second element would come from
operand1[I + 8].  That's meant to be a valid combination.

We specifically want to allow:

  GET_MODE_SIZE (@var{m})
    != GET_MODE_SIZE (@var{n}) * GET_MODE_NUNITS (@var{n})

The vec_load_lanestiv4hi example in the docs is one case of this:

  GET_MODE_SIZE (@var{m}) = 16
  GET_MODE_SIZE (@var{n}) = 8
  GET_MODE_NUNITS (@var{n}) = 4

That example maps directly to ARM's vld2.32.  We also want cases
where @var{m} is three times the size of @var{n} (vld3.WW) and
cases where @var{m} is four times the size of @var{n} (vld4.WW).
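
To make the TI/V2QI case concrete, here is a throwaway scalar model of
the de-interleaving (not GCC code, just the md.texi loop specialised to
c == 8 and 2 lanes; all the names here are purely illustrative):

  /* Scalar model of vec_load_lanestiv2qi: c == 8 V2QI "registers"
     (2 lanes each) de-interleaved from 16 contiguous QImode
     elements.  Illustrative only -- not GCC code.  */

  #include <stdio.h>

  #define C      8   /* GET_MODE_SIZE (TI) / GET_MODE_SIZE (V2QI) */
  #define NUNITS 2   /* GET_MODE_NUNITS (V2QI) */

  int
  main (void)
  {
    signed char mem[C * NUNITS];    /* operand 1: flat memory array */
    signed char regs[C][NUNITS];    /* operand 0: c vectors of n lanes */
    int i, j, k;

    for (k = 0; k < C * NUNITS; k++)
      mem[k] = k;

    /* operand0[i][j] = operand1[j * c + i], as in the md.texi loop.  */
    for (j = 0; j < NUNITS; j++)
      for (i = 0; i < C; i++)
        regs[i][j] = mem[j * C + i];

    /* Register vector I ends up with elements I and I + 8.  */
    for (i = 0; i < C; i++)
      printf ("regs[%d] = { %d, %d }\n", i, regs[i][0], regs[i][1]);

    return 0;
  }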

>> +/* Return a representation of ELT_TYPE[NELTS], using indices of type
>> +   sizetype.  */
>> +
>> +tree
>> +build_simple_array_type (tree elt_type, unsigned HOST_WIDE_INT nelts)
>
> build_array_type_nelts

OK.
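
For reference, here's roughly the shape I have in mind (a minimal
sketch in terms of the existing index/array-type routines; the
committed version may differ in detail):

  /* Return a representation of ELT_TYPE[NELTS], using indices of type
     sizetype.  Sketch only.  */

  tree
  build_array_type_nelts (tree elt_type, unsigned HOST_WIDE_INT nelts)
  {
    return build_array_type (elt_type,
                             build_index_type (size_int (nelts - 1)));
  }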

Richard
