Hi,

Some comments below, but otherwise it looks good to me.

A few of the comments are about removing hook or macro definitions
that are the same as the default.  Doing that helps people who want
to update a hook interface in future, since there are then fewer
places to adjust.

xucheng...@loongson.cn writes:
> […]
> +/* Describes how a symbol is used.
> +
> +   SYMBOL_CONTEXT_CALL
> +       The symbol is used as the target of a call instruction.
> +
> +   SYMBOL_CONTEXT_LEA
> +       The symbol is used in a load-address operation.
> +
> +   SYMBOL_CONTEXT_MEM
> +       The symbol is used as the address in a MEM.  */
> +enum loongarch_symbol_context {
> +  SYMBOL_CONTEXT_CALL,
> +  SYMBOL_CONTEXT_LEA,
> +  SYMBOL_CONTEXT_MEM
> +};

It looks like this is unused: loongarch_classify_symbol takes an
argument of this type, but ignores it.

> […]
> +/* Classifies a type of call.
> +
> +   LARCH_CALL_NORMAL
> +     A normal call or call_value pattern.
> +
> +   LARCH_CALL_SIBCALL
> +     A sibcall or sibcall_value pattern.
> +
> +   LARCH_CALL_EPILOGUE
> +     A call inserted in the epilogue.  */
> +enum loongarch_call_type {
> +  LARCH_CALL_NORMAL,
> +  LARCH_CALL_SIBCALL,
> +  LARCH_CALL_EPILOGUE
> +};

This also looks unused.

> +
> +/* Controls the conditions under which certain instructions are split.
> +
> +   SPLIT_IF_NECESSARY
> +     Only perform splits that are necessary for correctness
> +     (because no unsplit version exists).
> +
> +   SPLIT_FOR_SPEED
> +     Perform splits that are necessary for correctness or
> +     beneficial for code speed.
> +
> +   SPLIT_FOR_SIZE
> +     Perform splits that are necessary for correctness or
> +     beneficial for code size.  */
> +enum loongarch_split_type {
> +  SPLIT_IF_NECESSARY,
> +  SPLIT_FOR_SPEED,
> +  SPLIT_FOR_SIZE
> +};

It looks like this is also unused: loongarch_split_move_p takes an
argument of this type, but ignores it.

I think it'd be better to remove these three enums for now and add them
back later if they become useful.

> […]
> +/* RTX costs of various operations on the different architectures.  */
> +struct loongarch_rtx_cost_data
> +{
> +  unsigned short fp_add;
> +  unsigned short fp_mult_sf;
> +  unsigned short fp_mult_df;
> +  unsigned short fp_div_sf;
> +  unsigned short fp_div_df;
> +  unsigned short int_mult_si;
> +  unsigned short int_mult_di;
> +  unsigned short int_div_si;
> +  unsigned short int_div_di;
> +  unsigned short branch_cost;
> +  unsigned short memory_latency;
> +};
> +
> +/* Default RTX cost initializer.  */
> +#define COSTS_N_INSNS(N) ((N) * 4)
> +#define DEFAULT_COSTS                                \
> +    .fp_add          = COSTS_N_INSNS (1),    \
> +    .fp_mult_sf              = COSTS_N_INSNS (2),    \
> +    .fp_mult_df              = COSTS_N_INSNS (2),    \
> +    .fp_div_sf               = COSTS_N_INSNS (4),    \
> +    .fp_div_df               = COSTS_N_INSNS (4),    \
> +    .int_mult_si     = COSTS_N_INSNS (1),    \
> +    .int_mult_di     = COSTS_N_INSNS (1),    \
> +    .int_div_si              = COSTS_N_INSNS (1),    \
> +    .int_div_di              = COSTS_N_INSNS (1),    \
> +    .branch_cost     = 2,                    \
> +    .memory_latency  = 4

Unfortunately we need to stay within standard C++11, so the
initialisers can't use “.name = value” (designated-initialiser) notation.
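One C++11-compatible alternative is positional aggregate initialisation with
a comment naming each field.  A minimal sketch (cut-down field set for
illustration; COSTS_N_INSNS here is the patch's local definition, not GCC's):

```cpp
#include <cassert>

// Simplified stand-in for loongarch_rtx_cost_data (real struct has more fields).
struct loongarch_rtx_cost_data
{
  unsigned short fp_add;
  unsigned short branch_cost;
  unsigned short memory_latency;
};

#define COSTS_N_INSNS(N) ((N) * 4)

/* Positional initialisers must follow the field-declaration order,
   so keep a comment naming each value.  */
#define DEFAULT_COSTS               \
  COSTS_N_INSNS (1), /* fp_add */   \
  2,                 /* branch_cost */ \
  4                  /* memory_latency */

static const loongarch_rtx_cost_data default_costs = { DEFAULT_COSTS };
```

The downside is that the values silently pair up by position, so the
per-field comments are worth keeping to guard against reordering.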

> […]
> +/* Classifies an address.
> +
> +   ADDRESS_REG
> +       A natural register + offset address.  The register satisfies
> +       loongarch_valid_base_register_p and the offset is a
> +       const_arith_operand.
> +
> +   ADDRESS_REG_REG
> +       A base register indexed by (optionally scaled) register.
> +
> +   ADDRESS_CONST_INT
> +       A signed 16-bit constant address.
> +
> +   ADDRESS_SYMBOLIC:
> +       A constant symbolic address.  */
> +enum loongarch_address_type
> +{
> +  ADDRESS_REG,
> +  ADDRESS_REG_REG,
> +  ADDRESS_CONST_INT,
> +  ADDRESS_SYMBOLIC
> +};
> +
> +
> +/* Information about an address described by loongarch_address_type.
> +
> +   ADDRESS_CONST_INT
> +       No fields are used.
> +
> +   ADDRESS_REG
> +       REG is the base register and OFFSET is the constant offset.
> +
> +   ADDRESS_SYMBOLIC
> +       SYMBOL_TYPE is the type of symbol that the address references.  */
> +struct loongarch_address_info
> +{
> +  enum loongarch_address_type type;
> +  rtx reg;
> +  rtx offset;
> +  enum loongarch_symbol_type symbol_type;
> +};

It'd be worth documenting what the fields mean for ADDRESS_REG_REG too.
It looks like OFFSET is the index register in that case.
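Something along these lines, assuming OFFSET really is the index register
in that case:

```
   ADDRESS_REG_REG
       REG is the base register and OFFSET is the index register.
```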

> […]
> +/* The value of TARGET_ATTRIBUTE_TABLE.  */
> +static const struct attribute_spec loongarch_attribute_table[] = {
> +    /* { name, min_len, max_len, decl_req, type_req, fn_type_req,
> +       affects_type_identity, handler, exclude }  */
> +      { NULL, 0, 0, false, false, false, false, NULL, NULL }
> +};

I think it'd be better to leave this and TARGET_ATTRIBUTE_TABLE
undefined until a non-terminator entry is needed.

> […]
> +/* See whether TYPE is a record whose fields should be returned in one or
> +   two floating-point registers.  If so, populate FIELDS accordingly.  */
> +
> +static unsigned
> +loongarch_pass_aggregate_in_fpr_pair_p (const_tree type,
> +

_p usually means a predicate that returns bool, and the name doesn't
really match the behaviour of the function (which as the comment says,
checks for single FPRs as well as pairs).

Maybe loongarch_pass_aggregate_num_fprs or something would be better.

> […]
> +/* See whether TYPE is a record whose fields should be returned in one or

typo: s/one or/one/

> +   floating-point register and one integer register.  If so, populate
> +   FIELDS accordingly.  */
> +
> +static bool
> +loongarch_pass_aggregate_in_fpr_and_gpr_p (const_tree type,
> +                                        loongarch_aggregate_field fields[2])
> +{
> +  unsigned num_int = 0, num_float = 0;
> +  int n = loongarch_flatten_aggregate_argument (type, fields);
> +
> +  for (int i = 0; i < n; i++)
> +    {
> +      num_float += SCALAR_FLOAT_TYPE_P (fields[i].type);
> +      num_int += INTEGRAL_TYPE_P (fields[i].type);
> +    }
> +
> +  return num_int == 1 && num_float == 1;
> +}
> […]
> +static void
> +loongarch_emit_stack_tie (void)
> +{
> +  if (Pmode == SImode)
> +    emit_insn (gen_stack_tiesi (stack_pointer_rtx, hard_frame_pointer_rtx));
> +  else
> +    emit_insn (gen_stack_tiedi (stack_pointer_rtx, hard_frame_pointer_rtx));

These days it's possible to write this as:

  emit_insn (gen_stack_tie (Pmode, stack_pointer_rtx, hard_frame_pointer_rtx));

with the stack_tie insn being defined using an “@” at the start of the name:

  (define_insn "@stack_tie<mode>"

Same for the later tls patterns.

> +}
> […]
> +/* Expand the "prologue" pattern.  */
> +
> +void
> +loongarch_expand_prologue (void)
> +{
> +  struct loongarch_frame_info *frame = &cfun->machine->frame;
> +  HOST_WIDE_INT size = frame->total_size;
> +  HOST_WIDE_INT tmp;
> +  unsigned mask = frame->mask;
> +  rtx insn;
> +
> +  if (flag_stack_usage_info)
> +    current_function_static_stack_size = size;
> +
> +  if (flag_stack_check == STATIC_BUILTIN_STACK_CHECK
> +      || flag_stack_clash_protection)
> +    {
> +      if (crtl->is_leaf && !cfun->calls_alloca)
> +     {
> +       if (size > PROBE_INTERVAL && size > get_stack_check_protect ())
> +         {
> +           tmp = size - get_stack_check_protect ();
> +           loongarch_emit_probe_stack_range (get_stack_check_protect (),
> +                                             tmp);
> +         }
> +     }
> +      else if (size > 0)
> +     loongarch_emit_probe_stack_range (get_stack_check_protect (), size);
> +    }
> +
> +  /* Save the registers.  */
> +  if ((frame->mask | frame->fmask) != 0)
> +    {
> +      HOST_WIDE_INT step1 = MIN (size, loongarch_first_stack_step (frame));
> +
> +      insn = gen_add3_insn (stack_pointer_rtx, stack_pointer_rtx,
> +                         GEN_INT (-step1));
> +      RTX_FRAME_RELATED_P (emit_insn (insn)) = 1;
> +      size -= step1;
> +      loongarch_for_each_saved_reg (size, loongarch_save_reg);
> +    }
> +
> +  frame->mask = mask; /* Undo the above fib.  */

Is this and the “unsigned mask = frame->mask” still needed?
I couldn't see where frame->mask was changed.

> […]
> +/* Expand an "epilogue" or "sibcall_epilogue" pattern; SIBCALL_P
> +   says which.  */
> +
> +void
> +loongarch_expand_epilogue (bool sibcall_p)
> +{
> +  /* Split the frame into two.  STEP1 is the amount of stack we should
> +     deallocate before restoring the registers.  STEP2 is the amount we
> +     should deallocate afterwards.
> +
> +     Start off by assuming that no registers need to be restored.  */
> +  struct loongarch_frame_info *frame = &cfun->machine->frame;
> +  HOST_WIDE_INT step1 = frame->total_size;
> +  HOST_WIDE_INT step2 = 0;
> +  rtx ra = gen_rtx_REG (Pmode, RETURN_ADDR_REGNUM);
> +  rtx insn;
> +
> +  /* We need to add memory barrier to prevent read from deallocated stack.  */
> +  bool need_barrier_p
> +    = (get_frame_size () + cfun->machine->frame.arg_pointer_offset) != 0;
> +
> +  if (!sibcall_p && loongarch_can_use_return_insn ())
> +    {
> +      emit_jump_insn (gen_return ());
> +      return;
> +    }
> +
> +  /* Move past any dynamic stack allocations.  */
> +  if (cfun->calls_alloca)
> +    {
> +      /* Emit a barrier to prevent loads from a deallocated stack.  */
> +      loongarch_emit_stack_tie ();
> +      need_barrier_p = false;
> +
> +      rtx adjust = GEN_INT (-frame->hard_frame_pointer_offset);
> +      if (!IMM12_OPERAND (INTVAL (adjust)))
> +     {
> +       loongarch_emit_move (LARCH_PROLOGUE_TEMP (Pmode), adjust);
> +       adjust = LARCH_PROLOGUE_TEMP (Pmode);
> +     }
> +
> +      insn = emit_insn (gen_add3_insn (stack_pointer_rtx,
> +                                    hard_frame_pointer_rtx,
> +                                    adjust));
> +
> +      rtx dwarf = NULL_RTX;
> +      rtx minus_offset = NULL_RTX;
> +      minus_offset = GEN_INT (-frame->hard_frame_pointer_offset);

The minus_offset = NULL_RTX looks redundant; it'd be simpler to initialise
the variable with the GEN_INT instead.
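I.e. just:

```
  rtx minus_offset = GEN_INT (-frame->hard_frame_pointer_offset);
```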

> […]
> +/* Return true if SYMBOL_REF X is associated with a global symbol
> +   (in the STB_GLOBAL sense).  */
> +
> +bool
> +loongarch_global_symbol_p (const_rtx x)
> +{
> +  if (GET_CODE (x) == LABEL_REF)
> +    return false;
> +
> +  const_tree decl = SYMBOL_REF_DECL (x);
> +
> +  if (!decl)
> +    return !SYMBOL_REF_LOCAL_P (x) || SYMBOL_REF_EXTERNAL_P (x);
> +
> +  /* Weakref symbols are not TREE_PUBLIC, but their targets are global
> +     or weak symbols.  Relocations in the object file will be against
> +     the target symbol, so it's that symbol's binding that matters here.  */
> +  return DECL_P (decl) && (TREE_PUBLIC (decl) || DECL_WEAK (decl));
> +}
> +
> +bool
> +loongarch_global_symbol_noweak_p (const_rtx x)
> +{
> +  if (GET_CODE (x) == LABEL_REF)
> +    return false;
> +
> +  const_tree decl = SYMBOL_REF_DECL (x);
> +
> +  if (!decl)
> +    return !SYMBOL_REF_LOCAL_P (x) || SYMBOL_REF_EXTERNAL_P (x);
> +
> +  /* Weakref symbols are not TREE_PUBLIC, but their targets are global
> +     or weak symbols.  Relocations in the object file will be against
> +     the target symbol, so it's that symbol's binding that matters here.  */

This comment doesn't apply to the noweak function.

> +  return DECL_P (decl) && TREE_PUBLIC (decl);
> +}
> […]
> +/* Return the method that should be used to access SYMBOL_REF or
> +   LABEL_REF X in context CONTEXT.  */
> +
> +static enum loongarch_symbol_type
> +loongarch_classify_symbol (const_rtx x,
> +                        enum loongarch_symbol_context context  \
> +                        ATTRIBUTE_UNUSED)
> +{
> +  if (GET_CODE (x) == LABEL_REF)
> +    {
> +      return SYMBOL_GOT_DISP;
> +    }

Minor formatting nit: redundant braces.  Some other instances
later in the file.

> +
> +  gcc_assert (GET_CODE (x) == SYMBOL_REF);
> +
> +  if (SYMBOL_REF_TLS_MODEL (x))
> +    return SYMBOL_TLS;
> +
> +  if (GET_CODE (x) == SYMBOL_REF)
> +    return SYMBOL_GOT_DISP;
> +
> +  return SYMBOL_GOT_DISP;
> +}
> +
> +/* Return true if X is a symbolic constant that can be used in context
> +   CONTEXT.  If it is, store the type of the symbol in *SYMBOL_TYPE.  */
> +
> +bool
> +loongarch_symbolic_constant_p (rtx x, enum loongarch_symbol_context context,
> +                            enum loongarch_symbol_type *symbol_type)
> +{
> +  rtx offset;
> +
> +  split_const (x, &x, &offset);
> +  if (UNSPEC_ADDRESS_P (x))
> +    {
> +      *symbol_type = UNSPEC_ADDRESS_TYPE (x);
> +      x = UNSPEC_ADDRESS (x);
> +    }
> +  else if (GET_CODE (x) == SYMBOL_REF || GET_CODE (x) == LABEL_REF)
> +    {
> +      *symbol_type = loongarch_classify_symbol (x, context);
> +      if (*symbol_type == SYMBOL_TLS)
> +     return true;
> +    }
> +  else
> +    return false;
> +
> +  if (offset == const0_rtx)
> +    return true;
> +
> +  /* Check whether a nonzero offset is valid for the underlying
> +     relocations.  */
> +  switch (*symbol_type)
> +    {
> +      /* Fall through.  */
> +

There are no longer any previous cases to fall through from.

> +    case SYMBOL_GOT_DISP:
> +    case SYMBOL_TLSGD:
> +    case SYMBOL_TLSLDM:
> +    case SYMBOL_TLS:
> +      return false;
> +    }
> +  gcc_unreachable ();
> +}
> +
> +/* Like loongarch_symbol_insns We rely on the fact that, in the worst case.  */

This comment seems incomplete.

> +
> +static int
> +loongarch_symbol_insns_1 (enum loongarch_symbol_type type, machine_mode mode)
> +{
> +  switch (type)
> +    {
> +    case SYMBOL_GOT_DISP:
> +      /* The constant will have to be loaded from the GOT before it
> +      is used in an address.  */
> +      if (mode != MAX_MACHINE_MODE)
> +     return 0;
> +
> +      /* Fall through.  */

No (case) fall through here.  I guess the comment should just be deleted.

> +
> +      return 3;
> +
> +    case SYMBOL_TLSGD:
> +    case SYMBOL_TLSLDM:
> +      return 1;
> +
> +    case SYMBOL_TLS:
> +      /* We don't treat a bare TLS symbol as a constant.  */
> +      return 0;
> +    }
> +  gcc_unreachable ();
> +}
> +
> +/* If MODE is MAX_MACHINE_MODE, return the number of instructions needed
> +   to load symbols of type TYPE into a register.  Return 0 if the given
> +   type of symbol cannot be used as an immediate operand.
> +
> +   Otherwise, return the number of instructions needed to load or store
> +   values of mode MODE to or from addresses of type TYPE.  Return 0 if
> +   the given type of symbol is not valid in addresses.
> +
> +   In both cases, instruction counts are based off BASE_INSN_LENGTH.  */
> +
> +static int
> +loongarch_symbol_insns (enum loongarch_symbol_type type, machine_mode mode)
> +{
> +  return loongarch_symbol_insns_1 (type, mode) * (1);

“* (1)” looks odd, maybe just remove.

> +}
> […]
> +/* Return true if, for every base register BASE_REG, (plus BASE_REG X)
> +   can address a value of mode MODE.  */
> +
> +static bool
> +loongarch_valid_offset_p (rtx x, machine_mode mode)
> +{
> +  /* Check that X is a signed 12-bit number,
> +   * or check that X is a signed 16-bit number
> +   * and offset 4 byte aligned.  */

Minor nit, but it isn't GNU style to indent comment lines with a leading “*”.
(This was the only instance I could see in the patch.)

> +  if (!(const_arith_operand (x, Pmode)
> +     || ((mode == E_SImode || mode == E_DImode)
> +         && const_imm16_operand (x, Pmode)
> +         && (loongarch_signed_immediate_p (INTVAL (x), 14, 2)))))
> +    return false;
> +
> +  /* We may need to split multiword moves, so make sure that every word
> +     is accessible.  */
> +  if (GET_MODE_SIZE (mode) > UNITS_PER_WORD
> +      && !IMM12_OPERAND (INTVAL (x) + GET_MODE_SIZE (mode) - UNITS_PER_WORD))
> +    return false;
> +
> +  return true;
> +}
> +
> […]
> +/* Return the number of instructions needed to load constant X,
> +   assuming that BASE_INSN_LENGTH is the length of one instruction.
> +   Return 0 if X isn't a valid constant.  */
> +
> +int
> +loongarch_const_insns (rtx x)
> +{
> +  enum loongarch_symbol_type symbol_type;
> +  rtx offset;
> +
> +  switch (GET_CODE (x))
> +    {
> +    case CONST_INT:
> +      return loongarch_integer_cost (INTVAL (x));
> +
> +    case CONST_VECTOR:
> +      /* Fall through.  */
> +    case CONST_DOUBLE:
> +      /* Allow zeros for normal mode, where we can use $0.  */

“normal mode” is a holdover from MIPS (where it meant “not MIPS16
and not microMIPS”).

> +      return x == CONST0_RTX (GET_MODE (x)) ? 1 : 0;
> +
> +    case CONST:
> +      /* See if we can refer to X directly.  */
> +      if (loongarch_symbolic_constant_p (x, SYMBOL_CONTEXT_LEA, &symbol_type))
> +     return loongarch_symbol_insns (symbol_type, MAX_MACHINE_MODE);
> +
> +      /* Otherwise try splitting the constant into a base and offset.
> +      If the offset is a 12-bit value, we can load the base address
> +      into a register and then use ADDI.{W/D} to add in the offset.
> +      If the offset is larger, we can load the base and offset
> +      into separate registers and add them together with ADD.{W/D}.
> +      However, the latter is only possible before reload; during
> +      and after reload, we must have the option of forcing the
> +      constant into the pool instead.  */
> +      split_const (x, &x, &offset);
> +      if (offset != 0)
> +     {
> +       int n = loongarch_const_insns (x);
> +       if (n != 0)
> +         {
> +           if (IMM12_INT (offset))
> +             return n + 1;
> +           else if (!targetm.cannot_force_const_mem (GET_MODE (x), x))
> +             return n + 1 + loongarch_integer_cost (INTVAL (offset));
> +         }
> +     }
> +      return 0;
> +
> +    case SYMBOL_REF:
> +    case LABEL_REF:
> +      return loongarch_symbol_insns (
> +     loongarch_classify_symbol (x, SYMBOL_CONTEXT_LEA), MAX_MACHINE_MODE);
> +
> +    default:
> +      return 0;
> +    }
> +}
> +
> +/* X is a doubleword constant that can be handled by splitting it into
> +   two words and loading each word separately.  Return the number of
> +   instructions required to do this, assuming that BASE_INSN_LENGTH
> +   is the length of one instruction.  */

The BASE_INSN_LENGTH part no longer applies.  Same for later functions.

> […]
> +/* Return true if there is a instruction that implements CODE
> +   and if that instruction accepts X as an immediate operand.  */
> +
> +static int
> +loongarch_immediate_operand_p (int code, HOST_WIDE_INT x)
> +{
> +  switch (code)
> +    {
> +    case ASHIFT:
> +    case ASHIFTRT:
> +    case LSHIFTRT:
> +      /* All shift counts are truncated to a valid constant.  */
> +      return true;
> +
> +    case ROTATE:
> +    case ROTATERT:
> +      /* Likewise rotates, if the target supports rotates at all.  */

The comment doesn't really apply to LoongArch, where rotates are
unconditionally available.

> […]
> +/* Return the cost of sign-extending OP to mode MODE, not including the
> +   cost of OP itself.  */
> +
> +static int
> +loongarch_sign_extend_cost (machine_mode mode, rtx op)
> +{
> +  if (MEM_P (op))
> +    /* Extended loads are as cheap as unextended ones.  */
> +    return 0;
> +
> +  if (TARGET_64BIT && mode == DImode && GET_MODE (op) == SImode)
> +    /* A sign extension from SImode to DImode in 64-bit mode is free.  */
> +    return 0;

Coming back to the TRULY_NOOP_TRUNCATION question from patch 4:
this would no longer be true if the definition of TRULY_NOOP_TRUNCATION
were removed.  On balance, though, I'd expect removing the definition
to improve the quality of the generated code.

> +
> +  return COSTS_N_INSNS (1);
> +}
> […]
> +/* Return one word of double-word value OP, taking into account the fixed
> +   endianness of certain registers.  HIGH_P is true to select the high part,
> +   false to select the low part.  */
> +
> +rtx
> +loongarch_subword (rtx op, bool high_p)
> +{
> +  unsigned int byte, offset;
> +  machine_mode mode;
> +
> +  mode = GET_MODE (op);
> +  if (mode == VOIDmode)
> +    mode = TARGET_64BIT ? TImode : DImode;
> +
> +  if (high_p)
> +    byte = UNITS_PER_WORD;
> +  else
> +    byte = 0;
> +
> +  if (FP_REG_RTX_P (op))
> +    {
> +      /* Paired FPRs are always ordered little-endian.  */
> +      offset = (UNITS_PER_WORD < UNITS_PER_HWFPVALUE ? high_p : byte != 0);
> +      return gen_rtx_REG (word_mode, REGNO (op) + offset);
> +    }

Do you need this?  LoongArch seems to be little-endian only, so this…

> +
> +  if (MEM_P (op))
> +    return loongarch_rewrite_small_data (adjust_address (op, word_mode, byte));
> +
> +  return simplify_gen_subreg (word_mode, op, mode, byte);

…default behaviour should work.

> +}
> +
> […]
> +/* Forward declaration.  Used below.  */
> +static HOST_WIDE_INT
> +loongarch_constant_alignment (const_tree exp, HOST_WIDE_INT align);

It'd be better to move the definition further up, since there isn't a cyclic
dependency.

> […]
> +}
> +/* Return the appropriate instructions to move SRC into DEST.  Assume
> +   that SRC is operand 1 and DEST is operand 0.  */

Nit: missing newline before the comment.

> +
> +const char *
> +loongarch_output_move (rtx dest, rtx src)
> +{
> […]
> +      if (symbolic_operand (src, VOIDmode))
> +     {
> +       if ((TARGET_CMODEL_TINY && (!loongarch_global_symbol_p (src)
> +                                   || loongarch_symbol_binds_local_p (src)))
> +           || (TARGET_CMODEL_TINY_STATIC && !loongarch_weak_symbol_p (src)))
> +         {
> +           /* The symbol must be aligned to 4 byte.  */
> +           unsigned int align;
> +
> +           if (GET_CODE (src) == LABEL_REF)
> +             align = 128 /* Whatever.  */;

128 seems a strange choice.  Couldn't it just be 32 (the number of bits
in an instruction) given the later use?

> +           else if (CONSTANT_POOL_ADDRESS_P (src))
> +             align = GET_MODE_ALIGNMENT (get_pool_mode (src));
> +           else if (TREE_CONSTANT_POOL_ADDRESS_P (src))
> +             {
> +               tree exp = SYMBOL_REF_DECL (src);
> +               align = TYPE_ALIGN (TREE_TYPE (exp));
> +               align = loongarch_constant_alignment (exp, align);
> +             }
> +           else if (SYMBOL_REF_DECL (src))
> +             align = DECL_ALIGN (SYMBOL_REF_DECL (src));
> +           else if (SYMBOL_REF_HAS_BLOCK_INFO_P (src)
> +                    && SYMBOL_REF_BLOCK (src) != NULL)
> +             align = SYMBOL_REF_BLOCK (src)->alignment;
> +           else
> +             align = BITS_PER_UNIT;
> +
> +           if (align % (4 * 8) == 0)
> +             return "pcaddi\t%0,%%pcrel(%1)>>2";
> +         }
> +       if (TARGET_CMODEL_TINY
> +           || TARGET_CMODEL_TINY_STATIC
> +           || TARGET_CMODEL_NORMAL
> +           || TARGET_CMODEL_LARGE)
> +         {
> +           if (!loongarch_global_symbol_p (src)
> +               || loongarch_symbol_binds_local_p (src))
> +             return "la.local\t%0,%1";
> +           else
> +             return "la.global\t%0,%1";
> +         }
> +       if (TARGET_CMODEL_EXTREME)
> +         {
> +           sorry ("Normal symbol loading not implemented in extreme mode.");
> +           /* GCC complains.  */
> +           /* return ""; */

Similar comments here to the review of part 4: sorry () isn't a fatal error,
so the function still needs to return something.  Is this a situation
that can be triggered by specific source code, or is it an internal
error that the user should never see?

> […]
> +/* Like riscv_emit_int_compare, but for floating-point comparisons.  */

s/riscv/loongarch/

> +
> +static void
> +loongarch_emit_float_compare (enum rtx_code *code, rtx *op0, rtx *op1)
> +{
> […]
> +/* Implement TARGET_ASM_FUNCTION_RODATA_SECTION.
> +
> +   The complication here is that, with the combination
> +   !TARGET_ABSOLUTE_ABICALLS , jump tables will use
> +   absolute addresses, and should therefore not be included in the
> +   read-only part of a DSO.  Handle such cases by selecting a normal
> +   data section instead of a read-only one.  The logic apes that in
> +   default_function_rodata_section.  */

The !TARGET_ABSOLUTE_ABICALLS bit doesn't apply to LoongArch.
I think you can just say “The complication here is that jump tables…”.

> +
> +static section *
> +loongarch_function_rodata_section (tree decl, bool)
> +{
> +  return default_function_rodata_section (decl, false);
> +}
> […]
> +/* Implement TARGET_USE_ANCHORS_FOR_SYMBOL_P.  We don't want to use
> +   anchors for small data: the GP register acts as an anchor in that
> +   case.  We also don't want to use them for PC-relative accesses,
> +   where the PC acts as an anchor.  */
> +
> +static bool
> +loongarch_use_anchors_for_symbol_p (const_rtx symbol)
> +{
> +  return default_use_anchors_for_symbol_p (symbol);
> +}

The comment doesn't describe the behaviour.  Probably easiest to leave
this hook undefined.

> […]
> +/* Implement TARGET_DWARF_FRAME_REG_MODE.  */
> +
> +static machine_mode
> +loongarch_dwarf_frame_reg_mode (int regno)
> +{
> +  machine_mode mode = default_dwarf_frame_reg_mode (regno);
> +
> +  return mode;
> +}

This is the default behaviour, so it can be deleted.

> […]
> +#ifdef ASM_OUTPUT_SIZE_DIRECTIVE
> +extern int size_directive_output;
> +
> +/* Implement ASM_DECLARE_OBJECT_NAME.  This is like most of the standard ELF
> +   definitions except that it uses loongarch_declare_object to emit the label.  */
> +
> +void
> +loongarch_declare_object_name (FILE *stream, const char *name,
> +                            tree decl ATTRIBUTE_UNUSED)
> +{
> +#ifdef ASM_OUTPUT_TYPE_DIRECTIVE
> +#ifdef USE_GNU_UNIQUE_OBJECT

Are these preprocessor conditions necessary for LoongArch?  I would have
expected things to be more regular than they were for MIPS.

(Also applies to the ASM_OUTPUT_SIZE_DIRECTIVE condition above,
and to similar conditions later.)

> +  /* As in elfos.h.  */
> +  if (USE_GNU_UNIQUE_OBJECT && DECL_ONE_ONLY (decl)
> +      && (!DECL_ARTIFICIAL (decl) || !TREE_READONLY (decl)))
> +    ASM_OUTPUT_TYPE_DIRECTIVE (stream, name, "gnu_unique_object");
> +  else
> +#endif
> +    ASM_OUTPUT_TYPE_DIRECTIVE (stream, name, "object");
> +#endif
> +
> +  size_directive_output = 0;
> +  if (!flag_inhibit_size_directive && DECL_SIZE (decl))
> +    {
> +      HOST_WIDE_INT size;
> +
> +      size_directive_output = 1;
> +      size = int_size_in_bytes (TREE_TYPE (decl));
> +      ASM_OUTPUT_SIZE_DIRECTIVE (stream, name, size);
> +    }
> +
> +  loongarch_declare_object (stream, name, "", ":\n");
> +}
> […]
> +/* Implement TARGET_ASM_FILE_START.  */
> +
> +static void
> +loongarch_file_start (void)
> +{
> +  default_file_start ();
> +}

This is the default behaviour, so it can be deleted.

> […]
> +bool
> +loongarch_hard_regno_rename_ok (unsigned int old_reg ATTRIBUTE_UNUSED,
> +                             unsigned int new_reg ATTRIBUTE_UNUSED)
> +{
> +  return true;
> +}

Same for this.

> […]
> +/* Implement TARGET_CAN_CHANGE_MODE_CLASS.  */
> +
> +static bool
> +loongarch_can_change_mode_class (machine_mode from, machine_mode to,
> +                              reg_class_t rclass)
> +{
> +  /* Allow conversions between different Loongson integer vectors,
> +     and between those vectors and DImode.  */
> +  if (GET_MODE_SIZE (from) == 8 && GET_MODE_SIZE (to) == 8
> +      && INTEGRAL_MODE_P (from) && INTEGRAL_MODE_P (to))
> +    return true;

Is the comment accurate?  There didn't seem to be any integer vector support.

> +
> +  return !reg_classes_intersect_p (FP_REGS, rclass);
> +}
> […]
> +/* Implement TARGET_SECONDARY_MEMORY_NEEDED.
> +
> +   This can be achieved using MOVFRH2GR.S/MOVGR2FRH.W when these instructions
> +   are available but otherwise moves must go via memory.  Using
> +   MOVGR2FR/MOVFR2GR to access the lower-half of these registers would require
> +   a forbidden single-precision access.  We require all double-word moves to
> +   use memory because adding even and odd floating-point registers classes
> +   would have a significant impact on the backend.  */
> +
> +static bool
> +loongarch_secondary_memory_needed (machine_mode mode ATTRIBUTE_UNUSED,
> +                                reg_class_t class1, reg_class_t class2)
> +{
> +  /* Ignore spilled pseudos.  */
> +  if (lra_in_progress && (class1 == NO_REGS || class2 == NO_REGS))
> +    return false;
> +
> +  return false;
> +}

Looks like this hook definition can be deleted (which is a good sign!).

> […]
> +/* Implement TARGET_VECTORIZE_PREFERRED_SIMD_MODE.  */
> +
> +static machine_mode
> +loongarch_preferred_simd_mode (scalar_mode mode ATTRIBUTE_UNUSED)
> +{
> +  return word_mode;
> +}

This is the default behaviour, so it can be deleted.

> […]
> +/* Return the number of instructions that can be issued per cycle.  */
> +
> +static int
> +loongarch_issue_rate (void)
> +{
> +  if ((unsigned long) __ACTUAL_TUNE < N_TUNE_TYPES)
> +    return loongarch_cpu_issue_rate[__ACTUAL_TUNE];
> +  else
> +    return 1;
> +}

Following on from a comment about patch 2: __FOO macros are in the
implementation namespace, so __ACTUAL_TUNE should be something like
LARCH_ACTUAL_TUNE instead.

> […]
> +/* A structure representing the state of the processor pipeline.
> +   Used by the loongarch_sim_* family of functions.  */
> +
> +struct loongarch_sim
> +{
> +  /* The maximum number of instructions that can be issued in a cycle.
> +     (Caches loongarch_issue_rate.)  */
> +  unsigned int issue_rate;
> +
> +  /* The current simulation time.  */
> +  unsigned int time;
> +
> +  /* How many more instructions can be issued in the current cycle.  */
> +  unsigned int insns_left;
> +
> +  /* LAST_SET[X].INSN is the last instruction to set register X.
> +     LAST_SET[X].TIME is the time at which that instruction was issued.
> +     INSN is null if no instruction has yet set register X.  */
> +  struct
> +  {
> +    rtx_insn *insn;
> +    unsigned int time;
> +  } last_set[FIRST_PSEUDO_REGISTER];
> +
> +  /* The pipeline's current DFA state.  */
> +  state_t dfa_state;
> +};
> +
> +/* Reset STATE to the initial simulation state.  */
> +
> +static void
> +loongarch_sim_reset (struct loongarch_sim *state)
> +{
> +  curr_state = state->dfa_state;
> +
> +  state->time = 0;
> +  state->insns_left = state->issue_rate;
> +  memset (&state->last_set, 0, sizeof (state->last_set));
> +  state_reset (curr_state);
> +
> +  targetm.sched.init (0, false, 0);
> +  advance_state (curr_state);
> +}
> +
> +/* Initialize STATE before its first use.  DFA_STATE points to an
> +   allocated but uninitialized DFA state.  */
> +
> +static void
> +loongarch_sim_init (struct loongarch_sim *state, state_t dfa_state)
> +{
> +  if (targetm.sched.init_dfa_pre_cycle_insn)
> +    targetm.sched.init_dfa_pre_cycle_insn ();
> +
> +  if (targetm.sched.init_dfa_post_cycle_insn)
> +    targetm.sched.init_dfa_post_cycle_insn ();
> +
> +  state->issue_rate = loongarch_issue_rate ();
> +  state->dfa_state = dfa_state;
> +  loongarch_sim_reset (state);
> +}
> +
> +/* Set up costs based on the current architecture and tuning settings.  */
> +
> +static void
> +loongarch_set_tuning_info (void)
> +{
> +  dfa_start ();
> +
> +  struct loongarch_sim state;
> +  loongarch_sim_init (&state, alloca (state_size ()));
> +
> +  dfa_finish ();
> +}
> +/* Implement TARGET_EXPAND_TO_RTL_HOOK.  */
> +
> +static void
> +loongarch_expand_to_rtl_hook (void)
> +{
> +  /* We need to call this at a point where we can safely create sequences
> +     of instructions, so TARGET_OVERRIDE_OPTIONS is too early.  We also
> +     need to call it at a point where the DFA infrastructure is not
> +     already in use, so we can't just call it lazily on demand.
> +
> +     At present, loongarch_tuning_info is only needed during post-expand
> +     RTL passes such as split_insns, so this hook should be early enough.
> +     We may need to move the call elsewhere if loongarch_tuning_info starts
> +     to be used for other things (such as rtx_costs, or expanders that
> +     could be called during gimple optimization).  */
> +
> +  loongarch_set_tuning_info ();
> +}

Do you need the code above?  It doesn't seem to be using the
simulator state.

IIRC, on MIPS this was originally used to pad instructions in a
VLIW-like way for NEC's VR4130.

> […]
> +static void
> +loongarch_option_override_internal (struct gcc_options *opts,
> +                                 struct gcc_options *opts_set);

Here too it would be better to rearrange the functions to avoid
the forward declaration.

> +
> +/* Implement TARGET_OPTION_OVERRIDE.  */
> +
> +static void
> +loongarch_option_override (void)
> +{
> +  loongarch_option_override_internal (&global_options, &global_options_set);
> +}
> +
> +static void
> +loongarch_option_override_internal (struct gcc_options *opts,
> +                                 struct gcc_options *opts_set)
> +{
> +  int i, regno, mode;
> +
> +  (void) opts_set;

Might as well just drop the argument name from the prototype instead.

> +
> +  if (flag_pic)
> +    g_switch_value = 0;
> +
> +  /* Handle target-specific options: compute defaults/conflicts etc.  */
> +  loongarch_config_target (&la_target, la_opt_switches,
> +                        la_opt_cpu_arch, la_opt_cpu_tune, la_opt_fpu,
> +                        la_opt_abi_base, la_opt_abi_ext, la_opt_cmodel, 0);
> +
> +  /* End of code shared with GAS.  */

Just checking: is this code shared with GAS, in the way that the
MIPS code was?  (Genuine question, since it might be.)

> […]
> +/* Implement TARGET_SHIFT_TRUNCATION_MASK.  We want to keep the default
> +   behavior of TARGET_SHIFT_TRUNCATION_MASK for non-vector modes.  */
> +
> +static unsigned HOST_WIDE_INT
> +loongarch_shift_truncation_mask (machine_mode mode)
> +{
> +  return GET_MODE_BITSIZE (mode) - 1;
> +}

It shouldn't be necessary to define this hook, given that
SHIFT_COUNT_TRUNCATED is 1.  (The reason MIPS had to define
it was that vector shifts didn't truncate in the same way.)

> +
> +/* Implement TARGET_SCHED_REASSOCIATION_WIDTH.  */
> +
> +static int
> +loongarch_sched_reassociation_width (unsigned int opc ATTRIBUTE_UNUSED,
> +                                  machine_mode mode ATTRIBUTE_UNUSED)
> +{
> +  return 1;
> +}

This also shouldn't be necessary, since it matches the default.

> […]
> +/* Implement TARGET_CASE_VALUES_THRESHOLD.  */
> +
> +unsigned int
> +loongarch_case_values_threshold (void)
> +{
> +  return default_case_values_threshold ();
> +}

This is also the default.

> […]
> +/* All our function attributes are related to how out-of-line copies should
> +   be compiled or called.  They don't in themselves prevent inlining.  */
> +#undef TARGET_FUNCTION_ATTRIBUTE_INLINABLE_P
> +#define TARGET_FUNCTION_ATTRIBUTE_INLINABLE_P hook_bool_const_tree_true

This shouldn't be necessary, since there are no target-specific
attributes.

> […]
> diff --git a/gcc/config/loongarch/loongarch.h b/gcc/config/loongarch/loongarch.h
> new file mode 100644
> index 00000000000..791f9d064c5
> --- /dev/null
> +++ b/gcc/config/loongarch/loongarch.h
> @@ -0,0 +1,1273 @@
> +/* Definitions of target machine for GNU compiler.  LoongArch version.
> +   Copyright (C) 2021-2022 Free Software Foundation, Inc.
> +   Contributed by Loongson Ltd.
> +   Based on MIPS and RISC-V target for GNU compiler.
> +
> +This file is part of GCC.
> +
> +GCC is free software; you can redistribute it and/or modify
> +it under the terms of the GNU General Public License as published by
> +the Free Software Foundation; either version 3, or (at your option)
> +any later version.
> +
> +GCC is distributed in the hope that it will be useful,
> +but WITHOUT ANY WARRANTY; without even the implied warranty of
> +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +GNU General Public License for more details.
> +
> +You should have received a copy of the GNU General Public License
> +along with GCC; see the file COPYING3.  If not see
> +<http://www.gnu.org/licenses/>.  */
> +
> +/* LoongArch external variables defined in loongarch.c.  */

loongarch.cc now :-)

> […]
> +/* Support for a compile-time default CPU, et cetera.  The rules are:
> +   --with-divide is ignored if -mdivide-traps or -mdivide-breaks are
> +     specified.  */
> +#define OPTION_DEFAULT_SPECS \
> +  {"divide", "%{!mdivide-traps:%{!mdivide-breaks:-mdivide-%(VALUE)}}"},

Loongson doesn't have the -mdivide-* options, so I think this can
be deleted.

> […]
> +/* This definition replaces the formerly used 'm' constraint with a
> +   different constraint letter in order to avoid changing semantics of
> +   the 'm' constraint when accepting new address formats in
> +   TARGET_LEGITIMATE_ADDRESS_P.  The constraint letter defined here
> +   must not be used in insn definitions or inline assemblies.  */
> +#define TARGET_MEM_CONSTRAINT 'w'

Do you need to do this for a new port like Loongson?  It looks like
TARGET_LEGITIMATE_ADDRESS_P accepts all valid forms of address,
including reg+reg (good!), so shouldn't "m" accept them too?

If there is already existing code that assumes that "m" is never indexed
then the definition obviously makes sense.  But if you don't know of any
such code then it would be better to make "m" accept all the things that
TARGET_LEGITIMATE_ADDRESS_P does (by removing this definition).

> +
> +/* Tell collect what flags to pass to nm.  */
> +#ifndef NM_FLAGS
> +#define NM_FLAGS "-Bn"
> +#endif
> +
> +/* SUBTARGET_ASM_DEBUGGING_SPEC handles passing debugging options to
> +   the assembler.  It may be overridden by subtargets.  */
> +
> +#ifndef SUBTARGET_ASM_DEBUGGING_SPEC
> +#define SUBTARGET_ASM_DEBUGGING_SPEC "\
> +%{g} %{g0} %{g1} %{g2} %{g3} \
> +%{ggdb:-g} %{ggdb0:-g0} %{ggdb1:-g1} %{ggdb2:-g2} %{ggdb3:-g3} \
> +%{gstabs:-g} %{gstabs0:-g0} %{gstabs1:-g1} %{gstabs2:-g2} %{gstabs3:-g3} \
> +%{gstabs+:-g} %{gstabs+0:-g0} %{gstabs+1:-g1} %{gstabs+2:-g2} %{gstabs+3:-g3}"

It doesn't look like this and %(subtarget_asm_debugging_spec)
(from EXTRA_SPECS) are used.  That's a good thing, since the stabs
stuff shouldn't be needed.

Probably best to remove this definition and the EXTRA_SPECS entry.

> +#endif
> […]
> +/* The number of consecutive floating-point registers needed to store the
> +   largest format supported by the FPU.  */
> +#define MAX_FPRS_PER_FMT 1
> +
> +/* The number of consecutive floating-point registers needed to store the
> +   smallest format supported by the FPU.  */
> +#define MIN_FPRS_PER_FMT 1

It might be clearer to drop this abstraction for Loongson, given that
it doesn't have the MIPS restrictions around 32-bit FPR pairs.

> […]
> +#define CALLEE_SAVED_REG_NUMBER(REGNO) \
> +  ((REGNO) >= 22 && (REGNO) <= 31 ? (REGNO) -22 : -1)

Formatting nit: missing space between “-” and “22”.

> […]
> +enum reg_class
> +{
> +  NO_REGS,     /* no registers in set  */
> +  SIBCALL_REGS,        /* SIBCALL_REGS  */
> +  JIRL_REGS,   /* JIRL_REGS  */

These two comments don't really say anything.  It would be worth
describing them briefly in words.  E.g.: I assume SIBCALL_REGS
is something like:

  /* registers that can hold an indirect sibcall address.  */

> +  GR_REGS,     /* integer registers  */
> +  CSR_REGS,    /* integer registers except for $r0 and $r1 for lcsr.  */

Normally it's better to list supersets after subsets, so I think these
two should be the other way around.  Swapping them would mean updating
the register class macros too, of course.
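I.e. something like:

```c
  CSR_REGS,    /* integer registers except $r0 and $r1, for lcsr  */
  GR_REGS,     /* integer registers  */
```

with REG_CLASS_NAMES and REG_CLASS_CONTENTS reordered to match.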

> +  FP_REGS,     /* floating point registers  */
> +  FCC_REGS,    /* status registers (fp status)  */
> +  FRAME_REGS,          /* arg pointer and frame pointer  */
> +  ALL_REGS,    /* all registers  */
> +  LIM_REG_CLASSES /* max value + 1  */
> +};
> […]
> +#define CONST_LOW_PART(VALUE) ((VALUE) -CONST_HIGH_PART (VALUE))

Formatting nit: missing space after “-”.

> […]
> +#define ELIMINABLE_REGS \
> +  { \
> +    {ARG_POINTER_REGNUM, STACK_POINTER_REGNUM}, \
> +      {ARG_POINTER_REGNUM, HARD_FRAME_POINTER_REGNUM}, \
> +      {FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM}, \
> +      {FRAME_POINTER_REGNUM, HARD_FRAME_POINTER_REGNUM}, \

Very minor, sorry, but: these last three lines look over-indented.

> +  }
> […]
> +/* This structure has to cope with two different argument allocation
> +   schemes.  Most LoongArch ABIs view the arguments as a structure, of which
> +   the first N words go in registers and the rest go on the stack.  If I
> +   < N, the Ith word might go in Ith integer argument register or in a
> +   floating-point register.  For these ABIs, we only need to remember
> +   the offset of the current argument into the structure.
> +
> +   So for the standard ABIs, the first N words are allocated to integer
> +   registers, and loongarch_function_arg decides on an argument-by-argument
> +   basis whether that argument should really go in an integer register,
> +   or in a floating-point one.  */

Does the comment apply to Loongson?  It seemed like there was only
really one allocation scheme (excepting soft-float vs. hard-float,
but I don't think the comment was referring to that).

> +
> +typedef struct loongarch_args
> +{
> +  /* Number of integer registers used so far, up to MAX_ARGS_IN_REGISTERS.  */
> +  unsigned int num_gprs;
> +
> +  /* Number of floating-point registers used so far, likewise.  */
> +  unsigned int num_fprs;
> +
> +} CUMULATIVE_ARGS;
> +
> […]
> +/* Output assembler code to FILE to increment profiler label # LABELNO
> +   for profiling a function entry.  */
> +
> +#define MCOUNT_NAME "_mcount"

This comment looks like a remnant from when profiling code was
injected as asm text.  I think it can be deleted now.

> […]
> +#define CASE_VECTOR_MODE Pmode
> +
> +/* Only use short offsets if their range will not overflow.  */
> +#define CASE_VECTOR_SHORTEN_MODE(MIN, MAX, BODY) Pmode

The comment doesn't seem to match the code.

> […]
> +struct GTY (()) machine_function
> +{
> +  /* The next floating-point condition-code register to allocate
> +     for 8CC targets, relative to FCC_REG_FIRST.  */
> +  unsigned int next_fcc;
> +
> +  /* The number of extra stack bytes taken up by register varargs.
> +     This area is allocated by the callee at the very top of the frame.  */
> +  int varargs_size;
> +
> +  /* The current frame information, calculated by loongarch_compute_frame_info.
> +   */
> +  struct loongarch_frame_info frame;
> +
> +  /* True if at least one of the formal parameters to a function must be
> +     written to the frame header (probably so its address can be taken).  */
> +  bool does_not_use_frame_header;
> +
> +  /* True if none of the functions that are called by this function need
> +     stack space allocated for their arguments.  */
> +  bool optimize_call_stack;
> +
> +  /* True if one of the functions calling this function may not allocate
> +     a frame header.  */
> +  bool callers_may_not_allocate_frame;

The last three fields seem to be unused.

> +};
> +#endif
> +
> +/* As on most targets, we want the .eh_frame section to be read-only where
> +   possible.  And as on most targets, this means two things:
> +
> +     (a) Non-locally-binding pointers must have an indirect encoding,
> +      so that the addresses in the .eh_frame section itself become
> +      locally-binding.
> +
> +     (b) A shared library's .eh_frame section must encode locally-binding
> +      pointers in a relative (relocation-free) form.
> +
> +   However, LoongArch has traditionally not allowed directives like:
> +
> +     .long   x-.
> +
> +   in cases where "x" is in a different section, or is not defined in the
> +   same assembly file.  We are therefore unable to emit the PC-relative
> +   form required by (b) at assembly time.

Is this really true for LoongArch?  It should be possible to use a more
efficient definition on modern targets.
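If the assembler and linker do handle cross-section differences (as modern toolchains generally do), then something along these lines — an untested sketch, using flag_pic the way several other ports do — would keep .eh_frame read-only without relying on linker conversion:

```c
#define ASM_PREFERRED_EH_DATA_FORMAT(CODE, GLOBAL) \
  (((GLOBAL) ? DW_EH_PE_indirect : 0) \
   | (flag_pic ? DW_EH_PE_pcrel : DW_EH_PE_absptr))
```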

> +
> +   Fortunately, the linker is able to convert absolute addresses into
> +   PC-relative addresses on our behalf.  Unfortunately, only certain
> +   versions of the linker know how to do this for indirect pointers,
> +   and for personality data.  We must fall back on using writable
> +   .eh_frame sections for shared libraries if the linker does not
> +   support this feature.  */
> +#define ASM_PREFERRED_EH_DATA_FORMAT(CODE, GLOBAL) \
> +  (((GLOBAL) ? DW_EH_PE_indirect : 0) | DW_EH_PE_absptr)
> +
> +/* Several named LoongArch patterns depend on Pmode.  These patterns have the
> +   form <NAME>_si for Pmode == SImode and <NAME>_di for Pmode == DImode.
> +   Add the appropriate suffix to generator function NAME and invoke it
> +   with arguments ARGS.  */
> +#define PMODE_INSN(NAME, ARGS) \
> +  (Pmode == SImode ? NAME##_si ARGS : NAME##_di ARGS)

FWIW, the "@" .md notation mentioned above would be a more modern
way of doing this.
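I.e., instead of separate <NAME>_si/<NAME>_di patterns plus PMODE_INSN, a single pattern can take the mode as a parameter.  A toy example (not from the patch; assumes a GPR mode iterator covering SI/DI):

```
;; One parameterised pattern instead of *_si/*_di twins.
(define_insn "@indirect_jump<mode>"
  [(set (pc) (match_operand:GPR 0 "register_operand" "r"))]
  ""
  "jr\t%0")
```

Callers then write gen_indirect_jump (Pmode, target) instead of PMODE_INSN (gen_indirect_jump, (target)).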

Thanks,
Richard
