On Wed, Sep 17, 2025 at 9:57 PM Daniel Henrique Barboza <[email protected]> wrote:
> Is this correct? The 'reserved' value you're returning when the new
> extension is enabled
> is the original value from vsetvl:
>
> > +    if (riscv_cpu_cfg(env)->ext_zvfbfa) {
> > +        reserved = vtype & MAKE_64BIT_MASK(R_VTYPE_RESERVED_SHIFT,
> > +                                           xlen - 1 - R_VTYPE_RESERVED_SHIFT);
>
> The original val you removed:
>
> > -    target_ulong reserved = s2 &
> > -                            MAKE_64BIT_MASK(R_VTYPE_RESERVED_SHIFT,
> > -                                            xlen - 1 - R_VTYPE_RESERVED_SHIFT);
>
>
> To preserve the existing behavior I believe you want to negate the
> conditional:
>
> > +    if (!riscv_cpu_cfg(env)->ext_zvfbfa) {
> > +        reserved = vtype & MAKE_64BIT_MASK(R_VTYPE_RESERVED_SHIFT,
> > +                                           xlen - 1 - R_VTYPE_RESERVED_SHIFT);
> > +    } else {
> > +        reserved = vtype & MAKE_64BIT_MASK(R_VTYPE_ALTFMT_SHIFT,
> > +                                           xlen - 1 - R_VTYPE_ALTFMT_SHIFT);
> > +    }
>
>
> i.e. return the existing 'reserved' val if the new extension is absent,
> otherwise return
> the new val.
>
>
> Thanks,
>
> Daniel
>
Hi Daniel,
Yes, I believe that’s correct. With this patch, the reserved field in the
vtype CSR depends on whether the Zvfbfa extension is enabled, as follows:
- When Zvfbfa is enabled:
  - The reserved field spans bits 9 (VTYPE_RESERVED) through XLEN-2
    (bit XLEN-1 is vill, and bit 8 is the ALTFMT field).
- When Zvfbfa is not enabled:
  - The reserved field spans bits 8 (VTYPE_ALTFMT) through XLEN-2,
    i.e. bit 8 must also read as zero.
PS: This commit also modifies the definition of VTYPE_RESERVED. Because
the EDIV extension is not planned to be part of the base V extension,
the VEDIV field is removed and the default RESERVED field definition is
adjusted accordingly.
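For context, the vtype layout this implies could be sketched as follows, using QEMU's FIELD(reg, field, start_bit, width) convention; this is my reading of the layout described above, not a quote from the patch:

```c
/* Sketch of the vtype field layout after removing VEDIV (assumed). */
FIELD(VTYPE, VLMUL,    0, 3)
FIELD(VTYPE, VSEW,     3, 3)
FIELD(VTYPE, VTA,      6, 1)
FIELD(VTYPE, VMA,      7, 1)
FIELD(VTYPE, ALTFMT,   8, 1)   /* new with Zvfbfa; reserved otherwise */
FIELD(VTYPE, RESERVED, 9, sizeof(target_ulong) * 8 - 10)
```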
Reference:
https://github.com/riscvarchive/riscv-v-spec/blob/master/ediv.adoc
Thanks,
Max
>
>
> > +
> > +    return reserved;
> > +}
> > +
> > target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
> > target_ulong s2, target_ulong x0)
> > {
> > @@ -41,12 +57,9 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
> > uint64_t vlmul = FIELD_EX64(s2, VTYPE, VLMUL);
> > uint8_t vsew = FIELD_EX64(s2, VTYPE, VSEW);
> > uint16_t sew = 8 << vsew;
> > - uint8_t ediv = FIELD_EX64(s2, VTYPE, VEDIV);
> > + uint8_t altfmt = FIELD_EX64(s2, VTYPE, ALTFMT);
> > int xlen = riscv_cpu_xlen(env);
> > bool vill = (s2 >> (xlen - 1)) & 0x1;
> > -    target_ulong reserved = s2 &
> > -                            MAKE_64BIT_MASK(R_VTYPE_RESERVED_SHIFT,
> > -                                            xlen - 1 - R_VTYPE_RESERVED_SHIFT);
> > uint16_t vlen = cpu->cfg.vlenb << 3;
> > int8_t lmul;
> >
> > @@ -63,7 +76,13 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
> > }
> > }
> >
> > -    if ((sew > cpu->cfg.elen) || vill || (ediv != 0) || (reserved != 0)) {
> > + if (cpu->cfg.ext_zvfbfa) {
> > + if (altfmt == 1 && vsew >= MO_32) {
> > + vill = true;
> > + }
> > + }
> > +
> > +    if ((sew > cpu->cfg.elen) || vill || (vtype_reserved(env, s2) != 0)) {
> > /* only set vill bit. */
> > env->vill = 1;
> > env->vtype = 0;
>
>