On Fri, Apr 12, 2019 at 3:30 PM Uros Bizjak <ubiz...@gmail.com> wrote:
>
> On Fri, Apr 12, 2019 at 9:09 AM Liu, Hongtao <hongtao....@intel.com> wrote:
> >
> > Hi:
> >     This patch enables support for bfloat16, which will be available in
> > the future Cooper Lake. Please refer to
> > https://software.intel.com/en-us/download/intel-architecture-instruction-set-extensions-programming-reference
> > for more details about BF16.
> >
> > There are 3 instructions for AVX512BF16: VCVTNE2PS2BF16, VCVTNEPS2BF16 and
> > VDPBF16PS, which are Vector Neural Network Instructions supporting:
> >
> > -       VCVTNE2PS2BF16: Convert Two Packed Single Data to One Packed BF16 
> > Data.
> > -       VCVTNEPS2BF16: Convert Packed Single Data to Packed BF16 Data.
> > -       VDPBF16PS: Dot Product of BF16 Pairs Accumulated into Packed Single 
> > Precision.
> >
> > Since only BF16 intrinsics are supported, we treat it as HI for simplicity.
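
For reference, here is a minimal usage sketch of the 512-bit intrinsics; the
names and types (__m512bh, _mm512_cvtne2ps_pbh, _mm512_dpbf16_ps) are assumed
to follow the Intel intrinsics guide, so treat this as an illustration rather
than the final API:

/* Build with e.g. gcc -O2 -mavx512bf16; needs AVX512BF16 hardware or an
   emulator to run.  */
#include <immintrin.h>

/* One step of a dot product: convert 32 floats from each input to BF16
   with VCVTNE2PS2BF16, then let VDPBF16PS multiply the BF16 pairs and
   accumulate into the packed-single accumulator.  */
__m512
bf16_dot_step (__m512 acc, const float *a, const float *b)
{
  __m512bh abh = _mm512_cvtne2ps_pbh (_mm512_loadu_ps (a + 16),
                                      _mm512_loadu_ps (a));
  __m512bh bbh = _mm512_cvtne2ps_pbh (_mm512_loadu_ps (b + 16),
                                      _mm512_loadu_ps (b));
  return _mm512_dpbf16_ps (acc, abh, bbh);
}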
>
> I think it was a mistake declaring cvtps2ph and cvtph2ps using HImode
> instead of HFmode. Is there a compelling reason not to introduce
> corresponding bf16_format supporting infrastructure and declare these
> intrinsics using half-binary (HBmode ?) mode instead?
>
> Uros.

Bfloat16 isn't the IEEE standard format that we want to reserve HFmode for.

The IEEE 754 standard specifies a binary16 as having the following format:
Sign bit: 1 bit
Exponent width: 5 bits
Significand precision: 11 bits (10 explicitly stored)

Bfloat16 has the following format:
Sign bit: 1 bit
Exponent width: 8 bits
Significand precision: 8 bits (7 explicitly stored), as opposed to 24
bits in a classical single-precision floating-point format
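
To make the layout difference concrete, here is a minimal sketch (not part of
the patch) that converts one binary32 float to its bfloat16 bit pattern with
round-to-nearest-even, which is roughly what VCVTNEPS2BF16 does per element
(NaN quieting omitted):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

static uint16_t
float_to_bf16 (float f)
{
  uint32_t bits;
  memcpy (&bits, &f, sizeof bits);
  /* Round to nearest even on the 16 low bits that get dropped, then keep
     the upper half: same sign bit, same 8 exponent bits, top 7 of the 23
     mantissa bits.  */
  bits += 0x7fff + ((bits >> 16) & 1);
  return (uint16_t) (bits >> 16);
}

int
main (void)
{
  /* 1.0f/3 is 0x3eaaaaab as binary32; rounding the low half up gives
     0x3eab: sign 0, exponent 0x7d, 7-bit mantissa 0x2b.  */
  printf ("%#06x\n", float_to_bf16 (1.0f / 3.0f));
  return 0;
}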


--
BR,
Hongtao
