leonardchan added inline comments.

================
Comment at: clang/include/clang/Basic/FixedPoint.h:67
+  FixedPointSemantics
+  getCommonSemantics(const FixedPointSemantics &Other) const;
+
----------------
rjmccall wrote:
> leonardchan wrote:
> > rjmccall wrote:
> > > Actually representing the fully-precise value is operation-specific; 
> > > you'd need a different calculation for at least addition and 
> > > multiplication, and maybe also subtraction and (maybe?) division.  Is 
> > > this what you actually want?
> > For addition and subtraction, I believe we just need to extend and shift to 
> > a type that is large enough to hold the larger scale and integral bits, 
> > then we can do a normal add/sub. I think the relational operations can use 
> > this also since we would just need to resize and align scales for them.
> > 
> > For multiplication, the intrinsics we will use also assume the scale for 
> > both operands are the same, which I believe will also require finding 
> > semantics large enough to hold the larger scale and integral bits.
> > 
> > ```
> > declare iN @llvm.smul.fix(iN %L, iN %R, i32 %Scale)
> > ```
> > 
> > Division is similar since the saturating and non-saturating intrinsics 
> > assume both operands have the same scale.
> > 
> > ```
> > declare iN @llvm.sdiv.fix(iN %L, iN %R, i32 %Scale)
> > ```
> > 
> > In each of these cases, I believe the common semantics just needs to be 
> > large enough to hold the scale and largest representable value, which is 
> > what this method does.
> Okay, so I believe what you're saying is that `getCommonSemantics` is defined 
> as returning a semantics that is capable of precisely representing both input 
> values, not one that's capable of precisely representing the result of the 
> computation.  The documentation could be clearer about that.
Yup. Updated the documentation.
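To make the agreed-upon definition concrete, here is a minimal sketch of that semantics-merging rule. The struct and field names are hypothetical, not the actual `FixedPointSemantics` API in clang/include/clang/Basic/FixedPoint.h; the point is only that the common semantics must represent both *input* values exactly, not the result of any particular operation:

```cpp
#include <algorithm>

// Hypothetical simplification of a fixed-point semantics:
// total width in bits, number of fractional (scale) bits, signedness.
struct Sema {
  unsigned Width;
  unsigned Scale;
  bool IsSigned;
};

// Return a semantics wide enough to hold both inputs losslessly:
// take the larger scale, the larger integral-bit count, and a sign
// bit if either operand is signed.
Sema getCommonSemantics(Sema A, Sema B) {
  unsigned IntBitsA = A.Width - A.Scale - (A.IsSigned ? 1 : 0);
  unsigned IntBitsB = B.Width - B.Scale - (B.IsSigned ? 1 : 0);
  unsigned Scale = std::max(A.Scale, B.Scale);
  unsigned IntBits = std::max(IntBitsA, IntBitsB);
  bool IsSigned = A.IsSigned || B.IsSigned;
  return {IntBits + Scale + (IsSigned ? 1u : 0u), Scale, IsSigned};
}
```

Note that this is enough for add/sub/compare and for the same-scale requirement of the `llvm.smul.fix`/`llvm.sdiv.fix` intrinsics, but the *result* of, say, a full-precision multiply would still need `ScaleA + ScaleB` fractional bits, which this deliberately does not provide.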


================
Comment at: clang/lib/Sema/SemaExpr.cpp:1304
+    RHSTy = ResultTy;
+  }
+
----------------
rjmccall wrote:
> ebevhan wrote:
> > rjmccall wrote:
> > > Hmm.  So adding a signed integer to an unsigned fixed-point type always 
> > > converts the integer to unsigned?  That's a little weird, but only 
> > > because the fixed-point rule seems to be the other way.  Anyway, I assume 
> > > it's not a bug in the spec.
> > `handleFixedPointConversion` only calculates the result type of the 
> > expression, not the calculation type. The final result type of the 
> > operation will be the unsigned fixed-point type, but the calculation itself 
> > will be done in a signed type with enough precision to represent both the 
> > signed integer and the unsigned fixed-point type. 
> > 
> > Though, if the result ends up being negative, the final result is undefined 
> > unless the type is saturating. I don't think there is a test for the 
> > saturating case (signed integer + unsigned saturating fixed-point) in the 
> > SaturatedAddition tests.
> > `handleFixedPointConversion` only calculates the result type of the 
> > expression, not the calculation type.
> 
> Right, I understand that, but the result type of the expression obviously 
> impacts the expressive range of the result, since you can end up with a 
> negative value.
> 
> Hmm.  With that said, if the general principle is to perform the operation 
> with full precision on the original operand values, why are unsigned 
> fixed-point operands coerced to their corresponding signed types *before* the 
> operation?  This is precision-limiting if the unsigned representation doesn't 
> use a padding bit.  That seems like a bug in the spec.
> Hmm. With that said, if the general principle is to perform the operation 
> with full precision on the original operand values, why are unsigned 
> fixed-point operands coerced to their corresponding signed types *before* the 
> operation? This is precision-limiting if the unsigned representation doesn't 
> use a padding bit. That seems like a bug in the spec.

Possibly. When the standard mentions converting from unsigned to signed fixed 
point, the only requirement involved is conservation of magnitude (the number 
of integral bits, excluding the sign):

```
when signed and unsigned fixed-point types are mixed, the unsigned type is 
converted to the corresponding signed type, and this should go without loss of 
magnitude
```

This is just speculation, but I'm under the impression that not as much 
"attention" was given to unsigned types as to signed types, since "`In the 
embedded processor world, support for unsigned fixed-point data types is rare; 
normally only signed fixed-point data types are supported`", but they were kept 
anyway since unsigned types are widely used.

```
However, to disallow unsigned fixed-point arithmetic from programming languages 
(in general, and from C in particular) based on this observation, seems overly 
restrictive.
```

I also imagine that if the programmer needs more precision, the correct 
approach would be to cast up to a type with a higher scale. The standard seems 
to make an effort to expose as much as possible about the underlying 
fixed-point types:

```
it should be possible to write fixed-point algorithms that are independent of 
the actual fixed-point hardware support. This implies that a programmer (or a 
running program) should have access to all parameters that define the behavior 
of the underlying hardware (in other words: even if these parameters are 
implementation-defined).
```

So the user would at least know that unsigned types may not have the same scale 
as their corresponding signed types if the hardware handles them with different 
scales.

Also added test for signed integer + unsigned saturating fixed point.
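For reference, the calculation-type-vs-result-type behavior discussed above can be sketched in plain C++. This is an illustration, not the patch's codegen: assume a hypothetical 32-bit unsigned saturating fixed-point type with 16 fractional bits, stored in a `uint32_t`. Adding a signed integer is done in a wider signed intermediate, then saturated into the unsigned result type:

```cpp
#include <cstdint>

// Hypothetical unsigned fixed-point layout: 32-bit value, 16 fractional bits.
constexpr unsigned Scale = 16;

// signed int + unsigned saturating fixed-point:
// the computation is carried out in a signed intermediate wide enough
// to represent both operands; only the final result is saturated into
// the unsigned result type's range.
uint32_t satAddIntToUFixed(uint32_t UFixed, int32_t I) {
  // Widen both operands to signed 64-bit at the common scale.
  int64_t Wide = static_cast<int64_t>(UFixed) +
                 (static_cast<int64_t>(I) << Scale);
  // Saturate into [0, UINT32_MAX]; without saturation a negative
  // intermediate would be undefined per the spec discussion above.
  if (Wide < 0)
    return 0;
  if (Wide > static_cast<int64_t>(UINT32_MAX))
    return UINT32_MAX;
  return static_cast<uint32_t>(Wide);
}
```

With this layout, `0.5 + (-1)` goes negative in the intermediate and saturates to `0.0` in the unsigned result, which is the case the new test exercises.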


Repository:
  rC Clang

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D53738/new/

https://reviews.llvm.org/D53738



_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
