rjmccall added a comment.

In D108643#3000556 <https://reviews.llvm.org/D108643#3000556>, @erichkeane 
wrote:

> In D108643#3000540 <https://reviews.llvm.org/D108643#3000540>, @rjmccall 
> wrote:
>
>> 
>
> I don't work on the microcode; it is just what I was told when we asked about 
> this.  So until someone can clarify, I have no idea.
>
> Again, it was an argument made at the time that is outside of my direct 
> expertise, so if you have experience with mixed FPGA/traditional core 
> interfaces, I'll have to defer to your expertise.
>
> Again, at the time, my FPGA-CPU interconnect experts expressed issue with 
> making the extra bits 0, and it is filtered through my memory / the "ELI5" 
> explanation that was given to me, so I apologize it didn't come through 
> correctly.

Okay.  Sorry if I came down on you personally, I know what it's like to be in 
the middle on things like this.

>> I have a lot of concerns about turning "whatever LLVM does when you pass an 
>> i17 as an argument" into platform ABI.  My experience is that LLVM does a 
>> lot of things that you wouldn't expect when you push it outside of simple 
>> cases like power-of-two integers.  Different targets may even use different 
>> rules, because the IR specification doesn't define this stuff.
>
> That seems like a better argument for leaving them unspecified, I would think. 
>  If we can't count on our backends to act consistently, then it is obviously 
> going to be some level of behavior change / perf hit to force a decision on 
> them.

Hmm.  I did some experiments, and it looks like there's an inconsistency in a 
different way than I remembered.  All the backends I tested seem to treat an 
`iN` parameter without the `zeroext` or `signext` attribute as only having `N` 
valid bits.  However, they also seem to assume that an `iN` will always be 
zero-padded in memory.  For example, this module:

  target datalayout = "e-m:o-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
  target triple = "x86_64-apple-macosx11.0.0"
  
  @a = global i29 0, align 4
  
  define i32 @foo() local_unnamed_addr {
  entry:
    %a = load i29, i29* @a, align 4
    %r = zext i29 %a to i32
    ret i32 %r
  }

compiles to:

  _foo:
        movl    _a(%rip), %eax
        retq

So if you're generating `iN` without extension attributes for parameters, and 
you're doing loads and stores of `iN`, then the effective ABI is that the high 
bits of arguments (at least in registers?) are undefined, but the high bits of 
values in memory are defined to be zero, even for signed types.  That, uh, 
makes zero sense as an ABI: I can see no reason to treat these cases 
differently, and if we're going to assume extension, it should certainly match 
the signedness of the type.  So I think *something* has to change in the 
implementation here.
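Just to spell out what "extension matching the signedness" would mean in
practice, here is a sketch in C (the helper name is made up, nothing in the
patch defines it): if the in-memory convention for a signed `_BitInt(29)` did
*not* guarantee anything about the 3 padding bits, a reader would have to
recover the value with a shift-up/arithmetic-shift-down pair rather than
trusting zero-padding:

  #include <assert.h>
  #include <stdint.h>
  
  /* Hypothetical helper: recover a signed 29-bit value from a 32-bit
   * memory slot without trusting the 3 padding bits.  Shifting the low
   * 29 bits to the top and arithmetic-shifting back sign-extends the
   * value regardless of what the padding contains.  (Right shift of a
   * negative int is implementation-defined in C but arithmetic on every
   * target LLVM supports.) */
  static int32_t read_sbitint29(uint32_t slot) {
      return (int32_t)(slot << 3) >> 3;
  }
  
  int main(void) {
      /* -1 as a 29-bit value, with garbage in the padding bits. */
      uint32_t slot = 0x1FFFFFFFu | 0xA0000000u;
      assert(read_sbitint29(slot) == -1);
      /* A positive value, padding again nonzero. */
      slot = 42u | 0xE0000000u;
      assert(read_sbitint29(slot) == 42);
      return 0;
  }

The point being: if the convention instead guarantees sign-extended storage,
the load is just a plain 32-bit load, which is why matching the signedness of
the type is the natural choice.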

I'm not sure if there's a way to get LLVM to treat loaded values as only having 
N valid bits.

Do you have resources on the patterns of code that you expect to see for 
`_BitInt` types?  Like, what operations are most important here?

If addition, subtraction, and comparison are the most important operations — 
especially if we don't consider shifts or multiplication important — the best 
ABI might actually be to keep the value left-shifted.
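To illustrate the left-shifted idea with a sketch in C (this is not anything
the patch implements, and `SHIFT`/the helper names are invented): if a 29-bit
value occupies the top 29 bits of a 32-bit word, two's-complement addition,
subtraction, and signed comparison all work directly on the shifted
representation, because overflow out of the top bit is exactly 29-bit
wraparound; only operations that expose the bottom bits, like right shifts or
division, need a fixup first:

  #include <assert.h>
  #include <stdint.h>
  
  /* Hypothetical left-shifted representation of a signed _BitInt(29)
   * in a 32-bit word: the value occupies bits 3..31, bits 0..2 are 0. */
  #define SHIFT 3
  
  static uint32_t to_shifted(int32_t v)    { return (uint32_t)v << SHIFT; }
  static int32_t  from_shifted(uint32_t r) { return (int32_t)r >> SHIFT; }
  
  int main(void) {
      uint32_t a = to_shifted(268435455);   /* 2^28 - 1, max signed 29-bit */
      uint32_t b = to_shifted(1);
  
      /* Addition wraps through the word's top bit, which is exactly
       * 29-bit two's-complement wraparound: max + 1 == min. */
      assert(from_shifted(a + b) == -268435456);
  
      /* Subtraction and signed comparison also work directly on the
       * shifted representation, with no masking or extension. */
      assert(from_shifted(a - b) == 268435454);
      assert((int32_t)to_shifted(-5) < (int32_t)to_shifted(3));
      return 0;
  }

So for an add/sub/compare-heavy workload the representation costs nothing,
and the price is paid only on the (hopefully rarer) shift-like operations.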


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D108643/new/

https://reviews.llvm.org/D108643

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
