mtrofin added a comment.

In D88114#2288749 <https://reviews.llvm.org/D88114#2288749>, @steven_wu wrote:

> In D88114#2288737 <https://reviews.llvm.org/D88114#2288737>, @mtrofin wrote:
>
>> In D88114#2288732 <https://reviews.llvm.org/D88114#2288732>, @steven_wu 
>> wrote:
>>
>>> I am not sure what exactly is expected here. What is your definition of 
>>> pre-optimized bitcode, and how does your test case ensure that? Can you 
>>> explain a bit more for context?
>>
>> Pre-optimized means before the LLVM optimization pipeline runs. That's the 
>> current implementation, and the test explicitly checks that the inlining of 
>> bar into foo doesn't happen.
>>
>> I could add an "alwaysinline" to bar, to further stress that.
>
> I think the current implementation does run optimization passes if the input 
> is a C-family language, and we need to keep it that way (so that we don't 
> redo most of the optimization). The reason you don't see them running is 
> that you are using IR as input. For Apple's implementation, we actually 
> pass `-disable-llvm-passes` when the input is IR to ensure no optimization 
> passes run.

Afaik, today's implementation has 2 parts: the driver and cc1. The cc1 part 
always emits the bitcode before the optimization passes run. The driver part, 
upon seeing -fembed-bitcode, splits compilation into 2 stages. Stage 1 runs 
the optimization passes and emits bitcode to a file (-emit-llvm); it does not 
pass -fembed-bitcode down to cc1. Stage 2 takes the output from stage 1, 
disables optimizations, and adds -fembed-bitcode. Together, this gives the 
semantics you mentioned, but it happens that if you skip the driver and pass 
-fembed-bitcode directly to cc1, you get the pre-optimization bitcode, which 
helps my scenario.
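To make the two-stage split concrete, here is a sketch of how one might observe it from the command line. The exact cc1 flags the driver emits vary by platform and clang version, so treat these invocations as illustrative rather than authoritative:

```shell
# -### asks the driver to print the jobs it would run without executing them.
# With -fembed-bitcode it shows two cc1 jobs: an optimizing stage that emits
# bitcode (-emit-llvm-bc), followed by a second stage that compiles that
# bitcode with optimizations disabled and -fembed-bitcode=all set.
clang -### -fembed-bitcode -c foo.c

# Bypassing the driver and handing -fembed-bitcode straight to cc1 skips the
# first (optimizing) stage, so the embedded bitcode is the pre-optimization IR:
clang -cc1 -emit-obj -fembed-bitcode=all foo.c -o foo.o
```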


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D88114/new/

https://reviews.llvm.org/D88114

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits