================
@@ -43,11 +85,28 @@ mlir::Value CIRGenFunction::emitX86BuiltinExpr(unsigned builtinID,
// Find out if any arguments are required to be integer constant expressions.
assert(!cir::MissingFeatures::handleBuiltinICEArguments());
+ // The operands of the builtin call
+ llvm::SmallVector<mlir::Value, 4> ops;
+
+ // `ICEArguments` is a bitmap; the i-th bit indicates whether the i-th
+ // argument is required to be a constant integer expression.
+ unsigned ICEArguments = 0;
+ ASTContext::GetBuiltinTypeError error;
+ getContext().GetBuiltinType(builtinID, error, &ICEArguments);
+ assert(error == ASTContext::GE_None && "Error while getting builtin type.");
+
+ const unsigned numArgs = e->getNumArgs();
+ for (unsigned i = 0; i != numArgs; i++) {
+ ops.push_back(emitScalarOrConstFoldImmArg(ICEArguments, i, e));
+ }
+
switch (builtinID) {
default:
return {};
case X86::BI_mm_prefetch:
+ return emitPrefetch(*this, e, ops[0], getIntValueFromConstOp(ops[1]));
----------------
andykaylor wrote:
Oh, I see what's going on now. The version I showed above was from
`clang/lib/Headers/ppc_wrappers/xmmintrin.h` and so isn't the definition we
normally find. In the usual version (`clang/lib/Headers/xmmintrin.h`) I find
this:
```
#ifndef _MSC_VER
// If _MSC_VER is defined, we use the builtin variant of _mm_prefetch.
// Otherwise, we provide this macro, which includes a cast, allowing the user
// to pass a pointer of any type. The _mm_prefetch accepts char to match MSVC.
/// Loads one cache line of data from the specified address to a location
/// closer to the processor.
///
/// \headerfile <x86intrin.h>
///
/// \code
/// void _mm_prefetch(const void *a, const int sel);
/// \endcode
///
/// This intrinsic corresponds to the <c> PREFETCHNTA </c> instruction.
///
/// \param a
/// A pointer to a memory location containing a cache line of data.
/// \param sel
/// A predefined integer constant specifying the type of prefetch
/// operation: \n
/// _MM_HINT_NTA: Move data using the non-temporal access (NTA) hint. The
/// PREFETCHNTA instruction will be generated. \n
/// _MM_HINT_T0: Move data using the T0 hint. The PREFETCHT0 instruction will
/// be generated. \n
/// _MM_HINT_T1: Move data using the T1 hint. The PREFETCHT1 instruction will
/// be generated. \n
/// _MM_HINT_T2: Move data using the T2 hint. The PREFETCHT2 instruction will
/// be generated.
#define _mm_prefetch(a, sel) (__builtin_prefetch((const void *)(a), \
((sel) >> 2) & 1, (sel) & 0x3))
#endif
```
So, we hit the `BI_mm_prefetch` builtin if and only if `_MSC_VER` is defined
(otherwise, the macro above maps it to the general `__builtin_prefetch`).
Note that in the test you linked, several of the RUN lines contain
`-triple=x86_64-windows-msvc` and `-fms-compatibility`, which will cause us to
fall back on the `BI_mm_prefetch` builtin rather than mapping directly to
`__builtin_prefetch`.
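For example (a hypothetical test input, not something from this patch), a call
like the one below only reaches `X86::BI_mm_prefetch` when `_MSC_VER` is
defined (e.g. an `x86_64-windows-msvc` triple with `-fms-compatibility`);
otherwise the macro above rewrites it to `__builtin_prefetch` before CodeGen
ever sees it:
```
// Hypothetical example, assuming the usual xmmintrin.h hint values
// (_MM_HINT_T0 == 3).
#include <xmmintrin.h>

void warm_line(const char *p) {
  // Non-MSVC: the macro expands to __builtin_prefetch(p, /*rw=*/0, /*locality=*/3).
  // MSVC mode: this is the X86::BI_mm_prefetch builtin handled in the patch above.
  _mm_prefetch(p, _MM_HINT_T0);
}
```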
I also noticed that there are `_m_prefetch` and `_m_prefetchw` builtins that
were added [here](https://github.com/llvm/llvm-project/pull/138360) and are
missing from the incubator.
We're missing a lot of support for Windows, but it's mostly ABI-related
things, so the builtin handling may work. However, we should be generating the
`cir.prefetch` operation rather than a call to the LLVM intrinsic. I'd
suggest moving this into a separate PR.
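For reference, here's a rough sketch (from memory of classic CodeGen's
`BI_mm_prefetch` handling in CGBuiltin.cpp, not from this patch; exact details
may differ) of how the `sel` operand gets decoded before calling the LLVM
`prefetch` intrinsic. The CIR path would want the same decoding, but building
a `cir.prefetch` op instead of the intrinsic call:
```
// Sketch only; variable names follow clang/lib/CodeGen conventions.
llvm::Value *Address = Ops[0];
auto *C = cast<llvm::ConstantInt>(Ops[1]);
// sel packs both fields: bit 2 selects read vs. write, bits 0-1 the locality.
llvm::Value *RW = llvm::ConstantInt::get(Int32Ty, (C->getZExtValue() >> 2) & 0x1);
llvm::Value *Locality = llvm::ConstantInt::get(Int32Ty, C->getZExtValue() & 0x3);
llvm::Value *Data = llvm::ConstantInt::get(Int32Ty, 1); // data, not instruction, prefetch
llvm::Function *F = CGM.getIntrinsic(llvm::Intrinsic::prefetch, Address->getType());
return Builder.CreateCall(F, {Address, RW, Locality, Data});
```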
https://github.com/llvm/llvm-project/pull/167401
_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits