yaoyaoding opened a new pull request, #73:
URL: https://github.com/apache/tvm-ffi/pull/73
This PR adds the `tvm_ffi.cpp.build_inline` utility function.
Example:
```python
import torch
from tvm_ffi import Module
import tvm_ffi.cpp
# define the cpp source code
cpp_source = '''
void add_one_cpu(tvm::ffi::Tensor x, tvm::ffi::Tensor y) {
// implementation of a library function
TVM_FFI_ICHECK(x->ndim == 1) << "x must be a 1D tensor";
DLDataType f32_dtype{kDLFloat, 32, 1};
TVM_FFI_ICHECK(x->dtype == f32_dtype) << "x must be a float tensor";
TVM_FFI_ICHECK(y->ndim == 1) << "y must be a 1D tensor";
TVM_FFI_ICHECK(y->dtype == f32_dtype) << "y must be a float tensor";
TVM_FFI_ICHECK(x->shape[0] == y->shape[0]) << "x and y must have the same shape";
for (int i = 0; i < x->shape[0]; ++i) {
static_cast<float*>(y->data)[i] = static_cast<float*>(x->data)[i] + 1;
}
}
'''
# compile the cpp source code and load the module
lib_path: str = tvm_ffi.cpp.build_inline(
name='hello',
cpp_sources=cpp_source,
functions='add_one_cpu',
# build_directory='./add_one/',  # optionally specify the build directory
)
# load the module
mod: Module = tvm_ffi.load_module(lib_path)
# call the function from the loaded module
x = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float32)
y = torch.empty_like(x)
mod.add_one_cpu(x, y)
torch.testing.assert_close(x + 1, y)
```
The `build_inline` function is similar to `tvm_ffi.cpp.load_inline`, but it only builds the module without loading it. It returns the path to the built shared library (e.g., `~/.cache/tvm-ffi/hello_95b50659cc3e9b6d/hello.so`).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]