Messages by Thread
Re: [PR] [Relax] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
[PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
[PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
[GH] (tvm-ffi/2025-12-28/docs-release-process): Workflow run "CI" failed!
GitBox
[PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
[PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
[I] [RESULT][VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [RESULT][VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
[PR] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
[PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
[PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
[I] [Bug] Segfault in `tvm.compile` on **LLVM (CPU) target** when `tir.ptx_ldg32=1`: unexpectedly runs `tir::transform::InjectPTXLDG32` / `PTXRewriter` and crashes in `BufferStore` [tvm]
via GitHub
[PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
Re: [PR] [Relax] Replaced call_pure_packed with tensor_to_shape operator [tvm]
via GitHub
[GH] (tvm/onnx-edge-pad): Workflow run "CI" is working again!
GitBox
[PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
Re: [PR] [Relax] Add gpu-generic fallback for unrecognized GPU targets [tvm]
via GitHub
[PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
Re: [PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
Re: [PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
Re: [PR] [Relax] Refactoring Duplicate cuBLAS/hipBLAS Tests [tvm]
via GitHub
[PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
Re: [PR] [Relax] Remove duplicated test case: test_if_branch_var_scope [tvm]
via GitHub
[I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Feature Request] Support mapping from C++ to DLDataType [tvm-ffi]
via GitHub
Re: [I] [Bug] MetaSchedule tune_tir crashes with ScheduleError in RewriteFuseSplitParallelVectorize [tvm]
via GitHub
Re: [I] [Bug] MetaSchedule tune_tir crashes with ScheduleError in RewriteFuseSplitParallelVectorize [tvm]
via GitHub
[I] [Bug] Segfault in `tvm.compile` (Relax→TIR, CUDA target) inside `tir::transform::InjectPTXLDG32` / `PTXRewriter::VisitStmt_(BufferStore)` when compiling `torch.export` model returning `(tril, triu)` tuple [tvm]
via GitHub
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[GH] (tvm/2025-12-24/change-type-key-func-name): Workflow run "CI" failed!
GitBox
[PR] [BREAKING CHANGE] Prefix all type keys and function names with "tvm." [tvm]
via GitHub
Re: [PR] [BREAKING CHANGE] Prefix all type keys and function names with "tvm." [tvm]
via GitHub
Re: [PR] [BREAKING CHANGE] Prefix all type keys and function names with "tvm." [tvm]
via GitHub
[GH] (tvm/junrus/2025-12-24/update-tvm-ffi): Workflow run "CI" failed!
GitBox
[GH] (tvm/junrus/2025-12-24/update-tvm-ffi): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-12-24/use-slots): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-12-24/use-slots): Workflow run "CI" failed!
GitBox
[PR] feat: Restrict `__slots__=()` for all subclasses of `tvm_ffi.Object` [tvm-ffi]
via GitHub
Re: [PR] feat: Restrict `__slots__=()` for all subclasses of `tvm_ffi.Object` [tvm-ffi]
via GitHub
Re: [PR] feat: Restrict `__slots__=()` for all subclasses of `tvm_ffi.Object` [tvm-ffi]
via GitHub
[PR] Update TVM-FFI to v0.1.6 [tvm]
via GitHub
Re: [PR] Update TVM-FFI to v0.1.6 [tvm]
via GitHub
Re: [PR] Integrate with `tvm-ffi-stubgen` [tvm]
via GitHub
Re: [PR] Integrate with `tvm-ffi-stubgen` [tvm]
via GitHub
[PR] [WIP] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] [WIP] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] [WIP] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] doc: Tensor and DLPack [tvm-ffi]
via GitHub
Re: [PR] doc: Tensor and DLPack [tvm-ffi]
via GitHub
[PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
Re: [PR] [Relax] Fix batch normalization computation logic [tvm]
via GitHub
[I] [Bug] Resize N-D import failure: TVM only supports 4D resize2d, but ONNX Resize supports N-D tensors [tvm]
via GitHub
[I] [Bug] PRelu import/compile fails when slope initializer is broadcastable rank-2 (1,1): topi.nn.prelu requires 1-D slope [tvm]
via GitHub
[I] [Bug] PRelu with 1-D input fails to import in Relax: relax.nn.prelu uses axis=1 out of range [tvm]
via GitHub
[I] [Bug] ONNX Cast treats NaN inconsistently in TVM LLVM codegen: Constant(NaN)->True but computed NaN->False [tvm]
via GitHub
[I] Why does @c_class require Python field annotations, but not method declarations? [tvm-ffi]
via GitHub
Re: [I] Why does @c_class require Python field annotations, but not method declarations? [tvm-ffi]
via GitHub
Re: [I] Why does @c_class require Python field annotations, but not method declarations? [tvm-ffi]
via GitHub
Re: [I] Why does @c_class require Python field annotations, but not method declarations? [tvm-ffi]
via GitHub
[PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Add support for Programmatic Dependent Kernel Launch (PDL) in TVM CUDA FFI [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Extend kernel launch config to support Programmatic Dependent Launch and cuLaunchCooperativeKernel [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Extend kernel launch config to support Programmatic Dependent Launch and cuLaunchCooperativeKernel [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Extend kernel launch config to support Programmatic Dependent Launch and cuLaunchCooperativeKernel [tvm]
via GitHub
Re: [PR] [CUDA][FFI] Extend kernel launch config to support Programmatic Dependent Launch and cuLaunchCooperativeKernel [tvm]
via GitHub
[I] [Bug] TVM crashes: InternalError: LLVM module verification failed for LayerNormalization [tvm]
via GitHub
[I] [Bug] TVM crashes: InternalError: LLVM module verification failed for ArgMax and ArgMin [tvm]
via GitHub
[PR] chore: Suppress latest clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] chore: Suppress latest clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] chore: Suppress latest clang-tidy warnings [tvm-ffi]
via GitHub
Re: [PR] fix(lint): Suppress latest clang-tidy warnings [tvm-ffi]
via GitHub
[I] [Bug] inconsistent shapes of results produced by TVM and ONNXRuntime due to the ConvTranspose operator [tvm]
via GitHub
Re: [I] [Bug] inconsistent shapes of results produced by TVM and ONNXRuntime due to the ConvTranspose operator [tvm]
via GitHub
[PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] fix(build): Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] fix(build): Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] fix(build): Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
Re: [PR] fix(build): Link dl and pthread dependencies for tvm_ffi [tvm-ffi]
via GitHub
[I] [Bug] Error converting operator ConvTranspose: InternalError: In Op(relax.add), the first input shape at dim 1 is T.int64(16) and the second input shape at dim 1 is T.int64(32), which are not broadcastable. [tvm]
via GitHub
[I] [Bug] Segfault in `tvm.compile (Relax, target=llvm)` during TIR pass `InjectPTXLDG32` / `PTXRewriter::VisitStmt_(BufferStore)` even though target is CPU-only [tvm]
via GitHub
[I] [Bug] TVM produces wrong results due to the PRelu operator [tvm]
via GitHub
[I] [Tracking Issue] Dogfood `tvm-ffi-stubgen` in TVM Codebase [tvm]
via GitHub
[I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
[PR] doc: Final tweaks on Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Final tweaks on Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Final tweaks on Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Final tweaks on Python Packaging [tvm-ffi]
via GitHub
[Apache TVM Discuss] [Questions] torch.nn.Dropout is not converted into Relax
Yang Yuanzhao via Apache TVM Discuss
[I] [Bug] TVM crashes when the size of the shape of slope tensor for PRelu operator is larger than 1 [tvm]
via GitHub
[I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
Re: [I] [BUG] int64_t cannot represent the full range of size_t [tvm-ffi]
via GitHub
[I] [Bug] InternalError: LLVM module verification failed for Hardmax operator [tvm]
via GitHub
[I] [Bug] AveragePool operator produces wrong shape when ceil mode is set to 1 [tvm]
via GitHub
Re: [I] [Bug] AveragePool operator produces wrong shape when ceil mode is set to 1 [tvm]
via GitHub
[I] How far are we from `@dataclass` [tvm-ffi]
via GitHub
[PR] doc: Improve python packaging doc [tvm-ffi]
via GitHub
Re: [PR] doc: Improve python packaging doc [tvm-ffi]
via GitHub
Re: [PR] doc: Improve python packaging doc [tvm-ffi]
via GitHub
[PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
Re: [PR] build: Rename Import Targets [tvm-ffi]
via GitHub
[PR] chore: Run examples on `main` commits [tvm-ffi]
via GitHub
Re: [PR] chore: Run examples on `main` commits [tvm-ffi]
via GitHub
Re: [PR] chore: Run examples on `main` commits [tvm-ffi]
via GitHub
Re: [PR] chore: Run examples on `main` commits [tvm-ffi]
via GitHub
Re: [PR] chore: Run examples on `main` commits [tvm-ffi]
via GitHub
[GH] (tvm-ffi/2025-12-18/doc): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-12-18/doc): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-12-18/doc): Workflow run "CI" failed!
GitBox