# Introduction

Since the last release, the TVM community has worked to deliver the following exciting new improvements!

The main categories are listed below (**bold text indicates areas with significant progress**):

- **Frontend**: PyTorch's ExportedProgram is supported in the Relax frontend ([#17346](https://github.com/apache/tvm/issues/17346))
- Community, RFCs
- AOT, Hexagon, OpenCL & CLML, Web, Metal
- **Relax**, Dlight, Disco
- TIR, TVMScript
- Docs, Docker, CI, Misc, BugFix

For a complete view, please see the full listing of commits: 
[v0.18.dev0...v0.18.0.rc0](https://github.com/apache/tvm/compare/v0.18.dev0...v0.18.0.rc0).

### Community

 * [#17450](https://github.com/apache/tvm/pull/17450) - update contributors

### RFCs

This release includes a new RFC that adds the Android Neural Networks API (NNAPI) as a BYOC backend. NNAPI is a graph-level neural network inference API provided by the Android runtime. Prior to this RFC, TVM on Android mobile devices relied mainly on OpenCL for GPU acceleration. The RFC adds a new codegen and runtime via the BYOC framework, enabling execution on the custom accelerators that SoC vendors ship in mobile devices.

 * [#109](https://github.com/apache/tvm-rfcs/pull/109) - [RFC] NNAPI 
Integration via BYOC

----


### BYOC
 * [#17385](https://github.com/apache/tvm/pull/17385) - [NNAPI] Add NNAPI 
backend for BYOC

### BugFix
 * [#17440](https://github.com/apache/tvm/pull/17440) - [TIR][Schedule] 
TileWithTensorIntrin skip ComputeInline if bu…
 * [#17419](https://github.com/apache/tvm/pull/17419) - [FFI] Grab GIL when checking env signals
 * [#17403](https://github.com/apache/tvm/pull/17403) - [Fix][LLVM] Fix 
getHostCPUFeatures LLVM version cutoff
 * [#17383](https://github.com/apache/tvm/pull/17383) - [ONNX] Skip constant If 
node generated by PyTorch
 * [#17360](https://github.com/apache/tvm/pull/17360) - [FIX] fix bug when 
normalize iter with different lower bounds
 * [#17148](https://github.com/apache/tvm/pull/17148) - [Relax] Preserve 
existing DataflowBlock in ConvertToDataflow
 * [#17345](https://github.com/apache/tvm/pull/17345) - [Fix][Relax] Add the 
missing tree-attn func arg for KV cache creation
 * [#17073](https://github.com/apache/tvm/pull/17073) - [Relax] FCallPacked not checked in CodegenVMTIR
 * [#17315](https://github.com/apache/tvm/pull/17315) - [MSC] Bugfix for strided_slice op
 * [#17335](https://github.com/apache/tvm/pull/17335) - [Relax][PyTorch][Fix] Use `_convert_torch_tensor_to_relax()` where possible
 * [#17330](https://github.com/apache/tvm/pull/17330) - [Relax][PyTorch] Update `layer_norm` converter to support `immutable_list` for `normalized_shape`
 * [#17324](https://github.com/apache/tvm/pull/17324) - [Fix] Remove `tvm.` 
prefix from image name when `./docker/build.sh`
 * [#17308](https://github.com/apache/tvm/pull/17308) - [TVM4J] Fix unhandled return type in JNI
 * [#17307](https://github.com/apache/tvm/pull/17307) - [Fix][TIR] LowerThreadAllreduce warp reduction mask
 * [#17312](https://github.com/apache/tvm/pull/17312) - [Relax] Infer TIR values from shapes inside a tuple
 * [#17292](https://github.com/apache/tvm/pull/17292) - [Relax] Support torch.unbind op and fix bugs for expand and split
 * [#17263](https://github.com/apache/tvm/pull/17263) - [Relax] Preserve dtype in ToMixedPrecision for kNever ops
 * [#17229](https://github.com/apache/tvm/pull/17229) - [Cutlass] Fix Cutlass attention template instantiation bugs
 * [#17121](https://github.com/apache/tvm/pull/17121) - [Relax] Fix a bug in IR construction in a test file
 * [#17142](https://github.com/apache/tvm/pull/17142) - Allow import of TVM 
when current directory is read-only

### CI
 * [#17444](https://github.com/apache/tvm/pull/17444) - [Docs] Upgrade Sphinx
 * [#17425](https://github.com/apache/tvm/pull/17425) - Upgrade CI to Python 3.9
 * [#17410](https://github.com/apache/tvm/pull/17410) - Upgrade unity image tag 
to `20240917-153130-9f281758`
 * [#17409](https://github.com/apache/tvm/pull/17409) - [Windows] Workaround 
for error in FindLLVM
 * [#17397](https://github.com/apache/tvm/pull/17397) - Update image tag to 
20240917-153130-9f281758
 * [#17338](https://github.com/apache/tvm/pull/17338) - Upgrade PyTorch to 2.4.1
 * [#17337](https://github.com/apache/tvm/pull/17337) - Disable NNPACK build and fix error on Android SDK installation
 * [#17355](https://github.com/apache/tvm/pull/17355) - Upgrade GitHub upload-artifact action
 * [#17334](https://github.com/apache/tvm/pull/17334) - [Hexagon] Forward gtest 
tests into pytest as separate tests
 * [#17271](https://github.com/apache/tvm/pull/17271) - Resolve CI compilation 
failures on MacOSX
 * [#17221](https://github.com/apache/tvm/pull/17221) - Reduce logging level 
when checking if docker image exists
 * [#17206](https://github.com/apache/tvm/pull/17206) - Update dummy-variable 
regex for pylint
 * [#17117](https://github.com/apache/tvm/pull/17117) - [CLML] Fix a few CLML regression issues
 * [#17155](https://github.com/apache/tvm/pull/17155) - Remove lint step from 
`unity/pr-head` step

### Disco
 * [#17398](https://github.com/apache/tvm/pull/17398) - Enable float8 data type 
in disco
 * [#17275](https://github.com/apache/tvm/pull/17275) - Fix double free of nccl 
communicator
 * [#17264](https://github.com/apache/tvm/pull/17264) - Disable splitting nccl 
communicator in single-group
 * [#17182](https://github.com/apache/tvm/pull/17182) - Implement SocketSession
 * [#17191](https://github.com/apache/tvm/pull/17191) - Cross-group and p2p 
send/receive primitives
 * [#17180](https://github.com/apache/tvm/pull/17180) - Group-wise operation

### Dlight
 * [#17430](https://github.com/apache/tvm/pull/17430) - [GPU] Improve matmul 
schedule for adreno
 * [#17363](https://github.com/apache/tvm/pull/17363) - Fix Matmul rule for 
Conv3D
 * [#17259](https://github.com/apache/tvm/pull/17259) - [ADRENO] Fix for opencl 
adreno matmul schedule
 * [#17187](https://github.com/apache/tvm/pull/17187) - [GPU] Add OpenCL 
dequant matmul schedule

### Docker
 * [#17433](https://github.com/apache/tvm/pull/17433) - [CI] Add NNEF 
dependency to CI images

### Docs
 * [#17436](https://github.com/apache/tvm/pull/17436) - [Relax][PyTorch] Use `torch.export` instead of `fx.symbolic_trace` for tutorial
 * [#17402](https://github.com/apache/tvm/pull/17402) - [Doc] Update 
Architecture Overview
 * [#17382](https://github.com/apache/tvm/pull/17382) - More clarity on 
security model of RPC server
 * [#17380](https://github.com/apache/tvm/pull/17380) - [Doc] Relax Deep Dive
 * [#17377](https://github.com/apache/tvm/pull/17377) - Update document to 
include security model of RPC server
 * [#17378](https://github.com/apache/tvm/pull/17378) - Link to 
project-specific security page
 * [#17352](https://github.com/apache/tvm/pull/17352) - TVM pip Installation fix
 * [#17343](https://github.com/apache/tvm/pull/17343) - Minor fix typo in 
developer howto guide
 * [#17328](https://github.com/apache/tvm/pull/17328) - [Doc] Deep Dive TensorIR
 * [#17327](https://github.com/apache/tvm/pull/17327) - [Doc] How to Optimize a 
Language Model
 * [#17320](https://github.com/apache/tvm/pull/17320) - [Doc] Customize 
Optimization
 * [#17319](https://github.com/apache/tvm/pull/17319) - [Doc] Fix doc build 
error in e2e_opt_model.py
 * [#17306](https://github.com/apache/tvm/pull/17306) - [Doc] Refactor How-To
 * [#17296](https://github.com/apache/tvm/pull/17296) - [Doc] Overview
 * [#17298](https://github.com/apache/tvm/pull/17298) - [Doc] IRModule
 * [#17286](https://github.com/apache/tvm/pull/17286) - Introduce Relax API and 
move legacy part to standalone page
 * [#17289](https://github.com/apache/tvm/pull/17289) - [Doc] Quick Start
 * [#17287](https://github.com/apache/tvm/pull/17287) - [Doc] Refactor install 
docs

### Frontend
 * [#17431](https://github.com/apache/tvm/pull/17431) - [Relax][Onnx] Add 
support for pad-2
 * [#17447](https://github.com/apache/tvm/pull/17447) - [ONNX] Move relax 
related tests to the correct file
 * [#17427](https://github.com/apache/tvm/pull/17427) - [Relax][ONNX] Expand op 
support for ONNX frontend
 * [#17429](https://github.com/apache/tvm/pull/17429) - [Relax][PyTorch] 
Support tensor manipulation and creation ops for ExportedProgram importer
 * [#17426](https://github.com/apache/tvm/pull/17426) - [Relax][PyTorch] 
Support neural network ops for ExportedProgram importer
 * [#17424](https://github.com/apache/tvm/pull/17424) - [Relax][PyTorch] 
Support binary, statistical and search ops for ExportedProgram importer
 * [#17421](https://github.com/apache/tvm/pull/17421) - [Relax][PyTorch] 
Support more unary ops for ExportedProgram importer
 * [#17396](https://github.com/apache/tvm/pull/17396) - [Relax][PyTorch] Add 
support for `torch.export.ExportedProgram` in Relax PyTorch Frontend
 * [#17379](https://github.com/apache/tvm/pull/17379) - [Relax][PyTorch] Fix 
output shape of `torch.nn.functional.scaled_dot_product_attention`
 * [#17376](https://github.com/apache/tvm/pull/17376) - [Relax][PyTorch] 
Cleanup Tensor Manipulation and Creation op converters
 * [#17372](https://github.com/apache/tvm/pull/17372) - [Relax][PyTorch] 
Cleanup Statistical, Search and DataType op converters
 * [#17369](https://github.com/apache/tvm/pull/17369) - [Relax][PyTorch] 
Cleanup Neural Network op converters
 * [#17366](https://github.com/apache/tvm/pull/17366) - [Relax][PyTorch] 
Cleanup binary op converters
 * [#17356](https://github.com/apache/tvm/pull/17356) - [Relax][PyTorch] 
Cleanup unary op converters
 * [#17350](https://github.com/apache/tvm/pull/17350) - [Relax][Onnx] fix 
params name bug in onnx frontend
 * [#17342](https://github.com/apache/tvm/pull/17342) - [Relax][PyTorch] Add 
support for `torch.ops.aten.sym_size.int`
 * [#17300](https://github.com/apache/tvm/pull/17300) - [Relax][PyTorch] Add 
support for torchvision.ops.stochastic_depth
 * [#17325](https://github.com/apache/tvm/pull/17325) - [Relax][PyTorch] Add 
support for `torch.nn.functional.conv*`
 * [#17309](https://github.com/apache/tvm/pull/17309) - [Relax][Onnx] fix 
expand bug in onnx frontend
 * [#17304](https://github.com/apache/tvm/pull/17304) - [Relax][PyTorch] Add 
support for torch.repeat
 * [#17291](https://github.com/apache/tvm/pull/17291) - [Relax][PyTorch] Add 
support for torch.tile
 * [#17277](https://github.com/apache/tvm/pull/17277) - [Relay][Pytorch] Add 
support for `aten::tile`
 * [#17228](https://github.com/apache/tvm/pull/17228) - [Unity] Add Sqrt Op
 * [#17189](https://github.com/apache/tvm/pull/17189) - [Relax][PyTorch] Add 
support for `torch.nn.functional.max_pool2d`
 * [#17186](https://github.com/apache/tvm/pull/17186) - [Relax][PyTorch] Add 
support for torch.einsum
 * [#17184](https://github.com/apache/tvm/pull/17184) - [Relax][PyTorch] Add 
support for torch.permute
 * [#17167](https://github.com/apache/tvm/pull/17167) - [Relax] [ONNX] Add 
support for Sign and Not

### Hexagon
 * [#17204](https://github.com/apache/tvm/pull/17204) - Fix LWP assembly 
handler (predicate register)
 * [#17169](https://github.com/apache/tvm/pull/17169) - [CMake] Fix v66 build 
issue
 * [#17162](https://github.com/apache/tvm/pull/17162) - Support RPC execution 
of existing shared lib

### LLVM
 * [#17347](https://github.com/apache/tvm/pull/17347) - [RUNTIME] Fix RISC-V 
CodeModel propagation to ORCJIT runtime executor
 * [#17199](https://github.com/apache/tvm/pull/17199) - Fix for 
getHostCPUFeatures API change

### MetaSchedule
 * [#17166](https://github.com/apache/tvm/pull/17166) - Replace `xgboost.rabit` 
with `xgboost.collective` because it's deprecated
 * [#17171](https://github.com/apache/tvm/pull/17171) - Add a testcase for 
padded conv2d in meta_schedule

### OpenCL & CLML
 * [#17273](https://github.com/apache/tvm/pull/17273) - [CODEGEN][OPENCL] Fix 
opencl codegen for few ops

### ROCm
 * [#17295](https://github.com/apache/tvm/pull/17295) - Fix non-standard rocm 
path
 * [#17290](https://github.com/apache/tvm/pull/17290) - hipBLAS integration
 * [#17256](https://github.com/apache/tvm/pull/17256) - Support ROCm 6

### Relax
 * [#17449](https://github.com/apache/tvm/pull/17449) - Add scatter_nd op 
support
 * [#17453](https://github.com/apache/tvm/pull/17453) - Add NonZero op
 * [#17448](https://github.com/apache/tvm/pull/17448) - Support left_shift and 
right_shift op
 * [#17432](https://github.com/apache/tvm/pull/17432) - [KVCACHE] Improved 
schedule for prefill attention
 * [#17428](https://github.com/apache/tvm/pull/17428) - Introduce static shape 
tuning pipeline
 * [#17401](https://github.com/apache/tvm/pull/17401) - [KVCache] Attention 
func accepting over-padded qkv and output NDArray
 * [#17331](https://github.com/apache/tvm/pull/17331) - Validate StructInfo 
annotations in well-formed check
 * [#17368](https://github.com/apache/tvm/pull/17368) - [Transform] Add 
SelectNode handling in SymbolicMatcher
 * [#17353](https://github.com/apache/tvm/pull/17353) - Fix BYOC removing 
existing ext mods
 * [#17359](https://github.com/apache/tvm/pull/17359) - Add new NN allgather 
operator
 * [#17362](https://github.com/apache/tvm/pull/17362) - [KV Cache] Refactor 
`_attention_sequence_prefill` function to …
 * [#17332](https://github.com/apache/tvm/pull/17332) - Validate StructInfo of 
variable bindings
 * [#17354](https://github.com/apache/tvm/pull/17354) - Fix inline source 
module cause path too long error
 * [#17213](https://github.com/apache/tvm/pull/17213) - Refactor RealizeVDevice 
to remove in-place mutation
 * [#17253](https://github.com/apache/tvm/pull/17253) - [Transform] Handle 
tuple return in RemoveUnusedOutputs
 * [#17285](https://github.com/apache/tvm/pull/17285) - Require correct input/output shapes for `R.call_tir`
 * [#17202](https://github.com/apache/tvm/pull/17202) - Update GlobalVar name 
in AttachGlobalSymbol
 * [#17218](https://github.com/apache/tvm/pull/17218) - Allow dynamic shape 
argument to R.reshape
 * [#17326](https://github.com/apache/tvm/pull/17326) - [KVCache] Add tree 
attention with paged cache support
 * [#17314](https://github.com/apache/tvm/pull/17314) - [Transform] Compose 
preproc functions in LiftTransformParams
 * [#17313](https://github.com/apache/tvm/pull/17313) - Identify tuple 
unpack/repack in CanonicalizeBindings
 * [#17305](https://github.com/apache/tvm/pull/17305) - [Python] Rotary positional embedding scaling
 * [#17243](https://github.com/apache/tvm/pull/17243) - Avoid wrapping 
TupleStructInfo into a Tuple for R.call_tir
 * [#17224](https://github.com/apache/tvm/pull/17224) - [Analysis] Handle 
recursive functions in CollectVarUsage
 * [#17280](https://github.com/apache/tvm/pull/17280) - [KVCache] Increase 
coalesce threshold
 * [#17261](https://github.com/apache/tvm/pull/17261) - Add KVCache Interface 
for Relax NNModule
 * [#17145](https://github.com/apache/tvm/pull/17145) - Implement 
R.ensure_zero_offset and update memory planning for R.view
 * [#17242](https://github.com/apache/tvm/pull/17242) - Remove segfault in 
R.call_tir_inplace validation
 * [#17234](https://github.com/apache/tvm/pull/17234) - FuseTransposeMatmul Pass
 * [#17226](https://github.com/apache/tvm/pull/17226) - Fix segfault in 
rewrite_bindings for MatchCast node
 * [#17220](https://github.com/apache/tvm/pull/17220) - Handle presence of 
R.call_tir in MergeCompositeFunctions
 * [#17201](https://github.com/apache/tvm/pull/17201) - [Transform] Handle `is_group` argument in IPC AllReduce
 * [#17198](https://github.com/apache/tvm/pull/17198) - Disable fusion for 
fetching from the packed params in FuseOps
 * [#17149](https://github.com/apache/tvm/pull/17149) - Implement Rewriter class for pattern-rewrite
 * [#17192](https://github.com/apache/tvm/pull/17192) - [KVCache] Partial 
layers support
 * [#17157](https://github.com/apache/tvm/pull/17157) - Integrate cuDNN 
attention
 * [#17160](https://github.com/apache/tvm/pull/17160) - Fix fuseOps via pattern

### Relay
 * [#17339](https://github.com/apache/tvm/pull/17339) - [qnn]: Fix 
qnn.avg_pool2d layout inference
 * [#17177](https://github.com/apache/tvm/pull/17177) - [FQ2I]: Use appropriate 
dtype while quantizing relay.op.nn.pad…

### Runtime
 * [#17407](https://github.com/apache/tvm/pull/17407) - Add property 
Module.is_device_module
 * [#17294](https://github.com/apache/tvm/pull/17294) - Support KV cache with 
RoPE extension factor array
 * [#17240](https://github.com/apache/tvm/pull/17240) - [FFI] Use TVMValue::v_int64 to represent boolean values
 * [#17252](https://github.com/apache/tvm/pull/17252) - Revert "[FFI] Introduce runtime boxed types for int/float/bool"
 * [#16183](https://github.com/apache/tvm/pull/16183) - [FFI] Introduce runtime boxed types for int/float/bool
 * [#17237](https://github.com/apache/tvm/pull/17237) - Reorganize PagedKVCache 
attn kernel invocation
 * [#17227](https://github.com/apache/tvm/pull/17227) - Allow aborting 
fetchWithCache through AbortSignal
 * [#17208](https://github.com/apache/tvm/pull/17208) - Allow aborting 
fetchNDArray through AbortSignal

### TIR
 * [#17443](https://github.com/apache/tvm/pull/17443) - Add `is_vector` Method 
to DataType class and update usages across Codebase
 * [#17411](https://github.com/apache/tvm/pull/17411) - [NarrowDataType] 
Bufferload's index should not inherit bits constraint of value
 * [#17219](https://github.com/apache/tvm/pull/17219) - Validate tir::Buffer 
axis_separators on construction
 * [#17158](https://github.com/apache/tvm/pull/17158) - [Analyzer] Simplify 
`x==x` expressions for all dtypes

### TOPI
 * [#17274](https://github.com/apache/tvm/pull/17274) - [ADRENO] Add Group 
Conv2d texture schedule

### TVMScript
 * [#17435](https://github.com/apache/tvm/pull/17435) - Enable `T.macro` decorating class methods
 * [#17434](https://github.com/apache/tvm/pull/17434) - [TIR] Add source kernel integration via `call_kernel`
 * [#17395](https://github.com/apache/tvm/pull/17395) - [TIR, TVMScript] Add 
TIR - Triton integration
 * [#17131](https://github.com/apache/tvm/pull/17131) - [Relax] Allow return 
statement in DataflowBlock
 * [#17373](https://github.com/apache/tvm/pull/17373) - Avoid segfault from 
invalid TVMScript

### cuda & cutlass & tensorrt
 * [#17408](https://github.com/apache/tvm/pull/17408) - [CUTLASS] Add FP8 gemm 
kernels

### web
 * [#17420](https://github.com/apache/tvm/pull/17420) - Allow deprecated API 
requestAdapterInfo with any cast
 * [#17404](https://github.com/apache/tvm/pull/17404) - [WASM] Implement concat 
embeddings
 * [#17251](https://github.com/apache/tvm/pull/17251) - Add TVMArgBool to 
ArgTypeCode

### Misc
 * [#17457](https://github.com/apache/tvm/pull/17457) - Try to fix windows CI 
conda build issue
 * [#17415](https://github.com/apache/tvm/pull/17415) - [NVSHMEM] Enable 
nvshmem memory allocation
 * [#17422](https://github.com/apache/tvm/pull/17422) - [CMake] Add NCCL/RCCL 
header directory to include path
 * [#17405](https://github.com/apache/tvm/pull/17405) - [TVMjs] Modify web 
package description
 * [#17400](https://github.com/apache/tvm/pull/17400) - [3rdparty] Bump 
FlashInfer for tmp workspace reduction
 * [#17394](https://github.com/apache/tvm/pull/17394) - [MSC] Support concat 
with constant inputs
 * [#17351](https://github.com/apache/tvm/pull/17351) - [MSC][Refactor] Support 
dynamic shape
 * [#17371](https://github.com/apache/tvm/pull/17371) - [WEBGPU] Update runtime 
to remove deprecated API
 * [#17361](https://github.com/apache/tvm/pull/17361) - [IR] Expose 
ReplaceGlobalVars utility in the Python API
 * [#17358](https://github.com/apache/tvm/pull/17358) - Update 
tvmc_command_line_driver.py, modify the sentence, remove the duplicate "as"
 * [#17344](https://github.com/apache/tvm/pull/17344) - [MSC] Reconstruct 
tensorrt module
 * [#17297](https://github.com/apache/tvm/pull/17297) - [Apps] Remove mxnet 
dependency from /apps/android_camera/models
 * [#17299](https://github.com/apache/tvm/pull/17299) - [Apps] Remove mxnet 
dependency from /apps/ios_rpc
 * [#17293](https://github.com/apache/tvm/pull/17293) - [Rust] Remove mxnet 
dependency and re-enable rust example
 * [#17321](https://github.com/apache/tvm/pull/17321) - [Target] Refine 
equality check on TargetKind instances
 * [#17317](https://github.com/apache/tvm/pull/17317) - Add NVSHMEM support
 * [#17301](https://github.com/apache/tvm/pull/17301) - [TE][CreatePrimFunc] 
Fix create reduce block with spatial iter dependent init value
 * [#17284](https://github.com/apache/tvm/pull/17284) - [Support] Fix the 
Read/Write of socket stream
 * [#17302](https://github.com/apache/tvm/pull/17302) - [Codegen][WebGPU] 
LetNode common subexpr override
 * [#17246](https://github.com/apache/tvm/pull/17246) - [Cleanup] Remove `using 
namespace tvm::runtime` from headers
 * [#17278](https://github.com/apache/tvm/pull/17278) - [Codegen] Emit 
`tir::Let` as var assignment explicitly
 * [#17260](https://github.com/apache/tvm/pull/17260) - [WINDOWS] Compiler 
options for non x86 targets
 * [#17249](https://github.com/apache/tvm/pull/17249) - [IR] Handle NaN in 
StructuralEqual and StructuralHash
 * [#17257](https://github.com/apache/tvm/pull/17257) - [FFI] Re-introduce the boxed primitive values
 * [#17265](https://github.com/apache/tvm/pull/17265) - [CompileBugfix][contrib] Fix compile errors: 'base64.h: No such file or directory' and "'tvm::runtime::vm::AllocatorType' has not been declared"
 * [#17214](https://github.com/apache/tvm/pull/17214) - Replacing unary ops 
with LookUpTable and Take op to improve performance
 * [#17250](https://github.com/apache/tvm/pull/17250) - [WebGPU] Fix unexpected 
device lost error when intentional dispose
 * [#17236](https://github.com/apache/tvm/pull/17236) - [3rdparty] Bump 
FlashInfer
 * [#17233](https://github.com/apache/tvm/pull/17233) - [Runtime Patch] Add 
AbortSignal to fetchWithCache in ArtifactCacheTemplate interface
 * [#17190](https://github.com/apache/tvm/pull/17190) - [Cython][FFI] Fix crash 
when call del operator for handle
 * [#17170](https://github.com/apache/tvm/pull/17170) - Pass to eliminate 
redundant branch and overcompute
 * [#17185](https://github.com/apache/tvm/pull/17185) - Remove and replace 
deprecated `distutils.util.strtobool()`
 * [#17188](https://github.com/apache/tvm/pull/17188) - Add `packaging` to 
`python/gen_requirements.py`
 * [#17181](https://github.com/apache/tvm/pull/17181) - [FFI] Add python signal 
handler for ctypes FFI
 * [#17173](https://github.com/apache/tvm/pull/17173) - Use 
`packaging.version.parse` instead of `distutils.version.LooseVersion`
 * [#17174](https://github.com/apache/tvm/pull/17174) - [TVMJS] Check 
DataType.NUMPY2STR when saving array
 * [#17168](https://github.com/apache/tvm/pull/17168) - [Meta Schedule][XGBoost] Enable custom callback func test with xgboost>=1.6.0




-- 
View it on GitHub:
https://github.com/apache/tvm/releases/tag/v0.18.0.rc0