# Feature Improvement

## Accelerator and Microcontroller Support

- Clean up legacy Verilog code 
([#4576](https://github.com/apache/incubator-tvm/pull/4576))
- uTVM support for ARM STM32F746XX boards 
([#4274](https://github.com/apache/incubator-tvm/pull/4274))
- Add --runtime=c, remove micro_dev target, enable LLVM backend 
[#6145](https://github.com/apache/incubator-tvm/pull/6145)
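
As a rough sketch of how the new `--runtime=c` option is meant to be used, the snippet below builds a module for the C runtime; the exact target flags and the `mod`/`params` objects are illustrative assumptions, not a verbatim recipe from these PRs.

```python
import tvm
from tvm import relay

# Illustrative target string for the C runtime; flags may vary by board and
# TVM build, so treat this as a sketch rather than a fixed recipe.
target = tvm.target.Target("c --system-lib --runtime=c")

with tvm.transform.PassContext(opt_level=3):
    # `mod` and `params` are assumed to come from a frontend importer.
    graph_json, lib, params = relay.build(mod, target=target, params=params)
```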

## Arithmetic Analysis

- Linear system and equation solver 
([#5171](https://github.com/apache/incubator-tvm/pull/5171))
- Inequalities solver [#5618](https://github.com/apache/incubator-tvm/pull/5618)
- Improve IntervalSet's floormod 
([#5367](https://github.com/apache/incubator-tvm/pull/5367))
- Remove legacy const pattern functions 
([#5387](https://github.com/apache/incubator-tvm/pull/5387))
- Handle likely in IRMutatorWithAnalyzer 
[#5665](https://github.com/apache/incubator-tvm/pull/5665)
- Merge the ExtendedEuclidean implementation into int_operator 
[#5625](https://github.com/apache/incubator-tvm/pull/5625)
- Rewrite simplify fix for Vectorized Cooperative Fetching 
[#5924](https://github.com/apache/incubator-tvm/pull/5924)
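
A minimal sketch of how the analyzer improvements above are exercised from Python; the expressions and the expected simplifications are illustrative assumptions, not outputs taken from the PRs.

```python
import tvm
from tvm import te

x = te.var("x")
ana = tvm.arith.Analyzer()

# floordiv/floormod patterns like these are what the IntervalSet and
# rewrite-simplify changes above target.
print(ana.simplify(tvm.tir.floormod(x * 8, 4)))      # expected to fold to 0
print(ana.simplify(tvm.tir.floordiv(x * 4 + 2, 4)))  # expected to fold towards x
```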

## AutoTVM and Graph Tuner

- Adding ROCM schedules for TOPI 
([#4507](https://github.com/apache/incubator-tvm/pull/4507))
- NHWC conv2d schedule templates for ARM 
([#3859](https://github.com/apache/incubator-tvm/pull/3859))
- Use VM compile to extract autotvm tasks 
[#4328](https://github.com/apache/incubator-tvm/issues/4328)
- Download fallback schedule file if it does not exist 
[#4671](https://github.com/apache/incubator-tvm/issues/4671)
- Ignore error when removing tmpdir 
[#4781](https://github.com/apache/incubator-tvm/issues/4781)
- Fix a bug in generating the search space 
[#4779](https://github.com/apache/incubator-tvm/issues/4779)
- Minor bug fixes in AutoTVM for QNN graphs 
[#4797](https://github.com/apache/incubator-tvm/issues/4797)
- Fix autotvm customized template 
[#5034](https://github.com/apache/incubator-tvm/pull/5034)
- Add opt out operator for has_multiple_inputs for graph tuner 
[#5000](https://github.com/apache/incubator-tvm/pull/5000)
- Customize SI prefix in logging 
([#5411](https://github.com/apache/incubator-tvm/pull/5411))
- Update XGBoost verbosity option 
[#5649](https://github.com/apache/incubator-tvm/pull/5649)
- Support range in index based tuners 
[#4870](https://github.com/apache/incubator-tvm/pull/4870)
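
The task-extraction and tuning flow these changes touch looks roughly like the sketch below; `mod`, `params`, and `target` are assumed to exist already, and the trial count and log file name are placeholders.

```python
from tvm import autotvm

# Extract tunable tasks from a Relay module.
tasks = autotvm.task.extract_from_program(mod["main"], params=params, target=target)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=4, repeat=3),
)

for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=min(32, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("tune.log")],
    )
```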

## BYOC

- [BYOC] Bind constant tuples in graph partitioner 
([#5476](https://github.com/apache/incubator-tvm/pull/5476))
- [BYOC] Add support for composite functions in BYOC 
([#5261](https://github.com/apache/incubator-tvm/pull/5261))
- [BYOC] Register pattern tables from external codegens 
([#5262](https://github.com/apache/incubator-tvm/pull/5262))
- [BYOC] Enhance partitioning and external codegen 
([#5310](https://github.com/apache/incubator-tvm/pull/5310))
- [BYOC] Refine AnnotateTarget and MergeCompilerRegion Passes 
([#5277](https://github.com/apache/incubator-tvm/pull/5277))
- [BYOC] Use Non-Recursive Visitor/Mutator 
([#5410](https://github.com/apache/incubator-tvm/pull/5410))
- [BYOC] Refine DNNL Codegen 
([#5288](https://github.com/apache/incubator-tvm/pull/5288))
- [BYOC] Add example of Composite + Annotate for DNNL fused op 
([#5272](https://github.com/apache/incubator-tvm/pull/5272))
- [BYOC] Prevent duplicate outputs in subgraph Tuple 
([#5320](https://github.com/apache/incubator-tvm/pull/5320))
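
These passes combine into the usual BYOC partitioning pipeline; the sketch below assumes an existing Relay module `mod` and a TVM build with the DNNL codegen enabled, and is only meant to show how the passes listed above fit together.

```python
import tvm
from tvm import relay

# Annotate supported ops, merge them into regions, then split out external functions.
seq = tvm.transform.Sequential([
    relay.transform.AnnotateTarget("dnnl"),
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])

with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)  # `mod` is assumed to be an existing tvm.IRModule
```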

## Codegen

- Intrinsic dispatching with OCML instead of LLVM for ROCm 
([#4499](https://github.com/apache/incubator-tvm/pull/4499))
- Make target codegen take IRModule and PrimFunc. 
[#5107](https://github.com/apache/incubator-tvm/pull/5107)
- Enhance CUDA codegen for SelectNode 
[#4983](https://github.com/apache/incubator-tvm/pull/4983)
- Vectorization for intrinsics 
[#5101](https://github.com/apache/incubator-tvm/pull/5101)
- [LLVM] Do not use x86_vcvtph2ps_256 intrinsic with LLVM 11+ 
([#5267](https://github.com/apache/incubator-tvm/pull/5267))
- [LLVM] Use llvm::ElementCount with LLVM 11+ when creating vectors 
([#5265](https://github.com/apache/incubator-tvm/pull/5265))
- [LLVM] Use llvm::FunctionCallee in IRBuilder::CreateCall with LLVM 11+ 
([#5338](https://github.com/apache/incubator-tvm/pull/5338))
- [LLVM] Include Support/Host.h for declaration of getDefaultTargetTriple 
([#5268](https://github.com/apache/incubator-tvm/pull/5268))
- [LLVM] Replace calls to Type::getVectorNumElements 
([#5398](https://github.com/apache/incubator-tvm/pull/5398))
- [LLVM] Use ArrayRef in calls to CreateShuffleVector 
([#5399](https://github.com/apache/incubator-tvm/pull/5399))
- [LLVM] Use llvm::Align with LLVM 11+ to avoid warnings 
([#5264](https://github.com/apache/incubator-tvm/pull/5264))
- [CodeGen] Clean up generated code 
([#5424](https://github.com/apache/incubator-tvm/pull/5424))
- Rename target_id => target_kind 
[#6199](https://github.com/apache/incubator-tvm/pull/6199)
- 64-bit RPi4b target [#6211](https://github.com/apache/incubator-tvm/pull/6211)
- Creating Target from JSON-like Configuration 
[#6218](https://github.com/apache/incubator-tvm/pull/6218)
- Add python binding to new JSON target construction 
[#6315](https://github.com/apache/incubator-tvm/pull/6315)
- Use target class in all codegens 
[#6347](https://github.com/apache/incubator-tvm/pull/6347)
- Initial support for Hexagon codegen 
[#6261](https://github.com/apache/incubator-tvm/pull/6261)
- Add --runtime=c, remove micro_dev target, enable LLVM backend 
[#6145](https://github.com/apache/incubator-tvm/pull/6145)
- Add tvm::support::hexdump() debug utility 
[#6154](https://github.com/apache/incubator-tvm/pull/6154)
- Adding AMD codegen unit tests 
([#4509](https://github.com/apache/incubator-tvm/pull/4509))
- Support cuda tensorcore subbyte int data type in auto tensorcore 
[#4546](https://github.com/apache/incubator-tvm/pull/4546)
- Handle empty LLVMModule in GetFunction 
[#5146](https://github.com/apache/incubator-tvm/pull/5146)
- Support int4/int8 conv2d tensor core with HWNC layout 
[#6121](https://github.com/apache/incubator-tvm/pull/6121)
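
A hedged sketch of the two target construction styles after the target refactor; the CUDA attributes shown are illustrative and not an authoritative schema.

```python
import tvm

# Classic target string form.
target_str = tvm.target.Target("cuda -arch=sm_75")

# JSON-like configuration form introduced by the PRs above; keys are illustrative.
target_json = tvm.target.Target({
    "kind": "cuda",
    "arch": "sm_75",
    "max_num_threads": 1024,
})

print(target_json.kind.name)  # "cuda"; target_id was renamed to target_kind
```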

## Dynamism Support

- Add shape function for zero, zeros_like, ones, ones_like 
([#4448](https://github.com/apache/incubator-tvm/pull/4448)), tile 
([#4441](https://github.com/apache/incubator-tvm/pull/4441/files))
- Support symbolic newshape for Reshape 
[#5429](https://github.com/apache/incubator-tvm/pull/5429)
- Support symbolic TopK, Ones, Zeros and Full 
[#5459](https://github.com/apache/incubator-tvm/pull/5459)
- Add shape_of instruction 
[#5855](https://github.com/apache/incubator-tvm/pull/5855)
- symbolic max_output_size 
[#5844](https://github.com/apache/incubator-tvm/pull/5844)
- Dynamic TopK Op [#6008](https://github.com/apache/incubator-tvm/pull/6008)
- Dynamic broadcast_to, zeros, ones 
[#6007](https://github.com/apache/incubator-tvm/pull/6007)
- Add dynamic reshape grad 
[#6080](https://github.com/apache/incubator-tvm/pull/6080)
- Keep fixed dim when unifying dynamic shape 
[#5795](https://github.com/apache/incubator-tvm/pull/5795)
- OneHot operation [#6209](https://github.com/apache/incubator-tvm/pull/6209)
- Add Dynamic Resize Op 
[#6198](https://github.com/apache/incubator-tvm/pull/6198)
- Dynamic full operator 
[#6260](https://github.com/apache/incubator-tvm/pull/6260)
- Dynamic upsampling relay op 
[#6273](https://github.com/apache/incubator-tvm/pull/6273)
- Dynamic Tile Op [#5983](https://github.com/apache/incubator-tvm/pull/5983)
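
A rough sketch of how these dynamically shaped ops are used end to end, compiled with the Relay VM so the shape functions above can run; the shapes and executor choice are illustrative.

```python
import numpy as np
import tvm
from tvm import relay

# A tensor whose first dimension is only known at runtime.
x = relay.var("x", shape=(relay.Any(), 4), dtype="float32")
y = relay.zeros(relay.shape_of(x), dtype="float32")  # dynamic zeros
mod = tvm.IRModule.from_expr(relay.Function([x], y))

vm = relay.create_executor("vm", mod=mod, target="llvm")
out = vm.evaluate()(np.ones((3, 4), dtype="float32"))
print(out.shape)  # expected (3, 4)
```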

## Frontend and User Interface

- TFLite parser support for transpose_conv 
([#4440](https://github.com/apache/incubator-tvm/pull/4440)), unpack 
([#4447](https://github.com/apache/incubator-tvm/pull/4447))
- LLDB pretty printers for relay 
([#4453](https://github.com/apache/incubator-tvm/pull/4453))
- ONNX to Relay converter op support: expand op 
([#4483](https://github.com/apache/incubator-tvm/pull/4483))
- ONNX auto_pad in conv and convtranspose 
([#4563](https://github.com/apache/incubator-tvm/pull/4563/))
- TF to Relay converter op support: bilinear and neighbour implementation 
refactor ([#4504](https://github.com/apache/incubator-tvm/pull/4504)), 
max_pool3d ([#4551](https://github.com/apache/incubator-tvm/pull/4551)), 
conv2d_transpose with “same” padding support for larger than 1x1 kernels 
([#4484](https://github.com/apache/incubator-tvm/pull/4484))
- Remove unnecessary cast of constants in ONNX converter 
([#4573](https://github.com/apache/incubator-tvm/pull/4573))
- Add support for tf.Keras networks in Relay Keras frontend 
[#4630](https://github.com/apache/incubator-tvm/issues/4630)
- Add conv3d [#4604](https://github.com/apache/incubator-tvm/issues/4604)
- Fix incorrect calculations in tf SLICE 
[#4518](https://github.com/apache/incubator-tvm/issues/4518)
- Dynamically calculate input_stats of any fake_quant range 
[#4789](https://github.com/apache/incubator-tvm/pull/4789)
- LSTM Support [#4825](https://github.com/apache/incubator-tvm/pull/4825)
- Add MIRROR_PAD operator 
[#4822](https://github.com/apache/incubator-tvm/pull/4822)
- use qnn helper function in softmax 
[#4840](https://github.com/apache/incubator-tvm/pull/4840)
- Add Resize op converter 
[#4838](https://github.com/apache/incubator-tvm/pull/4838)
- Add support for TFLite_Detection_PostProcess 
[#4543](https://github.com/apache/incubator-tvm/pull/4543)
- Fix tests for tflite unary elemwise operations 
[#4913](https://github.com/apache/incubator-tvm/pull/4913)
- GaussianDropout/Noise parsing support 
[#4928](https://github.com/apache/incubator-tvm/pull/4928)
- Add parser support for 'square' operator 
[#4915](https://github.com/apache/incubator-tvm/pull/4915)
- make_loss operator support 
[#4930](https://github.com/apache/incubator-tvm/pull/4930)
- Add parser support for l2_normalization 
[#4966](https://github.com/apache/incubator-tvm/pull/4966)
- ReadVariableOp operator support 
[#4952](https://github.com/apache/incubator-tvm/pull/4952)
- Check graph inputs match expected 
[#4992](https://github.com/apache/incubator-tvm/pull/4992)
- Support multiple outputs 
[#4980](https://github.com/apache/incubator-tvm/pull/4980)
- TFLite: Using real image for QNN testing. 
[#4816](https://github.com/apache/incubator-tvm/pull/4816)
- TFLite: FLOOR_MOD & FLOOR_DIV support 
[#4971](https://github.com/apache/incubator-tvm/pull/4971)
- PyTorch: Upsampling op support and enable registering a user defined op 
conversion map [#4961](https://github.com/apache/incubator-tvm/pull/4961)
- PyTorch: fix unordered dictionary problem for python version under 3.6 
[#4982](https://github.com/apache/incubator-tvm/pull/4982)
- Operator support NonZero 
[#5073](https://github.com/apache/incubator-tvm/pull/5073)
- Add support for quantized models via QNN 
[#4977](https://github.com/apache/incubator-tvm/pull/4977)
- Add initial control flow support 
[#4964](https://github.com/apache/incubator-tvm/pull/4964)
- Remove FP32 piggy back and use QNN add/mul/concatenate 
[#5061](https://github.com/apache/incubator-tvm/pull/5061)
- Add missing upcast to uint8 avg_pool conversion 
[#5089](https://github.com/apache/incubator-tvm/pull/5089)
- Add initial 3D op support and test on Resnet 3D 
[#5075](https://github.com/apache/incubator-tvm/pull/5075)
- Fix conv2d conversion for group conv (group > 1 but != in channels) 
[#5132](https://github.com/apache/incubator-tvm/pull/5132)
- Add support for max_pool1d 
[#5142](https://github.com/apache/incubator-tvm/pull/5142)
- Activation functions support 
[#4978](https://github.com/apache/incubator-tvm/pull/4978)
- Round op parsing support added 
[#5022](https://github.com/apache/incubator-tvm/pull/5022)
- DepthToSpace and SpaceToDepth support 
[#5041](https://github.com/apache/incubator-tvm/pull/5041)
- TOP_K op parser support 
[#5051](https://github.com/apache/incubator-tvm/pull/5051)
- Reduce_any op parsing support 
[#4926](https://github.com/apache/incubator-tvm/pull/4926)
- TensorFlow Parser Control Flow Enhancement 
[#5020](https://github.com/apache/incubator-tvm/pull/5020)
- TensorFlow Frontend support with shared params 
[#5042](https://github.com/apache/incubator-tvm/pull/5042)
- Support for AddV2 in Relay Tensorflow frontend converter. 
[#5046](https://github.com/apache/incubator-tvm/pull/5046)
- conv3d frontend operator support 
[#5080](https://github.com/apache/incubator-tvm/pull/5080)
- Support for Atan/Atan2 in Relay Tensorflow frontend converter. 
[#5104](https://github.com/apache/incubator-tvm/pull/5104)
- Conv3D ONNX support and conv3D_ncdhw x86 schedules 
[#4949](https://github.com/apache/incubator-tvm/pull/4949)
- Add support for FusedBatchNormV3 
[#5065](https://github.com/apache/incubator-tvm/pull/5065)
- [Frontend] Asymmetric padding of convolution support 
([#4803](https://github.com/apache/incubator-tvm/pull/4803))
- [ONNX]Pool3d & upsample3d op support 
([#5135](https://github.com/apache/incubator-tvm/pull/5135))
- Add TopK to ONNX Frontend 
([#5441](https://github.com/apache/incubator-tvm/pull/5441))
- Add RoiAlign to Onnx frontend 
([#5454](https://github.com/apache/incubator-tvm/pull/5454))
- [PYTORCH]AvgPool3d, MaxPool3d and Squeeze op support 
([#5220](https://github.com/apache/incubator-tvm/pull/5220))
- [PYTORCH]celu, gelu, selu activations 
([#5263](https://github.com/apache/incubator-tvm/pull/5263))
- [Pytorch]layernorm bug fix and testcase updated 
([#5257](https://github.com/apache/incubator-tvm/pull/5257))
- [PYTORCH]LayerNorm support added 
([#5249](https://github.com/apache/incubator-tvm/pull/5249))
- [RELAY-OP][PYTORCH]GroupNorm op support added 
([#5358](https://github.com/apache/incubator-tvm/pull/5358))
- [TOPI][PYTORCH]Logical & Bitwise operator support 
([#5341](https://github.com/apache/incubator-tvm/pull/5341))
- [PYTORCH]Tensor creation ops support 
([#5347](https://github.com/apache/incubator-tvm/pull/5347))
- [RELAY][PYTORCH]cosh,sinh,log2,log10,log1p op support 
([#5395](https://github.com/apache/incubator-tvm/pull/5395))
- [PYTORCH]Rsub, Embedded, OneHot ops support 
([#5434](https://github.com/apache/incubator-tvm/pull/5434))
- [PYTORCH]Abs, Arange, Softplus ops 
([#5295](https://github.com/apache/incubator-tvm/pull/5295))
- [RELAY][PYTORCH]isNan, isinf, isfinite, ceil, clamp, round ops 
([#5316](https://github.com/apache/incubator-tvm/pull/5316))
- [PYTORCH]Activations for pytorch 
([#5194](https://github.com/apache/incubator-tvm/pull/5194))
- [PYTORCH]Repeat, Reciprocal & Reshape Op support 
([#5280](https://github.com/apache/incubator-tvm/pull/5280))
- [PYTORCH]Reduce_ops support added 
([#5308](https://github.com/apache/incubator-tvm/pull/5308))
- [PYTORCH]Take, Topk op support 
([#5332](https://github.com/apache/incubator-tvm/pull/5332))
- [PYTORCH]Dropouts And InstanceNorm support added 
([#5203](https://github.com/apache/incubator-tvm/pull/5203))
- [PYTORCH]Unary Ops frontend support. 
([#5378](https://github.com/apache/incubator-tvm/pull/5378))
- [Torch] Support Python list, more realistic recurrent networks 
([#5306](https://github.com/apache/incubator-tvm/pull/5306))
- [PYTORCH]where, addcdiv, addcmul op support 
([#5383](https://github.com/apache/incubator-tvm/pull/5383))
- [Torch] Add support for split 
([#5174](https://github.com/apache/incubator-tvm/pull/5174))
- [Frontend][Torch] Fix up graph input handling 
([#5204](https://github.com/apache/incubator-tvm/pull/5204))
- [FRONTEND][TFLITE]Logical not op support 
([#5475](https://github.com/apache/incubator-tvm/pull/5475))
- [TFLITE]Hard Swish & MobileNetV3 model testing 
([#5239](https://github.com/apache/incubator-tvm/pull/5239))
- [FRONTEND][TFLITE]Gather, StridedSlice op support added 
([#4788](https://github.com/apache/incubator-tvm/pull/4788))
- [TFLITE] Match TFLite shape for SSD custom op 
([#5473](https://github.com/apache/incubator-tvm/pull/5473))
- Factor out import of common tflite.Operator in tflite frontend. 
([#5355](https://github.com/apache/incubator-tvm/pull/5355))
- [Frontend][TFLite] support for FILL and SPLIT_V operators 
([#5330](https://github.com/apache/incubator-tvm/pull/5330))
- [Frontend][TFLite] L2_POOL_2D operator 
([#5452](https://github.com/apache/incubator-tvm/pull/5452))
- [TFLite] Add config option to specify FlatBuffers location 
([#5425](https://github.com/apache/incubator-tvm/pull/5425))
- [TENSORFLOW]reduce ops updated 
([#5180](https://github.com/apache/incubator-tvm/pull/5180))
- [FRONTEND][TENSORFLOW] Fix gather_nd indices 
([#5279](https://github.com/apache/incubator-tvm/pull/5279))
- [Frontend][TensorFlow]Improve TensorFlow Static Shape Tensor Array 
([#5243](https://github.com/apache/incubator-tvm/pull/5243))
- [KERAS]Minimum & AlphaDropout op support 
([#5380](https://github.com/apache/incubator-tvm/pull/5380))
- [KERAS]Embedding layer 
([#5444](https://github.com/apache/incubator-tvm/pull/5444))
- [FRONTEND][KERAS]Max_pool3d and Averagepool3d operator support 
([#5085](https://github.com/apache/incubator-tvm/pull/5085))
- [RELAY][FRONTEND][CAFFE2] add Mul and ConvTranspose operator 
([#5302](https://github.com/apache/incubator-tvm/pull/5302))
- [MXNET]DepthToSpace & SpaceToDepth Operator 
([#5408](https://github.com/apache/incubator-tvm/pull/5408))
- [MXNET]broadcast and logical op support 
([#5461](https://github.com/apache/incubator-tvm/pull/5461))
- [FRONTEND][MXNET] Use leaky by default for LeakyReLU 
([#5192](https://github.com/apache/incubator-tvm/pull/5192))
- [FRONTEND][MXNET] support elemwise logic ops 
([#5361](https://github.com/apache/incubator-tvm/pull/5361))
- [Frontend|MXNet] SwapAxis operator support 
([#5246](https://github.com/apache/incubator-tvm/pull/5246))
- [RELAY] Move frontend utils 
([#5345](https://github.com/apache/incubator-tvm/pull/5345))
- [Frontend][Pytorch] Fix translation of transpose when the axis argument is a 
list ([#5451](https://github.com/apache/incubator-tvm/pull/5451))
- LpPool Support added 
[#5696](https://github.com/apache/incubator-tvm/pull/5696)
- Skip ADD inside Gemm op when vector is zero 
[#5697](https://github.com/apache/incubator-tvm/pull/5697)
- expand bug fix [#5576](https://github.com/apache/incubator-tvm/pull/5576)
- Support max_pool2d_with_indices 
[#5549](https://github.com/apache/incubator-tvm/pull/5549)
- Add prim::device op [#5584](https://github.com/apache/incubator-tvm/pull/5584)
- ImplicitTensorToNum support added 
[#5603](https://github.com/apache/incubator-tvm/pull/5603)
- Matmul fix for batch_matmul 
[#5604](https://github.com/apache/incubator-tvm/pull/5604)
- ReflectionPad2d op [#5624](https://github.com/apache/incubator-tvm/pull/5624)
- Padding op support [#5638](https://github.com/apache/incubator-tvm/pull/5638)
- Minor bug fixes [#5683](https://github.com/apache/incubator-tvm/pull/5683)
- floor_divide support for squeezenet 
[#5702](https://github.com/apache/incubator-tvm/pull/5702)
- ReplicationPad support added 
[#5708](https://github.com/apache/incubator-tvm/pull/5708)
- aten::norm support added 
[#5776](https://github.com/apache/incubator-tvm/pull/5776)
- MaxPool3d and AvgPool3d Ops support added 
[#5614](https://github.com/apache/incubator-tvm/pull/5614)
- Model importer to be compatible with tflite 2.1.0 
[#5497](https://github.com/apache/incubator-tvm/pull/5497)
- Nit: Function names made consistent 
[#5515](https://github.com/apache/incubator-tvm/pull/5515)
- Select op support for tflite frontend 
[#5486](https://github.com/apache/incubator-tvm/pull/5486)
- GATHER_ND [#5508](https://github.com/apache/incubator-tvm/pull/5508)
- Quantize & Dequantize op 
[#5394](https://github.com/apache/incubator-tvm/pull/5394)
- Fully connected op conversion made in sync with TFLite 
[#5510](https://github.com/apache/incubator-tvm/pull/5510)
- ADD_N operator [#5474](https://github.com/apache/incubator-tvm/pull/5474)
- onnx, mxnet, pytorch mathops added 
[#5561](https://github.com/apache/incubator-tvm/pull/5561)
- abs, round, reciprocal, sign, softsign, hard_sigmoid ops support 
[#5587](https://github.com/apache/incubator-tvm/pull/5587)
- Gather nd bug fix for one dim support in tensorflow 
[#5588](https://github.com/apache/incubator-tvm/pull/5588)
- Add parser support for shape and range 
[#5329](https://github.com/apache/incubator-tvm/pull/5329)
- Darknet support batch size for yolo 
[#5688](https://github.com/apache/incubator-tvm/pull/5688)
- Improve Control Flow and TensorArray 
[#5699](https://github.com/apache/incubator-tvm/pull/5699)
- MXNet: Softmin, trunc op support added 
[#5715](https://github.com/apache/incubator-tvm/pull/5715)
- MXNet: conv3d and conv3d_transpose added 
[#5814](https://github.com/apache/incubator-tvm/pull/5814)
- MXNet: Add parser for contrib.box_decode 
[#5967](https://github.com/apache/incubator-tvm/pull/5967)
- Onnx: ReduceL1, ReduceL2, ReduceSumSquare, ReduceLogSum ops added 
[#5721](https://github.com/apache/incubator-tvm/pull/5721)
- Onnx: MaxRoiPool, Mod & Xor op support added 
[#5729](https://github.com/apache/incubator-tvm/pull/5729)
- Onnx: Skip multiply with 1.0f constant for GEMM import 
[#5800](https://github.com/apache/incubator-tvm/pull/5800)
- Onnx: Fix an issue with #5755 and add Batch norm unit tests. 
[#5845](https://github.com/apache/incubator-tvm/pull/5845)
- TensorFlow: StatefulPartitionedCall/PartitionedCall Ops support added 
[#5617](https://github.com/apache/incubator-tvm/pull/5617)
- TensorFlow: Don’t add cast for batch norm when type isn’t changing 
[#5731](https://github.com/apache/incubator-tvm/pull/5731)
- TensorFlow: Conv3d Transpose OP added 
[#5775](https://github.com/apache/incubator-tvm/pull/5775)
- Improve TF Parser to keep output nodes for saved_model 
[#5794](https://github.com/apache/incubator-tvm/pull/5794)
- Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax 
[#4805](https://github.com/apache/incubator-tvm/pull/4805)
- Fix TF Dynamic input shape 
[#5825](https://github.com/apache/incubator-tvm/pull/5825)
- Support a few contrib ops in mxnet 
[#5819](https://github.com/apache/incubator-tvm/pull/5819)
- Check all unsupported ops before raising an exception 
[#5929](https://github.com/apache/incubator-tvm/pull/5929)
- Add Pytorch advanced indexing 
[#6318](https://github.com/apache/incubator-tvm/pull/6318)
- Support index_select 
[#6295](https://github.com/apache/incubator-tvm/pull/6295)
- Fix cast to long [#6301](https://github.com/apache/incubator-tvm/pull/6301)
- Fix dtype handling for modules with integer parameters 
[#6311](https://github.com/apache/incubator-tvm/pull/6311)
- pytorch frontend support conv1d 
[#6203](https://github.com/apache/incubator-tvm/pull/6203)
- Add cast to double, fix flatten conversion 
[#6357](https://github.com/apache/incubator-tvm/pull/6357)
- Fix aten::max and aten::min conversion 
[#6372](https://github.com/apache/incubator-tvm/pull/6372)
- Match pytorch 1.6 googlenet pretrained model (#6201) 
[#6212](https://github.com/apache/incubator-tvm/pull/6212)
- Add unbiased variance op and corresponding support in pytorch frontend 
[#6232](https://github.com/apache/incubator-tvm/pull/6232)
- Implemented PADV2 Operator for TFLite and added support for constant values 
in PAD. [#6167](https://github.com/apache/incubator-tvm/pull/6167)
- Implemented ONE_HOT Operator for TFLite. 
[#6223](https://github.com/apache/incubator-tvm/pull/6223)
- Implemented EXPAND_DIMS Operator for TFLite. 
[#6243](https://github.com/apache/incubator-tvm/pull/6243)
- Implemented REVERSE_V2 Operator for TFLite. 
[#6304](https://github.com/apache/incubator-tvm/pull/6304)
- Implemented MATRIX_SET_DIAG Operator for Relay/TOPI and TFLite Frontend. 
[#6303](https://github.com/apache/incubator-tvm/pull/6303)
- RESHAPE with dynamic shape arg in TFLite frontend 
[#6208](https://github.com/apache/incubator-tvm/pull/6208)
- Constant input attr added to fully connected operation in TFLite frontend 
[#6228](https://github.com/apache/incubator-tvm/pull/6228)
- Gather operation with indices as tensor expr in TFLite frontend 
[#6168](https://github.com/apache/incubator-tvm/pull/6168)
- Added support for tflite quantized maximum and minimum 
[#6018](https://github.com/apache/incubator-tvm/pull/6018)
- Unary ops support added in frontend 
[#6196](https://github.com/apache/incubator-tvm/pull/6196)
- Introduce caffe frontend for tvm 
[#6206](https://github.com/apache/incubator-tvm/pull/6206)
- Keras softmax and prelu fix under NHWC 
[#6278](https://github.com/apache/incubator-tvm/pull/6278)
- add support for MXNET numpy operators 
[#6054](https://github.com/apache/incubator-tvm/pull/6054)
- Refine tensorflow frontend 1.x & 2.x compatibility 
[#6240](https://github.com/apache/incubator-tvm/pull/6240)
- Reduceops support added to frontend 
[#6252](https://github.com/apache/incubator-tvm/pull/6252)
- Update precision in the ONNX strided_slice, update precision of ToScalar 
[#6272](https://github.com/apache/incubator-tvm/pull/6272)
- NHWC import support. 
[#4899](https://github.com/apache/incubator-tvm/pull/4899)
- Fix node indices attribute error for tensorflow 2.3 
[#6288](https://github.com/apache/incubator-tvm/pull/6288)
- Support NMSv4 [#6085](https://github.com/apache/incubator-tvm/pull/6085)
- Support for PyTorch Non-Maximum Suppression 
[#6314](https://github.com/apache/incubator-tvm/pull/6314)
- MXNet pre-quantized BERT 
[#6039](https://github.com/apache/incubator-tvm/pull/6039)
- Keep parameter names from PyTorch 
[#5887](https://github.com/apache/incubator-tvm/pull/5887)
- Refine LSTMBlockCell to support dynamic rnn 
[#5963](https://github.com/apache/incubator-tvm/pull/5963)
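
For the PyTorch importer items above, the entry point looks roughly like the sketch below; the model, input name, and the empty `custom_convert_map` (the user-defined op conversion hook) are illustrative assumptions.

```python
import torch
import torchvision
from tvm import relay

# Trace an off-the-shelf model purely for illustration.
model = torchvision.models.resnet18(pretrained=True).eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

shape_list = [("input0", (1, 3, 224, 224))]
mod, params = relay.frontend.from_pytorch(
    scripted, shape_list, custom_convert_map={}  # map "aten::..." ops to converters
)
```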

## Relay

- Add function attributes to IR hash 
([#4479](https://github.com/apache/incubator-tvm/pull/4479/files))
- Relay passes lookup overhead optimization 
([#4594](https://github.com/apache/incubator-tvm/pull/4594))
- Add half_pixel option to Resize op 
[#4610](https://github.com/apache/incubator-tvm/issues/4610)
- Skip example json runtime test when config is not set 
[#4614](https://github.com/apache/incubator-tvm/issues/4614)
- Test tensor_array in vm 
[#4608](https://github.com/apache/incubator-tvm/issues/4608)
- Improve memory_allocation pass to support multiple i/o dynamic kernels 
[#4595](https://github.com/apache/incubator-tvm/issues/4595)
- Add unit test for tensor_array_split 
[#4619](https://github.com/apache/incubator-tvm/issues/4619)
- Add parser support for unary elemwise ops 
[#4634](https://github.com/apache/incubator-tvm/issues/4634)
- Add parser support for SLICE 
[#4502](https://github.com/apache/incubator-tvm/issues/4502)
- Added pool autopadding and simplified converters. 
[#4672](https://github.com/apache/incubator-tvm/issues/4672)
- Fix meaning of conv2d_transpose output_padding parameter 
[#4318](https://github.com/apache/incubator-tvm/issues/4318)
- Use packed func macro for external codegen 
[#4710](https://github.com/apache/incubator-tvm/issues/4710)
- Fix _parse_param bug 
[#4711](https://github.com/apache/incubator-tvm/issues/4711)
- Add constant input support for elemwise ops 
[#4666](https://github.com/apache/incubator-tvm/issues/4666)
- Add parser support for squared difference 
[#4652](https://github.com/apache/incubator-tvm/issues/4652)
- Add type check to dense 
[#4724](https://github.com/apache/incubator-tvm/issues/4724)
- Invoke tvm::build from relay compile_engine and interpreter 
[#4723](https://github.com/apache/incubator-tvm/issues/4723)
- Broadcast condition, x, and y for Where op 
[#4774](https://github.com/apache/incubator-tvm/issues/4774)
- Add parser support for relational ops 
[#4695](https://github.com/apache/incubator-tvm/issues/4695)
- Remove duplicated BindParamByName function in VM compiler 
[#4793](https://github.com/apache/incubator-tvm/issues/4793)
- Use SimplifyInference for L2 Normalization. 
[#4795](https://github.com/apache/incubator-tvm/issues/4795)
- Expose vm OptimizeModule to Python 
[#4800](https://github.com/apache/incubator-tvm/issues/4800)
- Add parser support for logical operators 
[#4642](https://github.com/apache/incubator-tvm/pull/4642)
- Conv2D padding representation 
[#4787](https://github.com/apache/incubator-tvm/pull/4787)
- Add support for quantized LOGISTIC 
[#4696](https://github.com/apache/incubator-tvm/pull/4696)
- Fix VM compiler for while loop with free vars 
[#4889](https://github.com/apache/incubator-tvm/pull/4889)
- Fix bug in re-processing call node in MergeComposite pass 
[#4879](https://github.com/apache/incubator-tvm/pull/4879)
- Expose FunctionGetAttr to Python 
[#4905](https://github.com/apache/incubator-tvm/pull/4905)
- Add a PyTorch to Relay Parser 
[#4497](https://github.com/apache/incubator-tvm/pull/4497)
- Support data types for CSourceModuleCodegen args and output 
[#4934](https://github.com/apache/incubator-tvm/pull/4934)
- Clean up and refactor PyTorch frontend 
[#4944](https://github.com/apache/incubator-tvm/pull/4944)
- Relay pass to use fast exp/tanh 
[#4873](https://github.com/apache/incubator-tvm/pull/4873)
- BatchNorm support with run-time mean and variance calculation 
[#4990](https://github.com/apache/incubator-tvm/pull/4990)
- Reduce plevel of conv2d winograd implementation on cuda 
[#4987](https://github.com/apache/incubator-tvm/pull/4987)
- Add operation tan to TVM 
[#4938](https://github.com/apache/incubator-tvm/pull/4938)
- Outline and inline lifted functions for external codegen 
[#4996](https://github.com/apache/incubator-tvm/pull/4996)
- Remove primitive attribute from composite function 
[#5014](https://github.com/apache/incubator-tvm/pull/5014)
- Refactor Relay Python to use new FFI 
[#5077](https://github.com/apache/incubator-tvm/pull/5077)
- Fix relay node registration after refactor 
[#5083](https://github.com/apache/incubator-tvm/pull/5083)
- Codegen_c.h should include relay.function 
[#5093](https://github.com/apache/incubator-tvm/pull/5093)
- Move expr.Function to function.py 
[#5087](https://github.com/apache/incubator-tvm/pull/5087)
- Propagate constant to subgraphs 
[#5094](https://github.com/apache/incubator-tvm/pull/5094)
- Adjust strategy plevel to achieve expected performance by default 
[#5118](https://github.com/apache/incubator-tvm/pull/5118)
- Added an AnnotatedRegion utility class 
[#5030](https://github.com/apache/incubator-tvm/pull/5030)
- Support TupleGetItem in body of pattern 
[#5106](https://github.com/apache/incubator-tvm/pull/5106)
- Partition graph codestyle fixes 
[#5202](https://github.com/apache/incubator-tvm/pull/5202)
- Re-wrote the Graph Partitioner to support multiple outputs 
[#5143](https://github.com/apache/incubator-tvm/pull/5143)
- Fixes to MergeCompilerRegions 
[#5195](https://github.com/apache/incubator-tvm/pull/5195)
- Refactor build module to take IRModule 
[#4988](https://github.com/apache/incubator-tvm/pull/4988)
- Separate analysis and transform passes 
[#5035](https://github.com/apache/incubator-tvm/pull/5035)
- Relay Node::make to constructor 
[#5128](https://github.com/apache/incubator-tvm/pull/5128)
- relay::StructuralHash to tvm::StructuralHash 
[#5166](https://github.com/apache/incubator-tvm/pull/5166)
- Conditions updated to cover better user scenarios 
[#5043](https://github.com/apache/incubator-tvm/pull/5043)
- Replace UseDefaultCompiler with GetAttr 
[#5088](https://github.com/apache/incubator-tvm/pull/5088)
- Return empty CSourceModule when no lowered_funcs exists in Relay mod 
[#4847](https://github.com/apache/incubator-tvm/pull/4847)
- [Runtime][Relay][Cleanup] Clean up the memory pass to enable heterogeneous 
execution support ([#5324](https://github.com/apache/incubator-tvm/pull/5324))
- [RELAY] Remove re-exports of tvm.transform 
([#5337](https://github.com/apache/incubator-tvm/pull/5337))
- [Refactor] Add memoized expr translator for use by backend codegen 
([#5325](https://github.com/apache/incubator-tvm/pull/5325))
- Legalize - Use Non-recursive Rewriter 
([#5296](https://github.com/apache/incubator-tvm/pull/5296))
- Add additional check before re-using the cached match 
[#5552](https://github.com/apache/incubator-tvm/pull/5552)
- Remove kCompiler attr from external functions 
[#5615](https://github.com/apache/incubator-tvm/pull/5615)
- Pattern Language MergeComposite 
[#5656](https://github.com/apache/incubator-tvm/pull/5656)
- Support Tuple Output in C/DNNL Codegen 
[#5701](https://github.com/apache/incubator-tvm/pull/5701)
- Infer types in MergeComposite 
[#5766](https://github.com/apache/incubator-tvm/pull/5766)
- Convert PatternGrouper to do pre-order, non-recursive analysis 
[#5653](https://github.com/apache/incubator-tvm/pull/5653)
- Remove constants from partitioned functions 
[#5663](https://github.com/apache/incubator-tvm/pull/5663)
- Add a check for null function attributes 
[#5674](https://github.com/apache/incubator-tvm/pull/5674)
- Add ConstantPattern [#5689](https://github.com/apache/incubator-tvm/pull/5689)
- Conditionally Embedding Constants in Partitioned Functions 
[#5693](https://github.com/apache/incubator-tvm/pull/5693)
- Simplify Pattern API Implementations 
[#5703](https://github.com/apache/incubator-tvm/pull/5703)
- Add ShapePattern and DataTypePattern 
[#5760](https://github.com/apache/incubator-tvm/pull/5760)
- Remove unnecessary print 
[#5642](https://github.com/apache/incubator-tvm/pull/5642)
- Improve Shape Func handling for Tuple inputs 
[#5467](https://github.com/apache/incubator-tvm/pull/5467)
- Relay updated with String 
[#5578](https://github.com/apache/incubator-tvm/pull/5578)
- Fix the creation of tuple of tuples in PartitionGraph 
[#5616](https://github.com/apache/incubator-tvm/pull/5616)
- Preserve type information in Merge Composite 
[#5640](https://github.com/apache/incubator-tvm/pull/5640)
- Move compiler_begin/end_op to local static objects 
[#5622](https://github.com/apache/incubator-tvm/pull/5622)
- Fix dataflow_pattern.rewrite() hang if Match in IR 
[#5680](https://github.com/apache/incubator-tvm/pull/5680)
- Fix segfault in pretty print when ObjectRef is null 
[#5681](https://github.com/apache/incubator-tvm/pull/5681)
- Move fallback_device to config 
[#5690](https://github.com/apache/incubator-tvm/pull/5690)
- Replace build_config with PassContext 
[#5698](https://github.com/apache/incubator-tvm/pull/5698)
- Clear compile engine after task extraction 
[#5724](https://github.com/apache/incubator-tvm/pull/5724)
- Add storage_order ignore in pooling layer. 
[#5781](https://github.com/apache/incubator-tvm/pull/5781)
- Tweak cublas/cudnn priority level 
[#5820](https://github.com/apache/incubator-tvm/pull/5820)
- Skip Unknown Function Symbols 
[#5888](https://github.com/apache/incubator-tvm/pull/5888)
- Allow every runtime module to handle constants 
[#5885](https://github.com/apache/incubator-tvm/pull/5885)
- handle Tuple/TupleGetItem in first order gradient 
[#5946](https://github.com/apache/incubator-tvm/pull/5946)
- Add resnet-3d & Update network definitions for NHWC layout 
[#5945](https://github.com/apache/incubator-tvm/pull/5945)
- Use TargetNode::attrs for Target serialization 
[#5993](https://github.com/apache/incubator-tvm/pull/5993)
- Each option of the target string should only contain one ‘=’ 
[#5988](https://github.com/apache/incubator-tvm/pull/5988)
- Rename target_id => target_kind 
[#6199](https://github.com/apache/incubator-tvm/pull/6199)
- 64-bit RPi4b target [#6211](https://github.com/apache/incubator-tvm/pull/6211)
- Small bug fix for Conv1D imports. 
[#5995](https://github.com/apache/incubator-tvm/pull/5995)
- Move invoke_tvm_op and shape_func to vm dialect 
[#5958](https://github.com/apache/incubator-tvm/pull/5958)
- GRU Layer Support [#6020](https://github.com/apache/incubator-tvm/pull/6020)
- Add pass for getting calibration data from a relay module 
[#5997](https://github.com/apache/incubator-tvm/pull/5997)
- Merge two consecutive reshape ops 
[#6052](https://github.com/apache/incubator-tvm/pull/6052)
- Add operation scatter_add to relay, based on scatter implementation. 
[#6030](https://github.com/apache/incubator-tvm/pull/6030)
- i64 indices [#5235](https://github.com/apache/incubator-tvm/pull/5235)
- Port eliminate_common_subexpr to non-recursive form 
[#6134](https://github.com/apache/incubator-tvm/pull/6134)
- Fix interpreter for dynamic shape input of ndarray_size 
[#6086](https://github.com/apache/incubator-tvm/pull/6086)
- Allow to config allocator type and refactor vm code structure 
[#6105](https://github.com/apache/incubator-tvm/pull/6105)
- Handle ndarray_size in FoldConstant 
[#6156](https://github.com/apache/incubator-tvm/pull/6156)
- Fix handling when converting constant nodes with types of int64 or float64 
[#6159](https://github.com/apache/incubator-tvm/pull/6159)
- Add ReshapeTensor instruction in the VM to replace the reshape op 
[#6089](https://github.com/apache/incubator-tvm/pull/6089)
- Support combine multiple dense op just into dense 
[#6062](https://github.com/apache/incubator-tvm/pull/6062)
- Add unbiased variance op and corresponding support in pytorch frontend 
[#6232](https://github.com/apache/incubator-tvm/pull/6232)
- Specify additional layouts in convert layout pass 
[#5422](https://github.com/apache/incubator-tvm/pull/5422)
- Safe check added for Merge Composite Call Node 
[#5562](https://github.com/apache/incubator-tvm/pull/5562)
- Non recursive partitioning 
[#5493](https://github.com/apache/incubator-tvm/pull/5493)
- Make the max number of fused ops configurable 
[#6327](https://github.com/apache/incubator-tvm/pull/6327)
- Implementation of the dynamic pad operator 
[#6284](https://github.com/apache/incubator-tvm/pull/6284)
- change device annotation from post DFS to recursive 
[#6124](https://github.com/apache/incubator-tvm/pull/6124)
- Make check stricter: disallow inserting function with free vars into module 
[#6313](https://github.com/apache/incubator-tvm/pull/6313)
- Make check stricter by using Feature. Fixed multiple bugs 
[#6326](https://github.com/apache/incubator-tvm/pull/6326)
- Resize support for NCHW-convertible layouts 
[#6293](https://github.com/apache/incubator-tvm/pull/6293)
- Make AutoDiff thread through global function 
[#6336](https://github.com/apache/incubator-tvm/pull/6336)
- Create Interpreter for each constant subgraph 
[#6195](https://github.com/apache/incubator-tvm/pull/6195)
- Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass 
[#5826](https://github.com/apache/incubator-tvm/pull/5826)
- Expose relay BindParamsByName to Python 
[#4751](https://github.com/apache/incubator-tvm/issues/4751)
- Implement pass manager tracing API 
[#4782](https://github.com/apache/incubator-tvm/issues/4782)
- Move Ops in relay.op.contrib.* 
[#4942](https://github.com/apache/incubator-tvm/pull/4942)
- Conditions updated to cover better user scenarios 
[#4951](https://github.com/apache/incubator-tvm/pull/4951)
- [External codegen] Add test cases for fused ops with manual annotation 
([#4741](https://github.com/apache/incubator-tvm/pull/4741))
- Multiple output support, reshape, split ops added 
[#6296](https://github.com/apache/incubator-tvm/pull/6296)
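
Many of the pattern-language and MergeComposite items above revolve around dataflow patterns like the following; the pattern, the composite name, and the assumption that `mod` already exists are illustrative only.

```python
from tvm import relay
from tvm.relay.dataflow_pattern import is_constant, is_op, wildcard

# conv2d -> bias_add -> relu, with constant weights and bias.
conv_bias_relu = is_op("nn.relu")(
    is_op("nn.bias_add")(
        is_op("nn.conv2d")(wildcard(), is_constant()),
        is_constant(),
    )
)

pattern_table = [("example.conv2d_bias_relu", conv_bias_relu)]
mod = relay.transform.MergeComposite(pattern_table)(mod)  # `mod` assumed to exist
```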

## Operator Coverage

- Allow empty tensor for reshape, tile and strided_slice 
[#4618](https://github.com/apache/incubator-tvm/issues/4618)
- Fix meaning of conv2d_transpose output_padding parameter 
[#4708](https://github.com/apache/incubator-tvm/issues/4708)
- Remove cpp upsampling and resize op 
[#4769](https://github.com/apache/incubator-tvm/issues/4769)
- upsample operator 'NCHWinic' format support. 
[#4791](https://github.com/apache/incubator-tvm/pull/4791)
- Injective schedule improvement 
[#4786](https://github.com/apache/incubator-tvm/pull/4786)
- Enable vectorization on fp16 type 
[#4867](https://github.com/apache/incubator-tvm/pull/4867)
- Support for Int8 schedules - CUDA/x86 
[#5031](https://github.com/apache/incubator-tvm/pull/5031)
- New PR to re-add tan to TVM 
[#5025](https://github.com/apache/incubator-tvm/pull/5025)
- Register topi schedule for Relay fast_exp and fast_tanh 
[#5131](https://github.com/apache/incubator-tvm/pull/5131)
- Move Dilation2d from nn to image namespace 
[#5110](https://github.com/apache/incubator-tvm/pull/5110)
- Use Thrust sort for argsort and topk 
[#5097](https://github.com/apache/incubator-tvm/pull/5097)
- Conv2d and Dense ops support on Tensor Core 
[#5099](https://github.com/apache/incubator-tvm/pull/5099)
- Adding a few missing math intrin 
[#5011](https://github.com/apache/incubator-tvm/pull/5011)
- [TOPI] Using x86 schedules for ARM conv2d 
([#5334](https://github.com/apache/incubator-tvm/pull/5334))
- [TOPI-ARM] Do not alter layout if layout is NHWC 
([#5350](https://github.com/apache/incubator-tvm/pull/5350))
- [TOPI] Setting workload correctly for Depthwise Spatial conv ARM. 
([#5182](https://github.com/apache/incubator-tvm/pull/5182))
- [Relay][OP] Add fast_erf implementation 
([#5241](https://github.com/apache/incubator-tvm/pull/5241))
- [Topi] Tensorcore support for Conv3D 
([#5284](https://github.com/apache/incubator-tvm/pull/5284))
- [intrin] a few more math functions 
([#5468](https://github.com/apache/incubator-tvm/pull/5468))
- [Intrinsic] Add log1p, ldexp, atan2, hypot, nextafter, copysign 
([#5312](https://github.com/apache/incubator-tvm/pull/5312))
- [relay][topi] Add operation relay.nn.dilate() which calls topi.nn.dilate() 
([#5331](https://github.com/apache/incubator-tvm/pull/5331))
- [Topi x86] Missing vectorize for depthwise conv2d. 
([#5196](https://github.com/apache/incubator-tvm/pull/5196))
- [TOPI x86] Adding unroll_kw config option for depthwise conv2d 
([#5197](https://github.com/apache/incubator-tvm/pull/5197))
- [Topi] Break down topi.cc into smaller files 
([#5253](https://github.com/apache/incubator-tvm/pull/5253))
- ReduceLogSumExp Operator support 
[#5453](https://github.com/apache/incubator-tvm/pull/5453)
- Math ops added [#5502](https://github.com/apache/incubator-tvm/pull/5502)
- Enable blocking format in x86 conv2d and fold scale axis 
[#5357](https://github.com/apache/incubator-tvm/pull/5357)
- Add operation gather to relay. 
[#5716](https://github.com/apache/incubator-tvm/pull/5716)
- Add storage_order ignore in pooling layer. 
[#5781](https://github.com/apache/incubator-tvm/pull/5781)
- Fix bifrost spatial packing conv2d auto tune 
[#5684](https://github.com/apache/incubator-tvm/pull/5684)
- Fix reshape usage in ARM schedule 
[#5732](https://github.com/apache/incubator-tvm/pull/5732)
- Block sparse dense on cuda 
[#5746](https://github.com/apache/incubator-tvm/pull/5746)
- Improve CUDA softmax scheduling 
[#5600](https://github.com/apache/incubator-tvm/pull/5600)
- pass-by-value -> pass-by-const-reference 
[#5783](https://github.com/apache/incubator-tvm/pull/5783)
- Using MKL blas for quantized dense 
[#6115](https://github.com/apache/incubator-tvm/pull/6115)
- topi -> tvm/topi [#6186](https://github.com/apache/incubator-tvm/pull/6186)
- Use auto-tuner to improve conv2d_gemm performance 
[#6117](https://github.com/apache/incubator-tvm/pull/6117)
- Improve CUDA conv2d_transpose_nchw 
[#4762](https://github.com/apache/incubator-tvm/issues/4762)
- Add CUDA conv2d for NHWC layout 
[#4737](https://github.com/apache/incubator-tvm/issues/4737)
- conv3d_ndhwc schedule 
[#4775](https://github.com/apache/incubator-tvm/issues/4775)
- Fast exponent [#4790](https://github.com/apache/incubator-tvm/pull/4790)
- Add Scatter to Topi/Relay/ONNX via hybrid script 
[#5619](https://github.com/apache/incubator-tvm/pull/5619)
- Split MKL from BLAS. 
[#6182](https://github.com/apache/incubator-tvm/pull/6182)
- Change the meaning of conv3d_transpose output_padding to match 
conv{1,2}d_transpose [#6065](https://github.com/apache/incubator-tvm/pull/6065)
- Gather op support added 
[#6013](https://github.com/apache/incubator-tvm/pull/6013)
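
As a small sketch of the fast math ops whose TOPI schedules are registered above, the FastMath pass rewrites exp/tanh to their fast variants; the toy function below is illustrative.

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 128), dtype="float32")
func = relay.Function([x], relay.exp(relay.tanh(x)))
mod = tvm.IRModule.from_expr(func)

mod = relay.transform.InferType()(mod)
mod = relay.transform.FastMath()(mod)  # exp/tanh become fast_exp/fast_tanh
print(mod)
```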

## Runtime and Backend

- Cythonize NDArray.copyto 
([#4549](https://github.com/apache/incubator-tvm/pull/4549))
- Unified Object System runtime refactor 
([#4578](https://github.com/apache/incubator-tvm/pull/4578), 
[#4581](https://github.com/apache/incubator-tvm/pull/4581), 
[#4603](https://github.com/apache/incubator-tvm/pull/4603))
- VM profiler: sort VM stats by time 
([#4601](https://github.com/apache/incubator-tvm/pull/4601))
- Update RPC runtime to allow remote module as arg 
([#4462](https://github.com/apache/incubator-tvm/pull/4462))
- Refactoring system lib and dso lib into library module 
([#4481](https://github.com/apache/incubator-tvm/pull/4481))
- Improve TSIM virtual memory mapping 
([#4545](https://github.com/apache/incubator-tvm/pull/4545))
- make adt tag signed 
[#4605](https://github.com/apache/incubator-tvm/issues/4605)
- Improve TVMBackendPackedCFunc to allow return val 
[#4637](https://github.com/apache/incubator-tvm/issues/4637)
- EdgeTPU runtime for Coral Boards 
[#4698](https://github.com/apache/incubator-tvm/issues/4698)
- Fix memory leakage of TVMByteArray 
[#4856](https://github.com/apache/incubator-tvm/pull/4856)
- Fix TVM_DLL_EXPORT_TYPED_FUNC to work on Windows 
[#4955](https://github.com/apache/incubator-tvm/pull/4955)
- Fix memory leak when using openMP 
[#4811](https://github.com/apache/incubator-tvm/pull/4811)
- Export GraphRuntime in tvm_runtime.dll 
[#5002](https://github.com/apache/incubator-tvm/pull/5002)
- MISRA-C compliant TVM runtime 
[#3934](https://github.com/apache/incubator-tvm/pull/3934)
- Update the type_keys to reflect the code-org 
[#5074](https://github.com/apache/incubator-tvm/pull/5074)
- Fix AttrEqual for Array and StrMap, double 
[#5054](https://github.com/apache/incubator-tvm/pull/5054)
- Fix unused-value warning 
[#5140](https://github.com/apache/incubator-tvm/pull/5140)
- crt error handling [#5147](https://github.com/apache/incubator-tvm/pull/5147)
- Bundle deployment with static linking 
[#5158](https://github.com/apache/incubator-tvm/pull/5158)
- Implemented kDLCPUPinned (cudaMallocHost) 
[#4985](https://github.com/apache/incubator-tvm/pull/4985)
- Explicitly cast min/max operands 
[#5090](https://github.com/apache/incubator-tvm/pull/5090)
- ref_counter -> ref_counter_ 
[#5184](https://github.com/apache/incubator-tvm/pull/5184)
- Expose runtime::String to Python 
([#5212](https://github.com/apache/incubator-tvm/pull/5212))
- [PY][FFI] Refactor runtime.String to subclass str 
([#5426](https://github.com/apache/incubator-tvm/pull/5426))
- [RUNTIME] Auto conversion from str to runtime::String in PackedFunc 
([#5251](https://github.com/apache/incubator-tvm/pull/5251))
- [RUNTIME] Improved Packed FFI for optional. 
([#5478](https://github.com/apache/incubator-tvm/pull/5478))
- [Hexagon] Add hexagon_posix.cc to TVM/RT sources in the right place 
([#5346](https://github.com/apache/incubator-tvm/pull/5346))
- Fix workspace [#5503](https://github.com/apache/incubator-tvm/pull/5503)
- Store nullptr PackedFunc as nullptr for better error propagation 
[#5540](https://github.com/apache/incubator-tvm/pull/5540)
- Improve PackedFunc robustness 
[#5517](https://github.com/apache/incubator-tvm/pull/5517)
- Fix segfault in WorkspacePool's destructor (#5632) 
[#5636](https://github.com/apache/incubator-tvm/pull/5636)
- Resolve constexpr issue in debug mode. 
[#5651](https://github.com/apache/incubator-tvm/pull/5651)
- Add compile_shared option to linux compile utility fn 
[#5751](https://github.com/apache/incubator-tvm/pull/5751)
- Call sync in CopyFromRemote and CopyToRemote 
[#5512](https://github.com/apache/incubator-tvm/pull/5512)
- Fix the multihop cpu case 
[#5522](https://github.com/apache/incubator-tvm/pull/5522)
- Improve RPCServer AsyncIO support. 
[#5544](https://github.com/apache/incubator-tvm/pull/5544)
- Modularize the RPC infra 
[#5484](https://github.com/apache/incubator-tvm/pull/5484)
- Overload string operators 
[#5806](https://github.com/apache/incubator-tvm/pull/5806)
- Only initialize required module 
[#5926](https://github.com/apache/incubator-tvm/pull/5926)
- If a param is not in the input, still consume its data 
[#5990](https://github.com/apache/incubator-tvm/pull/5990)
- init TVMPackedFunc’s name 
[#6044](https://github.com/apache/incubator-tvm/pull/6044)
- Enable auto conversion String->DLDataType 
[#6214](https://github.com/apache/incubator-tvm/pull/6214)
- Support random fill [#5913](https://github.com/apache/incubator-tvm/pull/5913)
- Use new to avoid exit-time de-allocation order 
[#6292](https://github.com/apache/incubator-tvm/pull/6292)
- Add parallel_for support to run a loop in parallel 
[#6275](https://github.com/apache/incubator-tvm/pull/6275)
- Solve ARM BIG.LITTLE heterogeneous multicores 
[#4747](https://github.com/apache/incubator-tvm/issues/4747)
- [RUNTIME] Quick fix PackedFunc String passing 
([#5266](https://github.com/apache/incubator-tvm/pull/5266))
- Introduce runtime::String::CanConvertFrom 
[#5718](https://github.com/apache/incubator-tvm/pull/5718)
- Restore the StrMap behavior in JSON/SHash/SEqual 
[#5719](https://github.com/apache/incubator-tvm/pull/5719)
- Support overriding RPCWatchdog termination behavior on Android and other 
platforms [#6216](https://github.com/apache/incubator-tvm/pull/6216)
- Set NDArray::Container.shape_ in NDArray::FromDLPack 
([#5301](https://github.com/apache/incubator-tvm/pull/5301))
- Enable x86 cpu cache flush 
[#5914](https://github.com/apache/incubator-tvm/pull/5914)
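
A small sketch of the PackedFunc string handling touched by several items above: Python `str` arguments and return values cross the FFI as `runtime::String`; the registered function name is illustrative.

```python
import tvm

@tvm.register_func("example.greet")
def greet(name):
    # `name` arrives as a Python string converted from runtime::String.
    return "hello, " + name

f = tvm.get_global_func("example.greet")
print(f("tvm"))  # "hello, tvm"
```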
