Re: [apache/incubator-tvm] [RELEASE] Bump version to 0.7.0 (#6614)

2020-10-02 Thread Tianqi Chen
Merged #6614 into master.


[Apache TVM Discuss] [Development] Block Diagram for Chisel VTA

2020-10-02 Thread Minisparrow via Apache TVM Discuss


Hi liangfu, could you give a guide on how to generate this block diagram from Chisel?







Re: [apache/incubator-tvm] [RELEASE] Update NEWS.md for v0.7 (#6613)

2020-10-02 Thread Tianqi Chen
Merged #6613 into master.


Re: [apache/incubator-tvm] [RFC] v0.7 Release Planning (#6421)

2020-10-02 Thread Tianqi Chen
https://github.com/apache/incubator-tvm/tree/v0.7 


[apache/incubator-tvm] Pre-release v0.7.0.rc0 - Apache TVM (incubating) v0.7.0.rc0

2020-10-02 Thread ziheng
# Introduction
v0.7 brings many major features. The community worked together to refactor the internal code base to bring a unified IR code structure, with a unified IRModule, type system, and pass infrastructure. We have also brought many exciting new features, some highlights include:

* Initial automatic scheduling support
* Initial command line driver interface
* WebGPU and WebAssembly support
* Better first-class Rust support in the codebase
* Initial Hexagon support
* Bring your own codegen (BYOC) support

The community also continues to bring high-quality improvements to the existing 
modules, including but not limited to: better frontend coverage, performance, 
quantization, uTVM, and dynamic shape support.
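
As a small illustration of the unified IRModule and pass infrastructure, here is a minimal sketch based on the public v0.7 Python API (the toy function and the choice of passes are only for demonstration):

```python
import tvm
from tvm import relay

# Build a toy Relay function and wrap it in the unified IRModule.
x = relay.var("x", shape=(1, 16), dtype="float32")
func = relay.Function([x], relay.nn.relu(x + relay.const(1.0)))
mod = tvm.IRModule.from_expr(func)

# Compose Relay passes with the unified pass infrastructure and run them.
seq = tvm.transform.Sequential([
    relay.transform.InferType(),
    relay.transform.FoldConstant(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)
```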

# New Features
## Automatic Scheduling (Experimental)
* Phase 0: Ansor minimum system for auto schedule generating #5962
* Phase 1: Access Analyzer #6103
* Phase 1: Add `follow_split` and `follow_fused_split` steps #6142
* Phase 1: Add `pragma`/`storage_align`/`rfactor` steps #6141
* Phase 1: Add RPC Runner #6077
* Phase 1: Add `annotation`/`compute_at`/`compute_root`/`compute_inline` steps 
#6073
* Phase 1: Add `cache_read`/`cache_write` steps #6107
* Phase 1: Rename namespace from `auto_schedule` to `auto_scheduler` #6059
* Phase 1: The base class for cost models #6187
* Phase 1: feature extraction for cost models #6190
* Phase 1: XGBoost Cost Model #6270
* Phase 2: Basic GPU Sketch Search Policy #6269
* Phase 2: Evolutionary Search #6310
* Phase 2: Update heavy operations with `parallel_for` #6348
* Parallel the InitPopulation (#6512)
* Tutorial: Using the template-free auto-scheduler on CPU (#6488)
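
A minimal sketch of driving the experimental auto-scheduler, following the API used in the v0.7 tutorial (#6488); the matmul workload and tiny trial count are illustrative only, and this experimental interface may change in later releases:

```python
import tvm
from tvm import te, auto_scheduler

@auto_scheduler.register_workload
def matmul(N, M, K):
    # Declare a simple matmul compute as the workload to auto-schedule.
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]

target = tvm.target.Target("llvm")
task = auto_scheduler.create_task(matmul, (512, 512, 512), target)
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=10,  # kept tiny for illustration
    measure_callbacks=[auto_scheduler.RecordToFile("matmul.json")],
)
sch, args = auto_scheduler.auto_schedule(task, tuning_options=tune_option)
func = tvm.build(sch, args, target)
```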

## BYOC
* External codegen support in Relay (#4482),(#4544)
* Bring Your Own Codegen Guide -- Part 1 #4602
* Bring Your Own Codegen Guide -- Part 2 #4718
* Relay annotation and partitioning for external compilers #4570
* JSON Runtime with DNNL End-to-End Flow #5919
* Handle one symbol for each runtime #5989
* Run accelerator specific optimizations #6068
* Arm Compute Library integration #5915
* Retire the example json runtime #6177
* `json_node.h` should include `data_type.h` #6224
* Improve installation tutorial #6170
* Add support for dense (fully connected) layer #6254
* Introduce the Ethos-N BYOC integration #6222
* Enable remote device via environment variables #6279
* Improved pooling support #6248
* Add support for quantized convolution #6335
* CoreML codegen #5634
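
A minimal sketch of the common BYOC partitioning flow, using "dnnl" as the example external compiler (this assumes TVM was built with the corresponding codegen enabled):

```python
import tvm
from tvm import relay

def partition_for_external_codegen(mod, compiler="dnnl"):
    # Annotate ops the external codegen supports, merge adjacent supported
    # regions, then split them out into functions handled by that codegen.
    seq = tvm.transform.Sequential([
        relay.transform.AnnotateTarget(compiler),
        relay.transform.MergeCompilerRegions(),
        relay.transform.PartitionGraph(),
    ])
    with tvm.transform.PassContext(opt_level=3):
        return seq(mod)
```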

## Operator Coverage
* Add `strided_set` operation (#4303)
* Add support for conv3d (#4400), pool3d (#4478), 3d upsampling ops (#4584)
* Add group convolution for VTA (#4421)
* Add 1d deconvolution op (#4476)
* Allow batch matmul to be fused into injective ops (#4537)
* Add native depthtospace and spacetodepth operators (#4566)
* Add CUDNN conv3d support (#4418)
* Dilation2D operator support #5033
* Isfinite operator #4981
* Unravel Index operator #5082
* Add thrust support for nms #5116
* Resize3d, Upsample3d op support #5633
* Add operator Correlation #5628
* `affine_grid` and `grid_sample` #5657
* Sparse to dense operator #5447
* `Conv3d_transpose` op support added #5737
* add op `crop_and_resize` #4417
* Add bitwise ops #4815
* Support dynamic NMS (Non-Maximum Suppression), symbolic begin, end, and 
strides for `strided_slice` #4312
* ReverseSequence operator #5495
* Conv1D #4639
* 1D Pooling #4663
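
For illustration, here is one of the newly added operators used from Relay (a sketch with arbitrary shapes; Conv1D was added in #4639):

```python
from tvm import relay

data = relay.var("data", shape=(1, 16, 128), dtype="float32")     # NCW layout
weight = relay.var("weight", shape=(32, 16, 3), dtype="float32")  # OIW layout
out = relay.nn.conv1d(data, weight, strides=1, padding=1)
func = relay.Function([data, weight], out)
```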

## Quantization
* Channel wise quantization - Quantize & Requantize #4629
* Support QNN ops. #5066
* Adding support for QNN subtract op #5153
* TFLite QNN Tutorial #5595
* Tutorial: Deploy Quantized Model on CUDA #4667
* Support asymmetric per-layer quantized operators #6109
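
A minimal sketch of the automatic quantization flow covered by the CUDA tutorial (#4667); `mod`/`params` are assumed to come from a Relay frontend importer, and global-scale calibration is just the simplest option:

```python
from tvm import relay

def quantize_model(mod, params):
    # Quantize a float Relay module to int8 using global-scale calibration.
    with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
        return relay.quantize.quantize(mod, params=params)
```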

## Relay
* Add convertlayout pass in Relay (#4335, #4600)
* Added Merge Composite pass #4771
* Call graph for relay #4922
* Add inline pass #4927
* Target annotation for external codegen #4933
* GradientCell Relay Pass #5039
* Add MergeCompilerRegions pass #5134
* Non-recursive Graph Visitor and Rewriter (#4886)
* [Blocksparse] Pipeline for lowering dense model to sparse-dense (#5377)
* Relay op strategy #4644
* Static Tensor Array (#5103)
* Memory planner (part 1) #5144
* ONNX codegen #5052
* Add Parser 2.0 #5932, part 2 #6162
* Basic block normal form #6152
* Convert Layout pass. #4664
* Pattern Language, Matcher, Rewriter, and Function Partitioner #5231
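
A minimal sketch of the new pattern language (#5231), matching a conv2d followed by relu (the toy expression is only for illustration):

```python
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

# Pattern: relu(conv2d(x, w)) with any inputs.
pattern = is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))

x = relay.var("x", shape=(1, 3, 32, 32))
w = relay.var("w", shape=(8, 3, 3, 3))
expr = relay.nn.relu(relay.nn.conv2d(x, w))
assert pattern.match(expr)
```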

## Runtime and Backend
* Add ADTObject POD container type (#4346)
* TFLite RPC runtime (#4439)
* Standardized graph runtime export (#4532)
* MISRA-C compliant TVM runtime #3934
* Add String container #4628
* Introduce Virtual Memory Allocator to CRT (#5124)
* Initial implementation of Hexagon runtime support (#5252)
* FastRPC interface for Hexagon runtime (#5353)
* CoreML Runtime (#5283)
* AutoTVM + uTVM for Cortex-M7 (#5417)
* Windows Support for cpp_rpc (#4857)
* Implement TVMDSOOp(TensorFlow custom op) for TVM runtime (#4459)
* WebGPU support #5545
* TVM WebAssembly JS Runtime #5506
* Hexagon driver for offloading kernels to simulator
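
A minimal sketch of the standardized export/load path with the module-based graph runtime interface, as used in the v0.7 tutorials (the tiny relu network is only for illustration):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

# Compile a tiny Relay module, export it, reload it, and run it.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

lib.export_library("deploy.so")
loaded = tvm.runtime.load_module("deploy.so")
module = graph_runtime.GraphModule(loaded["default"](tvm.cpu(0)))
module.set_input("x", np.ones((1, 8), dtype="float32"))
module.run()
print(module.get_output(0).asnumpy())
```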

[apache/incubator-tvm] [WIP/RFC] Add TVM dependencies to pyproject.toml (#6620)

2020-10-02 Thread Andrew Reusch
Still a WIP, posting for people to take a look at. This should maybe become an 
RFC; I'm happy to write one up. I just needed a `pyproject.toml` for a project 
I'm working on now, so I'm posting it.

It's really hard to use TVM reproducibly outside the docker containers, and 
depending on what hardware you have at work/home (are they different now?), you 
can't always run the docker container you need (i.e. ci-gpu). Clearly 
documenting our Python dependencies will help.

I think *just* checking this in as-is would be bad because:
 - I'm just copying dependencies from one place to another. There isn't 
anything to enforce that our docker images, and therefore our CI, actually use 
the selected versions.
 - That aside, it would be great to generate a constraints file or some other 
log *outside* the docker images, so that you can have a clear record of e.g. 
which version of `pylint` is used to lint without needing to pull the container.
 - In order to actually use `poetry`, I had to hack `setup.py` to hardcode the 
library location (see the sketch after this list). Maybe we should just do this. 
Right now if you have `libtvm_runtime.so` in `PATH` then it just uses that 
one--very bad in development! The reason for this is that dependency tracking 
tools installing packages in editable mode really want to run something like 
`python setup.py egg_info` to determine the library version, and unless you've 
built tvm, poetry will just crash trying to do anything. That's a bad developer 
experience.
 - Making `pyproject.toml` the authoritative source of dependencies (i.e. 
rewriting the docker/install scripts to install only using `poetry`) would be 
bad because non-users of poetry would have to transcribe the dependencies into 
e.g. `requirements.txt`. Yes, tools exist to do this; we should do it for them.
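
Here is a hypothetical sketch of the kind of library lookup `setup.py` ends up doing, and where the hardcoding hack comes in (function and path names are illustrative, not the actual TVM `setup.py`):

```python
import os

def find_libtvm_runtime(build_dir="build"):
    # Prefer the local build tree so a globally installed libtvm_runtime.so
    # never shadows the development build.
    candidate = os.path.join(build_dir, "libtvm_runtime.so")
    if os.path.exists(candidate):
        return os.path.abspath(candidate)
    # Without a built library, `python setup.py egg_info` (which poetry/pip run
    # for editable installs) has nothing to report, hence the crash described above.
    raise RuntimeError("libtvm_runtime.so not found; build TVM first")
```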

Here are some future things I'd like to do to address these issues:
- [ ] Write a script that uses e.g. dephell to auto-generate 
`python/requirements.txt` and `python/requirements-<extra>.txt` for each 
extra section included.
- [ ] Write a unit test that asserts that `python/requirements*.txt` is in sync 
with `pyproject.toml` (see the sketch after this list).
- [ ] Make it easy to install all extras in one command (likely needs output 
from previous script)
- [ ] Rewrite `docker/install/ubuntu_install_<package>.sh` to just run 
`poetry install -E <extra>`.
- [ ] Add `poetry lock` to the docker build process and export the generated 
lockfile somehow...
- [ ] Decide what to do about `setup.py` library location hack.
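
For the sync check, something like this could work (a hypothetical sketch; the file layout and the use of the `toml` package are assumptions, not existing TVM test code):

```python
import re
import toml

def test_requirements_match_pyproject():
    # Package names declared in pyproject.toml (poetry section).
    deps = toml.load("pyproject.toml")["tool"]["poetry"]["dependencies"]
    declared = {name.lower() for name in deps if name.lower() != "python"}
    # Package names listed in the generated requirements file.
    with open("python/requirements.txt") as f:
        listed = {
            re.split(r"[<>=~!\[;]", line.strip(), 1)[0].lower()
            for line in f
            if line.strip() and not line.startswith("#")
        }
    assert declared == listed, declared.symmetric_difference(listed)
```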

Here's why I think we should use something other than `requirements.txt` as 
the authoritative source of dependencies:
 - In order to ensure a smooth user experience, we do have to pin our 
dependencies to *some* degree. Here, I've pinned to the package minor or 
major version where that made sense.
 - However, pinning is of course bad unless you update your pinned deps, so 
some dependency management tool should provide that capability. I'm not at all 
attached to poetry as the tool to use for that, but it can facilitate it.
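
To make the pinning point concrete, this is roughly what such a pin constrains (a sketch using the `packaging` library; poetry would express the same constraint as a caret requirement like `^1.18`):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# A caret-style pin: any 1.x release from 1.18 onward, but not a new major release.
pin = SpecifierSet(">=1.18,<2.0")
assert pin.contains(Version("1.19.2"))
assert not pin.contains(Version("2.0.0"))
```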

You can view, comment on, or merge this pull request online at:

  https://github.com/apache/incubator-tvm/pull/6620

-- Commit Summary --

  * obliterate USE_ANTLR from cmake.config
  * add poetry deps to pyproject.toml
  * initial attempt at setup.py + autodetect libtvm_runtime SO path
  * hack to hardcode in build
  * make pyproject lock

-- File Changes --

M cmake/config.cmake (7)
M pyproject.toml (89)
M python/setup.py (87)
M tests/scripts/task_config_build_cpu.sh (1)
M tests/scripts/task_config_build_gpu.sh (1)
M tests/scripts/task_config_build_wasm.sh (1)

-- Patch Links --

https://github.com/apache/incubator-tvm/pull/6620.patch
https://github.com/apache/incubator-tvm/pull/6620.diff
