Re: [apache/incubator-tvm] [DISCUSS][RFC] Apache TVM Graduation (#6299)

2020-08-18 Thread Sebastian
+1 from me as well!

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/issues/6299#issuecomment-675857185

[TVM Discuss] [Development] OpenCL AutoTVM questions

2019-05-21 Thread Sebastian via TVM Discuss


I'm trying to enable AutoTVM for OpenCL (the `intel_graphics` target). So far I 
have had some success in that area, but the measured times are several times 
worse than with the generic scheduler.

To begin with, I am focusing only on the conv2d operation (since this is also 
the only one currently present in the `intel_graphics` TOPI). I used the 
`conv2d_direct.py` file from CUDA as a dummy test file (this scheduler seemed 
the easiest) to get an idea of what is required to write my own. There are a few 
things I don't understand, and I'd appreciate guidance on how such a scheduler 
should be written and what values I should provide. The two most pressing 
questions for now are:

1. Where do all of the splitting and other numerical values in 
`schedule_direct_cuda` come from?

1. How did you decide on the `tvm.thread_axis` threads?
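
To the first question: in AutoTVM templates those numbers are typically not hand-picked constants but tuning knobs (e.g. `cfg.define_split`), and the tuner searches over all valid factorizations of each axis. A rough pure-Python sketch of how such a split space is enumerated (the helper names are illustrative, not TVM's actual implementation):

```python
from itertools import product

def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def split_candidates(extent, num_outputs):
    """Enumerate tuples (f1, ..., fn) with f1 * ... * fn == extent,
    mimicking how a define_split knob builds its search space."""
    cands = []
    for factors in product(divisors(extent), repeat=num_outputs - 1):
        prod = 1
        for f in factors:
            prod *= f
        if extent % prod == 0:
            cands.append((extent // prod,) + factors)
    return cands
```

The tuner then measures candidate configurations on the device and keeps the best one, which is why the final split factors in a tuned schedule can look arbitrary.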





---
[Visit Topic](https://discuss.tvm.ai/t/opencl-autotvm-questions/2665/1) to 
respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click 
here](https://discuss.tvm.ai/email/unsubscribe/b527b1296934833ce41beda0c13583fae846f8fa9bbf7a803e07670fa3433d6b).

Tianqi Chen, UW, Seattle, WA, 98105, United States
http://tracking.discuss.tvm.ai/tracking/unsubscribe?msgid=FN0H3-unCsVvE4avJnv8jg2

[TVM Discuss] [Development] OpenCL AutoTVM questions

2019-06-03 Thread Sebastian via TVM Discuss


Thank you for the answer. I'll check those materials.





---
[Visit Topic](https://discuss.tvm.ai/t/opencl-autotvm-questions/2665/6) to 
respond.


[TVM Discuss] [Development] OpenCL AutoTVM questions

2019-06-04 Thread Sebastian via TVM Discuss


Thank you @Laurawly for the info.
Could you explain a bit more on the topic? In particular, I'd like to better 
understand which parts of the intel_graphics scheduler benefit most from 
subgroups.





---
[Visit Topic](https://discuss.tvm.ai/t/opencl-autotvm-questions/2665/8) to 
respond.


[TVM Discuss] [Development] Use block data format for whole model

2019-08-09 Thread Sebastian via TVM Discuss


I'd like to ask whether there is any plan, or any simple way, to enable the 
blocked (NCHWc) data layout for a whole model instead of the plain (NCHW) 
layout.

After some experimenting with TVM, it seems that the preferred data layout is 
plain. Significant exceptions are the Intel CPU and Intel Graphics schedulers, 
which can convert from plain to block and back, but as an overall approach this 
is not optimal.

It appears that Intel hardware prefers the blocked data layout, and it would be 
profitable for performance to remove those conversions altogether. The idea was 
to decide at model-loading time whether the model should be converted to block 
or plain, but that requires many other changes down the compilation pipeline.

Could anyone comment on what the plan or solution is?
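
For context, the blocked NCHWc layout splits the channel dimension into fixed-size blocks (e.g. 16 for AVX-512) and moves the block innermost. A NumPy sketch of the layout conversions whose repeated insertion into the graph this question is about (function names are illustrative):

```python
import numpy as np

def nchw_to_nchwc(x, c_block):
    """Convert a plain NCHW tensor to the blocked NCHW[x]c layout."""
    n, c, h, w = x.shape
    assert c % c_block == 0, "channel count must be divisible by the block"
    # (N, C, H, W) -> (N, C//c, c, H, W) -> (N, C//c, H, W, c)
    return x.reshape(n, c // c_block, c_block, h, w).transpose(0, 1, 3, 4, 2)

def nchwc_to_nchw(x):
    """Inverse conversion back to plain NCHW."""
    n, cb, h, w, c = x.shape
    # (N, C//c, H, W, c) -> (N, C//c, c, H, W) -> (N, C, H, W)
    return x.transpose(0, 1, 4, 2, 3).reshape(n, cb * c, h, w)
```

Each such pair of reshapes/transposes inserted around an operator costs memory traffic, which is why doing the conversion once for the whole model is attractive.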





---
[Visit 
Topic](https://discuss.tvm.ai/t/use-block-data-format-for-whole-model/3693/1) 
to respond.


[TVM Discuss] [Development] Use block data format for whole model

2019-08-12 Thread Sebastian via TVM Discuss


OK, if I understood the example correctly, it shows a way to take advantage of 
NCHWc when the model is already in that format. What I'm looking for is a more 
universal approach that allows converting the whole model to NCHWc, **not each 
operation individually**.





---
[Visit 
Topic](https://discuss.tvm.ai/t/use-block-data-format-for-whole-model/3693/3) 
to respond.


[Apache TVM Discuss] [Development/pre-RFC] Export TIR to json

2022-03-16 Thread Sebastian Boblest Etas via Apache TVM Discuss


We propose to add an API to TVM that allows users to export the TIR 
representation of the generated kernels as JSON. 

**Motivation** 

JSON is a standard file format and can easily be processed, for example in 
Python. A JSON interface allows analyzing the generated kernels in a 
target-agnostic way; for example, it works for both llvm and c targets. 
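
As a motivating sketch of the kind of target-agnostic analysis such an interface would enable (the JSON structure below is purely hypothetical, not the proposed schema):

```python
import json

# Hypothetical export: kernel names with a flat list of TIR node kinds.
exported = json.dumps({
    "kernels": [
        {"name": "fused_conv2d", "stmts": ["For", "For", "BufferStore"]},
        {"name": "fused_dense", "stmts": ["For", "BufferStore"]},
    ]
})

def count_loops(tir_json):
    """Count the For nodes per kernel in the (hypothetical) export."""
    mod = json.loads(tir_json)
    return {k["name"]: k["stmts"].count("For") for k in mod["kernels"]}
```

The same analysis script would work unchanged whether the kernels were compiled for the llvm or the c target, since it only inspects the exported JSON.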

Our goal is to extract the full TIR representation of the host module and all 
device modules, regardless of the specific executor, target, or runtime. 

In the case of the AOT executor pipeline, this would also contain the 
tvmgen_default___main__ function, which allows reconstructing the graph topology 
of the neural network. For graph executors, the graph can still be accessed via 
module.graph_json. 


**Proposed Implementation**  

We have already conducted some experiments with a JsonScriptPrinter class 
similar to the TVMScriptPrinter class in src/printer/tvmscript_printer.cc, based 
on the discussion here: 
https://discuss.tvm.apache.org/t/why-c-unpacked-api-not-work-anymore/11845/19?u=sebastianboblestetas
 

We use it in driver_api.cc in the build function like this: 
![Screenshot from 2022-03-15 
12-18-47|690x160](upload://1NBROM71B0k0qp1BEDwOZHBGO1r.png) 
 
We make it accessible in Python via `json = module.lib.get_json()`. 
We do not yet know how best to export the TIR representation of the device 
modules. We would add a new boolean configuration option 
`tir.tir_export_to_json` to make this export functionality optional.

We think we can provide a working prototype within a few weeks, but would 
already appreciate feedback now.
@Khoi @MJKlaiber @UlrikHjort @areusch





---
[Visit Topic](https://discuss.tvm.apache.org/t/export-tir-to-json/12329/1) to 
respond.



[Apache TVM Discuss] [Development/pre-RFC] Export TIR to json

2022-03-16 Thread Sebastian Boblest Etas via Apache TVM Discuss


Hi, 

thanks for the reference to this function; I was not aware of it.
However, I tried it on a Module(c) and got this:

```
{
  "root": 1,
  "nodes": [
    {
      "type_key": ""
    },
    {
      "type_key": "runtime.Module"
    }
  ],
  "b64ndarrays": [],
  "attrs": {"tvm_version": "0.9.dev0"}
}
```
This looks like only a very high-level summary. What we would like to have is 
the full representation that the code generators receive when emitting the code. 
Am I missing an option for this function to get more detailed output?





---
[Visit Topic](https://discuss.tvm.apache.org/t/export-tir-to-json/12329/4) to 
respond.



[Apache TVM Discuss] [Development/pre-RFC] Export TIR to json

2022-03-25 Thread Sebastian Boblest Etas via Apache TVM Discuss


1. You are right, we actually tried to save the runtime.Module. We also added 
the TIR export to the runtime.Module because the generated source code is 
attached to it as well. We will try the different options you proposed as soon 
as we find the time.
2. In the meantime we have written an early prototype of our export.
I attached a small sample image of what it looks like, compared to what 
save_json gives for the same IRModule. I do not think save_json gives us the 
required level of detail, but to be honest, I did not look at it very closely.

![save_json|129x500](upload://ul2AYQ2SaJIcAP3BEOwxOz1UXst.png) 
![our_prototype|334x500](upload://eSth3pA8r1EB5C0zqb2oRC0FCii.png) 

3. We have not yet looked into this, sorry.
4. As you can probably see, we want to extract the TIR just before the code 
generator is invoked. We would like to get all the details that also go into 
the final code-generation step.
5. We want to make this useful for as many use cases as possible, so if we can 
make the point in time of the export configurable, we are definitely in favor 
of doing so. I am not sure, however, how much this will affect the 
implementation of the TIR export printer; it might then need configuration 
options as well, right?





---
[Visit Topic](https://discuss.tvm.apache.org/t/export-tir-to-json/12329/7) to 
respond.
