[TVM Discuss] [Questions] What to do with passes

2020-07-01 Thread Thomas V via TVM Discuss
I have a small collection of Relay passes that I found useful and that might be interesting for more general use:
- merge consecutive transpose ops (useful for converted models when the source framework had different conventions; I'm converting from PyTorch),
- merge equal (shape) constants to enable …
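For illustration, here is a minimal sketch of the transpose merging with the dataflow pattern language (my illustration, not the actual pass; it assumes both transposes carry explicit `axes`):

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, wildcard, rewrite

class TransposeMerger(DFPatternCallback):
    def __init__(self):
        super().__init__()
        self.x = wildcard()
        # two consecutive transposes of anything
        self.pattern = is_op("transpose")(is_op("transpose")(self.x))

    def callback(self, pre, post, node_map):
        inner = post.args[0]
        outer_axes = [int(a) for a in post.attrs.axes]
        inner_axes = [int(a) for a in inner.attrs.axes]
        # compose the permutations: output axis i reads input axis inner_axes[outer_axes[i]]
        composed = [inner_axes[a] for a in outer_axes]
        return relay.transpose(node_map[self.x][0], axes=composed)

x = relay.var("x", shape=(2, 3, 4))
expr = relay.transpose(relay.transpose(x, axes=[1, 0, 2]), axes=[2, 1, 0])
print(rewrite(TransposeMerger(), expr))  # a single transpose with axes=[2, 0, 1]
```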

[TVM Discuss] [Questions] Where is the batch_normalization implementation specified in TVM?

2020-06-29 Thread Thomas V via TVM Discuss
I think it is not implemented per se. There is a [`BatchNormToInferUnpack` function](https://github.com/apache/incubator-tvm/blob/78d79923756ea9ed4545d2faef7d514a300d3452/src/relay/transforms/simplify_inference.cc#L34), part of the [SimplifyInference pass](https://tvm.apache.org/docs/api/python/…), …
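To see the unpacking in action, a minimal sketch: run the pass on a module containing `batch_norm` and it gets rewritten into plain arithmetic.

```python
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 16, 8, 8))
gamma, beta = relay.var("gamma", shape=(16,)), relay.var("beta", shape=(16,))
mean, var = relay.var("mean", shape=(16,)), relay.var("var", shape=(16,))

# batch_norm returns a tuple; field 0 is the normalized output
out = relay.nn.batch_norm(data, gamma, beta, mean, var)[0]
mod = tvm.IRModule.from_expr(relay.Function([data, gamma, beta, mean, var], out))

mod = relay.transform.InferType()(mod)          # the pass needs type information
mod = relay.transform.SimplifyInference()(mod)
print(mod)  # batch_norm is gone, replaced by multiply/add/sqrt on the inputs
```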

[TVM Discuss] [Questions] Pattern matching for TupleGetItem

2020-06-24 Thread Thomas V via TVM Discuss
Thank you. Best regards Thomas

[TVM Discuss] [Questions] Pattern matching for TupleGetItem

2020-06-23 Thread Thomas V via TVM Discuss
Hi, is it currently possible to match TupleGetItem for an arbitrary index? If not, would a patch adding this be acceptable (maybe with index=-1 internally and a second public-facing constructor)? Best regards Thomas
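For context, a sketch of the two forms (the fixed-index form already works; the index-free form is exactly the extension proposed here, so treat it as hypothetical at the time of writing):

```python
from tvm.relay.dataflow_pattern import is_op, is_tuple_get_item, wildcard

# nn.batch_norm returns a tuple, so TupleGetItem patterns come up naturally
bn = is_op("nn.batch_norm")(wildcard(), wildcard(), wildcard(), wildcard(), wildcard())

pat_fixed = is_tuple_get_item(bn, 0)  # match extraction of field 0 only
pat_any = is_tuple_get_item(bn)       # proposed: omit the index to match any field
```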

[TVM Discuss] [Questions] Why doesn't nn.layer_norm have TOpPattern?

2020-06-22 Thread Thomas V via TVM Discuss
I think the reason is that you typically want to split the op into its statistics gathering and elementwise operations, so that the parts can fuse with the surrounding ops; keeping it as a single op prevents that. That said, I don't think anyone keeps you from changing that, it's just that the other case (splitting …
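To make the fusion argument concrete, here is a rough sketch (my own, not TVM's actual lowering) of `layer_norm` split into a statistics part plus elementwise ops, each of which can then fuse with its neighbors:

```python
from tvm import relay

def decomposed_layer_norm(x, gamma, beta, axis=-1, eps=1e-5):
    # statistics gathering: two reductions over the normalization axis
    mean = relay.mean(x, axis=axis, keepdims=True)
    var = relay.variance(x, axis=axis, keepdims=True)
    # elementwise part: normalize, then scale and shift
    norm = (x - mean) / relay.sqrt(var + relay.const(eps))
    return norm * gamma + beta

x = relay.var("x", shape=(4, 32))
gamma = relay.var("gamma", shape=(32,))
beta = relay.var("beta", shape=(32,))
print(decomposed_layer_norm(x, gamma, beta))
```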

[TVM Discuss] [Questions] Same shape pattern

2020-06-19 Thread Thomas V via TVM Discuss
Yeah, it all wants static shapes to operate on. But what I'm after is the next step: eliminating all ops not needed in a static setting. This seems important for anything where the graph is created automatically, with the frontend converters as well as differentiation. Best regards Thomas

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Thomas V via TVM Discuss
[quote="mbrookhart, post:13, topic:7012"] I don’t particular want to force users to type their problems before using the pattern language in all cases. [/quote] I can see why. But so it seems that the shape processing gets really tedious here - with the inability to pass .shape back to relay b

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Thomas V via TVM Discuss
The above ZeroZapper code snippet also has the problem.

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Thomas V via TVM Discuss
Oh, that is very likely the case for me here.

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Thomas V via TVM Discuss
Thank you Matt! Oh no. :man_facepalming: (But `checked_type` isn't the solution, unfortunately.) I must admit the FFI is too clever for me; without tab completion I'm lost. I even have a 2-line patch to fix that for classes, but I don't know where to put the unit test...

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Thomas V via TVM Discuss
So with the following rewrites and passes

```python
class ZeroZapp(tvm.relay.dataflow_pattern.DFPatternCallback):
    def __init__(self):
        self.zeros = tvm.relay.dataflow_pattern.is_op("zeros")(
            tvm.relay.dataflow_pattern.wildcard())
        self.other_tensor = tvm.relay.dataflow_pattern.…
```
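Since the snippet is cut off above, here is a self-contained sketch of the same idea (my reconstruction, not the original code; it matches `zeros_like` rather than `zeros`): eliminate additions of a zero tensor.

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, wildcard, rewrite

class ZeroZapp(DFPatternCallback):
    def __init__(self):
        super().__init__()
        self.zeros = is_op("zeros_like")(wildcard())
        self.other_tensor = wildcard()
        # the zero tensor may sit on either side of the addition
        self.pattern = (is_op("add")(self.zeros, self.other_tensor)
                        | is_op("add")(self.other_tensor, self.zeros))

    def callback(self, pre, post, node_map):
        return node_map[self.other_tensor][0]

x = relay.var("x", shape=(2, 3))
y = relay.var("y", shape=(2, 3))
print(rewrite(ZeroZapp(), relay.add(relay.zeros_like(y), x)))  # just x
```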

[TVM Discuss] [Questions] Same shape pattern

2020-06-18 Thread Thomas V via TVM Discuss
Thank you, yes. So I have this graph produced by gradient (plus graph normal form and removal of the forward outputs) of a dense + bias_add. Obviously, the gradients would be `ones_like(output).collapse_like(bias)` and a couple of `dense( )` with `grad_out` or its transpose replacing weight and input …
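Spelled out, the hand-simplified gradient I would expect looks roughly like this (a sketch with illustrative shapes; recall `dense(a, b)` computes `a @ b^T`):

```python
from tvm import relay

x = relay.var("x", shape=(4, 8))    # input
w = relay.var("w", shape=(16, 8))   # weight
b = relay.var("b", shape=(16,))     # bias
y = relay.nn.bias_add(relay.nn.dense(x, w), b)

grad_out = relay.ones_like(y)
grad_b = relay.collapse_sum_like(grad_out, b)                           # shape (16,)
grad_x = relay.nn.dense(grad_out, relay.transpose(w))                   # grad_out @ w
grad_w = relay.nn.dense(relay.transpose(grad_out), relay.transpose(x))  # grad_out^T @ x
```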

[TVM Discuss] [Questions] Same shape pattern

2020-06-17 Thread Thomas V via TVM Discuss
Now I'm trying to produce a pattern that matches nodes if they have the same shape. Is such a pattern available? I only saw `has_shape`, which seems to compare against a fixed shape (which I don't know in advance). I'm trying to use rewrite, and so it seems checking after the matching (and returning an unchanged expression …
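For the record, the workaround I mean (a sketch; it assumes a `DFPatternCallback` that accepts `require_type`, and static shapes): check the shapes inside the callback and return the expression unchanged when they differ. Here for `collapse_sum_like`, which is a no-op when the shapes already agree:

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, wildcard, rewrite

def static_shape(expr):
    # assumes all dimensions are static integers
    return tuple(int(d) for d in expr.checked_type.shape)

class CollapseSumZapp(DFPatternCallback):
    def __init__(self):
        super().__init__(require_type=True)  # so checked_type is available in the callback
        self.data = wildcard()
        self.like = wildcard()
        self.pattern = is_op("collapse_sum_like")(self.data, self.like)

    def callback(self, pre, post, node_map):
        data = node_map[self.data][0]
        like = node_map[self.like][0]
        if static_shape(data) == static_shape(like):
            return data  # same shape: the op is a no-op, drop it
        return post      # different shape: leave the match unchanged
```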

[TVM Discuss] [Questions] Relay Gradients

2020-06-17 Thread Thomas V via TVM Discuss
So I'm slowly wrapping my head around this. To not only contribute questions all the time: if I wanted to use the pattern language to simplify e.g. the Let, I would need to make a Let pattern, right? If that would be useful to have, I could submit a patch for that, maybe using TuplePattern as …

[TVM Discuss] [Questions] Relay Gradients

2020-06-17 Thread Thomas V via TVM Discuss
Hello, I have been toying around with the Relay gradient transformation and wondered if I am doing something wrong to get a rather elaborate gradient: ![linear|690x247](upload://wgrWF1xcKY67TwFabk6Rn5MAC4c.png) gets transformed into: ![grad_linear|469x500](upload://mTnPXYdgholApZHbIaN6W1mjIc…)
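For reproduction, roughly what I am running (a sketch; the exact mode may differ from what I used):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4, 8))
w = relay.var("w", shape=(16, 8))
b = relay.var("b", shape=(16,))
fn = relay.Function([x, w, b], relay.nn.bias_add(relay.nn.dense(x, w), b))

mod = tvm.IRModule.from_expr(fn)
mod = relay.transform.InferType()(mod)
grad_fn = relay.transform.gradient(mod["main"], mode="first_order")
print(grad_fn)  # considerably larger than the forward function
```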

[TVM Discuss] [Questions] Tvm.relay.build modifying its argument

2020-06-16 Thread Thomas V via TVM Discuss
Yes. Thank you for your input! Best regards Thomas https://github.com/apache/incubator-tvm/pull/5822

[TVM Discuss] [Questions] Tvm.relay.build modifying its argument

2020-06-15 Thread Thomas V via TVM Discuss
@tqchen What do you think, bug or feature?

[TVM Discuss] [Questions] Jupyter Notebooks and Autotuning

2020-06-14 Thread Thomas V via TVM Discuss
Hi, I've been hitting _This event loop is already running_ when trying to run autotuning from Jupyter notebooks. Is this something that is easily avoided? (I do realize that Jupyter and long-running things are ... special.) For now I have worked around it by moving the invocation of the autotuner …

[TVM Discuss] [Questions] Tvm.relay.build modifying its argument

2020-06-12 Thread Thomas V via TVM Discuss
Turns out that binding the variables looks suspicious: https://github.com/apache/incubator-tvm/blob/65224d9a67fc93919421e485771ec67e50c58543/src/relay/backend/build_module.cc#L247

[TVM Discuss] [Questions] Tvm.relay.build modifying its argument

2020-06-12 Thread Thomas V via TVM Discuss
OK, thanks! I just wanted to know whether it's intentional; now I can track this down.

[TVM Discuss] [Questions] Tvm.relay.build modifying its argument

2020-06-12 Thread Thomas V via TVM Discuss
Hi, I noticed that `tvm.relay.build` modifies the module I pass to it. Is that expected? This surprised me, as the discussion of TVM passes appears (to me) to emphasize that passes are functional rather than in-place. Best regards Thomas
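A minimal way to observe it (a sketch; this is the behavior I saw with the version I was using):

```python
import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3))
w = relay.var("w", shape=(3, 3))
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))
params = {"w": tvm.nd.array(np.ones((3, 3), dtype="float32"))}

before = mod.astext()
with tvm.transform.PassContext(opt_level=3):
    relay.build(mod, target="llvm", params=params)
print(mod.astext() == before)  # False here: the params were bound into mod in place
```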

[TVM Discuss] [Questions] Optimizing matrix multiplication for GPU

2020-06-08 Thread Thomas V via TVM Discuss
I got very close to matching PyTorch's bmm on Vega 20 (Radeon VII), and to about 1.5x on a 1080Ti, for the 1024 example (with fixed dims). One of the limiting things on the path ahead is, of course, the "-1" issue in the output configurations. Best regards Thomas

[TVM Discuss] [Questions] ROCm 'segmentation fault' error when auto-tuning

2020-06-03 Thread Thomas V via TVM Discuss
Currently, we use the CUDA schedule (and op) on ROCm: https://github.com/apache/incubator-tvm/blob/2cd987d92724be0f859bfb624ce797f9c70167bb/python/tvm/relay/op/strategy/rocm.py#L47-L50

[TVM Discuss] [Questions] How to see actual cuda file generated by tvm?

2020-05-03 Thread Thomas V via TVM Discuss
I could be wrong (and I don't always have access to CUDA to check), but my impression was that the library you pass to graph_runtime is specialized to the precise schedule.

[TVM Discuss] [Questions] ROCm 'segmentation fault' error when auto-tuning

2020-05-01 Thread Thomas V via TVM Discuss
Given that it happens after 60 steps, this might not be ROCm but rather the xgboost module. In that case, upgrading to the pre-release or downgrading helps: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-619255802 That said, we also fixed a potential segfault in the AMDGPU LLVM …

[TVM Discuss] [Questions] How to see actual cuda file generated by tvm?

2020-05-01 Thread Thomas V via TVM Discuss
You can get the code from the device module as in the [Tensor Expression tutorial](https://docs.tvm.ai/tutorials/tensor_expr_get_started.html#inspect-the-generated-code). Best regards Thomas
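Concretely, a minimal sketch following that tutorial:

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

fadd = tvm.build(s, [A, B, C], target="cuda")
print(fadd.imported_modules[0].get_source())  # the generated CUDA C source
```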