I have a small collection of relay passes that I found useful and that might be
interesting for more general use:
- merge consecutive transpose ops (useful for converted models when the source framework has different conventions; I'm converting from PyTorch); a sketch of this one follows below,
- merge equal (shape) constants to enable …
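For the transpose merging, a sketch along these lines (the class name, the exact pattern, and the restriction to explicit `axes` are my assumptions):

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, rewrite, wildcard

class TransposeMerger(DFPatternCallback):
    """Fold transpose(transpose(x)) into a single transpose (hypothetical sketch)."""
    def __init__(self):
        super().__init__()
        self.x = wildcard()
        self.inner = is_op("transpose")(self.x)
        self.pattern = is_op("transpose")(self.inner)

    def callback(self, pre, post, node_map):
        x = node_map[self.x][0]
        inner = node_map[self.inner][0]
        # only handles explicit axes; axes=None (full reversal) is left alone here
        if pre.attrs.axes is None or inner.attrs.axes is None:
            return post
        outer_axes = [int(a) for a in pre.attrs.axes]
        inner_axes = [int(a) for a in inner.attrs.axes]
        # compose the two permutations into a single one
        return relay.transpose(x, axes=[inner_axes[a] for a in outer_axes])

# usage: new_expr = rewrite(TransposeMerger(), expr)
```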
---
I think it is not implemented per se.
There is a [`BatchNormToInferUnpack` function](https://github.com/apache/incubator-tvm/blob/78d79923756ea9ed4545d2faef7d514a300d3452/src/relay/transforms/simplify_inference.cc#L34), part of the [SimplifyInference pass](https://tvm.apache.org/docs/api/pytho…).
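For reference, a minimal way to trigger that unpacking (my sketch; the shapes are arbitrary):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 16, 8, 8))
gamma, beta, mean, var = (relay.var(n, shape=(16,)) for n in ("gamma", "beta", "mean", "var"))
out = relay.nn.batch_norm(x, gamma, beta, mean, var)[0]
mod = tvm.IRModule.from_expr(out)

# SimplifyInference needs type information, so infer types first
mod = tvm.transform.Sequential(
    [relay.transform.InferType(), relay.transform.SimplifyInference()]
)(mod)
print(mod)  # batch_norm is now unpacked into elementwise ops
```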
Thank you.
Best regards
Thomas
---
Hi,
is it currently possible to match TupleGetItem for an arbitrary index?
If not, would it be a permissible patch to add this (maybe with index=-1 internally and a second, public-facing constructor)?
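For reference, what can be expressed today versus what I have in mind (the index-less form is the hypothetical part):

```python
from tvm.relay.dataflow_pattern import is_tuple_get_item, wildcard

# possible today: TupleGetItem with a fixed index
pat_fixed = is_tuple_get_item(wildcard(), 0)

# proposed: omit the index to match a TupleGetItem with any index
# (hypothetical public constructor; index=-1 would be the internal sentinel)
pat_any = is_tuple_get_item(wildcard())
```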
Best regards
Thomas
---
I think the reason is that you typically want to split the op into the statistics gathering and the elementwise operations, to fuse the elementwise parts with the surrounding ops, and having a monolithic op prevents that. That said, I don't think anyone keeps you from changing that; it's just that the other case (splitting…)
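Concretely, the inference-time unpacking turns batch norm into pure elementwise arithmetic, which is what makes the fusion possible; a numpy sketch:

```python
import numpy as np

def batch_norm_inference(x, gamma, beta, mean, var, eps=1e-5):
    # with mean/var fixed, this is a single scale-and-shift per channel
    # (broadcast over the channel axis as appropriate), so it can fuse
    # with neighboring elementwise ops
    scale = gamma / np.sqrt(var + eps)
    shift = beta - mean * scale
    return x * scale + shift
```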
---
Yeah, it all wants something static to operate on.
But what I'm after is the next step: eliminating all ops that aren't needed in a static setting.
This seems important for anything where the graph is created automatically - with the frontend converters as well as differentiation.
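For instance, a cleanup pipeline of this kind built from existing passes (my sketch; `mod` stands for an `IRModule` coming from a frontend or from `gradient`):

```python
from tvm import relay, transform

seq = transform.Sequential([
    relay.transform.InferType(),
    relay.transform.SimplifyInference(),    # unpack batch_norm and friends
    relay.transform.FoldConstant(),         # evaluate statically-known subgraphs
    relay.transform.DeadCodeElimination(),  # drop whatever became unused
])
mod = seq(mod)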
Best regards
Thomas
[quote="mbrookhart, post:13, topic:7012"]
I don’t particular want to force users to type their problems before using the
pattern language in all cases.
[/quote]
I can see why. But it seems that the shape processing gets really tedious here - with the inability to pass `.shape` back to relay…
The above ZeroZapper code snippet also has this problem.
---
Oh, that is very likely the case for me here.
---
Thank you, Matt!
Oh no. :man_facepalming: (But `checked_type` isn't the solution,
unfortunately.)
I must admit the FFI is too clever for me. Without tab completion, I'm lost.
I even have a 2-line patch to fix that for classes, but I don't know where to
put the unittest...
---
So with the following rewrites and passes
```python
import tvm
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, wildcard

class ZeroZapp(DFPatternCallback):
    def __init__(self):
        super().__init__()
        self.zeros = is_op("zeros")(wildcard())
        self.other_tensor = wildcard()
        # pattern: zeros + t
        self.pattern = self.zeros + self.other_tensor

    def callback(self, pre, post, node_map):
        # assuming the intent: x + zeros -> x, return the other operand
        return node_map[self.other_tensor][0]
```
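The callback would then be applied along these lines (sketch; `expr` stands for the graph being rewritten):

```python
from tvm.relay.dataflow_pattern import rewrite

new_expr = rewrite(ZeroZapp(), expr)  # expr: the gradient graph being cleaned up
```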
---
Thank you, yes.
So I have this graph produced by gradient (and graph normal form and removing the forward outputs) of a dense + bias_add. Obviously, the gradients would be `ones_like(output).collapse_like(bias)` and a couple of `dense( )` with `grad_out` or its transpose replacing weight and input…
---
Now I'm trying to produce a pattern that matches nodes if they have the same shape.
Is such a pattern available? I only saw `has_shape`, which seems to compare to a fixed shape (which I don't know in advance).
I'm trying to use rewrite, and so it seems checking after the matching (and returning an unchanged expr…
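In the absence of such a pattern, the check-in-the-callback workaround might look like this (my sketch; it presumes inferred, static shapes, which is exactly the catch discussed elsewhere in the thread):

```python
from tvm.relay.dataflow_pattern import DFPatternCallback, wildcard

def _dims(expr):
    t = expr._checked_type_  # None if type inference has not run
    return tuple(int(d) for d in t.shape) if t is not None else None

class SameShapeAdd(DFPatternCallback):
    """Hypothetical: rewrite a + b only when both operands share a static shape."""
    def __init__(self):
        super().__init__()
        self.a, self.b = wildcard(), wildcard()
        self.pattern = self.a + self.b

    def callback(self, pre, post, node_map):
        a, b = node_map[self.a][0], node_map[self.b][0]
        if _dims(a) is None or _dims(a) != _dims(b):
            return post  # shapes unknown or different: leave the match unchanged
        return a  # placeholder rewrite for the equal-shape case
```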
---
So I'm slowly wrapping my head around this.
To not only contribute questions all the time: if I wanted to use the pattern language to simplify e.g. the Let, I would need to make a Let pattern, right? If that would be useful to have, I could submit a patch for that, maybe using TuplePattern as a template.
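Something along these lines is what I'd imagine for it (purely hypothetical API, modeled on the existing pattern constructors):

```python
from tvm.relay.dataflow_pattern import is_var, wildcard

# hypothetical LetPattern, analogous to TuplePattern:
#   is_let(var_pattern, value_pattern, body_pattern)
pat = is_let(is_var(), wildcard(), wildcard())  # is_let does not exist yet
```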
---
Hello,
I have been toying around with the gradient relay transformation and wondered if I am doing something wrong to get a rather elaborate gradient:

[forward graph elided in this digest]

gets transformed into:

[gradient printout elided in this digest]

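A minimal setup to reproduce the transformation (my sketch; the actual printouts are the elided parts above):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4, 8))
w = relay.var("w", shape=(16, 8))
b = relay.var("b", shape=(16,))
fwd = relay.Function([x, w, b], relay.nn.bias_add(relay.nn.dense(x, w), b))

mod = relay.transform.InferType()(tvm.IRModule.from_expr(fwd))
# gradient returns a function computing (forward output, (grad_x, grad_w, grad_b))
grad_fn = relay.transform.gradient(mod["main"], mode="first_order")
print(grad_fn)
```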
---
@tqchen What do you think, bug or feature?
---
Hi,
I've been hitting _This event loop is already running_ when trying to run autotuning from Jupyter notebooks. Is this something that is easily avoided? (I do realize that Jupyter and long-running things are ... special.)
For now I have worked around it by moving the invocation of the autotuner …
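One workaround I've seen for this in notebooks (not TVM-specific, and assuming the clash really is the nested tornado/asyncio loop) is nest_asyncio:

```python
# pip install nest_asyncio
import nest_asyncio

nest_asyncio.apply()  # allow re-entrant use of the already-running notebook loop
# ... then invoke the autotuner in the cell as usual
```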
---
Turns out that binding the variables looks suspicious:
https://github.com/apache/incubator-tvm/blob/65224d9a67fc93919421e485771ec67e50c58543/src/relay/backend/build_module.cc#L247
---
OK, thanks! I just wanted to know whether it's intentional; now I can track this down.
---
Hi,
I noticed that `tvm.relay.build` is modifying the module I pass to it. Is that expected?
This surprised me as the discussion of TVM passes appears (to me) to emphasize
that the passes are functional rather than in-place.
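A quick way to observe this (my sketch, using the textual form of the module as a snapshot):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3))
mod = tvm.IRModule.from_expr(relay.nn.relu(x))

before = mod.astext()
with tvm.transform.PassContext(opt_level=3):
    relay.build(mod, target="llvm")
print(mod.astext() == before)  # False means build modified its argument
```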
Best regards
Thomas
---
I got very close to matching PyTorch's bmm on Vega 20 (Radeon VII) and to within about 1.5x on a 1080 Ti for the 1024 example (with fixed dims).
One of the limiting things on the path ahead is, of course, the "-1" issue in the output configurations.
Best regards
Thomas
---
Currently, we use the CUDA schedule (and op) on ROCm:
https://github.com/apache/incubator-tvm/blob/2cd987d92724be0f859bfb624ce797f9c70167bb/python/tvm/relay/op/strategy/rocm.py#L47-L50
---
I could be wrong (and I don't always have access to CUDA hardware to check), but my impression was that the library you pass to `graph_runtime` is specialized to the precise schedule.
---
Given that it happens after 60 steps, this might not be ROCm but rather the xgboost module. In that case, upgrading to the pre-release or downgrading helps:
https://github.com/apache/incubator-tvm/issues/4953#issuecomment-619255802
That said, we also fixed a potential segfault in the AMDGPU LLVM backend…
---
You can get the code from the device module, as in the [Tensor Expression tutorial](https://docs.tvm.ai/tutorials/tensor_expr_get_started.html#inspect-the-generated-code).
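Concretely, following that tutorial (sketch; `fadd` stands for the module returned by `tvm.build`):

```python
# fadd = tvm.build(s, [A, B, C], target="cuda")  # or "rocm"
dev_module = fadd.imported_modules[0]
print(dev_module.get_source())  # generated device code
print(fadd.get_source())        # host-side LLVM IR
```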
Best regards
Thomas