@comaniac - are you assuming that the user needs to extend the `ExprMutator` class?
I have mostly been a user of TVM, and now I'd like to spend some time understanding Relay.
How does this method differ from the `post_order_visit` function provided by TVM?
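For context, the general distinction between the two styles can be illustrated with a toy expression tree. A `post_order_visit`-style traversal is read-only and sees children before parents, while an `ExprMutator`-style pass returns a (possibly rewritten) tree. None of the classes below are TVM's actual ones; this is a minimal sketch of the pattern:

```python
# Toy illustration (NOT TVM's actual classes) of read-only post-order
# visiting vs. a mutating rewrite pass over an expression tree.

class Node:
    def __init__(self, op, *children):
        self.op = op
        self.children = list(children)

def post_order_visit(node, fvisit):
    """Read-only traversal: fvisit sees every node, children first."""
    for c in node.children:
        post_order_visit(c, fvisit)
    fvisit(node)

class ExprMutator:
    """Rewriting pass: visit() rebuilds the tree bottom-up."""
    def visit(self, node):
        new_children = [self.visit(c) for c in node.children]
        return self.rewrite(Node(node.op, *new_children))

    def rewrite(self, node):
        return node  # subclasses override this hook

class AddToMul(ExprMutator):
    def rewrite(self, node):
        if node.op == "add":
            return Node("mul", *node.children)
        return node

tree = Node("add", Node("x"), Node("add", Node("y"), Node("z")))

ops = []
post_order_visit(tree, lambda n: ops.append(n.op))
print(ops)          # children first: ['x', 'y', 'z', 'add', 'add']

new_tree = AddToMul().visit(tree)
print(new_tree.op)  # 'mul' -- a rewritten copy; the original is untouched
print(tree.op)      # 'add'
```

As I understand it, `relay.analysis.post_order_visit` corresponds to the read-only traversal here, while subclassing `ExprMutator` (overriding the relevant `visit_*` methods) corresponds to the rewriting pass.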
---
Thanks a lot for your help @comaniac, I forgot to mention that I save the logs
in different files to avoid problems. After 4000 trials, I get the same results
for the 'direct' method, so it seems to be a problem when applying the best
configuration. I will try the steps you mentioned above and
---
There are some possibilities:
1. Try to use `pick_best` to identify the best config for each workload in a
log file. AutoTVM will apply the best config over all tasks for the same
workload. In other words, if you tune `direct` and `winograd` for the same
conv2d workload and put them in the lo
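If it helps, the idea behind `pick_best` — keep only the lowest-cost record per workload from a tuning log — can be sketched without TVM. The `pick_best` below works on hypothetical in-memory tuples, not AutoTVM's actual file-based API:

```python
# Sketch of the idea behind AutoTVM's pick_best (NOT TVM's actual code):
# from a tuning log holding many records per workload, keep only the
# record with the lowest measured cost for each workload.

def pick_best(records):
    """records: iterable of (workload_key, config, cost) tuples."""
    best = {}
    for workload, config, cost in records:
        if workload not in best or cost < best[workload][1]:
            best[workload] = (config, cost)
    return best

# Hypothetical log: 'direct' and 'winograd' records for the same conv2d
# workload live side by side in one file.
log = [
    ("conv2d_w1", "direct_cfg_a",   0.90),
    ("conv2d_w1", "direct_cfg_b",   0.75),
    ("conv2d_w1", "winograd_cfg_a", 0.60),  # best for this workload
    ("conv2d_w2", "direct_cfg_c",   1.20),
]
print(pick_best(log))
```

This also shows why mixing `direct` and `winograd` records for the same workload in one log means the overall best record (here the winograd one) is what gets applied.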
---
Hi @comaniac,
Thank you for your prompt reply. I have updated the question a bit so that things are clearer. Basically, I use the same program and comment/uncomment one line, which is the following:
`task[0] = autotvm.task.create(task[0].name, task[0].args, task[0].target,
task[0]
---
You only tuned for 100 trials? If so, please try 3,000 or 4,000 trials.
---
[Visit Topic](https://discuss.tvm.ai/t/relay-conv2d-layer-performance-after-auto-tuning-same-as-fallback/6888/2) to respond.
---
Hi everyone,
I was trying to obtain the execution time of each layer in resnet-18 (after auto-tuning). I obtain results very similar to the ones from running the whole architecture in the GPU tutorial (~1.10 ms).
However, when I optimize a single layer and appl
---
Oh, I got it.
Just using `_op.sigmoid()` solves this; I had misunderstood the backward and forward functions.
```
def _mx_logistic_regression_output(inputs, attrs):
    loss = _op.sigmoid(inputs[0])
    return loss
```
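As a side note, the forward pass of `LogisticRegressionOutput` is just the sigmoid, which can be sanity-checked standalone. Plain Python below, no TVM or MXNet involved; `sigmoid` here is a local helper, not `_op.sigmoid`:

```python
import math

def sigmoid(x):
    # Plain logistic function: 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))  # 0.5: the logistic function is centered at zero
```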
Thank you again!~
---