In addition to registering the compute and schedule in the Relay op strategy, you 
also need to register them as an AutoTVM task so that they can be extracted via 
`extract_from_program` and tuned. Specifically, add the following decorators to 
your compute and schedule functions. Here I use `conv2d_nchw.cuda` as an example.

```python
from tvm import autotvm

@autotvm.register_topi_compute("conv2d_nchw.cuda")
def conv2d_nchw(cfg, data, kernel, strides, padding, dilation, out_dtype="float32"):
    ...  # Compute function.

@autotvm.register_topi_schedule("conv2d_nchw.cuda")
def schedule_conv2d_nchw(cfg, outs):
    ...  # Schedule function.
```
In this example, we register the AutoTVM task `conv2d_nchw.cuda`. Since the 
corresponding op strategy is already defined at 
https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/op/strategy/cuda.py#L128,
this task will be extracted by `extract_from_program`.
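
For reference, here is a minimal sketch of the extraction step; `mod` and `params` are placeholders for your own Relay module and parameters, and `"cuda"` stands in for your actual build target:

```python
from tvm import autotvm

# Extract tunable tasks from a Relay module. Any operator whose strategy maps to
# a registered AutoTVM task (such as "conv2d_nchw.cuda") will show up here.
tasks = autotvm.task.extract_from_program(mod["main"], target="cuda", params=params)
for task in tasks:
    print(task.name)  # e.g. "conv2d_nchw.cuda"
```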
