Polyhedral analysis would be one approach to generating the constraints in this scenario. On the other hand, runtime validation does not sound like a general solution, because it might affect the tuner. For example, discarding invalid configs in `next_batch` would produce no measurement results for those records, so a learning-based tuner would never get feedback on invalid configs. I would prefer either of the following:
1. Propose a new config space representation that supports non-grid config spaces.
2. Make verify passes pluggable. Currently, we have a `VerifyGPU` pass that traverses the TIR to estimate memory usage and rejects invalid configs before sending them for compilation. Since this happens at the evaluation stage, the rejected configs still appear in the log file with a proper error code, so the tuner can benefit from them. We could expose this mechanism as a callback so that users can bring their own verifier (sketched below). The catch is that the verifier sees only a graph in TIR, not the config space, so it might be more difficult to check whether a config is valid.
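To make the callback idea in option 2 concrete, here is a minimal standalone sketch of how a pluggable verifier registry could look. All names in it (`register_verifier`, `Record`, the error code `7`) are hypothetical stand-ins, not TVM's actual API; in practice the callback would receive the lowered TIR function rather than the dict of resource estimates used here, which is exactly why it would lack config-space information.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Record:
    """Hypothetical stand-in for a measurement record that the tuner reads back."""
    config: dict
    error_no: int = 0            # 0 means no error
    error_msg: str = ""

# Registry of user-supplied verifiers. Each takes the lowered IR
# (here a dict of resource estimates for illustration) and returns an
# error message, or None if the config passes.
Verifier = Callable[[dict], Optional[str]]
_verifiers: List[Verifier] = []

def register_verifier(fn: Verifier) -> Verifier:
    _verifiers.append(fn)
    return fn

@register_verifier
def check_shared_memory(ir: dict) -> Optional[str]:
    # Analogous to what the built-in VerifyGPU pass does: estimate
    # shared-memory usage from the IR and reject if over budget.
    limit = 48 * 1024            # e.g. 48 KiB per thread block
    used = ir.get("shared_memory_bytes", 0)
    if used > limit:
        return f"shared memory {used} B exceeds limit {limit} B"
    return None

def verify_before_build(ir: dict, record: Record) -> bool:
    """Run all verifiers before compilation. Rejected configs are still
    logged with an error code, so a learning-based tuner gets negative
    feedback instead of silently missing measurements."""
    for verify in _verifiers:
        err = verify(ir)
        if err is not None:
            record.error_no = 7      # hypothetical "verification failed" code
            record.error_msg = err
            return False
    return True

# Usage: a config whose lowered IR would need 64 KiB of shared memory
# is rejected before it ever reaches the compiler, but still logged.
rec = Record(config={"tile": 64})
ok = verify_before_build({"shared_memory_bytes": 64 * 1024}, rec)
print(ok, rec.error_no, rec.error_msg)
```

The key design point this sketch tries to capture is that rejection happens at the evaluation stage but still produces a logged record with an error code, so the tuner's cost model sees the invalid region of the space rather than a gap.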