medismailben wrote:

> > > > LGTM. This has definitely come up in the past. If you feel motivated, 
> > > > I'm sure there must be a way to detect this issue in Python and we 
> > > > could have assert/warning/error that captures this at the dotest level.
> > > 
> > > 
> > > Agreed, making it part of `dotest` would be amazing. Maybe someone with 
> > > better python knowledge has some ideas (@medismailben @kastiglione ?). In 
> > > the meantime I'll have a think of how one might do that
> > 
> > 
> > I think instead of having this be part of our test-suite, I'd run it as 
> > part of PR testing in a GitHub action
> 
> The test suite would/will run as part of PR testing. Are you saying you would 
> **only** run it at PR time? Why wait until then instead of finding out when 
> you run the tests locally? We already collect all the test names to run them 
> as separate lit tests (a similar but different problem), and I bet we also 
> already iterate over the tests inside a file to build the variants.

I don't mind running it as part of the test suite, but I think it would be harder 
to integrate there than to run the script standalone as part of the CI. Also, I 
think these categories of errors are more related to linting than to actual test 
failures.
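As a hedged sketch of what such a check could look like (whether run standalone in CI or hooked into `dotest`): Python silently keeps only the last definition when two methods in a class share a name, so a duplicated test name hides an entire test. The names `find_duplicate_methods` and the sample class below are illustrative, not anything from the actual PR; the idea is just to scan a test file's AST for repeated method names per class.

```python
import ast
from collections import Counter


def find_duplicate_methods(source: str) -> dict:
    """Map class name -> list of method names defined more than once.

    Duplicate definitions are a silent bug: Python keeps only the last
    one, so the earlier test never runs.
    """
    tree = ast.parse(source)
    duplicates = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            names = Counter(
                child.name
                for child in node.body
                if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef))
            )
            repeated = sorted(n for n, count in names.items() if count > 1)
            if repeated:
                duplicates[node.name] = repeated
    return duplicates


# Hypothetical example of the kind of mistake the check would flag.
example = """
class TestExample:
    def test_foo(self):
        pass

    def test_foo(self):  # shadows the first definition above
        pass
"""

print(find_duplicate_methods(example))  # {'TestExample': ['test_foo']}
```

A standalone CI script could run this over every `Test*.py` file and exit non-zero when the dictionary is non-empty, which matches the "linting" framing above.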

https://github.com/llvm/llvm-project/pull/97043
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits