dmpots wrote:

> That's not to say that things cannot be flakey sometimes: because of how we 
> test the debugger, we depend on a lot of things, many of which are out of our 
> control and can cause a test to fail. But that's different from a specific 
> test being flakey, which is what this decorator would be used for.

Thanks, I appreciate your thoughts. For some context: I am looking to 
replace some internal scripts that handle failures by re-running tests. I 
thought we might be able to leverage the built-in features of dotest to 
handle some of this. Let me collect some more data to see how much, and 
what kind of, flakiness we have.
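
For illustration, the re-run behavior those internal scripts provide could be 
sketched as a test decorator (this is a hypothetical example, not lldb's 
actual decorator API; `retry_on_failure` and `max_attempts` are made-up names):

```python
# Hypothetical sketch: re-run a test a bounded number of times and only
# report failure if every attempt fails. Not lldb's actual API.
import functools


def retry_on_failure(max_attempts=3):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except AssertionError as exc:
                    # Remember the failure and retry.
                    last_exc = exc
            # All attempts failed: surface the last failure.
            raise last_exc
        return wrapper
    return decorator
```

One open question with any such decorator is whether retried-then-passed runs 
should be reported as passes or flagged separately, so genuine flakiness 
still shows up in the data.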

Do you have any suggestions on how we should handle the "expected" flakiness 
because of how we test the debugger? Do you think this is something we should 
try to solve as part of the lldb testing framework?

https://github.com/llvm/llvm-project/pull/129817
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits