On 23 February 2018 at 15:42, Jim Ingham <jing...@apple.com> wrote:
> BTW, one thing I like about writing dotest.py tests is that it is easy to
> craft fairly rich failure messages, so if you get errors on systems you
> don't have access to, or are dealing with something that fails
> intermittently on a bot somewhere, you have a hope of figuring out what
> went wrong. Is this possible with FileCheck tests?
I'm not sure this is what you had in mind, but for tests like this I would
try to do two things:

- Dump out as much information as possible in the output: FileCheck will
  display the snippet of input around the point where it failed to match,
  so seeing that should give you some idea of what is broken.
- Make the test as hermetic as possible (i.e., avoid the situation where it
  fails only on the buildbot in the first place). For example, for these
  tests we could disable dependent-module loading, avoid including any
  system headers, etc., so that a local run of the test is as close as
  possible to the run on the buildbots.

_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits