On Fri, Jan 22, 2021 at 5:37 AM Pavel Labath <pa...@labath.sk> wrote:
> On 19/01/2021 23:23, David Blaikie wrote:
> > On Tue, Jan 19, 2021 at 1:12 AM Pavel Labath <pa...@labath.sk> wrote:
> >
> > Yeah - I have mixed feelings about debugger testing here - it's nice
> > to have end-to-end tests, which make for handy debugger testing
> > (though I guess in theory, debuginfo-tests is the place for
> > intentional end-to-end testing), and writing them as end-to-end tests
> > also makes it possible to test different configurations (DWARF
> > version, split DWARF, dSYM vs. object debugging, etc.).
>
> Yeah, it would be nice if there was a clearer separation between the
> two categories. The current setup has evolved organically, as the
> end-to-end API tests used to be the only kind of test.
>
> > Can we write non-end-to-end API tests, then?
>
> Kind of. There is no fundamental reason why one couldn't run llvm-mc or
> whatever as part of an API test. The main issue is that we don't have
> the infrastructure for that set up right now. I think the reason for
> that is that once you start dealing with "incomplete" executables which
> cannot be run on the host platform, the usefulness of interactivity
> goes down sharply. It is hard for such a test to do something other
> than load up some executable and query its state. This is a perfect use
> case for a shell test.
>
> There are exceptions, though. For example, we have a collection of
> "API" tests which test the gdb-remote communication layer by mocking
> one end of the connection. Such tests are necessarily interactive,
> which is why they ended up in the API category, but they are definitely
> not end-to-end tests, and they either don't use any executables, or
> just use a static yaml2obj'd executable. This is why our API tests have
> the ability to run yaml2obj, and one could add other llvm tools in a
> similar fashion.

Though then you get the drawback that appears to be Jim's original/early
motivation: avoiding checking the textual output of the debugger as much
as possible, so that updating that output doesn't involve changing every
single test case - which can cause stagnation, because it becomes too
expensive to update all the tests and so improvements to the output
format are never made. I have some sympathy for that, though LLVM and
Clang seem to get by without it to some degree. We have significantly
changed the LLVM IR format, for instance, and done regex updates to the
FileCheck/lit tests to account for that. Clang, well, if we changed the
format of error messages we might have a bad time, except that we rarely
test the literal warning/error text, instead using clang's testing
functionality with the // expected-error, etc., syntax.

I guess lldb doesn't have a machine-readable form, like gdb's machine
interface, that might make for a more robust thing to test against most
of the time (& then leave a limited number of tests that check the
user-facing textual output)? Instead the Python API is the machine
interface? (I'll sketch what I mean below.)

> Another aspect of end-to-endness is being able to test a specific
> component of lldb, instead of just the debugger as a whole. Here the
> API tests cannot help, because the "API" is the lldb public API.

Not sure I followed here - you mean the API tests aren't more narrowly
targeted than the Shell tests, because the API is the public API, so
it's mostly/pretty close to what you can interact with from the Shell
anyway - it doesn't give you lower-level access akin to unit testing?
Fair enough.
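To make the "Python API as the machine interface" idea concrete, here's
a rough sketch (not an actual test from the tree - "a.out", "main" and
"argc" are placeholders for whatever a real test would build and
inspect) of an API-style check that asserts against SB API objects
rather than against the text the "frame variable" command happens to
print:

  # Rough sketch; names are placeholders, error handling omitted.
  import lldb

  lldb.SBDebugger.Initialize()
  debugger = lldb.SBDebugger.Create()
  debugger.SetAsync(False)

  target = debugger.CreateTarget("a.out")
  bp = target.BreakpointCreateByName("main")

  # LaunchSimple(argv, envp, working_dir)
  process = target.LaunchSimple(None, None, ".")
  assert process.GetState() == lldb.eStateStopped

  frame = process.GetSelectedThread().GetFrameAtIndex(0)
  argc = frame.FindVariable("argc")
  # Assert against structured values, not against however the
  # "frame variable" command chooses to format them today.
  assert argc.GetValueAsSigned() == 1

  lldb.SBDebugger.Terminate()

The appeal is that assertions like these survive changes to the command
output format; the flip side is that you're always going through the
same public API, which I take to be your point about not getting
lower-level access.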
> However, there are also various tricks you can do by using the
> low-level (debugging) commands (like the "image lookup" thing I
> mentioned) to interact with the lower debugger layers in some manner.
>
> pl
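And for the record, the shape I imagine that taking (again just a
sketch, with "input.o" standing in for something produced by yaml2obj or
llvm-mc - not a claim about how the existing tests are written): load a
non-runnable object file, drive a low-level command like "image lookup"
through the command interpreter, and cross-check it against the
structured API:

  # Rough sketch; "input.o" is a placeholder for a static test input.
  import lldb

  lldb.SBDebugger.Initialize()
  debugger = lldb.SBDebugger.Create()
  target = debugger.CreateTarget("input.o")

  # Drive the low-level command through the interpreter...
  interp = debugger.GetCommandInterpreter()
  result = lldb.SBCommandReturnObject()
  interp.HandleCommand("image lookup -v -n main", result)
  assert result.Succeeded()
  output = result.GetOutput()

  # ...and cross-check against the structured API, so the test isn't
  # tied to the exact textual layout of "image lookup" output.
  functions = target.FindFunctions("main")
  assert functions.GetSize() == 1
  assert "main" in output

  lldb.SBDebugger.Terminate()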