On 25/01/2021 05:30, David Blaikie wrote:
On Fri, Jan 22, 2021 at 5:37 AM Pavel Labath <pa...@labath.sk> wrote:
On 19/01/2021 23:23, David Blaikie wrote:
> On Tue, Jan 19, 2021 at 1:12 AM Pavel Labath <pa...@labath.sk> wrote:
> Yeah - I have mixed feelings about debugger testing here - it is nice
> to have end-to-end tests, which make for handy debugger testing
> (though I guess in theory, debuginfo-tests is the place for
> intentional end-to-end testing), and it's easier to test different
> features (DWARF version, split DWARF, dsym vs. object debugging,
> etc.) when they're written as end-to-end tests.
Yeah, it would be nice if there was a clearer separation between the
two categories. The current setup has evolved organically, as the
end-to-end API tests used to be the only kind of test.
>
> Can we write non-end-to-end API tests, then?
Kind of. There is no fundamental reason why one couldn't run llvm-mc or
whatever as a part of an API test. The main issue is that we don't have
the infrastructure for that set up right now. I think the reason for
that is that once you start dealing with "incomplete" executables which
cannot be run on the host platform, the usefulness of interactivity
goes down sharply. It is hard for such a test to do anything other than
load up some executable and query its state. This is a perfect use case
for a shell test.
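To make that concrete, a test of that shape would look roughly like this (a hypothetical sketch, not an actual test from the tree: the input file name, the command, and the CHECK line are made up for illustration, but the RUN/FileCheck pattern is how lldb's Shell tests are written):

```
# RUN: yaml2obj %S/Inputs/some-binary.yaml -o %t
# RUN: %lldb %t -b -o "image lookup -n main" | FileCheck %s
# CHECK: 1 match found
```

The test never runs the inferior; it just loads the yaml2obj-produced object and matches the debugger's textual answer, which is exactly the "load up some executable and query its state" workflow.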
There are exceptions though. For example, we have a collection of "API"
tests which test the gdb-remote communication layer by mocking one end
of the connection. Such tests are necessarily interactive, which is why
they ended up in the API category, but they are definitely not
end-to-end tests, and they either don't use any executables, or just
use a static yaml2obj'ed executable. This is why our API tests have the
ability to run yaml2obj, and one could add other llvm tools in a
similar fashion.
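For readers unfamiliar with gdb-remote, here is a minimal, self-contained sketch of the packet-framing layer that such a mock endpoint has to speak. The function names are mine, not lldb's; the framing rules ('$', payload, '#', two-hex-digit modulo-256 checksum) come from the gdb Remote Serial Protocol that the gdb-remote layer implements:

```python
def frame_packet(payload: str) -> str:
    """Frame a payload per the gdb Remote Serial Protocol:
    '$' + payload + '#' + two-hex-digit checksum (sum of payload
    bytes, modulo 256)."""
    checksum = sum(payload.encode("ascii")) % 256
    return f"${payload}#{checksum:02x}"

def unframe_packet(packet: str) -> str:
    """Validate the framing and checksum, and return the payload."""
    assert packet.startswith("$") and packet[-3] == "#"
    payload, checksum = packet[1:-3], int(packet[-2:], 16)
    assert sum(payload.encode("ascii")) % 256 == checksum
    return payload

# A mock server built on top of this can answer protocol queries
# without ever launching a real inferior process:
print(frame_packet("qSupported"))  # prints "$qSupported#37"
```

A test can then drive lldb's client code against a socket whose other end is this mock, asserting on the exact packets exchanged rather than on any end-to-end debugger behavior.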
Though then you get the drawback that appears to be Jim's original/early
motivation: avoiding checking the textual output of the debugger as much
as possible (so that updating that output doesn't involve changing every
single test case - which can cause stagnation due to it being too
expensive to update all the tests, so improvements to the output format
are not made). Which I have some sympathy for - though LLVM and clang
seem to get by without this to some degree. We have significantly
changed the LLVM IR format, for instance, and done regex updates to the
FileCheck/lit tests to account for that. Clang, well, if we changed the
format of error messages we might have a bad time, except that we rarely
test the literal warning/error text, instead using clang's testing
functionality with the // expected-error, etc., syntax.
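For readers who haven't seen that style, it looks roughly like this (a minimal sketch; it intentionally fails to compile and needs clang's lit harness, so it isn't runnable on its own). The -verify machinery matches the quoted string against the emitted diagnostic as a substring, so a test can pin down just the stable core of a message and survive rewordings around it:

```
// RUN: %clang_cc1 -fsyntax-only -verify %s
int f(void) {
  return undeclared; // expected-error {{use of undeclared identifier}}
}
```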
Yeah, I am aware of Jim's objections to this. :) I haven't been around
here as long as he has, and I definitely wasn't around to witness the
gdb format change attempt, but with the shell tests we have, I haven't
seen any indication that they inhibit the velocity of lldb development
(we haven't done any major format changes recently, though -- but if we
did, I don't think the shell tests would be a big problem). So I tend
not to put so much emphasis on this aspect, and choose to prioritise
other aspects of tests (e.g. ease of writing). With different
priorities it's not surprising (though still unfortunate) that we reach
different conclusions.
I guess lldb doesn't have a machine-readable form, like gdb's machine
interface, that might make for a more robust thing to test against most
of the time (leaving a limited number of tests that test the user-facing
textual output)? Instead the python API is the machine interface?
Fun fact: we used to have an lldb-mi tool in the repository, which
implemented the gdb-mi interface on top of lldb. Although the
implementation of the tool itself was bad, the main reason the tool
was removed was that its tests were horribly flaky and hard to
understand and maintain.
Another aspect of end-to-endness is being able to test a specific
component of lldb, instead of just the debugger as a whole. Here the
API tests cannot help, because the "API" is the lldb public API.
Not sure I followed here - you mean the API tests aren't more narrowly
targeted than the Shell tests, because the API is the public API, so
it's mostly/pretty close to what you can interact with from the Shell
anyway - it doesn't give you lower-level access akin to unit testing?
Fair enough.
What I wanted to say here is that the Shell tests can (and some do) go
lower level than the lldb public API, through the "lldb-test" tool. It
sits below the public API, and exposes some useful features of the
underlying APIs. The idea was for it to be the compromise solution for
"have a stringly matchable interface but still be free to change
lldb's text output", by separating the interface used for testing from
the one that the user uses.
It hasn't exactly taken the world by storm, I would say due to a
combination of two factors:
- lldb is very monolithic, so it is hard to expose/test just one
component without creating/mocking the whole world
- lldb already has a bunch of low-level commands (like the image lookup
thingy) that can be used to get to most of the things you'd need.
I like how the object-file subcommand of lldb-test (the first use case)
has worked out, and I think it brings pretty good value for money. The
others, I fear, are failed experiments, for one reason or another.
pl
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits