clayborg added a comment.

In D121631#3386244 <https://reviews.llvm.org/D121631#3386244>, @labath wrote:

> In D121631#3384124 <https://reviews.llvm.org/D121631#3384124>, @yinghuitan 
> wrote:
>
>>> unify with `target.preload-symbols` feature
>>
>> I do not have a strong opinion on this. One concern is that 
>> `target.preload-symbols` provides the full experience while 
>> `symbols.load-on-demand` provides a trade-off experience (for example, some 
>> global expression evaluation may fail because we won't automatically hydrate 
>> based on a type query).
>
> That is a fair point, but having them separate also has downsides, as we 
> have to figure out what the meaning of the on-demand=true & preload=true 
> combo is going to be. I think it would be strange to have on-demand take 
> precedence in this setup, but if it doesn't, then you need to change *two* 
> settings to switch from "preload everything" (the current default) to the 
> "on demand" mode. I am not particularly invested in any solution, but I 
> think we should make a conscious choice one way or the other.

These two settings can co-exist pretty easily. If preload is on, preloading can 
be done for anything that has debug info enabled, and for on-demand modules it 
would happen as soon as their debug info was enabled. It would be like saying 
"please preload symbols as soon as debug info is enabled". Right now, if we 
enable debug info due to a breakpoint, function, or global lookup that matches 
a symbol, we can immediately preload the symbols so they are ready regardless 
of whether a name lookup happens, just like we do normally.

If preload is off, then we wait until someone makes a global name lookup call.
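To make the proposed interaction concrete, a hypothetical lldb session combining the two settings might look like the following. This is a sketch of the behavior described above, not the implementation; `symbols.load-on-demand` is the setting name proposed in this patch, while `target.preload-symbols` already exists:

```
(lldb) settings set symbols.load-on-demand true
(lldb) settings set target.preload-symbols true
# With both enabled, a module's symbols would be preloaded as soon as its
# debug info is activated (e.g. by a breakpoint, function, or global lookup).
# With preload-symbols off, symbol loading would instead wait for an explicit
# global name lookup.
```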

>>> I don't think that "run everything with the setting turned on" is a good 
>>> testing strategy
>>
>> I do not have a strong opinion either. I understand the concern is extra 
>> test time. I do want to share that there are many tests that do not set 
>> breakpoints, which exercised the symbol on-demand code path and helped to 
>> catch real bugs in the implementation.
>
> How many tests/bugs are we talking about here? Were there more than say ten 
> distinct bugs caught there?
>
> I see a large difference between running tests to find /new/ bugs, and 
> running tests to find /regressions/. Our (API) test suite allows you to run 
> it in a lot of different ways, which can be used to find new bugs. I have 
> used that myself on several occasions. Even without making any changes to the 
> test suite, you could run it with your feature enabled through the 
> `--setting` argument to dotest. But simply because you manage to find a bug 
> using some combination of dotest arguments, it does not mean that everyone 
> should be running the test suite with those arguments to ensure it doesn't 
> regress. It just doesn't scale. When you find (and fix) a bug, you can make a 
> dedicated regression test for that bug. Maybe even more than one -- to test 
> similar edge cases that happened to work already, but could be broken in the 
> future.
>
> Please don't take this personally. This isn't about you or your feature -- 
> that's my standard response to anyone tempted (in a way, it was a mistake to 
> make it too easy to add new flavours) to add new test flavours. All of what I 
> said applies (maybe even more so than here) to some of the existing test 
> categories (`gmodules` for one), but since they're already here it becomes 
> much harder to remove them -- which is why I'm trying to avoid adding any new 
> ones.

We knew adding a new flavor would be a contentious part of the patch, and we 
are fine with testing this however the community feels is best. That being 
said, it would be good to catch regressions for this feature on at least one 
Buildbot so we don't regress it; I am not particular about how that happens.
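For what it's worth, running the existing suite with the feature enabled via the `--setting` argument labath mentions would look something like this (assuming the `symbols.load-on-demand` setting name from this patch; the exact invocation depends on how the test suite is configured locally):

```
./dotest.py --setting symbols.load-on-demand=true
```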

Is there any interest in a VC call, open to anyone, to discuss this? Or do we 
prefer an RFC on the website that replaced the mailing lists (whose name I 
can't remember right now)?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D121631/new/

https://reviews.llvm.org/D121631

_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
