LLDB has TaskRunner and TaskPool. TaskPool is nearly the same as
llvm::ThreadPool. TaskRunner itself is a layer on top, though, and doesn't
seem to have an analogue in llvm. Not that I'm defending TaskRunner
I have written a new one called TaskMap. The idea is that if all you want
is to cal
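The sentence above is cut off in the archive, but the idea reads as a parallel map over a collection. A minimal sketch of what a TaskMap-style helper could look like (the actual patch may differ; the name and shape here are assumptions for illustration only):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical TaskMap-style helper: run `fn` over every element of
// `items`, spreading the work across hardware_concurrency() threads.
// This is a sketch of the idea described in the thread, not LLDB's code.
template <typename T, typename Fn>
void TaskMapSketch(std::vector<T> &items, Fn fn) {
  unsigned NumThreads = std::thread::hardware_concurrency();
  if (NumThreads == 0)
    NumThreads = 1;
  std::atomic<size_t> Next{0};
  std::vector<std::thread> Workers;
  for (unsigned I = 0; I < NumThreads; ++I)
    Workers.emplace_back([&] {
      // Each worker claims the next unprocessed index until none remain.
      for (size_t Idx = Next++; Idx < items.size(); Idx = Next++)
        fn(items[Idx]);
    });
  for (auto &W : Workers)
    W.join();
}
```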
FWIW I haven't even followed the discussion closely enough to know what the
issues with the lldb task runner even are.
My motivation is simple though: don't reinvent the wheel.
IIRC LLDB task runner was added before llvm's thread pool existed (I
haven't checked, so I may be wrong about this). If
On 1 May 2017 at 22:58, Scott Smith wrote:
> On Mon, May 1, 2017 at 2:42 PM, Pavel Labath wrote:
>>
>> Besides, hardcoding the nesting logic into "add" is kinda wrong.
>> Adding a task is not the problematic operation, waiting for the result
>> of one is. Granted, generally these happen on the sa
I think that's all the more reason we *should* work on getting something
into LLVM first. Anything we already have in LLDB, or any modifications we
make will likely not be pushed up to LLVM, especially since LLVM already
has a ThreadPool, so any changes you make to LLDB's thread pool will likely
h
IMO we should start with proving a better version in the lldb codebase, and
then work on pushing it upstream. I have found much more resistance
getting changes into llvm than lldb, and for good reason - more projects
depend on llvm than lldb.
On Mon, May 1, 2017 at 9:48 PM, Zachary Turner wrote:
I would still very much prefer we see if there is a way we can adapt LLVM's
ThreadPool class to be suitable for our needs. Unless some fundamental
aspect of its design results in unacceptable performance for our needs, I
think we should just use it and not re-invent another one. If there are
impr
On Mon, May 1, 2017 at 2:42 PM, Pavel Labath wrote:
> Besides, hardcoding the nesting logic into "add" is kinda wrong.
> Adding a task is not the problematic operation, waiting for the result
> of one is. Granted, generally these happen on the same thread, but
> they don't have to be -- you can w
On 28 April 2017 at 16:04, Scott Smith wrote:
> Hmmm ok, I don't like hard coding pools. Your idea about limiting the
> number of high level threads gave me an idea:
>
> 1. System has one high level TaskPool.
> 2. TaskPools have up to one child and one parent (the parent for the high
> level Task
#1 is no big deal, we could just allocate one in a global class somewhere.
#2 actually seems quite desirable, is there any reason you don't want that?
#3 seems like a win for performance since no locks have to be acquired to
manage the collection of threads.
On Sun, Apr 30, 2017 at 9:41 PM Scott Smith wrote:
The overall concept is similar; it comes down to implementation details like
1. llvm doesn't have a global pool, it's probably instantiated on demand
2. llvm keeps threads around until the pool is destroyed, rather than
letting the threads exit when they have nothing to do
3. llvm starts up all the
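Difference #2 can be sketched as two worker loops. Both are illustrative guesses at the two designs, not the actual llvm::ThreadPool or LLDB TaskPool internals:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

struct WorkQueue {
  std::mutex M;
  std::condition_variable CV;
  std::deque<std::function<void()>> Work;
  bool Shutdown = false;
};

// llvm::ThreadPool-style worker (per the description above): the thread
// lives until the pool is destroyed, sleeping on a condvar when idle.
void PersistentWorker(WorkQueue &Q) {
  for (;;) {
    std::unique_lock<std::mutex> Lock(Q.M);
    Q.CV.wait(Lock, [&] { return Q.Shutdown || !Q.Work.empty(); });
    if (Q.Work.empty())
      return; // shutdown requested and nothing left to run
    auto Task = std::move(Q.Work.front());
    Q.Work.pop_front();
    Lock.unlock();
    Task();
  }
}

// LLDB TaskPool-style worker (per the description above): the thread
// exits as soon as the queue is empty; a fresh thread would be spawned
// later if more work arrives.
void TransientWorker(WorkQueue &Q) {
  for (;;) {
    std::function<void()> Task;
    {
      std::lock_guard<std::mutex> Lock(Q.M);
      if (Q.Work.empty())
        return;
      Task = std::move(Q.Work.front());
      Q.Work.pop_front();
    }
    Task();
  }
}
```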
Have we examined llvm::ThreadPool to see if it can work for our needs? And
if not, what kind of changes would be needed to llvm::ThreadPool to make it
suitable?
On Fri, Apr 28, 2017 at 8:04 AM Scott Smith via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
> Hmmm ok, I don't like hard coding pools.
Hmmm ok, I don't like hard coding pools. Your idea about limiting the
number of high level threads gave me an idea:
1. System has one high level TaskPool.
2. TaskPools have up to one child and one parent (the parent for the high
level TaskPool = nullptr).
3. When a worker starts up for a given Ta
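Point 3 is cut off in the archive, so the sketch below only captures the structure from points 1-2; all names and details are illustrative assumptions, not a proposed implementation:

```cpp
#include <cstddef>

// Illustrative structure for points 1-2 above (not LLDB code).
struct TaskPool {
  TaskPool *Parent = nullptr; // nullptr for the high-level pool
  TaskPool *Child = nullptr;  // each pool has at most one child
};

// 1. The system has one high-level TaskPool.
TaskPool &GetHighLevelPool() {
  static TaskPool HighLevel; // Parent stays nullptr, per point 2
  return HighLevel;
}

// 2. Nested work runs in a child pool hanging off its parent.
// (Not thread-safe; a real version would need to lock.)
TaskPool &GetOrCreateChild(TaskPool &Parent) {
  if (!Parent.Child) {
    Parent.Child = new TaskPool;
    Parent.Child->Parent = &Parent;
  }
  return *Parent.Child;
}
```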
On 27 April 2017 at 00:12, Scott Smith via lldb-dev
wrote:
> After dealing with a bunch of microoptimizations, I'm back to
> parallelizing loading of shared modules. My naive approach was to just
> create a new thread per shared library. I have a feeling some users may not
> like that; I think
On 27 April 2017 at 19:12, Jim Ingham wrote:
> Interesting. Do you have to catch this information as the JIT modules get
> loaded, or can you recover the data after-the-fact? For most uses, I don't
> think you need to track JIT modules as they are loaded, but it would be good
> enough to refr
Hmm, turns out I was wrong about delayed symbol loading not working under
Linux. I've added timings to the review.
On Thu, Apr 27, 2017 at 11:12 AM, Jim Ingham wrote:
> Interesting. Do you have to catch this information as the JIT modules get
> loaded, or can you recover the data after-the-fac
Interesting. Do you have to catch this information as the JIT modules get
loaded, or can you recover the data after-the-fact? For most uses, I don't
think you need to track JIT modules as they are loaded, but it would be good
enough to refresh the list on stop.
Jim
> On Apr 27, 2017, at 10:
It's the gdb jit interface breakpoint. I don't think there is a good
way to scope that to a library, as that symbol can be anywhere...
On 27 April 2017 at 18:35, Jim Ingham via lldb-dev
wrote:
> Somebody is probably setting an internal breakpoint for some purpose w/o
> scoping it to the shared
Somebody is probably setting an internal breakpoint for some purpose w/o
scoping it to the shared library it's to be found in. Either that or somebody
has broken lazy loading altogether. But that's not intended behavior.
Jim
> On Apr 27, 2017, at 7:02 AM, Scott Smith wrote:
>
> So as it tur
So as it turns out, at least on my platform (Ubuntu 14.04), the symbols are
loaded regardless. I changed my test so that:
1. main() just returns right away
2. cmdline is: lldb -b -o run /path/to/my/binary
and it takes the same amount of time as setting a breakpoint.
On Wed, Apr 26, 2017 at 5:00 PM, J
We started out with the philosophy that lldb wouldn't touch any more
information in a shared library than we actually needed. So when a library
gets loaded we might need to read in and resolve its section list, but we won't
read in any symbols if we don't need to look at them. The idea was th
A worker thread would call DynamicLoader::LoadModuleAtAddress. This in
turn eventually calls SymbolFileDWARF::Index, which uses TaskRunners to
1. extract DIEs for each DWARF compile unit in a separate thread
2. parse/unmangle/etc. all the symbols
The code distance from DynamicLoader to SymbolFile
Under what conditions would a worker thread spawn additional work to be run
in parallel and then wait for it, as opposed to just doing it serially? Is
it feasible to just require tasks to be non blocking?
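One reason blocking inside a worker matters: with a fixed-size pool, if every worker enqueues subtasks and then blocks waiting on them, no thread is left to run those subtasks and the pool deadlocks. A common way out, sketched below with an invented MiniPool (not LLDB's actual code), is for the waiting thread to drain the queue itself instead of blocking:

```cpp
#include <deque>
#include <functional>
#include <mutex>

// Toy pool illustrating the "help while waiting" pattern. Instead of
// blocking until other workers finish our subtasks (which can deadlock
// when every worker is doing the same), the waiter pops and runs
// pending tasks on its own thread until the queue is empty.
struct MiniPool {
  std::mutex M;
  std::deque<std::function<void()>> Pending;

  void add(std::function<void()> F) {
    std::lock_guard<std::mutex> Lock(M);
    Pending.push_back(std::move(F));
  }

  void helpWait() {
    for (;;) {
      std::function<void()> Task;
      {
        std::lock_guard<std::mutex> Lock(M);
        if (Pending.empty())
          return;
        Task = std::move(Pending.front());
        Pending.pop_front();
      }
      Task(); // run the subtask on the waiting thread itself
    }
  }
};
```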
On Wed, Apr 26, 2017 at 4:12 PM Scott Smith via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
>
After dealing with a bunch of microoptimizations, I'm back to
parallelizing loading of shared modules. My naive approach was to just
create a new thread per shared library. I have a feeling some users may
not like that; I think I read an email from someone who has thousands of
shared libraries.
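A hedged sketch of the alternative to one thread per shared library: cap concurrency at roughly the number of hardware threads, so a target with thousands of libraries never has thousands of threads running at once. The ConcurrencyLimit name and shape are invented for illustration, not LLDB code:

```cpp
#include <condition_variable>
#include <mutex>

// Counting limiter: acquire() blocks while Max loaders are already
// running; release() frees a slot. Each per-library load would be
// bracketed by acquire()/release() instead of spawning unboundedly.
class ConcurrencyLimit {
  std::mutex M;
  std::condition_variable CV;
  unsigned Running = 0;
  const unsigned Max;

public:
  explicit ConcurrencyLimit(unsigned Max) : Max(Max) {}

  void acquire() {
    std::unique_lock<std::mutex> Lock(M);
    CV.wait(Lock, [&] { return Running < Max; });
    ++Running;
  }

  void release() {
    std::lock_guard<std::mutex> Lock(M);
    --Running;
    CV.notify_one();
  }

  unsigned running() {
    std::lock_guard<std::mutex> Lock(M);
    return Running;
  }
};
```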