[lldb-dev] [Bug 37301] New: Unable to display statically initialized pointers on arm64 (linux?) without a running process

2018-05-01 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=37301

Bug ID: 37301
   Summary: Unable to display statically initialized pointers on
arm64 (linux?) without a running process
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: lab...@google.com
CC: llvm-b...@lists.llvm.org

This happens because the pointer, even though "initialized" statically, will
leave behind a runtime relocation (at least for PIC) to be fixed up by the
dynamic linker. When lldb goes to read the value of the variable from the
object file, it just sees a zero.

On other architectures this works, though mostly by luck as the linker uses a
relocation without an explicit addend, so the value in the object file happens
to be the right pointer without the relocation applied (this applies mainly to
ELF, I don't know whether we have better relocation handling for MachO).
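
A minimal illustration (my own sketch, not taken from the bug report) of the
kind of variable that trips this up, built as PIC/PIE, e.g.
"clang++ -g -fPIE -pie repro.cpp":

  // repro.cpp
  static int g_value = 47;
  int *g_ptr = &g_value; // "statically initialized", but in a PIC build the
                         // actual address is supplied by a RELA-style dynamic
                         // relocation on arm64, so the bytes stored in .data
                         // stay zero until the dynamic linker applies it
  int main() { return *g_ptr; }

With no process running, something like "(lldb) target variable g_ptr" then
shows a null pointer, because lldb reads the zeroed .data contents straight
from the file.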



Re: [lldb-dev] Dlopen extremely slow while LLDB is attached

2018-05-01 Thread Pavel Labath via lldb-dev
On Mon, 30 Apr 2018 at 20:13, Scott Funkenhauser 
wrote:

> I messed up my test setup and incorrectly attributed the additional time.
Parsing the DWARF sections does add additional time, but only increases by
a small percentage.

> By pure chance I noticed that the latency between the client and server
had a huge impact. I did some tests against lldb_server running on a
machine with a RTT of 18ms to the client. The difference in load time in my
real world example (initializing Vulkan) was around 16x slower (~3.5s vs
55s). I did some more digging and it looks like there are certain
operations that perform a number of synchronous requests to the server
(DYLDRendezvous::TakeSnapshot - updating SO entries and
ThreadList::WillResume - grabbing SP and FP for every thread). Since all
the requests are synchronous they are extremely sensitive to increased
latency.

> Is this a limitation of the gdb-server (can't handle parallel requests)?
> Or is this not a common use case, and is not a known issue?

This is a known issue, though I did not expect it to have that much of an
impact. In fact, I have trouble reconciling this fact with your earlier
statement that second and subsequent runs are much faster. The SO entry
loading is something that has to happen on every run, so I don't see why
the second run would be faster. This would be more consistent with the
debug-info parsing case, as there we only index the dwarf once (if it
hasn't changed). So, I think we are missing something here.

In any case, this is not a fundamental limitation, and there are ways to
remove that. The most obvious one is to move the rendezvous structure
parsing to the server -- there is even a gdb packet for that; I don't know
its name off-hand. Currently we have support for that in the client (for
communicating with stubs that support it), but not in lldb-server.
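To see why the client-side version is so latency-sensitive: the rendezvous
snapshot has to chase a linked list that lives in the inferior's memory, so
every loaded library costs at least one synchronous round trip. A rough,
self-contained sketch of that shape -- an assumption on my part, not the
actual DYLDRendezvous code:

  #include <cstdint>
  #include <functional>
  #include <vector>

  struct SOEntry { uint64_t base = 0, name_addr = 0, next = 0; };

  // read_remote stands in for a synchronous gdb-remote memory read:
  // one full client<->server round trip per call.
  std::vector<SOEntry>
  TakeSnapshot(uint64_t head, const std::function<SOEntry(uint64_t)> &read_remote) {
    std::vector<SOEntry> entries;
    for (uint64_t cur = head; cur != 0;) {
      entries.push_back(read_remote(cur)); // N shared objects => N round trips
      cur = entries.back().next;
    }
    return entries;
  }

  int main() {
    // Fake three-entry list held locally, standing in for inferior memory.
    std::vector<SOEntry> fake = {{0x1000, 0, 2}, {0x2000, 0, 3}, {0x3000, 0, 0}};
    auto read = [&](uint64_t addr) { return fake[addr - 1]; };
    return TakeSnapshot(1, read).size() == 3 ? 0 : 1;
  }

Doing that walk on the server side collapses the N round trips into a single
request.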

For the register reading part, we usually make sure we send the "important"
registers in a batch, so that the client does not have to handle every one
separately. At the moment it's only PC, because that used to be enough at
some point. I don't know if anything changed in the client to make it ask
for more info, but this is something that needs to be looked at more
closely.

> I enabled "gdb-remote all" logging, and searched for all instances of
'0x5652ACF3F120' (the address of the connection object that is
reporting the timeout?). There seems to be a pretty good correlation between the
timeouts and 'Communication::SyncronizeWithReadThread'; unfortunately I
haven't had time to investigate further.

> 1525110345.916504145 0x5652acefcbb0
Broadcaster("lldb.process")::RestoreBroadcaster (about to pop
listener("lldb.PlatformLinux.DebugProcess.hijack")=0x5652acf079a0)
> 1525110345.916502953 this = 0x5652ACF3F120, timeout = 500 us
> --
> 1525110345.919557333 0x7f0868000940
'Communication::SyncronizeWithReadThread'
Listener::FindNextEventInternal(broadcaster=(nil),
broadcaster_names=(nil)[0], event_type_mask=0x, remove=1) event
0x7f086c0008c0
> 525110345.919566154 this = 0x5652ACF3F120, timeout = 500 us
> --
> 1525110346.123922110 0x7f0868000d10
'Communication::SyncronizeWithReadThread'
Listener::FindNextEventInternal(broadcaster=(nil),
broadcaster_names=(nil)[0], event_type_mask=0x, remove=1) event
0x7f086c0008c0
> 1525110346.123931408 this = 0x5652ACF3F120, timeout = 500 us
> --
> 1525110346.152676821 0x7f0868006710
'Communication::SyncronizeWithReadThread'
Listener::FindNextEventInternal(broadcaster=(nil),
broadcaster_names=(nil)[0], event_type_mask=0x, remove=1) event
0x7f086c0008c0
> 1525110346.152685642 this = 0x5652ACF3F120, timeout = 500 us
> --
> 1525110346.167683363 0x7f08682b2fe0
'Communication::SyncronizeWithReadThread'
Listener::FindNextEventInternal(broadcaster=(nil),
broadcaster_names=(nil)[0], event_type_mask=0x, remove=1) event
0x7f086c0008c0
> 1525110346.167692184 this = 0x5652ACF3F120, timeout = 500 us
I think the timeout question is a red herring, tbh. These two are correlated
because they are both things that we need to do when the process stops
(flush its STDIO after it stops, and start the read thread after we resume
it). They run fast, and there are no timeouts involved.

> --
> 1525110351.172777176 error: timed out, status = timed out, uri =
> 1525110351.172847271 this = 0x5652ACF3F120, timeout = 500 us
> 1525110356.173308611 error: timed out, status = timed out, uri =
> 1525110356.173368216 this = 0x5652ACF3F120, timeout = 500 us
> 1525110361.175591230 error: timed out, status = timed out, uri =
> 1525110361.175647497 this = 0x5652ACF3F120, timeout = 500 us
> 1525110366.180710316 error: timed out, status = timed out, uri =
> 1525110366.180769205 this = 0x5652ACF3F120, timeout = 500 us
And I bet these happen while the process is running, so they do not impact
the latency at all. It's just us waiting for the process to stop in a loop.

Re: [lldb-dev] Dlopen extremely slow while LLDB is attached

2018-05-01 Thread Scott Funkenhauser via lldb-dev
 > This is a known issue, though I did not expect it to have that much of an
> impact. In fact, I have trouble reconciling this fact with your earlier
> statement that second and subsequent runs are much faster. The SO entry
> loading is something that has to happen on every run, so I don't see why
> the second run would be faster. This would be more consistent with the
> debug-info parsing case, as there we only index the dwarf once (if it
> hasn't changed). So, I think we are missing something here.

My statement that subsequent runs are much faster was for my simplified
example (only running locally) that was meant to reproduce the problem I
was seeing when initializing Vulkan. When I went back and tested
initializing Vulkan (remotely) with SymbolFileDWARF::Index() commented out, it
was still very slow, and I realized I was on the wrong path.

My latency to the server is normally only a few ms, which is why I
initially ignored that difference between the two test environments. But it
turns out a few ms is enough to make it twice as slow as running locally.
This is what initially kicked off my investigation. I didn't catch this
earlier because I was previously using GDB locally, and recently switched
to using LLDB remotely. I hadn't compared LLDB local vs LLDB remote. My
latest example to a server with a RTT of 18ms was to confirm that latency
was the contributing factor.
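
For a sense of scale, a quick back-of-envelope check using the numbers above
(assuming the extra time is dominated by synchronous round trips):

  #include <cstdio>
  int main() {
    const double local_s = 3.5, remote_s = 55.0, rtt_s = 0.018;
    // If essentially all of the extra time is spent waiting on the wire,
    // this is roughly how many request/response pairs the startup path makes.
    std::printf("implied synchronous round trips: ~%.0f\n",
                (remote_s - local_s) / rtt_s);
  }

That comes out to roughly 2900 round trips, which is consistent with the
"one packet per SO entry / per register / per thread" pattern described
earlier, rather than with any single slow operation.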


Re: [lldb-dev] Proposal: Using LLD in tests

2018-05-01 Thread Pavel Labath via lldb-dev
I have created a patch, which extends
lldb-test to support more precise dumping of the symbol information in a
module. It uses lld to make sure the tests can run on any system (which has
lld checked out) and to avoid the tests being affected by the environment.
Let me know what you think of it.

I've also tried using lld for the PDB tests. The lld part worked fine, but
unfortunately, it seems lldb still depends on the Microsoft PDB reader to
get the symbol information (apparently the "native" PDB-reading APIs in llvm
are not all implemented).

On the bright side, it looks like lld should be able to produce working (and
debuggable) MachO binaries. It probably does not support all the
fancy features that the native darwin linker does, but it seemed to work
fine for my hello world examples (the only issue I saw was that it is not
possible to convince it to *not* require the dyld_stub_binder symbol, but
this can be worked around). I am going to continue experimenting here.


On Thu, 19 Apr 2018 at 19:41, Ted Woodward 
wrote:


> Our Windows buildbots use msys for gnuisms. The makefiles in the test
suite run fine with minimal modifications (just the object delete hack Zach
put in to use del instead of rm; msys make doesn't accept cmd syntax while
Cygwin make does). Now, that's using clang to build Hexagon binaries, but
teaching the makefile to use cl syntax shouldn't be too hard. I've seen it
done before; same makefile for windows and various unix derivatives, detect
what OS you were running on and set CFLAGS/CXXFLAGS/LDFLAGS accordingly.

> Ted

> --
> Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

> > -Original Message-
> > From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of
Pavel
> > Labath via lldb-dev
> > Sent: Thursday, April 19, 2018 12:45 PM
> > To: Leonard Mosescu 
> > Cc: aaron.lee.sm...@gmail.com; LLDB 
> > Subject: Re: [lldb-dev] Proposal: Using LLD in tests
> >
> > On Thu, 19 Apr 2018 at 18:19, Leonard Mosescu 
> > wrote:
> >
> > >>the PDB tests under lit/SymbolFile/PDB need a linker to produce
> > >> the
> > program database
> >
> >
> > > With this proposal, would we preserve any coverage for MSVC produced
> > debug information?
> >
> >
> > Well.. the question there is what are you trying to test? Is it the
fact your
> > debugger works with a particular compiler+linker combination (note that
those
> > tests already compile with clang-cl), or that your pdb-parsing code is
sane.
> > (integration vs. regression test).
> >
> > Historically we've only had the former kind of tests (dotest), and
we've had the
> > ability (and used it) to run those tests against different kinds of
compilers. This
> > is all nice, but it means that a specific test will be testing a
different thing for
> > every person who runs it. That's why I would like to build up a suite
of more
> > regression-like tests (*). I would say that the tests under lit/***
should be
> > regression tests and our goal should be to remove as many system
> > dependencies as possible, and leave the job of testing integration with
a
> > specific toolchain to "dotest" tests (**).
> >
> > Technically, the answer to your question is "no", because currently
dotest tests
> > don't know how to work with cl+link. Making that work would be an
interesting
> > project (although a bit annoying as the Makefiles are full of gcc-isms).
> > However, I don't think that should stop us here.
> >
> > (*) Ideally I would like to leave even the compiler out of the equation
for these
> > tests, and make it so that the tests always run on the exact same set
of bytes. I
> > am hoping I will be able to write at least some tests using .s files.
However, I
> > don't think I will do that for all of them, because these files can be
> > long/verbose/tedious to write.
> >
> > (**) However, even "dotest" tests should have a "default" mode which is
as
> > hermetic as possible.


Re: [lldb-dev] LLDB tests getting stuck on GDBRemoteCommunicationClientTest.GetMemoryRegionInfo ?

2018-05-01 Thread Leonard Mosescu via lldb-dev
Thanks Pavel. It doesn't look like a timeout to me:

1. First, the other (main) thread is just waiting on the std::future::get()
on the final EXPECT_TRUE(result.get().Success())

#0  0x7fe4bdfbb6cd in pthread_join (threadid=140620333614848, thread_return=0x0) at pthread_join.c:90
...
#14 0x55b855bdf370 in std::future::get (this=0x7ffe4498aad0) at /usr/include/c++/7/future:796
#15 0x55b855b8c502 in GDBRemoteCommunicationClientTest_GetMemoryRegionInfo_Test::TestBody (this=0x55b85bc195d0)
    at /usr/local/google/home/mosescu/extra/llvm/src/tools/lldb/unittests/Process/gdb-remote/GDBRemoteCommunicationClientTest.cpp:330


2. The part that seems interesting to me is this part of the callstack I
mentioned:

frame #9: 0x564647c39a23 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteClientBase::SendPacketAndWaitForResponse(this=0x56464d53e580, payload=(Data = "qSupported:xmlRegisters=i386,arm,mips", Length = 37), response=0x7f2d1eb0a0e0, send_async=false) at GDBRemoteClientBase.cpp:176
frame #10: 0x564647c44e0a ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetRemoteQSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:370
frame #11: 0x564647c4427b ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapReadSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:200
frame #12: 0x564647c4c661 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::LoadQXferMemoryMap(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:1609
frame #13: 0x564647c4bb4e ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapRegionInfo(this=0x56464d53e580, addr=16384, region=0x7f2d1eb0a6c0) at GDBRemoteCommunicationClient.cpp:1583
frame #14: 0x564647c4b95d ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetMemoryRegionInfo(this=0x56464d53e580, addr=16384, region_info=0x7ffd8b1a8870) at GDBRemoteCommunicationClient.cpp:1558
frame #15: 0x56464797ee25 ProcessGdbRemoteTests`operator(__closure=0x56464d5636a8) at GDBRemoteCommunicationClientTest.cpp:339

It seems that the client is attempting extra communication which is not
modeled in the mock HandlePacket(), so it simply hangs in there. If that's
the case I'd expect this issue to be more widespread (unless my source tree
is in a broken state).
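
A self-contained toy model of that kind of hang (just to illustrate the
pattern; this is not the actual lldb test code):

  #include <chrono>
  #include <future>
  #include <iostream>
  #include <string>

  int main() {
    std::promise<std::string> qsupported_reply;  // extra packet the client now sends
    std::promise<std::string> region_info_reply; // the only packet the test modeled

    auto client = std::async(std::launch::async, [&] {
      // Blocks here forever if nobody answers the unexpected request.
      std::string a = qsupported_reply.get_future().get();
      std::string b = region_info_reply.get_future().get();
      return a + "," + b;
    });

    // The test body only services the request it knows about...
    region_info_reply.set_value("start:4000;size:4000;permissions:rx;");

    // ...so from its point of view the result never becomes ready.
    if (client.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
      std::cout << "client stuck waiting for an unmodeled packet\n";

    // Let this demo exit cleanly; the real test has no such rescue.
    qsupported_reply.set_value("qSupported-reply");
  }

If that is what is happening, adding a response for the extra packet to the
mock server (or making the client tolerate its absence) should unstick it.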

This is the first time I looked at this part of the code, so it's possible I
missed something obvious though.



On Fri, Apr 27, 2018 at 2:11 AM, Pavel Labath  wrote:

> On Thu, 26 Apr 2018 at 22:58, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> > I just did a clean build (debug) on Linux, and I noticed that the LLDB
> tests seem to consistently get stuck:
>
> >   -- Testing: 1002 tests, 12 threads --
> >
> >   99% [==================================================-] ETA: 00:00:01
> > lldb-Suite :: types/TestIntegerTypes.py
>
>
> > At this point there are a bunch of llvm-lit processes waiting and two
> suspicious LLDB unit tests:
>
>
> > ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfo
> > ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfoInvalidResponse
>
>
> > I took a quick look and they both seem to be blocked on communicating with
> the remote:
>
> > thread #2, name = 'ProcessGdbRemot', stop reason = signal SIGSTOP
>
> These tests should have two threads communicating with each other. Can you
> check what the other thread is doing?
>
> My bet would be that the fact that we are now running dotest tests concurrently
> with the unittests is putting more load on the system (particularly in
> debug builds), and the communication times out. You can try increasing the
> timeout in GDBRemoteTestUtils.cpp:GetPacket to see if that helps.
>


Re: [lldb-dev] LLDB tests getting stuck on GDBRemoteCommunicationClientTest.GetMemoryRegionInfo ?

2018-05-01 Thread Leonard Mosescu via lldb-dev
PS. Just a wild guess: could it be related to rL327970: Re-land: [lldb]
Use vFlash commands when writing to target's flash memory… ?



[lldb-dev] Forking fixes in the context of debuggers

2018-05-01 Thread Kamil Rytarowski via lldb-dev
For the past month I've been mostly working on improving the kernel code
in the ptrace(2) API. Additionally, I've prepared support for reading
NetBSD/aarch64 core(5) files.

A critical Problem Report, kern/51630, regarding the lack of a PTRACE_VFORK
implementation has been fixed. This means that there are no other
unimplemented API calls, but there are still bugs in the existing ones.

With fixes and addition of new test cases, as of today we are passing
961 ptrace(2) tests and skipping 1 (out of 1018 total).

Plan for the next milestone

Cover the remaining forking corner-cases in the context of debuggers
with new ATF tests and fix the remaining bugs.

The first step is to implement proper support for handling
PT_TRACE_ME-traced scenarios from a vfork(2)ed child. Next I plan to
keep covering the corner cases of the forking code, finishing with the
removal of subtle bugs that have been left in the code since the
SMP'ification.
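
For readers unfamiliar with the scenario, the corner case looks roughly like
this (an illustrative sketch using the BSD ptrace(2) interface, not one of
the actual ATF tests):

  #include <sys/types.h>
  #include <sys/ptrace.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main() {
    pid_t child = vfork();
    if (child == 0) {
      // The vfork(2)ed child asks to be traced and then execs; the parent
      // stays suspended until the exec, which is the tricky part for the
      // kernel to get right.
      ptrace(PT_TRACE_ME, 0, nullptr, 0);
      execl("/usr/bin/true", "true", (char *)nullptr);
      _exit(1);
    }
    int status = 0;
    waitpid(child, &status, 0);                // child stops on exec
    ptrace(PT_CONTINUE, child, (void *)1, 0);  // resume it, as a debugger would
    waitpid(child, &status, 0);
    return 0;
  }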

http://blog.netbsd.org/tnf/entry/forking_fixes_in_the_context

This work was sponsored by The NetBSD Foundation.


