[lldb-dev] [Bug 25070] New: SBThread::ReturnFromFrame does not work

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25070

Bug ID: 25070
   Summary: SBThread::ReturnFromFrame does not work
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: beryku...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Created attachment 15010
  --> https://llvm.org/bugs/attachment.cgi?id=15010&action=edit
C++ source with test program

I'm using lldb 3.8 compiled from trunk, but I also experienced this issue in
lldb-3.6.

When I call SBThread::ReturnFromFrame while stopped in a C++ function that
returns a value, the SBValue doesn't get set, even though no error is returned.
Is this function currently implemented?
https://github.com/llvm-mirror/lldb/blob/f2d745d54e1903f72190f767633af481f61ff0c2/source/Target/Thread.cpp#L1921
seems to imply that it's not.

Example script (I attached the C++ source test.cpp):

import lldb
import os
import time

debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget("./test")
target.BreakpointCreateByLocation("test.cpp", 3)
process = target.LaunchSimple([], [], os.getcwd())

time.sleep(1) # wait for the breakpoint to be hit

thread = process.GetSelectedThread()
frame = thread.GetSelectedFrame()
value = lldb.SBValue()
error = thread.ReturnFromFrame(frame, value)
# error is marked as success, but value contains nothing
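
A note on the sleep: with the debugger in synchronous mode, LaunchSimple should
not return until the first stop, so the sleep isn't needed. A minimal sketch of
that variant (same ./test binary assumed):

import lldb
import os

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)  # synchronous mode: LaunchSimple blocks until the first stop

target = debugger.CreateTarget("./test")
target.BreakpointCreateByLocation("test.cpp", 3)
process = target.LaunchSimple([], [], os.getcwd())  # stopped at the breakpoint on return

thread = process.GetSelectedThread()
frame = thread.GetSelectedFrame()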



[lldb-dev] [Bug 25071] New: Creating target with default architecture returns invalid target

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25071

Bug ID: 25071
   Summary: Creating target with default architecture returns invalid target
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: beryku...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

This worked on lldb-3.6:
target = debugger.CreateTargetWithFileAndArch("./test", lldb.LLDB_ARCH_DEFAULT)
assert target.IsValid()

but it doesn't work on lldb 3.8 compiled from trunk; it returns an invalid target.
When I manually set the target triple ("i386-pc-linux"; I'm running Linux Mint
Rafaela 17.2 32-bit), it works and returns a valid target, but
LLDB_ARCH_DEFAULT doesn't (nor does LLDB_ARCH_DEFAULT_32BIT). Does the default
architecture have to be set manually when compiling LLDB itself?
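
For reference, a minimal sketch of the workaround described above, naming the
triple explicitly (assuming the same ./test binary on an i386 Linux host):

import lldb

debugger = lldb.SBDebugger.Create()

# Workaround: spell out the triple instead of lldb.LLDB_ARCH_DEFAULT.
target = debugger.CreateTargetWithFileAndArch("./test", "i386-pc-linux")
assert target.IsValid()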



[lldb-dev] [Bug 25076] New: auto dsym/dwarf support needs to add debug_info support to @expectedFlakey* decorators

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25076

Bug ID: 25076
   Summary: auto dsym/dwarf support needs to add debug_info support to @expectedFlakey* decorators
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

I'm converting TestCalluserDefinedFunction from XFAIL to flaky on OS X.  It is
only flaky with dSYM debug info.  AFAICT I no longer have a way to express that,
since it will be marked flaky for both debug-info variants.
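
A hypothetical illustration of the missing piece; the debug_info keyword below
is an assumption modeled on the @expectedFailure* decorators, not existing API:

# Hypothetical sketch (decorator name as in the lldb test suite's lldbtest.py;
# the debug_info keyword does not exist yet -- adding it is the request here).
@expectedFlakeyDarwin(debug_info=["dsym"])
def test_call_user_defined_function(self):
    pass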



[lldb-dev] [Bug 25076] auto dsym/dwarf support needs to add debug_info support to @expectedFlakey* decorators

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25076

Todd Fiala  changed:

What       | Removed | Added
Status     | NEW     | RESOLVED
Resolution | ---     | FIXED

--- Comment #2 from Todd Fiala  ---
Ah nice.  Thanks!  I'll change that up.



Re: [lldb-dev] Too many open files

2015-10-06 Thread Todd Fiala via lldb-dev
On Mon, Oct 5, 2015 at 3:58 PM, Adrian McCarthy  wrote:

> Different tools are giving me different numbers.
>
> At the time of the error, Windbg says there are about 2000 open handles,
> most of them are Event handles, not File handles.  That's higher than I'd
> expect, but not really concerning.
>
>
Ah, that's useful.  I am using events (Python threading.Event).  These
don't afford any cleanup mechanism, so I assume they go away
when the Python objects that hold them go away.


> Process Explorer, however, shows ~20k open handles per Python process
> running dotest.exe.  It also says that about 2000 of those are the
> process's "own handles."  I'm researching to see what that means.  I
> suspect it means that the process has about ~18k handles to objects owned
> by another process and 2k of ones that it actually owns.
>
> I found this Stack Overflow post, which suggests it may be an interaction
> with using Python subprocess in a loop and having those subprocesses work
> with files that are still open in the parent process, but I don't entirely
> understand the answer:
>
>
> http://stackoverflow.com/questions/16526783/python-subprocess-too-many-open-files
>
>
Hmm I'll read through that.
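
For what it's worth, the usual mitigation in that situation is to keep child
processes from inheriting the parent's descriptors; a minimal sketch with
Python 2.7's subprocess (placeholder command line):

import subprocess

# close_fds=True keeps the child from inheriting every open file descriptor
# of the parent test runner.  Caveat: on Windows with Python 2.7 it cannot
# be combined with redirected stdin/stdout/stderr.
child = subprocess.Popen(["python", "--version"], close_fds=True)
child.wait()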


> It might be a problem with Python subprocess that's been fixed in a newer
> version.  I'm going to try upgrading from Python 2.7.9 to 2.7.10 to see if
> that makes a difference.
>
>
Okay, we're on 2.7.10 on latest OS X.  I *think* I'm using Python 2.7.6 on
Ubuntu 14.04.  Checking now... (yes, 2.7.6 on 14.04).  Ubuntu 15.10 beta 1
is using Python 2.7.10.

Seems reasonable to check that out.  Let me know what you find out!

-Todd


> On Mon, Oct 5, 2015 at 12:02 PM, Todd Fiala  wrote:
>
>> It's possible.  However, I was monitoring actual open files during the
>> course of the run (i.e. what the kernel thought was open for the master
>> driver process, which is the only place that makes sense to see leaks
>> accumulate) in both threading and threading-pool (on OS X), and I saw only
>> the handful of file handles that I'd expect to  be open - pipes
>> (stdout,stderr,stdin) from the main test runner to the inferior test
>> runners, the shared libraries loaded as part of the test runner, and (in my
>> case, but probably not yours for the configuration), the tcp sockets for
>> gathering the test events.  There was no growth, and I didn't see things
>> hanging around longer than I'd expect.
>>
>> The SysInternals process viewer tool is great for this kind of thing -
>> glad you're using it.  Once you find out which file handles are getting
>> leaked and where they came from, we can probably figure out which part of
>> the implementation is leaking it.  I don't *expect* it to be on our side
>> given that it's not showing up on a POSIX-y system, but maybe it really is
>> but isn't in the form of a file handle on the POSIX side.  I should have a
>> look at the memory growth...
>>
>> On Mon, Oct 5, 2015 at 11:41 AM, Adrian McCarthy 
>> wrote:
>>
>>> I'm poking around with some SysInternals tools.  Over the course of a test
>>> run, there are about 602k opens (CreateFiles) and 405k
>>> closes (CloseFiles) system-wide.
>>>
>>> I'm looking for a way to stop it once the error happens, so I can see
>>> how many files each process has open.  As it stands, the OS cleans up once
>>> the error is hit.
>>>
>>> I wonder if it's not a matter of actually leaking open file handles but
>>> that the closes are happening too late so that we cross the threshold
>>> shortly before the test runner would have shut everything down.
>>>
>>> On Mon, Oct 5, 2015 at 11:32 AM, Todd Fiala 
>>> wrote:
>>>
 On OS X, I'm also not seeing growth in the --test-runner-name
 threading-pool (the one you were using on Windows).

 Perhaps you can dig into if you're experiencing some kind of file leak
 on Windows.  It's possible you're hitting a platform-specific leak?  I
 recall Ed Maste hitting a FreeBSD-only leak in one or more of the python
 2.7.x releases.

 On Mon, Oct 5, 2015 at 11:26 AM, Todd Fiala 
 wrote:

> Hmm, on OS X the file handles seem to be well behaved on the
> --test-runner-name threading.  I'm not seeing any file handle growth 
> beyond
> the file handles I expect to be open.
>
> I'll see if the threading-pool behaves differently.  (That is similar
> to threading but uses the multiprocessing.pool mechanism, at the expense 
> of
> me not  being able to catch Ctrl-C at all).
>
> It's possible the pool is introducing some leakage at the file level.
>
> On Mon, Oct 5, 2015 at 11:20 AM, Todd Fiala 
> wrote:
>
>> Interesting, okay..
>>
>> This does appear to be an accumulation issue.  You made it most of
>> the way through before the issue hit.  I suspect we're leaking file
>> handles.  It probably doesn't hit the per-process limit on 
>> multiprocessing
>> because the leaked files get spread across more processes.
>

Re: [lldb-dev] Too many open files

2015-10-06 Thread Adrian McCarthy via lldb-dev
Python 2.7.10 made no difference.  I'm dealing with other issues this
afternoon, so I'll probably return to this on Wednesday.  It's not critical
since there are workarounds.

Re: [lldb-dev] Too many open files

2015-10-06 Thread Todd Fiala via lldb-dev
Okay.

A promising avenue might be to look at how Windows cleans up the
threading.Event objects.  Chasing that thread might yield why the events
are not going away (assuming those are the events that are lingering on
your end).  One thing you could consider doing is patching in a replacement
destructor for threading.Event that prints something when it fires,
verifying that they're really going away on the Python side.  If they're
not, perhaps there's a retain-bloat issue where we're not getting rid of
some Python objects due to unintended references living beyond
expectations.
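
A minimal sketch of that instrumentation, assuming CPython 2.7 where
threading.Event() returns a threading._Event instance:

import threading

# Debugging aid: log whenever an Event object is garbage-collected, to
# confirm the per-inferior events really do go away.
_event_cls = getattr(threading, "_Event", None) or threading.Event

def _event_deleted(self):
    print("threading.Event 0x%x collected" % id(self))

_event_cls.__del__ = _event_deleted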

The dosep.py call_with_timeout method drives the child process operation
chain.  That thing creates a ProcessDriver and collects the results from it
when done.  Everything within the ProcessDriver (including the event)
should be cleaned up by the time the call_with_timeout() call wraps up as
there shouldn't be any references outstanding.  It might also be worth
adding a destructor to the ProcessDriver to make sure that's going away,
one per Python test inferior executed.


[lldb-dev] [Bug 25070] SBThread::ReturnFromFrame does not work

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25070

Jim Ingham  changed:

What       | Removed | Added
Status     | NEW     | RESOLVED
CC         |         | jing...@apple.com
Resolution | ---     | INVALID

--- Comment #1 from Jim Ingham  ---
ReturnFromFrame doesn't capture the return value from a given frame; rather, it
FORCES a return from that frame without executing the rest of the code in the
frame, and optionally artificially sets the return value to the value passed
in.  So, provided this actually did force a return from the current stack frame,
it is behaving as designed.

If you want to capture the return value after executing the code from the
frame, then call SBThread::StepOut, and then check
SBThread::GetStopReturnValue.

I'll add some Python autodoc to ReturnFromFrame to make it clear what it does.



[lldb-dev] [Bug 25070] SBThread::ReturnFromFrame does not work

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25070

Jakub Beránek  changed:

What       | Removed  | Added
Status     | RESOLVED | REOPENED
Resolution | INVALID  | ---

--- Comment #3 from Jakub Beránek  ---
I tried to use StepOut, but it also doesn't seem to work.

import lldb
import os
import time

debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget("./test")
target.BreakpointCreateByLocation("test.cpp", 3)
process = target.LaunchSimple([], [], os.getcwd())

time.sleep(2) # wait for the breakpoint to be hit

thread = process.GetSelectedThread()
thread.StepOut()
value = thread.GetStopReturnValue() # No value

When I print the thread (print(thread)), it shows the stop reason and the value:
"stop reason = step out\nReturn value: (int) $0 = 11", but I can't get the
value from the thread directly (both return_value and GetStopReturnValue()
return No value). Also, after the step out, the frames of the thread disappear
(before the step out the thread had 6 stack frames; after it, all of them
were gone).



[lldb-dev] [Bug 25081] New: SBThread::is_stopped shows incorrect value

2015-10-06 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25081

Bug ID: 25081
   Summary: SBThread::is_stopped shows incorrect value
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: beryku...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Created attachment 15016
  --> https://llvm.org/bugs/attachment.cgi?id=15016&action=edit
C++ source code that loops endlessly

I'm using lldb 3.8 built from trunk. When I stop the process, it stops all the
threads, but their is_stopped attribute is not set properly. I've attached a
simple C++ source file that loops endlessly.

import lldb
import os
import time

debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget("./test")
process = target.LaunchSimple([], [], os.getcwd())

time.sleep(1)

process.Stop()

time.sleep(1)

for t in process:
    assert t.is_stopped  # fails: is_stopped is not set even though the process is stopped
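
For comparison, a sketch that cross-checks the process state and per-thread
stop reason in the same session (continuing from the script above); is_stopped
would be expected to agree with these:

state = process.GetState()
print("process state: %s" % lldb.SBDebugger.StateAsCString(state))
for t in process:
    print("thread %d: stop reason=%d, is_stopped=%s"
          % (t.GetIndexID(), t.GetStopReason(), t.is_stopped))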
