Re: [lldb-dev] [llvm-dev] [cfe-dev] [3.8 Release] RC1 has been tagged

2016-01-28 Thread Daniel Sanders via lldb-dev
I've been putting together a patch to bring that back and I've just posted it 
as http://reviews.llvm.org/D16679.

From: Nikola Smiljanic [mailto:popiz...@gmail.com]
Sent: 28 January 2016 02:00
To: Daniel Sanders
Cc: James Molloy; Ismail Donmez; Ben Pope; cfe-dev; openmp-dev 
(openmp-...@lists.llvm.org); LLDB Dev
Subject: Re: [llvm-dev] [cfe-dev] [3.8 Release] RC1 has been tagged

It seems that test-release was fixed, thanks everyone. Builds are OK, but I'd 
like to know where the test-suite went. All I see is the llvm.src directory; am 
I supposed to export the test-suite myself?

On Wed, Jan 27, 2016 at 9:47 PM, Daniel Sanders 
<daniel.sand...@imgtec.com> wrote:
> Have you accidentally checked out the test-suite into /projects? If it's 
> there, it will auto-configure.

We fixed it for rc1 but test-release.sh used to put the test-suite there.

From: llvm-dev 
[mailto:llvm-dev-boun...@lists.llvm.org]
 On Behalf Of James Molloy via llvm-dev
Sent: 26 January 2016 16:05
To: Ismail Donmez; Nikola Smiljanic
Cc: Ben Pope; llvm-dev; cfe-dev; openmp-dev 
(openmp-...@lists.llvm.org); LLDB Dev
Subject: Re: [llvm-dev] [cfe-dev] [3.8 Release] RC1 has been tagged

The test-suite shouldn't be built with CMake for the release; the CMake 
system is not yet ready. Have you accidentally checked out the test-suite into 
/projects? If it's there, it will auto-configure.

James

On Tue, 26 Jan 2016 at 16:01 Ismail Donmez via cfe-dev 
<cfe-...@lists.llvm.org> wrote:
On Tue, Jan 26, 2016 at 1:56 PM, Nikola Smiljanic via llvm-dev
<llvm-...@lists.llvm.org> wrote:
> Phase1 fails to build on openSUSE 13.2, can anyone see what's wrong from
> this log file?

Something wrong with the test-suite:

make -f CMakeFiles/test-suite.dir/build.make CMakeFiles/test-suite.dir/depend
make[2]: Entering directory
'/home/nikola/rc1/Phase1/Release/llvmCore-3.8.0-rc1.obj'
CMakeFiles/test-suite.dir/build.make:112: *** target pattern contains
no '%'.  Stop.
make[2]: Leaving directory
'/home/nikola/rc1/Phase1/Release/llvmCore-3.8.0-rc1.obj'
CMakeFiles/Makefile2:198: recipe for target
'CMakeFiles/test-suite.dir/all' failed
make[1]: *** [CMakeFiles/test-suite.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs
___
cfe-dev mailing list
cfe-...@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Module Cache improvements - RFC

2016-01-28 Thread Pavel Labath via lldb-dev
Hello all,

we are running into limitations of the current module download/caching
system. A simple Android application can link to about 46 megabytes
worth of modules, and downloading that with our current transfer rates
takes about 25 seconds. Much of the data we download this way is never
actually accessed, and yet we download everything immediately upon
starting the debug session, which makes the first session extremely
laggy.

We could speed up a lot by only downloading the portions of the module
that we really need (in my case this turns out to be about 8
megabytes). Also, further speedups could be made by increasing the
throughput of the gdb-remote protocol used for downloading these files
by using pipelining.
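The partial-download idea can be sketched as a chunk cache that fetches byte ranges on demand. This is a minimal sketch with hypothetical names; the real implementation would persist chunks to disk and fetch them over the gdb-remote protocol rather than through a plain callback:

```python
# Minimal sketch (hypothetical API): fetch and cache only the byte ranges
# of a remote module that are actually read.

class ChunkCache(object):
    def __init__(self, fetch_range, chunk_size=64 * 1024):
        # fetch_range(offset, size) -> bytes; in lldb this would be backed
        # by gdb-remote file reads. chunk_size is an assumed granularity.
        self.fetch_range = fetch_range
        self.chunk_size = chunk_size
        self.chunks = {}  # chunk index -> cached bytes

    def read(self, offset, size):
        first = offset // self.chunk_size
        last = (offset + size - 1) // self.chunk_size
        data = b''
        for idx in range(first, last + 1):
            if idx not in self.chunks:  # download each chunk at most once
                self.chunks[idx] = self.fetch_range(idx * self.chunk_size,
                                                    self.chunk_size)
            data += self.chunks[idx]
        return data[offset - first * self.chunk_size:][:size]


# Usage with an in-memory stand-in for the remote file:
blob = bytes(bytearray(range(256)))
calls = []

def fetch(offset, size):
    calls.append(offset)
    return blob[offset:offset + size]

cache = ChunkCache(fetch, chunk_size=16)
assert cache.read(5, 20) == blob[5:25]   # downloads chunks 0 and 1
cache.read(5, 20)                        # served from cache; no new fetch
assert calls == [0, 16]
```

Reads that cross chunk boundaries are stitched together from cached pieces, so only the first access to a region pays the transfer cost.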

I made a proof-of-concept hack of these things, put it into lldb and
I was able to get the time for the startup-attach-detach-exit cycle
down to 5.4 seconds (for comparison, the current time for the cycle is
about 3.6 seconds with a hot module cache, and 28(!) seconds with an
empty cache).

Now, I would like to implement these things in lldb properly,
so this is a request for comments on my plan. What I would like to do
is:
- Replace ModuleCache with a SectionCache (actually, more like a cache
of arbitrary file chunks). When the cache gets a request for a file
and the file is not in the cache already, it returns a special kind of
Module, whose fragments will be downloaded as we try to
access them. These fragments will be cached on disk, so that
subsequent requests for the file do not need to re-download them. We
can also have the option to short-circuit this logic and download the
whole file immediately (e.g., when the file is small, or we have a
super-fast way of obtaining the whole file via rsync, etc...)
- Add pipelining support to GDBRemoteCommunicationClient for
communicating with the platform. This actually does not require any
changes to the wire protocol. The only change is in adding the ability
to send an additional request to the server while waiting for the
response to the previous one. Since the protocol is request-response
based and we are communicating over a reliable transport stream, each
response can be correctly matched to a request even though we have
multiple packets in flight. Any packets which need to maintain more
complex state (like downloading a single entity using continuation
packets) can still lock the stream to get exclusive access, but I am
not sure if we actually even have any such packets in the platform
flavour of the protocol.
- Parallelize downloading of multiple files, utilizing
request pipelining. Currently we get the biggest delay when first
attaching to a process (we download file headers and some basic
informative sections) and when we try to set the first symbol-level
breakpoint (we download symbol tables and string sections). Both of
these actions operate on all modules in bulk, which makes them easy
parallelization targets. This will provide a big speed boost, as we
will be eliminating communication latency. Furthermore, in the case of
lots of files, we will overlap file download (I/O) with parsing
(CPU), for an even bigger boost.
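The pipelining point above can be sketched as follows. The transport callbacks are assumptions, not lldb's actual GDBRemoteCommunicationClient API; the key observation is that a strictly request-response protocol over a reliable stream lets a FIFO of in-flight requests pair each response with its request:

```python
from collections import deque

class PipelinedClient(object):
    def __init__(self, send_packet, recv_packet):
        self.send_packet = send_packet    # writes one packet to the stream
        self.recv_packet = recv_packet    # reads the next response packet
        self.in_flight = deque()          # requests awaiting a response

    def send(self, request):
        # Fire off the request without waiting for earlier responses.
        self.send_packet(request)
        self.in_flight.append(request)

    def receive(self):
        # Responses arrive in request order, so the oldest in-flight
        # request is the one this response answers.
        request = self.in_flight.popleft()
        return request, self.recv_packet()


# Loopback transport standing in for the platform connection:
wire = deque()
client = PipelinedClient(wire.append, lambda: 'resp-for-' + wire.popleft())

client.send('read-chunk-1')   # three requests in flight at once
client.send('read-chunk-2')
client.send('read-chunk-3')
assert client.receive() == ('read-chunk-1', 'resp-for-read-chunk-1')
assert client.receive() == ('read-chunk-2', 'resp-for-read-chunk-2')
```

Packets that need exclusive access to the stream (e.g. multi-packet downloads) would simply drain `in_flight` before starting, matching the locking described above.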

What do you think?

cheers,
pl


Re: [lldb-dev] Fixing OS X Xcode build

2016-01-28 Thread Todd Fiala via lldb-dev
This is all fixed up by r259028.  The commit comments for r259027 describe
some changes to the build requirements for Xcode OS X builds.

These boil down to essentially:
* OS X 10.9 is the minimum deployment version now, up from 10.8.  This is
driven by the LLVM/clang cmake-based build.

* CMake is now required.  (Not surprising, hopefully).

* The build grabs the LLVM and clang sources with git via the
http://llvm.org/git/{project}.git mirrors if the code isn't already present
at the lldb/llvm and lldb/llvm/tools/clang directory locations.  Previously
it would use svn for the initial retrieval.

The buildbot is turned back on and is now green.  r259028 fixed a minor
breakage in the gtest target that I forgot to check when doing the work for
r259027.

Let me know if you have any questions!

-Todd

On Wed, Jan 27, 2016 at 7:30 AM, Todd Fiala  wrote:

> Hi all,
>
> At the current moment the OS X Xcode build is broken.  I'll be working on
> fixing it today.  As has been discussed in the past, post llvm/clang-3.8
> the configure/automake system was getting stripped out of LLVM and clang.
> The OS X Xcode build has a legacy step in it that still uses the
> configure-based build system.  I'll be cleaning that up today.
>
> In the meantime, expect if you use the Xcode build that you'll either need
> to work with llvm/clang from earlier than yesterday (along with locally
> undoing any changes in lldb for llvm/clang changes - there was at least one
> yesterday), or just sit tight a bit.
>
> Thanks!
> --
> -Todd
>



-- 
-Todd


Re: [lldb-dev] Fixing OS X Xcode build

2016-01-28 Thread Nico Weber via lldb-dev
On Thu, Jan 28, 2016 at 9:28 AM, Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> This is all fixed up by r259028.  Change comments for r259027 contain some
> changes to the build requirements for Xcode OS X builds.
>
> These boil down to essentially:
> * OS X 10.9 is the minimum deployment version now, up from 10.8.  This is
> driven by the LLVM/clang cmake-based build.
>

(FWIW we build clang binaries with a deployment target of 10.6 (this
requires some trickery due to libc++ not being there) -- the cmake-based
build should at least support 10.7 without any problems as far as I know.
Not that I have a problem with lldb requiring 10.9+, the reason just sounds
a bit surprising to me.)




Re: [lldb-dev] Fixing OS X Xcode build

2016-01-28 Thread Todd Fiala via lldb-dev
Yeah, we poked around at it for a while here.

This is the issue I hit:

-- Performing Test HAVE_CXX_ATOMICS_WITHOUT_LIB
-- Performing Test HAVE_CXX_ATOMICS_WITHOUT_LIB - Failed
-- Looking for __atomic_fetch_add_4 in atomic
-- Looking for __atomic_fetch_add_4 in atomic - not found

CMake Error at cmake/modules/CheckAtomic.cmake:36 (message):
  Host compiler appears to require libatomic, but cannot find it.

Call Stack (most recent call first):
  cmake/config-ix.cmake:296 (include)
  CMakeLists.txt:409 (include)
-- Configuring incomplete, errors occurred!

With a deployment target set to 10.8, it doesn't find the atomic header or
a lib.  With a deployment target of 10.9, it passes.  (It finds the
header).  This is using Xcode 7.2 and 7.3 beta1 compilers, our latest
publicly available options.




-- 
-Todd


Re: [lldb-dev] Fixing OS X Xcode build

2016-01-28 Thread Todd Fiala via lldb-dev
(7.3 beta 2 is public, but I was primarily focusing on 7.2 and 7.3 beta
1).




-- 
-Todd


Re: [lldb-dev] Ubuntu version-based fail/skip

2016-01-28 Thread Todd Fiala via lldb-dev
That could be a reasonable way to do it.  Now that I think about it,
unittest2 already gives us a generic skip where we can put the logic in
that we want.  Not sure why that didn't occur to me earlier as I've done
that very thing in the past. (I think I've conditioned myself to use our
custom decorators...)

On Mon, Jan 25, 2016 at 5:44 AM, Tamas Berghammer 
wrote:

> I think recently we are trying to reduce the number of decorators we
> have, so adding a few new Ubuntu-specific decorators might not be a good
> idea. My suggestion would be to move a little bit toward the functional
> programming style by adding a new option to @expectedFailureAll where we
> can specify a function that has to evaluate to true for the decorator to
> be considered (and it is evaluated only after all other conditions of
> @expectedFailureAll). Then we can create a free function called
> getLinuxDistribution that will return the distribution id, and then as a
> final step we can specify a lambda to expectedFailureAll through its new
> argument that calls getLinuxDistribution and compares it with the right
> value. I know it is a lot of hoops to jump through to get a
> distribution-specific decorator, but I think this approach can handle
> arbitrarily complex skip/xfail conditions, which will help us in the future.
>
> What do you think?
>
> Thanks,
> Tamas
>
>
>
> On Fri, Jan 22, 2016 at 6:31 PM Todd Fiala  wrote:
>
>> Hey all,
>>
>> What do you think about having some kind of way of marking the (in this
>> case, specifically) Ubuntu distribution for fail/skip test decorators?
>> I've had a few cases where I've needed to mark tests as failing for Ubuntu
>> where it really was only a particular release of an Ubuntu distribution,
>> and wasn't specifically the compiler.  (i.e. it was a constellation of more
>> moving parts that clearly occur on a particular release of an Ubuntu
>> distribution but not on others, and certainly not generically across all
>> Linux distributions).
>>
>> I'd love to have a way to skip and xfail a test for a specific Ubuntu
>> distribution release.  I guess it could be done uber-generically, but with
>> Linux distributions this can get complicated due to the os/distribution
>> axes.  So I'd be happy to start off with just having them at a distribution
>> basis:
>>
>> @skipIfUbuntu(version_check_list)  # version_check_list contains one or
>> more version checks that, if passing, trigger the skip
>>
>> @expectedFailureUbuntu(version_check_list)  # similar to above
>>
>> Or possibly more usefully,
>>
>> @skipIfLinuxDistribution(version_check_list)  # version_check_list
>> contains one or more version checks that, if passing, trigger the skip,
>> includes the distribution
>>
>> @expectedFailureLinuxDistribution(version_check_list)  # similar to above
>>
>>
>> It's not clear to me how to work in the os=linux, distribution=Ubuntu
>> into the more generic checks and get distribution-level version
>> checking working right otherwise, but I'm open to suggestions.
>>
>> The workaround for the short term is to just use blanket-linux @skipIf
>> and @expectedFailure style calls.
>>
>> Thoughts?
>> --
>> -Todd
>>
>


-- 
-Todd


Re: [lldb-dev] Ubuntu version-based fail/skip

2016-01-28 Thread Zachary Turner via lldb-dev
I'd prefer to avoid calling the unittest2 functions.  We already do that in
a couple places, but if we could centralize on one place where we call
unittest2 decorators it would really make it easier to customize our own
decorators.  For example, I have a short-term goal of adding an option to
dotest that allows us to treat skips as xfails.  Right now this is
problematic because we have calls into the unittest2 decorators scattered
all over.  If it's in one place, it's very easy to make this change.

I like Tamas's solution, and I was planning on adding that anyway.
Basically, make @decorateTest() take a new keyword argument called test_fun,
defaulting to None.  Inside the wrapper function, just run that function if
it's specified and have that be factored into the equation.  Everything
else should just work.  By putting it here, we also get the behavior for
free on both skips and xfails.



[lldb-dev] [Bug 26363] New: lldb 3.8.0.rc1 fails to build out of llvm tree

2016-01-28 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=26363

Bug ID: 26363
Summary: lldb 3.8.0.rc1 fails to build out of llvm tree
Product: lldb
Version: 3.8
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P
Component: All Bugs
Assignee: lldb-dev@lists.llvm.org
Reporter: su...@fb.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Build and install llvm, clang and compiler-rt. Clone lldb into a directory
outside of the llvm tree. Build it:
```
% cmake -G Ninja -DCMAKE_BUILD_TYPE=Release
-DLLDB_PATH_TO_LLVM_BUILD=/home/sugak/llvm/3.8.0/centos6-native/da39a3e
-DLLDB_PATH_TO_CLANG_BUILD=/home/sugak/llvm/3.8.0/centos6-native/da39a3e
% ninja

FAILED: /home/sugak/gcc/4.9.x/centos6-native/1317bc4/bin/g++  
-DHAVE_NR_PROCESS_VM_READV -DHAVE_ROUND -D__STDC_CONSTANT_MACROS
-D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -O3 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4
-fno-omit-frame-pointer -momit-leaf-frame-pointer -m64 -mtune=generic
-isystem/home/sugak/python/2.7.8/centos6-native/da39a3e/include
-isystem/home/sugak/ncurses/5.9/centos6-native/da39a3e/include
-isystem/home/engshare/third-party2/libedit/3.1/centos6-native/e1c8e90/include
-isystem/home/engshare/third-party2/llvm-fb/stable/centos6-native/da39a3e/include
 -fvisibility-inlines-hidden -Werror=date-time -std=c++11 -ffu
nction-sections -fdata-sections -Wno-deprecated-declarations
-Wno-unknown-pragmas -Wno-strict-aliasing -Wno-deprecated-register
-Wno-vla-extension  -fno-exceptions -fno-rtti -O3 -DNDEBUG -Itools/lldb-mi
-I/home/sugak/lldb/3.8.0.rc1/src/lldb/tools/lldb-mi
-I/home/sugak/lldb/3.8.0.rc1/src/lldb/include -Iinclude
-I/home/sugak/llvm/3.8.0/centos6-native/da39a3e/include
-I/home/lldb/3.8.0.rc1/src/lldb/source
-I/home/sugak/python/2.7.8/centos6-native/da39a3e/include/python2.7
-I/home/sugak/lldb/3.8.0.rc1/src/lldb/tools/clang/include -I../clang/include
-I/home/sugak/ncurses/5.9/centos6-native/da39a3e/includ
e -MMD -MT tools/lldb-mi/CMakeFiles/lldb-mi.dir/MICmdCmdData.cpp.o -MF
tools/lldb-mi/CMakeFiles/lldb-mi.dir/MICmdCmdData.cpp.o.d -o
tools/lldb-mi/CMakeFiles/lldb-mi.dir/MICmdCmdData.cpp.o -c
/home/sugak/lldb/3.8.0.rc1/src/lldb/tools/lldb-mi/MICmdCmdData.cpp
In file included from
/home/sugak/lldb/3.8.0.rc1/src/lldb/tools/lldb-mi/MICmdCmdData.cpp:45:0:
/home/sugak/lldb/3.8.0.rc1/src/lldb/tools/lldb-mi/MIUtilParse.h:13:39: fatal
error: ../lib/Support/regex_impl.h: No such file or directory
 #include "../lib/Support/regex_impl.h"
   ^
compilation terminated.
```

Looks like `lldb/tools/lldb-mi/MIUtilParse.h` (included from
`MICmdCmdData.cpp`) includes a header from the LLVM repository and expects to
reach it by a relative path from the lldb root.

-- 
You are receiving this mail because:
You are the assignee for the bug.


[lldb-dev] Understanding debugger launch events sequence

2016-01-28 Thread Jeffrey Tan via lldb-dev
Hi,

On Mac OS, I am having difficulty understanding lldb's sequence of debugger
events at launch. I used the following code to play around with LLDB. I found
that, for some binaries, the debugger enters stopped/paused mode and waits for
my further input; printing the stack shows:
dbg> bt
* thread #1: tid = 0x15153e, 0x7fff5fc0d2af
dyld`gdb_image_notifier(dyld_image_mode, unsigned int, dyld_image_info
const*) + 1
  * frame #0: 0x7fff5fc0d2af dyld`gdb_image_notifier(dyld_image_mode,
unsigned int, dyld_image_info const*) + 1
frame #1: 0x401d

But for some other binaries, it just prints "Process event: stopped, reason: 1"
and the inferior exits immediately without waiting for further debugger input.

Questions:
1. When I launch a binary, is there supposed to be a loader breakpoint
waiting for the debugger to continue? What other debug events should I
expect to get and continue from?
2. What about attach?
3. What is the dyld`gdb_image_notifier() debugger break above? Why does it
happen for some binaries but not others?

Thanks for any information!

# Should be first for LLDB package to be added to search path.
from find_lldb import lldb
from lldb import eStateStepping, eStateRunning, eStateExited, SBBreakpoint, \
    SBEvent, SBListener, SBProcess, SBTarget
import sys
import os
import subprocess
from sys import stdin, stdout
from threading import Thread


class LLDBListenerThread(Thread):
    should_quit = False

    def __init__(self, process):
        Thread.__init__(self)
        self.listener = SBListener('Chrome Dev Tools Listener')
        self._add_listener_to_process(process)
        self._broadcast_process_state(process)
        self._add_listener_to_target(process.target)

    def _add_listener_to_target(self, target):
        # Listen for breakpoint/watchpoint events (Added/Removed/Disabled/etc).
        broadcaster = target.GetBroadcaster()
        mask = (SBTarget.eBroadcastBitBreakpointChanged |
                SBTarget.eBroadcastBitWatchpointChanged |
                SBTarget.eBroadcastBitModulesLoaded)
        broadcaster.AddListener(self.listener, mask)

    def _add_listener_to_process(self, process):
        # Listen for process events (Start/Stop/Interrupt/etc).
        broadcaster = process.GetBroadcaster()
        mask = SBProcess.eBroadcastBitStateChanged
        broadcaster.AddListener(self.listener, mask)

    def _broadcast_process_state(self, process):
        state = 'stopped'
        if process.state == eStateStepping or process.state == eStateRunning:
            state = 'running'
        elif process.state == eStateExited:
            state = 'exited'
            self.should_quit = True
        thread = process.selected_thread
        print 'Process event: %s, reason: %d' % (state,
                                                 thread.GetStopReason())

    def _breakpoint_event(self, event):
        breakpoint = SBBreakpoint.GetBreakpointFromEvent(event)
        print 'Breakpoint event: %s' % str(breakpoint)

    def run(self):
        while not self.should_quit:
            event = SBEvent()
            if self.listener.WaitForEvent(1, event):
                if event.GetType() == SBTarget.eBroadcastBitModulesLoaded:
                    print 'Module load: %s' % str(event)
                elif SBProcess.EventIsProcessEvent(event):
                    self._broadcast_process_state(
                        SBProcess.GetProcessFromEvent(event))
                elif SBBreakpoint.EventIsBreakpointEvent(event):
                    self._breakpoint_event(event)


def _interactive_loop(debugger):
    process = debugger.GetSelectedTarget().process
    event_thread = LLDBListenerThread(process)
    event_thread.start()

    while True:
        stdout.write('dbg> ')
        command = stdin.readline().rstrip()
        if len(command) == 0:
            continue
        debugger.HandleCommand(command)


def main():
    debugger = lldb.SBDebugger.Create()

    print('Working Directory: %s' % os.getcwd())
    debugger.HandleCommand('target create /usr/bin/find')
    debugger.HandleCommand('run .')
    _interactive_loop(debugger)


if __name__ == '__main__':
    main()