[lldb-dev] [Bug 26677] LLDB does not report Linux signal 35 (SIGRTMIN+1)
https://llvm.org/bugs/show_bug.cgi?id=26677

lab...@google.com changed:

           What      |Removed |Added
  --------------------------------------------------
           Status    |NEW     |RESOLVED
           CC        |        |lab...@google.com
           Resolution|---     |WONTFIX

--- Comment #2 from lab...@google.com ---
We've decided to treat real-time signals like other non-critical signals (e.g. SIGALRM and SIGCHLD) and not stop on them by default. The reason is that debugging applications which use these signals heavily was quite troublesome (unless you are debugging the actual signal-passing logic). You can always restore the original behavior with the command "process handle --stop true SIGWHATEVER". We may change the default back if it turns out that this is what most people want, but for now you will need to enable stopping manually if that is what you need. Please reopen if you still can't get your program to stop after enabling that behavior.
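A hedged example of restoring the old behavior (the signal name below is illustrative; running "process handle" with no arguments lists the exact names lldb uses for your target along with their current pass/stop/notify settings):

  (lldb) process handle
  (lldb) process handle --stop true --notify true SIGRTMIN+1
  (lldb) process handle SIGRTMIN+1

The last command re-prints the disposition for that one signal so you can confirm the change took effect.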
[lldb-dev] [Bug 24717] MiBreakTestCase.test_lldbmi_break_insert_function_pending is marked expectedFlakeyLinux, but still fails
https://llvm.org/bugs/show_bug.cgi?id=24717

lab...@google.com changed:

           What      |Removed  |Added
  --------------------------------------------------
           Status    |RESOLVED |REOPENED
           Resolution|FIXED    |---
[lldb-dev] [Bug 26694] New: Expression evaluation broken for __attribute__((overloadable))
https://llvm.org/bugs/show_bug.cgi?id=26694

            Bug ID: 26694
           Summary: Expression evaluation broken for __attribute__((overloadable))
           Product: lldb
           Version: unspecified
          Hardware: PC
                OS: Windows NT
            Status: NEW
          Severity: normal
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: e...@codeplay.com
                CC: llvm-b...@lists.llvm.org, scalla...@apple.com
    Classification: Unclassified

Created attachment 15932
  --> https://llvm.org/bugs/attachment.cgi?id=15932&action=edit
Example C file and compiled binary

Commit r260768 broke expression evaluation for C functions with __attribute__((overloadable)). See the attached example, compiled with clang -g -O0.

Before this commit, the correct decl of the evaluated function 'MyFunc' was chosen:

(lldb) target create "overloadable"
Current executable set to 'overloadable' (x86_64).
(lldb) b 19
Breakpoint 1: where = overloadable`main + 44 at overloadable.c:19, address = 0x004005bc
(lldb) r
Process 7127 launched: '/home/ewan/Desktop/Scratch/overloadable' (x86_64)
This is a float function
This is an integer function
Process 7127 stopped
* thread #1: tid = 7127, 0x004005bc overloadable`main + 44 at overloadable.c:19, name = 'overloadable', stop reason = breakpoint 1.1
    frame #0: 0x004005bc overloadable`main + 44 at overloadable.c:19
   16       MyFunc(2.0f);
   17       int x = MyFunc(2);
   18
-> 19       return 0;
   20   }
(lldb) expr -- MyFunc('a')
This is an integer function
(int) $0 = 97
(lldb) expr -- MyFunc(2)
This is an integer function
(int) $1 = 2
(lldb) expr -- MyFunc(2.0f)
This is a float function

Now, however, the first decl appears to always be chosen, which is incorrect in these cases:

(lldb) target create "overloadable"
Current executable set to 'overloadable' (x86_64).
(lldb) b 19
Breakpoint 1: where = overloadable`main + 44 at overloadable.c:19, address = 0x004005bc
(lldb) r
Process 6992 launched: '/home/ewan/Desktop/Scratch/overloadable' (x86_64)
This is a float function
This is an integer function
Process 6992 stopped
* thread #1: tid = 6992, 0x004005bc overloadable`main + 44 at overloadable.c:19, name = 'overloadable', stop reason = breakpoint 1.1
    frame #0: 0x004005bc overloadable`main + 44 at overloadable.c:19
   16       MyFunc(2.0f);
   17       int x = MyFunc(2);
   18
-> 19       return 0;
   20   }
(lldb) expr -- MyFunc('a')
This is a float function
(int) $0 = 25
(lldb) expr -- MyFunc(2)
This is a float function
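For reference, a minimal reconstruction of what the attached overloadable.c presumably looks like, based on the transcript above (the real file is in attachment 15932; the function bodies, return values, and line numbers here are assumptions):

#include <stdio.h>

__attribute__((overloadable)) int MyFunc(int x) {
  printf("This is an integer function\n");
  return x;
}

__attribute__((overloadable)) int MyFunc(float x) {
  printf("This is a float function\n");
  return (int)x;
}

int main(void) {
  MyFunc(2.0f);            /* resolves to the float overload */
  int x = MyFunc(2);       /* resolves to the int overload */

  return 0;                /* breakpoint set around here (line 19 in the attachment) */
}

With a file along these lines, "expr -- MyFunc('a')" should pick the int overload and return 97, and "expr -- MyFunc(2.0f)" should pick the float overload, matching the pre-r260768 transcript.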
[lldb-dev] [Bug 26632] Breakpoint opcode triggering SIGBUS on Nexus 6
https://llvm.org/bugs/show_bug.cgi?id=26632

Ewan Crawford changed:

           What      |Removed |Added
  --------------------------------------------------
           Status    |NEW     |RESOLVED
           Resolution|---     |FIXED
Re: [lldb-dev] problem with quoted strings in setting target.run-args
> On Feb 18, 2016, at 4:16 PM, Ted Woodward via lldb-dev wrote:
>
> Quoted strings in target.run-args aren't handled correctly.
>
> (lldb) settings set target.run-args "foo bar"
> (lldb) settings show target.run-args
> target.run-args (array of strings) =
>   [0]: "foo bar"
>
> This looks correct, but the Args in the ProcessLaunchInfo passed to the Platform doesn't have m_args_quote_char set, so if the Args is later pulled out with GetQuotedCommandString() it won't get "foo bar"; it will instead get foo and bar, unquoted. This is masked when talking to debugserver or lldb-server, because run-args are sent to the server in an RSP packet, but on systems like Windows or the Hexagon simulator, where run-args go on the command line, you get 2 args, foo and bar, instead of 1 arg, "foo bar".
>
> The first problem is in OptionValueArray::SetArgs(), in the eVarSetOperationAppend case. It calls Args::GetArgumentAtIndex(), which doesn't return a quoted argument. I added a function GetQuotedArgumentAtIndex() and called that, which revealed the second problem: the string is passed into OptionValue::CreateValueFromCStringForTypeMask(), which calls OptionValueString::SetValueFromString(), and that function explicitly strips quotes. Changing it to not strip quotes leads to the third problem: when TargetProperties::RunArgsValueChangedCallback() pulls the data from the OptionValueArray to make a new Args, it calls OptionValueArray::GetArgs(), which doesn't handle quoting the way the Args ctor does.
>
> I think changing the OptionValue classes to handle quoting could lead to problems with other use cases. That leaves me with the option of going through the Args before launch and adding quotes around anything with spaces, which seems hackish. Any thoughts on how to solve this issue?

Any changes that are made need to take a few things into account:

1 - Many things that take arguments don't need the quotes; the quotes are there to help us split arguments that contain things that must be quoted. Things like exec and posix_spawn take a "const char **" NULL-terminated array of C strings. The quotes are not needed there, nor are they wanted, and if you add them they will hose things up.

2 - Anyone launching via an API that launches through a shell will need to quote correctly for the given shell or launch mechanism. There are no guarantees that the original quotes (ours mimic bash and other shell quoting) will be what you want or need when you launch (for example, launching through cmd.exe on Windows).

What OptionValueArgs should contain is a valid list of strings that has been broken up into args. If that is currently true, I don't see a bug here.

I am fine with you adding a method to OptionValueArgs that is something like GetQuotedCommandString(...), which would add the quotes as needed, but again, this might be specific to the shell. I know what bash and tcsh expect, but what does Windows expect? Can you use single-quoted strings if your arguments contain double quotes? Can you use double quotes if your argument has single quotes? Can you escape the quote characters with a '\' character? That seems like a lot of arguments to pass to the GetQuotedCommandString() function, but you will need to make it this way if so...

But the _only_ client of OptionValueArgs is "run-args", so the other option would be to switch OptionValueArgs over to use lldb_private::Args instead of inheriting from OptionValueArray.
If you do this, you will need to implement many of the OptionValue virtual functions, like:

  virtual void DumpValue (const ExecutionContext *exe_ctx, Stream &strm, uint32_t dump_mask) = 0;

  virtual Error SetValueFromString (llvm::StringRef value, VarSetOperationType op = eVarSetOperationAssign);

  virtual bool Clear () = 0;

  virtual lldb::OptionValueSP DeepCopy () const = 0;

Then you would have something that still has the strings split up and yet still knows what the original quoting was, since you would store your arguments in an lldb_private::Args member variable of OptionValueArgs.
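As a rough sketch of that second option (illustrative only, not actual lldb code; it assumes lldb's internal headers are included, and the member and accessor names are assumptions), OptionValueArgs would stop inheriting from OptionValueArray and instead wrap an lldb_private::Args member, which already records the quote character for each argument:

// Sketch only: assumes lldb's OptionValue.h, Args.h, etc. are available.
class OptionValueArgs : public OptionValue {
public:
  // The overrides from the list above.
  void DumpValue(const ExecutionContext *exe_ctx, Stream &strm,
                 uint32_t dump_mask) override;            // re-quote each argument when dumping
  Error SetValueFromString(llvm::StringRef value,
                           VarSetOperationType op = eVarSetOperationAssign) override;
  bool Clear() override;                                   // m_args.Clear()
  lldb::OptionValueSP DeepCopy() const override;

  // Hand out the arguments with their original quoting intact, so callers
  // such as the run-args callback can rebuild a correctly quoted command line.
  const Args &GetArgs() const { return m_args; }

private:
  Args m_args;  // lldb_private::Args keeps the per-argument quote character
};

Since run-args is the only client, the change stays contained, and TargetProperties::RunArgsValueChangedCallback() could then copy m_args directly instead of re-parsing an unquoted string list.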
Re: [lldb-dev] lldb-mi and shared library path
> On Feb 4, 2016, at 1:51 PM, Ted Woodward via lldb-dev wrote:
>
> I'd expect "-gdb-set solib-search-path" to call "target modules search-paths add", and it does, but only when the -target-remote command is issued. It also doesn't handle the multiple path case, :…
>
> I think it should:
>
> 1) Set the search path immediately when called, if the target is connected
> 2) Set the search path on run/attach
> 3) Handle multiple paths
>
> Any thoughts on this?

Here are some thoughts on how we are doing things; after reading them you might want to adjust your request:

In LLDB the approach we have taken is that when you are going to debug something remotely, your Platform is responsible for finding files whose remote paths might not match the local file paths. For Apple with iOS, we have one or more root directories available for us to grab system files from (everything from /usr/lib/*, /System/Library/Frameworks, etc.). Only the executables you just built tend to exist outside of the system roots, so as long as you provide those to LLDB prior to running ("target create /path/to/locally/built/cross/executable"), we will be able to match up the binaries using their UUID even if the remote path is "/users/admin/executable".

There are also ways to say "I built /path/to/locally/built/cross/executable and /path/to/locally/built/cross/libfoo.so and /path/to/locally/built/cross/libbar.so; now attach to a remote binary to debug these things." The extra .so files can be added to your target with "target module add /path/to/locally/built/cross/libfoo.so" and "target module add /path/to/locally/built/cross/libbar.so", and then we will be able to find these files when they are needed.

So the main question becomes: if you modify your platform to do the right thing, do you need any of the changes you are requesting ("-gdb-set solib-search-path" or "target modules search-paths add")? This is how things were done back with GDB, but in LLDB we are trying to make our Platform subclasses do a lot of this hard work for us. Your Platform could check with a build server and download and cache any binaries it needed. It could look through a set of directories or other commonly used areas for these files; it really depends on how your SDK/PDK is set up and how your builds tend to happen. If you have an IDE that is creating binaries, it typically knows about all of the build products you might be trying to debug, and it can often supply the build products to LLDB in case it needs them.

Let me know.

Greg Clayton
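For illustration, a hedged transcript of the workflow described above (all paths are placeholders, and the final connect step depends entirely on your platform and transport):

  (lldb) target create /path/to/locally/built/cross/executable
  (lldb) target modules add /path/to/locally/built/cross/libfoo.so
  (lldb) target modules add /path/to/locally/built/cross/libbar.so
  (lldb) target modules search-paths add /remote/build/prefix /local/build/prefix
  (lldb) gdb-remote my-device:1234

Once the locally built files are known to the target, lldb matches them by UUID to whatever the remote process reports as loaded.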
Re: [lldb-dev] Sending input to the process being debugged
You have to consume the events from the debugger's listener (unless you specify a different listener in your SBLaunchInfo or SBAttachInfo). We have Python code that shows how to consume events:

  svn cat http://llvm.org/svn/llvm-project/lldb/trunk/examples/python/process_events.py

So even though your process might be stopped, until you consume the stop event the process will claim it is running or launching. The process broadcasts process state-change events (changing from running to stopped, or stopped to running).

If you have more detailed questions, please let me know.

Greg Clayton

> On Feb 3, 2016, at 2:03 PM, John Lindal via lldb-dev wrote:
>
> When I use SBDebugger::SetAsync(true), the process is not stopped at scanf, so it does not wait for input. The process does stop and wait for input when SetAsync(false). Unfortunately, when building a GUI on top of the C++ API, I have to SetAsync(true).
>
> Is there some way to resolve this?
>
> Thanks,
> John
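For reference, a minimal C++ sketch of that event loop using the SB API (error handling, the launch details, and the surrounding GUI plumbing are omitted; this mirrors what process_events.py does rather than being a drop-in implementation):

#include "lldb/API/LLDB.h"

// Block until the next stop (or exit) event arrives on the debugger's default
// listener, so that SBProcess::GetState() reflects reality afterwards.
void WaitForStopEvent(lldb::SBDebugger &debugger) {
  lldb::SBListener listener = debugger.GetListener();  // events go here unless you supplied your own
  lldb::SBEvent event;
  while (listener.WaitForEvent(/*num_seconds=*/1, event)) {
    if (!lldb::SBProcess::EventIsProcessEvent(event))
      continue;                                         // ignore non-process events
    lldb::StateType state = lldb::SBProcess::GetStateFromEvent(event);
    if (state == lldb::eStateStopped || state == lldb::eStateExited)
      break;                                            // the state change has now been consumed
  }
}

In async mode the target keeps running while it waits in scanf; input can be fed to it with SBProcess::PutSTDIN() (or by giving the inferior its own terminal), and a loop like the one above is what lets the GUI notice genuine stops.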
Re: [lldb-dev] Module Cache improvements - RFC
> On Jan 28, 2016, at 4:21 AM, Pavel Labath wrote:
>
> Hello all,
>
> we are running into limitations of the current module download/caching system. A simple Android application can link to about 46 megabytes worth of modules, and downloading that with our current transfer rates takes about 25 seconds. Much of the data we download this way is never actually accessed, and yet we download everything immediately upon starting the debug session, which makes the first session extremely laggy.
>
> We could speed things up a lot by only downloading the portions of the module that we really need (in my case this turns out to be about 8 megabytes). Further speedups could be made by increasing the throughput of the gdb-remote protocol used for downloading these files by using pipelining.
>
> I made a proof-of-concept hack of these things, put it into lldb, and was able to get the time for the startup-attach-detach-exit cycle down to 5.4 seconds (for comparison, the current time for the cycle is about 3.6 seconds with a hot module cache, and 28(!) seconds with an empty cache).
>
> Now I would like to implement these things in lldb properly, so this is a request for comments on my plan. What I would like to do is:
>
> - Replace ModuleCache with a SectionCache (actually, more like a cache of arbitrary file chunks). When the cache gets a request for a file and the file is not in the cache already, it returns a special kind of Module whose fragments are downloaded as we try to access them. These fragments are cached on disk, so that subsequent requests for the file do not need to re-download them. We can also have the option to short-circuit this logic and download the whole file immediately (e.g., when the file is small, or we have a super-fast way of obtaining the whole file via rsync, etc.).
>
> - Add pipelining support to GDBRemoteCommunicationClient for communicating with the platform. This does not require any changes to the wire protocol. The only change is adding the ability to send an additional request to the server while waiting for the response to the previous one. Since the protocol is request-response based and we are communicating over a reliable transport stream, each response can be correctly matched to a request even though we have multiple packets in flight. Any packets which need to maintain more complex state (like downloading a single entity using continuation packets) can still lock the stream to get exclusive access, but I am not sure we even have any such packets in the platform flavour of the protocol.
>
> - Download multiple files in parallel, utilizing request pipelining. Currently we get the biggest delay when first attaching to a process (we download file headers and some basic informative sections) and when we try to set the first symbol-level breakpoint (we download symbol tables and string sections). Both of these actions operate on all modules in bulk, which makes them easy parallelization targets. This will provide a big speed boost, as we will be eliminating communication latency. Furthermore, in the case of lots of files, we will be overlapping file download (I/O) with parsing (CPU), for an even bigger boost.
>
> What do you think?

Feel free to implement this in PlatformAndroid and allow others to opt into this.
I wouldn't want this by default on any of the Apple platforms: with Mach-O we have our entire image mapped into memory, and we have other tricks for getting the information quicker.

So I would leave the module cache there and not change it, but feel free to add the section cache as needed. Maybe if this goes really well and it can be used on any file type (Mach-O, ELF, COFF, etc.) and it just works seamlessly, we can expand who uses it.

In Xcode, the first time we connect to a device we haven't seen, we take the time to download all of the system libraries. Why is the 28 seconds considered prohibitive for the first time you connect? The data stays cached even after you quit and restart LLDB or your IDE, right?

Greg Clayton
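To make the pipelining part of the proposal concrete, here is a hedged sketch (all class and method names are hypothetical, not lldb's actual GDBRemoteCommunicationClient API): because the platform packets are strictly request/response over a reliable stream, several requests can be written before the first reply is read, and each reply is matched to its request purely by arrival order.

#include <deque>
#include <string>
#include <vector>

// Hypothetical packet transport; a std::deque stands in for the socket so the
// sketch is self-contained.
struct PacketConnection {
  std::deque<std::string> wire;
  void WritePacket(const std::string &p) { wire.push_back("reply-to:" + p); }
  std::string ReadPacket() {
    std::string r = wire.front();
    wire.pop_front();
    return r;
  }
};

// Send every request up front, then collect the replies in the same order.
std::vector<std::string> SendPipelined(PacketConnection &conn,
                                       const std::vector<std::string> &requests) {
  for (const std::string &request : requests)
    conn.WritePacket(request);                    // no waiting between requests
  std::vector<std::string> replies;
  replies.reserve(requests.size());
  for (size_t i = 0; i < requests.size(); ++i)
    replies.push_back(conn.ReadPacket());         // i-th reply answers the i-th request
  return replies;
}

This removes one round trip of latency per request after the first, which is where most of the time goes on a high-latency link.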
Re: [lldb-dev] lldb-mi and shared library path
The Hexagon SDK talking to hardware is very bare bones. The OS sits on the phone and is loaded onto the Hexagon by Android. The debugger opens the OS ELF file, the user tells it where the shared libraries are, and lldb does the usual stop-at-the-rendezvous-function negotiation to get info when shared libraries are loaded. Each example application is its own shared library, and each is built in a different directory.

I don't think I can have the Platform do the hard work, because the shared libraries could be anywhere. It works fine when we run lldb; it doesn't when our Eclipse guy runs lldb-mi. I'm having fun looking at lots of logs!

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

-----Original Message-----
From: Greg Clayton [mailto:gclay...@apple.com]
Sent: Monday, February 22, 2016 6:24 PM
To: Ted Woodward
Cc: LLDB
Subject: Re: [lldb-dev] lldb-mi and shared library path

> [Greg's reply is quoted here in full; it appears earlier in this digest.]
Re: [lldb-dev] Module Cache improvements - RFC
Can't you just cache the modules locally on disk, so that you only take that 26-second hit the first time you try to download a given module, and index the cache by some sort of hash? Then, instead of just downloading a module, you check the local cache first and only download it if it's not there. If you already do all this, then disregard.

On Mon, Feb 22, 2016 at 4:39 PM Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:

> [Pavel's proposal and Greg's reply are quoted here in full; see the "Module Cache improvements - RFC" messages earlier in this digest.]
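For what it's worth, a hedged sketch of the lookup this reply describes (the function names and cache layout are hypothetical; lldb's real on-disk module cache is organized differently):

#include <filesystem>
#include <string>

// Hypothetical transfer routine; in lldb this would go over the platform connection.
void DownloadFromRemote(const std::string &module_uuid,
                        const std::filesystem::path &destination);

// Return a local copy of a module, downloading it only on a cache miss.
std::filesystem::path GetCachedModule(const std::string &module_uuid,
                                      const std::filesystem::path &cache_dir) {
  std::filesystem::path local_copy = cache_dir / module_uuid;
  if (!std::filesystem::exists(local_copy))
    DownloadFromRemote(module_uuid, local_copy);  // only the first session pays the transfer cost
  return local_copy;                              // later sessions reuse the cached file
}

Keying by a content hash or UUID is what lets the cache survive across debug sessions and even across devices that ship identical system libraries.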
Re: [lldb-dev] [3.8 Release] Release status
I had hoped to tag rc3 today (I feel like I've said this a lot lately), but it's at least really, really close. I'm waiting for:

- r261297 - Implement the likely resolution of core issue 253. Still in post-commit review.

- D17507 - The controlling expression for _Generic is unevaluated. New for today; waiting for review.

Thanks,
Hans