Re: [lldb-dev] increase timeout for tests?
+1 on deleting the lldb-mi tests and increasing the timeout.

On Wed, 14 Mar 2018 at 02:27, Jim Ingham wrote:

> It is unfortunate that we have to set really long test timeouts because we
> are timing the total test class run, not the individual tests. It is
> really convenient to be able to group similar tests in one class, so they
> can reuse setup and common check methods etc. But we're going to have to
> push the timeouts to something really long because these tests get charged
> incorrectly, which makes this strategy artificially less desirable.
>
> When we spawn a dotest.py to run each test file, the runner that is doing
> the timeout hasn't ingested the test class, so it can't do something
> reasonable like count the number of tests and multiply that into the
> timeout to get the full test timeout. I tried to hack around this, but I
> wasn't successful at getting hold of the test configuration in the runner
> so you could figure out the number of tests there. If somebody more
> familiar with the test harness than I am can see a way to do that, that
> seems like a much better way to go.
>
> But if we can't do that, then we can increase the overall timeout, though
> we might want to override that with LLDB_TEST_TIMEOUT and set it to
> something lower on the bots.

Counting the test methods would be a bit tricky, I believe. I think that a
slightly more feasible solution (although it would still require some
rearchitecting) would be to base the timeout on the last message received
from the "inferior" dotest instance. Each dotest sub-process opens up a
socket to communicate the test results to the master process. We could use
this as a liveness indicator and base the timeout on the time elapsed since
the last message. This is still a bit tricky because right now the timeout
logic is in a completely different place than the communication code, but
this could be fixed (if someone feels adventurous enough).

cheers,
pl
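The liveness idea could look roughly like the following. This is only a
sketch, not the actual dosep/dotest code: the helper names (monitor_worker,
kill_worker) and the timeout value are made up for illustration.

    import select
    import time

    # Sketch of a liveness-based timeout: instead of one fixed timeout for a
    # whole test file, reset the clock every time the inferior dotest reports
    # a result over its socket.
    LIVENESS_TIMEOUT = 240  # seconds of silence tolerated; made-up value

    def monitor_worker(conn, kill_worker):
        """Watch the result socket; kill the worker only if it goes silent."""
        last_message = time.time()
        while True:
            remaining = LIVENESS_TIMEOUT - (time.time() - last_message)
            if remaining <= 0:
                kill_worker()       # no results for too long -> assume a hang
                return False
            ready, _, _ = select.select([conn], [], [], remaining)
            if ready:
                data = conn.recv(4096)  # a test-event packet from dotest
                if not data:            # socket closed: the worker finished
                    return True
                last_message = time.time()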
Re: [lldb-dev] settings set target.source-map question
Sounds like a good idea to me. (For testing I'd recommend a .ll file with
the paths you need hard-coded, and lldb-test breakpoint.)

On Wed, 14 Mar 2018 at 05:16, Greg Clayton via lldb-dev
<lldb-dev@lists.llvm.org> wrote:

> When using "settings set target.source-map", when we try to set
> breakpoints by file and line, we try to undo any source remapping we did
> so we can set the breakpoint correctly:
>
>   BreakpointSP Target::CreateBreakpoint(const FileSpecList *containingModules,
>                                         const FileSpec &file, uint32_t line_no,
>                                         lldb::addr_t offset,
>                                         LazyBool check_inlines,
>                                         LazyBool skip_prologue, bool internal,
>                                         bool hardware,
>                                         LazyBool move_to_nearest_code) {
>     FileSpec remapped_file;
>     ConstString remapped_path;
>     if (GetSourcePathMap().ReverseRemapPath(
>             ConstString(file.GetPath().c_str()), remapped_path))
>       remapped_file.SetFile(remapped_path.AsCString(), true);
>     else
>       remapped_file = file;
>
> Note that the "remapped_file.SetFile(remapped_path.AsCString(), true);"
> is saying to resolve the path. I don't believe we want this path to
> resolve itself, right?
>
> I am currently running into issues when using this with:
>
>   (lldb) settings set target.source-map ./ /Users/me/source
>
> The debug info has all of the compilation directories set to ".", and
> resolving the path will cause the current working directory to be used
> when resolving the path, and then we can't set breakpoints because the
> resolved path doesn't match. Any objections if I change the second
> argument to false so it doesn't resolve? I can't imagine we would want
> this reverse mapping to resolve??
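To make the failure mode concrete, here is a rough reproduction sketch using
the Python SB API. The binary name, source path, and line number are made up,
and it assumes debug info whose compilation directory is ".":

    import lldb

    debugger = lldb.SBDebugger.Create()
    target = debugger.CreateTarget("a.out")

    # Map the relative compilation dir "./" back to where the sources live.
    debugger.HandleCommand("settings set target.source-map ./ /Users/me/source")

    # CreateBreakpoint reverse-remaps "/Users/me/source/main.c" to "./main.c".
    # If that reverse-mapped spec is then resolved against the current working
    # directory, it no longer matches the "./main.c" stored in the debug info,
    # so the breakpoint ends up with zero locations.
    bp = target.BreakpointCreateByLocation("/Users/me/source/main.c", 10)
    print(bp.GetNumLocations())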
Re: [lldb-dev] increase timeout for tests?
I don't see 22 lldb-mi tests xfailed everywhere. I see a lot of tests
skipped, but those are clearly marked as skip on Windows, FreeBSD, Darwin,
Linux. I've got a good chunk of the lldb-mi tests running on Hexagon. I
don't want them deleted, since I use them.

lldb-mi tests can be hard to debug, but I found that setting the lldb-mi log
to be stdout helps a lot. In lldbmi_testcase.py, in spawnLldbMi, add this
line:

    self.child.logfile = sys.stdout

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

> -----Original Message-----
> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of
> Vedant Kumar via lldb-dev
> Sent: Tuesday, March 13, 2018 7:48 PM
> To: Davide Italiano
> Cc: LLDB
> Subject: Re: [lldb-dev] increase timeout for tests?
>
> As a first step, I think there's consensus on increasing the test timeout
> to ~3x the length of the slowest test we know of. That test appears to be
> TestDataFormatterObjC, which takes 388 seconds on Davide's machine. So I
> propose 20 minutes as the timeout value.
>
> Separately, regarding x-failed pexpect()-backed tests, I propose deleting
> them if they've been x-failed for over a year. That seems like a long
> enough time to wait for someone to step up and fix them, given that
> they're a real testing/maintenance burden. For any group of to-be-deleted
> tests, like the 22 lldb-mi tests x-failed in all configurations, I'd file
> a PR about potentially bringing the tests back. Thoughts?
>
> thanks,
> vedant
>
>> On Mar 13, 2018, at 11:52 AM, Davide Italiano wrote:
>>
>> On Tue, Mar 13, 2018 at 11:26 AM, Jim Ingham wrote:
>>> It sounds like we are timing out based on the whole test class, not the
>>> individual tests? If you're worried about test failures not hanging up
>>> the test suite, then you really want to do the latter.
>>>
>>> These are all tests that contain 5 or more independent tests. That's
>>> probably why they are taking so long to run.
>>>
>>> I don't object to having fairly long backstop timeouts, though I agree
>>> with Pavel that we should choose something reasonable based on the
>>> slowest-running tests, just so some single error doesn't cause test
>>> runs to just never complete, making analysis harder.
>>
>> Vedant (cc:ed) is going to take a look at this as he's babysitting the
>> bots for the week. I'll defer the call to him.
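In context, that tweak sits roughly here in lldbmi_testcase.py's
spawnLldbMi. This is only a simplified sketch; the exact spawn arguments and
attribute names (e.g. self.lldbMiExec) are assumptions, not the real code.

    import sys
    import pexpect

    def spawnLldbMi(self, args=None):
        # Simplified sketch of spawnLldbMi; the real spawn line differs.
        self.child = pexpect.spawn("%s --interpreter %s" %
                                   (self.lldbMiExec, args if args else ""))
        # Ted's tip: mirror everything lldb-mi reads and writes to stdout, so
        # a hanging or failing test shows the full MI conversation in the log.
        self.child.logfile = sys.stdout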
[lldb-dev] [Bug 36739] New: Fix tests which check that the lldb-mi driver exits properly
https://bugs.llvm.org/show_bug.cgi?id=36739

            Bug ID: 36739
           Summary: Fix tests which check that the lldb-mi driver exits
                    properly
           Product: lldb
           Version: unspecified
          Hardware: PC
                OS: All
            Status: NEW
          Severity: normal
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: v...@apple.com
                CC: llvm-b...@lists.llvm.org

Virtually everything in TestMiExit.py has been xfailed for a while.
[lldb-dev] [Bug 36740] New: Fix tests which check the lldb-mi -gdb-set and -gdb-show commands
https://bugs.llvm.org/show_bug.cgi?id=36740

            Bug ID: 36740
           Summary: Fix tests which check the lldb-mi -gdb-set and -gdb-show
                    commands
           Product: lldb
           Version: unspecified
          Hardware: PC
                OS: All
            Status: NEW
          Severity: normal
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: v...@apple.com
                CC: llvm-b...@lists.llvm.org

Virtually everything in TestMiGdbSetShow.py has been xfailed for a while.
[lldb-dev] [Bug 36741] New: Fix tests which check the lldb-mi -symbol-xxx commands
https://bugs.llvm.org/show_bug.cgi?id=36741

            Bug ID: 36741
           Summary: Fix tests which check the lldb-mi -symbol-xxx commands
           Product: lldb
           Version: unspecified
          Hardware: PC
                OS: All
            Status: NEW
          Severity: normal
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: v...@apple.com
                CC: llvm-b...@lists.llvm.org

Virtually everything in TestMiSymbol.py has been xfailed for a while.
Re: [lldb-dev] increase timeout for tests?
I got the 22 number from a command which may have counted too much:

    find . -name \*TestMi\*.py -exec grep -E "(unittest2\.)?expectedFailure(All)?" {} \; | wc -l

Some of the 'expectedFailureAll' decorators actually specified an OS list.
I'm not planning on touching those. There were a handful of lldb-mi tests
that didn't appear to work at all, and I've filed bugs / deleted those in
r327552. If you see something you feel really should stay in tree, we can
bring it back.

vedant

> On Mar 14, 2018, at 11:27 AM, Ted Woodward via lldb-dev wrote:
>
> I don't see 22 lldb-mi tests xfailed everywhere. I see a lot of tests
> skipped, but those are clearly marked as skip on Windows, FreeBSD,
> Darwin, Linux. I've got a good chunk of the lldb-mi tests running on
> Hexagon. I don't want them deleted, since I use them.
>
> lldb-mi tests can be hard to debug, but I found that setting the lldb-mi
> log to be stdout helps a lot. In lldbmi_testcase.py, in spawnLldbMi, add
> this line:
>
>    self.child.logfile = sys.stdout
>
> --
> Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
>
>> -----Original Message-----
>> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of
>> Vedant Kumar via lldb-dev
>> Sent: Tuesday, March 13, 2018 7:48 PM
>> To: Davide Italiano
>> Cc: LLDB
>> Subject: Re: [lldb-dev] increase timeout for tests?
>>
>> As a first step, I think there's consensus on increasing the test
>> timeout to ~3x the length of the slowest test we know of. That test
>> appears to be TestDataFormatterObjC, which takes 388 seconds on Davide's
>> machine. So I propose 20 minutes as the timeout value.
>>
>> Separately, regarding x-failed pexpect()-backed tests, I propose
>> deleting them if they've been x-failed for over a year. That seems like
>> a long enough time to wait for someone to step up and fix them, given
>> that they're a real testing/maintenance burden. For any group of
>> to-be-deleted tests, like the 22 lldb-mi tests x-failed in all
>> configurations, I'd file a PR about potentially bringing the tests back.
>> Thoughts?
>>
>> thanks,
>> vedant
>>
>>> On Mar 13, 2018, at 11:52 AM, Davide Italiano wrote:
>>>
>>> On Tue, Mar 13, 2018 at 11:26 AM, Jim Ingham wrote:
>>>> It sounds like we are timing out based on the whole test class, not
>>>> the individual tests? If you're worried about test failures not
>>>> hanging up the test suite, then you really want to do the latter.
>>>>
>>>> These are all tests that contain 5 or more independent tests. That's
>>>> probably why they are taking so long to run.
>>>>
>>>> I don't object to having fairly long backstop timeouts, though I
>>>> agree with Pavel that we should choose something reasonable based on
>>>> the slowest-running tests, just so some single error doesn't cause
>>>> test runs to just never complete, making analysis harder.
>>>
>>> Vedant (cc:ed) is going to take a look at this as he's babysitting the
>>> bots for the week. I'll defer the call to him.
[lldb-dev] [Bug 36746] New: Allow 'quit' to take an exit code
https://bugs.llvm.org/show_bug.cgi?id=36746

            Bug ID: 36746
           Summary: Allow 'quit' to take an exit code
           Product: lldb
           Version: 6.0
          Hardware: PC
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: All Bugs
          Assignee: lldb-dev@lists.llvm.org
          Reporter: alb...@apple.com
                CC: llvm-b...@lists.llvm.org

When running lldb, it is not possible to return an exit code from the
process using the 'quit' command:

    (lldb) quit 1
    $ echo $?
    0

Note that a workaround is to directly call os._exit(1) (since sys.exit is
apparently caught by the interpreter to prevent accidental exiting):

    (lldb) script os._exit(1)
    $ echo $?
    1
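While the enhancement is open, the os._exit workaround can be wrapped in a
user command. The command name "qexit" and the module name below are made up
for this sketch; the loading mechanism (command script import / command
script add) is standard lldb.

    # quit_with_code.py -- hypothetical helper; load it with:
    #   (lldb) command script import /path/to/quit_with_code.py
    #   (lldb) qexit 1
    import os
    import shlex

    def qexit(debugger, command, result, internal_dict):
        """Exit the lldb process with the given status code (default 0)."""
        args = shlex.split(command)
        code = int(args[0]) if args else 0
        # sys.exit() is swallowed by the embedded interpreter, so use os._exit.
        os._exit(code)

    def __lldb_init_module(debugger, internal_dict):
        debugger.HandleCommand(
            "command script add -f quit_with_code.qexit qexit")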
[lldb-dev] Converting a pointer to python string in a formatter
Hi,

I came across the formatter example in unicode_strings.py where in
utf16_summary() I see this code:

    string_data = value.process.ReadMemory(pointer, length, error)
    # utf8 is safe to emit as-is on OSX
    return '"%s"' % (string_data.decode('utf-16').encode('utf-8'))

I am trying to replicate that in my own formatter and I'm having difficulty
converting the pointer to an addr_t, which is what ReadMemory wants. In my
case the pointer comes from:

    self.rep.GetChildMemberWithName("__l").GetChildMemberWithName("__data_")

(this is a basic_string which has a union:

    struct __rep {
      union {
        __long __l;   // Used for long strings
        __short __s;  // Used for short strings - stores in place
        __raw __r;    // ??
      };
    };

and __long is defined as:

    struct __long {
      pointer __data_;
      size_type __size_;
      size_type __cap_;
    };

So __data_ is a basic_string::pointer (plain C pointer). How do I convert
this pointer to the addr_t that I need? I believe that
GetChildMemberWithName returns an SBValue. I tried

    pointer.GetPointeeData(0, 1).GetLoadAddress()

but I'm not getting the correct results.

Thanks!
Florin
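One common pattern for this (a sketch, not from the original thread): since
the child is an SBValue holding the pointer, SBValue.GetValueAsUnsigned()
yields the pointer's numeric value, which is the addr_t that ReadMemory
expects. The method below is a hypothetical fragment of such a provider;
"self.rep" is the SBValue for the __rep union as in the message above, and
the size handling and error formatting are assumptions for illustration.

    import lldb

    def read_long_string(self):
        long_rep = self.rep.GetChildMemberWithName("__l")
        data_member = long_rep.GetChildMemberWithName("__data_")
        size_member = long_rep.GetChildMemberWithName("__size_")

        # GetValueAsUnsigned() on the pointer SBValue returns the pointer's
        # numeric value -- the lldb::addr_t that ReadMemory expects.
        pointer = data_member.GetValueAsUnsigned(0)
        length = size_member.GetValueAsUnsigned(0)
        if pointer == 0 or length == 0:
            return '""'

        error = lldb.SBError()
        string_data = self.rep.process.ReadMemory(pointer, length, error)
        if error.Fail():
            return "<error: %s>" % error.GetCString()
        return '"%s"' % string_data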