Re: [lldb-dev] FileSpec and normalization questions

2018-04-20 Thread Pavel Labath via lldb-dev
On Thu, 19 Apr 2018 at 19:20, Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:



> On Thu, Apr 19, 2018 at 11:14 AM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:


>> Also, looking at the tests for normalizing paths I found the following
pairs of pre-normalized and post-normalization paths for posix:

>>{"//", "//"},
>>{"//net", "//net"},

>> Why wouldn't we reduce "//" to just "/" for posix? And why wouldn't we
reduce "//net" to "/net"?


> I don't know what the author of this test had in mind, but from the POSIX
spec:


http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap04.html#tag_04_11

> > A pathname that begins with two successive slashes may be interpreted
in an implementation-defined manner, although more than two leading slashes
shall be treated as a single slash.


Yes, that's exactly what the author of this test (me) had in mind. :)
And it's not just a hypothetical posix thing either. Windows and cygwin
both use \\ and // to mean funny things. I remember also seeing something
like that on linux, though I can't remember now what it was being used for.

This is also the same way as llvm path functions handle these prefixes, so
I think we should keep them. I don't know whether we do this already, but
we can obviously fold 3 or more consecutive slashes into one during
normalization. Same goes for two slashes which are not at the beginning of
the path.


On Thu, Apr 19, 2018 at 11:14 AM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
>> {"./foo", "foo"},
> Do we prefer to not have "./foo" stay as "./foo"?

This is an interesting question. It basically comes down to our definition
of "identical" FileSpecs. Do we consider "foo" and "./foo" to be identical?
If we do, then we should do the above normalization (theoretically we could
choose a different normal form, and convert "foo" to "./foo", but I think
that would be even weirder); otherwise we should skip it.

On one hand, these are obviously identical -- if you just take the string
and pass it to the filesystem, you will always get back the same file. But,
on the other hand, we have this notion that a FileSpec with an empty
directory component represents a wildcard that matches any file with that
name in any directory. For these purposes "./foo" and "foo" are two very
different things.

So, I can see the case for both, and I don't really have a clear
preference. All I would say is, whichever way we choose, we should make it
very explicit so that the users of FileSpec know what to expect.

On Thu, 19 Apr 2018 at 19:37, Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
> I think I might have tried to replace some of the low level functions in
FileSpec with the LLVM equivalents and gotten a few test failures, but I
didn't have time to investigate.  It would be a worthwhile experiment for
someone to try again if they have some cycles.


I can try to take a look at it. The way I remember it, I just copied these
functions from llvm and replaced all #ifdefs with runtime checks, which is
pretty much what you later did in llvm proper. Unless there has been some
significant divergence since then, it shouldn't be hard to reconcile these.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Advice on architectures with multiple address spaces

2018-04-20 Thread Zdenek Prikryl via lldb-dev


On 04/19/2018 08:22 PM, Jim Ingham wrote:

On Apr 19, 2018, at 10:54 AM, Greg Clayton  wrote:




On Apr 19, 2018, at 10:35 AM, Jim Ingham  wrote:




On Apr 19, 2018, at 9:44 AM, Greg Clayton via lldb-dev 
 wrote:




On Apr 19, 2018, at 6:51 AM, Zdenek Prikryl via lldb-dev 
 wrote:

Hi lldb developers,

I've been researching using lldb + a gdbserver stub that is based on a Harvard architecture with 
multiple address spaces (one program, multiple data). The commonly adopted approach is that 
everything is mapped to a single "virtual" address space. The stub reads/writes from/to 
the right memory based on the "virtual" addresses. But I'd like to use real addresses 
with an address space id instead. So, I've started looking at what has to be changed.

I've enhanced read/write commands (e.g. memory read --as  ...) and RSP 
protocol (new packet) so that the stub can read/write properly. That wasn't that 
complicated.

It might be nice to add a new RSP protocol packet that asks for the address 
space names/values:

qGetAddressSpaces

which would return something like:

1:text;2:data1,3:data2

or it would return not supported. If we get a valid return value from 
qGetAddressSpaces, then it enables the use of the new packet you added above. 
Else it defaults to using the old memory read functions.


Sounds good to me. I would return more information though. For instance, 
you can have a code address space where a 32-bit byte is used and a data 
address space where an 8-bit byte is used. Some support for this is 
already in LLDB, although it's not tied to address spaces, but to an 
architecture.






Now I've hit an issue with expressions (LLVMUserExpression.cpp) and local 
variables (DWARFExpressions.cpp). There are a lot of memory read/write functions 
that take just an address argument. Is the only way to go to patch all these 
calls? Has anybody solved it differently?

My quick take is that any APIs that take just a lldb::addr_t would need to take 
something like:

struct SpaceAddress {
  static constexpr uint32_t kNoSpace = 0;
  lldb::addr_t addr;
  uint32_t space;
};


I'm curious why you are suggesting another kind of address, rather than adding 
this functionality to Address?  When you actually go to resolve an Address in a 
target with a process you should have everything you need to know to give it 
the proper space.  Then fixing the expression evaluator (and anything else that 
needs fixing) would be a matter of consistently using Address rather than 
lldb::addr_t.  That seems general goodness, since converting to an lldb::addr_t 
loses information.

If we accept lldb_private::Address in all APIs that currently take a lldb::addr_t, 
then we need to always be able to get to the target, in case we need 
to add code to resolve the address everywhere. I am thinking of SpaceAddress as 
an augmented lldb::addr_t instead of a section + offset style address. Also, 
there will be addresses in the code and data that do not exist in actual 
sections. Not saying that you couldn't use lldb_private::Address. I am open to 
suggestions though. So your thought is to remove all APIs that take lldb::addr_t and 
use lldb_private::Address everywhere, all the time?

It has always bugged me that we have these two ways of specifying addresses.  
Are there many/any places that have to resolve an Address to a real address in 
a process that don't have a Target readily available?  That would surprise me.  
I would much rather centralize on one way than adding a third.

Jim


I'd like to remove more ways of describing the same thing, so going with 
the Address() sounds better. Having said that, there are about 4k 
instances of lldb::addr_t in LLDB code base. Where to begin/how to split 
the work? :-)...






Jim



We would need a default value for "space" (feel free to rename) that indicates 
the default address space as most of our architectures would not need this support. If we 
added a constructor like:

SpaceAddress(lldb::addr_t a) : addr(a), space(kNoSpace) {}

Then all usages of the APIs that used to take just a "lldb::addr_t" would 
implicitly call this constructor and continue to act as needed. Then we would need to 
allow lldb_private::Address objects to resolve to a SpaceAddress:

SpaceAddress lldb_private::Address::GetSpaceAddress(Target *target) const;

This works because each lldb_private::Address has a section, and each section knows its 
address space. The tricky part is then finding all locations in the expression 
parser and converting them to track and use SpaceAddress. We would probably 
need to modify the allocate-memory packets in the RSP protocol to be able to 
allocate memory in any address space as well.

I didn't spend much time thinking about correct names above, so feel free to 
suggest alternate naming.

Best advice:
- make things "just work" to keep changes to a minimum and allowing 
lldb::addr_t to implicitly convert to a SpaceAddress easily
- when modifying RSP, make sure to check for existence of new feature before 
enabling it
- qu

Re: [lldb-dev] Advice on architectures with multiple address spaces

2018-04-20 Thread Zdenek Prikryl via lldb-dev

Maybe Kalimba developers can help here. Kalimba has crazy memory map...:-)

--
Zdenek

On 04/19/2018 08:32 PM, Ted Woodward wrote:

Hexagon has a single address space, so we don't need to do anything like this.

When I worked on Motorola 56xxx DSPs we had memory spaces, but we didn't use 
RSP. We had our own protocol that used a struct for addresses, with the space 
(an enum, defined per supported core) and a uint32_t (later 2 of them) for the 
address.

Ted

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project


-Original Message-
From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Greg
Clayton via lldb-dev
Sent: Thursday, April 19, 2018 11:45 AM
To: Zdenek Prikryl 
Cc: lldb-dev@lists.llvm.org
Subject: Re: [lldb-dev] Advice on architectures with multiple address spaces

You might ask the Hexagon folks if they have done anything, in case they
already support this in some shape or form.

Greg Clayton








[lldb-dev] r329889 - Use in-tree dsymutil on Darwin

2018-04-20 Thread Ted Woodward via lldb-dev
r329889 says "Use in-tree dsymutil on Darwin", but it's got these changes in
test/CMakeLists.txt:
-set(LLDB_TEST_DEPS lldb)
+set(LLDB_TEST_DEPS lldb dsymutil)

...

+  --dsymutil $


These changes aren't gated by a check for Darwin, so they happen on all
systems. On my machine (Ubuntu 14), which doesn't have dsymutil, cmake
generation gives errors about missing dependency dsymutil.

CMake Error at tools/lldb/test/CMakeLists.txt:161 (add_dependencies):
   The dependency target "dsymutil" of target "lldb-dotest" does not exist.

Jonas, can you gate those changes with a check for Darwin, which is the
intention of the patch?

Ted

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project




Re: [lldb-dev] FileSpec and normalization questions

2018-04-20 Thread Greg Clayton via lldb-dev


> On Apr 20, 2018, at 1:08 AM, Pavel Labath  wrote:
> 
> On Thu, 19 Apr 2018 at 19:20, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> 
> 
> 
>> On Thu, Apr 19, 2018 at 11:14 AM Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> 
> 
>>> Also, looking at the tests for normalizing paths I found the following
> pairs of pre-normalized and post-normalization paths for posix:
> 
>>>   {"//", "//"},
>>>   {"//net", "//net"},
> 
>>> Why wouldn't we reduce "//" to just "/" for posix? And why wouldn't we
> reduce "//net" to "/net"?
> 
> 
>> I don't know what the author of this test had in mind, but from the POSIX
> spec:
> 
> 
> http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap04.html#tag_04_11
> 
>>> A pathname that begins with two successive slashes may be interpreted
> in an implementation-defined manner, although more than two leading slashes
> shall be treated as a single slash.
> 
> 
> Yes, that's exactly what the author of this test (me) had in mind. :)
> And it's not just a hypothetical posix thing either. Windows and cygwin
> both use \\ and // to mean funny things. I remember also seeing something
> like that on linux, though I can't remember now what it was being used for.

ok, we need to keep any paths starting with // or \\

> 
> This is also the same way as llvm path functions handle these prefixes, so
> I think we should keep them. I don't know whether we do this already, but
> we can obviously fold 3 or more consecutive slashes into one during
> normalization. Same goes for two slashes which are not at the beginning of
> the path.
> 
> 
> On Thu, Apr 19, 2018 at 11:14 AM Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>>>{"./foo", "foo"},
>> Do we prefer to not have "./foo" stay as "./foo"?
> 
> This is an interesting question. It basically comes down to our definition
> of "identical" FileSpecs. Do we consider "foo" and "./foo" to be identical? If
> we do, then we should do the above normalization (theoretically we could
> choose a different normal form, and convert "foo" to "./foo", but I think
> that would be even weirder), otherwise we should skip it.
> 
> On one hand, these are obviously identical -- if you just take the string
> and pass it to the filesystem, you will always get back the same file. But,
> on the other hand, we have this notion that a FileSpec with an empty
> directory component represents a wildcard that matches any file with that
> name in any directory. For these purposes "./foo" and "foo" are two very
> different things.

> 
> So, I can see the case for both, and I don't really have a clear
> preference. All I would say is, whichever way we choose, we should make it
> very explicit so that the users of FileSpec know what to expect.

I would say that without a directory it is a wildcard match on base name alone, 
and with one, the partial directories must match if the path is relative, and 
the full directory must match if absolute. I will submit a patch that keeps 
leading "./" and "../" during normalization and we will see what people think.

> 
> On Thu, 19 Apr 2018 at 19:37, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>> I think I might have tried to replace some of the low level functions in
> FileSpec with the LLVM equivalents and gotten a few test failures, but I
> didn't have time to investigate.  It would be a worthwhile experiment for
> someone to try again if they have some cycles.

I took a look at the llvm file stuff and it has llvm::sys::fs::real_path which 
always resolves symlinks _and_ normalizes the path. Would be nice to break it 
out into two parts by adding llvm::sys::fs::normalize_path and have 
llvm::sys::fs::real_path call it.

> I can try to take a look at it. The way I remember it, I just copied these
> functions from llvm and replaced all #ifdefs with runtime checks, which is
> pretty much what you later did in llvm proper. Unless there has been some
> significant divergence since then, it shouldn't be hard to reconcile these.

Ok, I will submit a patch and we will see how things go.


Greg



Re: [lldb-dev] FileSpec and normalization questions

2018-04-20 Thread via lldb-dev
>> Yes, that's exactly what the author of this test (me) had in mind. :)
>> And it's not just a hypothetical posix thing either. Windows and cygwin
>> both use \\ and // to mean funny things. I remember also seeing something
>> like that on linux, though I can't remember now what it was being used for.

> ok, we need to keep any paths starting with // or \\

I would add that there are also cases where you *COMPILE* on one system (ie: Posix, a build farm producing pre-built libraries, or complete applications) and debug on another system (Windows).

Practical example: produce an SDK on a POSIX platform - and your users/victims are on any of 3 platforms (mac, windows, linux). Thus the build location is different than the debug location, and more importantly the OS might be different.


[lldb-dev] problems running the LLDB lit tests on Windows

2018-04-20 Thread Adrian McCarthy via lldb-dev
I'm trying to figure out what's happening with the LLDB lit tests on
Windows.  I'm not sure how to proceed with debugging this.

I execute this command:

  ninja check-lldb

And several things happen very rapidly:

1.  On the console, I get one warning that says:

D:/src/llvm/mono/llvm-project/llvm\utils\lit\lit\discovery.py:121:
ResourceWarning: unclosed file <_io.BufferedReader name=3>
key = (ts, path_in_suite)


2.  Then I get several dozen messages of this form:

D:/src/llvm/mono/llvm-project/llvm\utils\lit\lit\TestRunner.py:727:
ResourceWarning: unclosed file <_io.BufferedReader name=6>
res = _executeShCmd(cmd.rhs, shenv, results, timeoutHelper)


3.  I get more than 200 dialog boxes that are essentially assertion
failures in the CRT implementation of `close`.  The line complained about
in the dialog is:


_VALIDATE_CLEAR_OSSERR_RETURN((fh >= 0 && (unsigned)fh <
(unsigned)_nhandle), EBADF, -1);


where `fh` is the value passed to `close`.  Indeed, `fh` typically has a
value like 452 which is not in the range of 0 to `_nhandle` because
`_nhandle` is 64.

Starting from 3, I tried to walk up the stack to see what's going on, but
it's just the generic workings of the Python virtual machine.  The `close`
call is happening because something in the .py code is calling `close`.
It's hard to see the Python code in the debugger.  It doesn't actually seem
to be test code.

So I checked out the command line for one of those asserting processes to
see if I could figure out which tests are exhibiting the problem.

"C:\python_35\python_d.exe" "-R" "-c" "from multiprocessing.spawn import
spawn_main; spawn_main(pipe_handle=992, parent_pid=32640)"
"--multiprocessing-fork"


The `pipe_handle` value does not correspond to the value being passed to
the `close`.  The `parent_pid` always refers to the parent lit command.

There always seem to be 32 Python processes in this state.  If I kill one,
another is immediately spawned to replace it (creating a new assertion
failure dialog).  I'm guessing that if I continued, there would be one for
each test, and that somewhere there's a limit of 32 processes at a time.

So this kind of sounds like a lit bug, but other lit tests (as in `ninja
check-llvm`) run just fine.  So it has something to do with how we invoke
lit for LLDB.  The command being executed, per the build.ninja file, is:

cd /D D:\src\llvm\build\mono\tools\lldb\lit && C:\python_35\python_d.exe
D:/src/llvm/build/mono/./bin/llvm-lit.py -sv --param
lldb_site_config=D:/src/llvm/build/mono/tools/lldb/lit/lit.site.cfg --param
lldb_unit_site_config=D:/src/llvm/build/mono/tools/lldb/lit/Unit/lit.site.cfg
D:/src/llvm/build/mono/tools/lldb/lit


The LLDB-specific things in the command are lit configs, with which I've
been blissfully ignorant.  Should I head down that rabbit hole?  Could this
be a problem with my environment?


Re: [lldb-dev] FileSpec and normalization questions

2018-04-20 Thread Pavel Labath via lldb-dev
On Fri, 20 Apr 2018 at 17:14, Greg Clayton  wrote:
> > On Apr 20, 2018, at 1:08 AM, Pavel Labath  wrote:
> >
> >
> > So, I can see the case for both, and I don't really have a clear
> > preference. All I would say is, whichever way we choose, we should make it
> > very explicit so that the users of FileSpec know what to expect.

> I would say that without a directory it is a wildcard match on base name
alone, and with one, the partial directories must match if the path is
relative, and the full directory must match if absolute. I will submit a
patch that keeps leading "./" and "../" during normalization and we will
see what people think.

Ok, what about multiple leading "./" components? Would it make sense to
collapse those to a single one ("././././foo.cpp" -> "./foo.cpp")?


> >
> > On Thu, 19 Apr 2018 at 19:37, Zachary Turner via lldb-dev <
> > lldb-dev@lists.llvm.org> wrote:
> >> I think I might have tried to replace some of the low level functions in
> > FileSpec with the LLVM equivalents and gotten a few test failures, but I
> > didn't have time to investigate.  It would be a worthwhile experiment for
> > someone to try again if they have some cycles.

> I took a look at the llvm file stuff and it has llvm::sys::fs::real_path
which always resolves symlinks _and_ normalizes the path. Would be nice to
break it out into two parts by adding llvm::sys::fs::normalize_path and
have llvm::sys::fs::real_path call it.

> > I can try to take a look at it. The way I remember it, I just copied these
> > functions from llvm and replaced all #ifdefs with runtime checks, which is
> > pretty much what you later did in llvm proper. Unless there has been some
> > significant divergence since then, it shouldn't be hard to reconcile these.

So, I tried playing around with unifying the two implementations today. I
didn't touch the normalization code, I just wanted to try to replace path
parsing functions with the llvm ones.

In theory, it should be as simple as replacing our parsing code in
FileSpec::SetFile with calls to llvm::sys::path::filename and
...::parent_path (to set m_filename and m_directory).

It turned out this was not as simple, and the reason is that the llvm path
APIs aren't completely self-consistent either. For example, for "//", the
filename+parent_path functions will decompose it into "." and "", while the
path iterator API will just report a single component ("//").

After a couple of hours of fiddling with +/- ones, I think I have come up
with a working and consistent implementation, but I haven't managed to
finish polishing it today. I'll try to upload the llvm part of the patch on
Monday. (I'll refrain from touching the lldb code for now, to avoid
interfering with your patch.)


Re: [lldb-dev] problems running the LLDB lit tests on Windows

2018-04-20 Thread Ted Woodward via lldb-dev
See my comment in https://reviews.llvm.org/D45333 .

 

r330275 changed how lldb’s lit tests were set up. This gives cmake errors using 
the Visual Studio generator; I wouldn’t be surprised if what you’re seeing 
using ninja is the same issue.

 

Short version: the cmake code that sets up the lit config in lldb is different 
from the cmake code that sets up the lit config in clang. This is causing the 
VS generator errors, and might be causing your problems with ninja.

 

--

Qualcomm Innovation Center, Inc.

The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project

 

From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Adrian 
McCarthy via lldb-dev
Sent: Friday, April 20, 2018 1:21 PM
To: LLDB 
Subject: [lldb-dev] problems running the LLDB lit tests on Windows

 




Re: [lldb-dev] problems running the LLDB lit tests on Windows

2018-04-20 Thread Adrian McCarthy via lldb-dev
If I run the llvm lit tests with the debug build of Python, I get the same
kind of errors, so I think this is a bug in lit that we haven't seen
because people have been using it with non-debug Python.  I'm investigating
that angle.

On Fri, Apr 20, 2018 at 12:21 PM, Ted Woodward via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> See my comment in https://reviews.llvm.org/D45333 .
>
>
>
> r330275 changed how lldb’s lit tests were set up. This gives cmake errors
> using the Visual Studio generator; I wouldn’t be surprised if what you’re
> seeing using ninja is the same issue.
>
>
>
> Short version: the cmake code that sets up the lit config in lldb is
> different from the cmake code that sets up the lit config in clang. This is
> causing the VS generator errors, and might be causing your problems with
> ninja.
>
>
>
> --
>
> Qualcomm Innovation Center, Inc.
>
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project


Re: [lldb-dev] FileSpec and normalization questions

2018-04-20 Thread Greg Clayton via lldb-dev


> On Apr 20, 2018, at 12:17 PM, Pavel Labath  wrote:
> 
> On Fri, 20 Apr 2018 at 17:14, Greg Clayton  wrote:
>>> On Apr 20, 2018, at 1:08 AM, Pavel Labath  wrote:
>>> 
>>> 
>>> So, I can see the case for both, and I don't really have a clear
>>> preference. All I would say is, whichever way we choose, we should make it
>>> very explicit so that the users of FileSpec know what to expect.
> 
>> I would say that without a directory it is a wildcard match on base name
>> alone, and with one, the partial directories must match if the path is
>> relative, and the full directory must match if absolute. I will submit a
>> patch that keeps leading "./" and "../" during normalization and we will
>> see what people think.

I have that as part of my current patch, so don't worry about that.
> 
> Ok, what about multiple leading "./" components? Would it make sense to
> collapse those to a single one ("././././foo.cpp" -> "./foo.cpp")?

yes!

> 
> 
>>> 
>>> On Thu, 19 Apr 2018 at 19:37, Zachary Turner via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>> I think I might have tried to replace some of the low level functions in
>>> FileSpec with the LLVM equivalents and gotten a few test failures, but I
>>> didn't have time to investigate.  It would be a worthwhile experiment for
>>> someone to try again if they have some cycles.
> 
>> I took a look at the llvm file stuff and it has llvm::sys::fs::real_path
>> which always resolves symlinks _and_ normalizes the path. Would be nice to
>> break it out into two parts by adding llvm::sys::fs::normalize_path and
>> have llvm::sys::fs::real_path call it.
> 
>>> I can try to take a look at it. The way I remember it, I just copied these
>>> functions from llvm and replaced all #ifdefs with runtime checks, which is
>>> pretty much what you later did in llvm proper. Unless there has been some
>>> significant divergence since then, it shouldn't be hard to reconcile these.
> 
> So, I tried playing around with unifying the two implementations today. I
> didn't touch the normalization code, I just wanted to try to replace path
> parsing functions with the llvm ones.
> 
> In theory, it should be as simple as replacing our parsing code in
> FileSpec::SetFile with calls to llvm::sys::path::filename and
> ...::parent_path (to set m_filename and m_directory).
> 
> It turned out this was not as simple, and the reason is that the llvm path
> api's aren't completely self-consistent either. For example, for "//", the
> filename+parent_path functions will decompose it into "." and "", while the
> path iterator api will just report a single component ("//").
> 
> After a couple of hours of fiddling with +/- ones, I think I have come up
> with a working and consistent implementation, but I haven't managed to
> finish polishing it today. I'll try to upload the llvm part of the patch on
> Monday. (I'll refrain from touching the lldb code for now, to avoid
> interfering with your patch).

Sounds good. Feel free to replace my changes with LLVM stuff as long as our 
test suite passes, as I am adding a bunch of tests to cover all the cases.

Greg




[lldb-dev] [Bug 37190] New: 'memory read' reports 0s for unreadable memory on FreeBSD

2018-04-20 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=37190

Bug ID: 37190
   Summary: 'memory read' reports 0s for unreadable memory on
FreeBSD
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Windows NT
Status: NEW
  Severity: enhancement
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: ema...@freebsd.org
CC: llvm-b...@lists.llvm.org

ptrace returns an error but it's not propagated to the user:

(lldb) memory read -format hex -size 8 0
reebsd.operationptrace(PT_IO, 92788, 0x7fffdedfaed8, 0) called from file
/tank/emaste/src/git-stable-11/contrib/llvm/tools/lldb/source/Plugins/Process/FreeBSD/ProcessMonitor.cpp
line 166
reebsd.operationPT_IO: op=READ_D offs=0 size=512
reebsd.operationptrace() failed; errno=14 ()
0x: 0x 0x
0x0010: 0x 0x
0x0020: 0x 0x
0x0030: 0x 0x

I tried as far back as lldb37 and it fails there.

(For reference, "reebsd.operation" is the thread name - final 16 chars of
"lldb.process.freebsd.operation"; it seems "-n" mode is enabled by default in
"log enable posix all"?)

-- 
You are receiving this mail because:
You are the assignee for the bug.


[lldb-dev] [Bug 37190] 'memory read' reports 0s for unreadable memory on FreeBSD

2018-04-20 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=37190

ema...@freebsd.org changed:

   What|Removed |Added

 OS|Windows NT  |FreeBSD
   Assignee|lldb-dev@lists.llvm.org |ema...@freebsd.org
