Re: [lldb-dev] ELF header does not hold big modules

2017-02-02 Thread Greg Clayton via lldb-dev
Found this on the web:

e_shnum
This member holds the number of entries in the section header
table. Thus the product of e_shentsize and e_shnum gives the section
header table's size in bytes. If a file has no section header table,
e_shnum holds the value zero.

If the number of sections is greater than or equal to SHN_LORESERVE
(0xff00), this member has the value zero and the actual number of
section header table entries is contained in the sh_size field of
the section header at index 0. (Otherwise, the sh_size member of the
initial entry contains 0.)

See the second paragraph. I am guessing the same thing would need to happen for 
program headers? I would check the various ELF file parsers in LLVM and see if 
they handle this correctly?
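
For what it's worth, elf(5) documents analogous escape hatches for the other
two 16-bit fields: if the number of program headers would overflow, e_phnum is
set to PN_XNUM (0xffff) and the real count is held in sh_info of section
header 0; if the string table index would overflow, e_shstrndx is set to
SHN_XINDEX (0xffff) and the real index is held in sh_link of section header 0.
A rough sketch of reading all three fallbacks (illustrative only, not LLDB's
code; it assumes a 64-bit little-endian file that is already fully in memory
and skips error handling):

#include <elf.h>
#include <stdint.h>

struct ExtendedCounts {     // made-up name, for illustration
  uint64_t section_count;
  uint64_t program_header_count;
  uint32_t section_strtab_index;
};

static ExtendedCounts ReadCounts(const uint8_t *data) {
  const Elf64_Ehdr *ehdr = reinterpret_cast<const Elf64_Ehdr *>(data);
  const Elf64_Shdr *shdr0 =
      reinterpret_cast<const Elf64_Shdr *>(data + ehdr->e_shoff);
  ExtendedCounts c;
  // e_shnum == 0 while a section header table exists: the real count is
  // in sh_size of section header 0.
  c.section_count = (ehdr->e_shoff != 0 && ehdr->e_shnum == 0)
                        ? shdr0->sh_size : ehdr->e_shnum;
  // e_phnum == PN_XNUM (0xffff): the real count is in sh_info of
  // section header 0.
  c.program_header_count =
      (ehdr->e_phnum == PN_XNUM) ? shdr0->sh_info : ehdr->e_phnum;
  // e_shstrndx == SHN_XINDEX (0xffff): the real index is in sh_link of
  // section header 0.
  c.section_strtab_index =
      (ehdr->e_shstrndx == SHN_XINDEX) ? shdr0->sh_link : ehdr->e_shstrndx;
  return c;
}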

Greg

> On Jan 23, 2017, at 3:26 PM, Eugene Birukov via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> > By "all" I presume you mean "all fields that refer to section counts or 
> > indexes".
> 
> Correct. I mean exactly 3 fields: e_phnum, e_shnum, and e_shstrndx that could 
> be redirected to section #0.
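> 
> In code terms, the rough shape of the change (illustrative only, not
> the actual LLDB ELFHeader declaration) is to widen just those three
> fields and resolve the section #0 fallbacks once, in ELFHeader::Parse():
> 
> #include <stdint.h>
> 
> struct ParsedELFHeader {   // made-up name, sketch only
>   uint32_t e_phnum;        // was 16-bit on disk
>   uint32_t e_shnum;        // was 16-bit on disk
>   uint32_t e_shstrndx;     // was 16-bit on disk
>   // ... remaining fields keep their natural ELF widths ...
> };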
> 
> Sent from Outlook 
> 
> 
> From: Pavel Labath 
> Sent: Monday, January 23, 2017 2:04 AM
> To: Eugene Birukov
> Cc: LLDB
> Subject: Re: ELF header does not hold big modules
>  
> Hello Eugene,
> 
> On 21 January 2017 at 00:42, Eugene Birukov  wrote:
> > Hello,
> >
> >
> > I have a core dump that LLDB cannot open because it contains more than 64K
> > sections. The "readelf" utility gives me the output in the end of this
> > message. It seems that the actual number of program headers and the index of
> > string table section are written into very first section since they do not
> > fit in 16 bits.
> >
> >
> > The "natural" way to deal with this problem would be to change the types of
> > fields in ELFHeader struct from 16 to 32 bits (or should I go all the way
> > and do it 64? in case the core dump is bigger than 4GB...) and deal with
> > the problem in a single place where we parse the header - the
> > ELFHeader::Parse().
> >
> >
> > Objections? Suggestions? Advice?
> 
> Sounds like a plan. By "all" I presume you mean "all fields that refer
> to section counts or indexes". I don't see a reason to change the
> size of, e.g., the e_type field. I think going 32 bits will be enough,
> particularly as the "fallback" field you are reading this from is only
> 32 bits wide anyway, and so you would still have to touch this if we
> ever cross that boundary. (And we are really talking about 4 billion
> sections, not a 4GB core file, right?)
> 
> 
> > Hmm... I am not sure that the DataExtractor we pass down there would let me
> > read that far into the file - I have the impression that we give it only the
> > first 512 bytes there, but I might be mistaken...
> 
> The reason we initially read only the first few bytes of the file is
> to be able to make a quick decision as to whether this is likely to be
> a valid ELF file (see the call to ObjectFileELF::MagicBytesMatch in
> GetModuleSpecifications). After that, we extend the buffer to the
> whole file. It looks like this currently happens before parsing the
> ELF header, but I don't see a reason why it would have to stay that
> way.
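> 
> For reference, that quick check boils down to comparing the first four
> bytes against the ELF magic; a minimal illustration (not the actual
> ObjectFileELF::MagicBytesMatch implementation):
> 
> #include <stddef.h>
> #include <stdint.h>
> #include <string.h>
> 
> // Every ELF file starts with 0x7f 'E' 'L' 'F', so these four bytes are
> // enough to decide whether reading the rest of the file is worthwhile.
> static bool LooksLikeELF(const uint8_t *data, size_t size) {
>   static const uint8_t magic[4] = {0x7f, 'E', 'L', 'F'};
>   return size >= sizeof(magic) && memcmp(data, magic, sizeof(magic)) == 0;
> }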
> 
> cheers,
> pavel


Re: [lldb-dev] Navigating STL types

2017-02-02 Thread Greg Clayton via lldb-dev
You could add an explicit instantiation of the template for the C++ types you
need.

Example without:

#include <stdio.h>
#include <vector>

int main (int argc, char const *argv[])
{
  std::vector<int> ints = { 1,2,3,4 };
  for (auto i: ints)
    printf("%i\n", i);
  return 0;
}


(lldb) target create "a.out"
(lldb) b /auto i/
(lldb) r
(lldb) p ints.size()
error: Couldn't lookup symbols:
  __ZNKSt3__16vectorIiNS_9allocatorIiEEE4sizeEv


But add the explicit instantiation:

#include <stdio.h>
#include <vector>

template class std::vector<int>; /// <

int main (int argc, char const *argv[])
{
  std::vector<int> ints = { 1,2,3,4 };
  for (auto i: ints)
    printf("%i\n", i);
  return 0;
}


(lldb) target create "a.out"
(lldb) b /auto i/
(lldb) r
(lldb) p ints.size()
(std::__1::vector<int, std::__1::allocator<int> >::size_type) $0 = 4


So you could have some piece of code somewhere in your project:

#ifndef NDEBUG
/// Explicitly instantiate any STL stuff you need in order to debug
#endif
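
Filled in, that could look something like the following (the exact list of
instantiations is whatever your debugging sessions need; std::vector<int> and
std::vector<std::string> here are just examples):

#include <string>
#include <vector>

#ifndef NDEBUG
// Debug-only explicit instantiations: they force the compiler to emit all
// member functions, so expressions like ints.size() can be evaluated from
// the debugger. NDEBUG keeps them out of release builds.
template class std::vector<int>;
template class std::vector<std::string>;
#endif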

GDB is probably working around this by computing the answer for you instead of
trying to run code that doesn’t exist in the binary.

Greg


> On Jan 23, 2017, at 3:58 PM, Andreas Yankopolus via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> How can I navigate STL types using their overloaded operators and member 
> functions (e.g., “[]” and “.first()” for vectors)? For example, take a C++ 
> source file with:
> 
> std::vector<std::string> v;
> v.push_back("foo");
> 
> Breaking after this statement in lldb and typing "p v[0]" would be reasonably 
> expected to return "foo", but it gives a symbol lookup error. Similarly, “p 
> v.first()” gives an error that there’s no member named “first”. I’m seeing 
> this issue with clang/llvm 3.9 and 4.0 nightlies on Ubuntu 16.10 and with 
> Apple’s versions on MacOS Sierra.
> 
> Internet rumor (e.g., this discussion) says this is due to aggressive 
> inlining of STL code. I’m compiling with clang++ using “-O0 -g -glldb”.
> 
> In comparison, gdb prints the value of v[0] just fine when the code is compiled for gdb.
> 
> What am I doing wrong?


Re: [lldb-dev] ELF header does not hold big modules

2017-02-02 Thread Eugene Birukov via lldb-dev
Yes, that's what happens to all 3 fields. I have already fixed LLDB, and Pavel 
reviewed the change.


Sent from Outlook


