Found this on the web:

e_shnum
    This member holds the number of entries in the section header table. Thus the product of e_shentsize and e_shnum gives the section header table's size in bytes. If a file has no section header table, e_shnum holds the value zero.
    If the number of sections is greater than or equal to SHN_LORESERVE (0xff00), this member has the value zero and the actual number of section header table entries is contained in the sh_size field of the section header at index 0. (Otherwise, the sh_size member of the initial entry contains 0.)

See the second paragraph. I am guessing the same thing would need to happen for program headers? I would check the various ELF file parsers in LLVM and see whether they handle this correctly.

Greg

> On Jan 23, 2017, at 3:26 PM, Eugene Birukov via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>
> > By "all" I presume you mean "all fields that refer to section counts or indexes".
>
> Correct. I mean exactly 3 fields: e_phnum, e_shnum, and e_shstrndx, which could be redirected to section #0.
>
> From: Pavel Labath <lab...@google.com>
> Sent: Monday, January 23, 2017 2:04 AM
> To: Eugene Birukov
> Cc: LLDB
> Subject: Re: ELF header does not hold big modules
>
> Hello Eugene,
>
> On 21 January 2017 at 00:42, Eugene Birukov <eugen...@hotmail.com> wrote:
> > Hello,
> >
> > I have a core dump that LLDB cannot open because it contains more than 64K sections. The "readelf" utility gives me the output at the end of this message. It seems that the actual number of program headers and the index of the string table section are written into the very first section, since they do not fit in 16 bits.
> >
> > The "natural" way to deal with this problem would be to change the types of these fields in the ELFHeader struct from 16 to 32 bits (or should I go all the way to 64, in case the core dump is bigger than 4GB?) and deal with the problem in the single place where we parse the header, ELFHeader::Parse().
> >
> > Objections? Suggestions? Advice?
>
> Sounds like a plan. By "all" I presume you mean "all fields that refer to section counts or indexes". I don't see a reason to change the size of, e.g., the e_type field. I think going 32-bit will be enough, particularly as the "fallback" field you are reading this from is only 32 bits wide anyway, so you would still have to touch this code if we ever cross that boundary. (And we are really talking about 4 billion sections, not a 4GB core file, right?)
>
> > Hmm... I am not sure that the DataExtractor we pass down there would let me read that far into the file - I have the impression that we give it only the first 512 bytes, but I might be mistaken...
>
> The reason we initially read only the first few bytes of the file is to be able to make a quick decision as to whether this is likely to be a valid ELF file (see the call to ObjectFileELF::MagicBytesMatch in GetModuleSpecifications). After that, we extend the buffer to the whole file. It looks like this currently happens before parsing the ELF header, but I don't see a reason why it would have to stay that way.
>
> cheers,
> pavel
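A minimal sketch of the extended-numbering fallback described in this thread. This is not LLDB's actual ELFHeader::Parse(); the ResolveElfCounts helper and ResolvedCounts struct are made up for illustration. It assumes a complete ELF64 image already in memory with byte order matching the host; real code would read through something like LLDB's DataExtractor and honor EI_CLASS/EI_DATA. The struct layouts and the PN_XNUM/SHN_XINDEX constants come from the standard <elf.h>.

#include <elf.h>     // Elf64_Ehdr, Elf64_Shdr, PN_XNUM, SHN_XINDEX
#include <cstdint>
#include <cstring>

// Hypothetical helper, not LLDB's actual code. Resolves the "extended
// numbering" fallback for the three fields discussed in this thread.
struct ResolvedCounts {
  uint64_t shnum;     // real number of section headers
  uint64_t phnum;     // real number of program headers
  uint32_t shstrndx;  // real index of the section-name string table
};

// 'data'/'size' describe a complete ELF64 image already in memory.
// Returns false if the buffer is too small for the headers we touch.
static bool ResolveElfCounts(const uint8_t *data, size_t size,
                             ResolvedCounts &out) {
  if (size < sizeof(Elf64_Ehdr))
    return false;

  Elf64_Ehdr eh;
  std::memcpy(&eh, data, sizeof(eh));

  // Start with the 16-bit values from the ELF header itself.
  out.shnum = eh.e_shnum;
  out.phnum = eh.e_phnum;
  out.shstrndx = eh.e_shstrndx;

  // e_shnum == 0 with a nonzero e_shoff, e_phnum == PN_XNUM (0xffff), and
  // e_shstrndx == SHN_XINDEX (0xffff) all redirect to section header 0.
  const bool big_shnum = (eh.e_shnum == 0 && eh.e_shoff != 0);
  const bool big_phnum = (eh.e_phnum == PN_XNUM);
  const bool big_shstrndx = (eh.e_shstrndx == SHN_XINDEX);
  if (!big_shnum && !big_phnum && !big_shstrndx)
    return true;  // common case: nothing redirected

  if (eh.e_shoff == 0 || size < sizeof(Elf64_Shdr) ||
      eh.e_shoff > size - sizeof(Elf64_Shdr))
    return false;

  Elf64_Shdr sh0;
  std::memcpy(&sh0, data + eh.e_shoff, sizeof(sh0));

  if (big_shnum)
    out.shnum = sh0.sh_size;    // real section count (64-bit field)
  if (big_phnum)
    out.phnum = sh0.sh_info;    // real program header count (32-bit)
  if (big_shstrndx)
    out.shstrndx = sh0.sh_link; // real string table index (32-bit)
  return true;
}

Note that the fallback fields for e_phnum and e_shstrndx (sh_info and sh_link of section header 0) are 32 bits wide, which is the boundary Pavel mentions; sh_size is nominally a 64-bit field, but a section count above 2^32 is not a realistic concern for the 32-bit widening discussed above.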