Re: covoar SIGKILL Investigation

2018-08-21 Thread Chris Johns
On 21/08/2018 16:55, Vijay Kumar Banerjee wrote:
> I tried running coverage with this latest 
> master, covoar is taking up all the memory (7 GB) including the swap (7.6 GB) 
> and after a while, still gets killed. :(

I ran rtems-test and created the .cov files and then ran covoar from the command
line (see below). Looking at top while it is running I see covoar topping out
with a size around 1430M. The size is pretty static once the "Loading symbol
sets:" is printed.

I have run covoar under valgrind with a smaller number of executables and made
sure all the allocations are ok.

I get a number of size mismatch messages related to the inline functions but
that is a known issue.

> can there be something wrong with my environment?

I have no idea.

> I tried running it on a different system,
> coverage did run for the whole testsuite for score and rtems only.
> (I mentioned the symbols as argument to --coverage)
> but it  doesn't run for all the symbol-sets, strange.

I am not running coverage via the rtems-test command. I have been testing at the
covoar command line.

Can you please try a variant of:

 /opt/work/chris/rtems/rt/rtems-tools.git/build/tester/covoar/covoar \
  -v \
  -S /opt/work/chris/rtems/rt/rtems-tools.git/tester/rtems/testing/coverage/leon3-qemu-symbols.ini \
  -O /opt/work/chris/rtems/kernel/bsps/leon3/leon3-qemu-coverage/score \
  -E /opt/work/chris/rtems/rt/rtems-tools.git/tester/rtems/testing/coverage/Explanations.txt \
  -p RTEMS-5 `find . -name \*.exe`
?

I have top running at the same time. The footprint grows while the DWARF info
and .cov files are loaded.

Thanks
Chris
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: covoar SIGKILL Investigation

2018-08-21 Thread Joel Sherrill
On Tue, Aug 21, 2018 at 2:14 AM, Chris Johns  wrote:

> On 21/08/2018 16:55, Vijay Kumar Banerjee wrote:
> > I tried running coverage with this latest master, covoar is taking up
> > all the memory (7 GB) including the swap (7.6 GB) and after a while,
> > still gets killed. :(
>
> [...]
>

Vijay .. I would add to make sure the gcov processing is turned off for
now.


bsp-builder Mystery Failures

2018-08-21 Thread Joel Sherrill
Hi

Ignoring the epiphany failures (it shouldn't even attempt to
build networking but I can't seem to find the .ini syntax to address
that), the following results

https://lists.rtems.org/pipermail/build/2018-August/000903.html

have some failures I don't seem to be able to reproduce by
hand. For example, number 9 is one of the set for SPARC BSPs.

 9 smp-debug sparc/gr712rc build:
  configure: /home/joel/rtems-work/rtems/configure --target=sparc-\
  rtems5 --enable-rtemsbsp=gr712rc --prefix=/home/joel/rtems-work/bsps\
  --enable-rtems-debug --enable-smp
  error: bsps/sparc/shared/start/start.S:313 Error: Unknown opcode: `mv'

I can't reproduce that by hand. I also tried an m32c failure but couldn't
reproduce it either.

Help is appreciated.

--joel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: covoar SIGKILL Investigation

2018-08-21 Thread Vijay Kumar Banerjee
On Tue, Aug 21, 2018, 7:34 PM Joel Sherrill  wrote:

> [...]
>
> Vijay .. I would add to make sure the gcov processing is turned off for
> now.
>
it's turned off. :)

After a lot of different attempts I realized that I just needed to run a
waf build after pulling the new changes. Sorry about that.
It did run successfully!
I'm now running coverage with rtems-test for the whole testsuite; I will
report on it soon. :)


Re: covoar SIGKILL Investigation

2018-08-21 Thread Joel Sherrill
On Tue, Aug 21, 2018, 1:59 PM Vijay Kumar Banerjee wrote:

> [...]
>
> it's turned off. :)
>

Just checking. That code isn't ready yet. :)

>
> After a lot of different attempts I realized that I just needed to waf
> build after pulling the new changes. Sorry about that.
>

Lol. It is always the dumb things.

> It did run successfully!
> I'm now running coverage with rtems-test for the whole testsuite, I will
> be reporting about it soon :)
>

How long is covoar taking for the entire set?

Not that I am complaining; it takes minutes to do a doc build these days.



Re: covoar SIGKILL Investigation

2018-08-21 Thread Vijay Kumar Banerjee
On Wed, 22 Aug 2018 at 01:55, Joel Sherrill  wrote:

> [...]
>
> How long is covoar taking for the entire set?
>
It works great. This is what `time` says:

real 17m49.887s
user 14m25.620s
sys 0m37.847s



Re: covoar SIGKILL Investigation

2018-08-21 Thread Joel Sherrill
On Tue, Aug 21, 2018, 4:05 PM Vijay Kumar Banerjee wrote:

> [...]
>
> It works great. this is what `time` says
>
> real 17m49.887s
> user 14m25.620s
> sys  0m37.847s
>


What speed and type of processor do you have?

I don't recall it taking near this long in the past. I used to run it as
part of development. But we may have more tests and the code has changed.
Reading dwarf with the file open/closes, etc just may be more expensive
than parsing the text files. But it is more accurate and lays the
groundwork for more types of analysis.

Eventually we will have to profile this code. Whatever is costly is done
for each exe so there is a multiplier.

I suspect this code would parallelize reading info from the exes fairly
well.  Merging the info and generating the reports not well due to data
contention.

But optimizing too early and the wrong way is not smart.



Re: Tickets: Milestone vs. Version

2018-08-21 Thread Gedare Bloom
On Sat, Aug 11, 2018 at 5:14 AM, Chris Johns  wrote:
> On 11/8/18 6:31 am, Gedare Bloom wrote:
>> On Fri, Aug 10, 2018 at 2:10 AM, Chris Johns  wrote:
>>> On 10/08/2018 15:41, Sebastian Huber wrote:
 On 10/08/18 07:38, Chris Johns wrote:
> On 10/08/2018 15:03, Sebastian Huber wrote:
>> we want a ticket for each milestone in which it is resolved. What is
>> now the meaning of the version field?
>>
> A ticket may be assigned to a branch but not a milestone. Milestones let
> us select which tickets we fix on a branch. Once all tickets on a
> milestone are closed the release can be made.
>
> We do not work that way at the moment. I use the milestones when making
> releases to move tickets scheduled for a release that are not closed to
> the next release.

 This doesn't explain the version field. Is version the same as branch
 from your point of view?

>>>
>>> The branch is the version of RTEMS released from that branch. In trac it is
>>> called version, ie 4.11, 4.10, 5 etc. The term version is more
>>> accurate; the use of branch is actually a VC implementation detail.
>>>
>>
>> I had understood we should use 'version' field in Trac to indicate
>> when the bug first appeared.
>
> If a bug appears in 4.11 and we say the bug is no longer present on 5
> because things have changed, do we close the bug even though it is still
> present on 4.11?
>
> If a bug is present in 4.11 and raised against it, however it is fixed
> in 5, is closing that bug valid if it is still present in 4.11?
>
> What happens if someone finds a bug in 5 that is also present on 4.11, etc,
> which is what started this thread, and it is only fixed on 4.11?
>
>> If this is not the case, then definitely
>> (a) we need more guidance,
>
> I think this discussion highlights that we need to improve what we have.
> Thank you for questioning what is being said. The page I did was focused
> on the release process at the time. It is far from complete.
>
>> and (b) we probably need a way to indicate
>> (our best guess about) when a bug appeared.
>
> Do we? If we decide what I have said above is correct, which is not a
> given, then we would need a ticket on each version (branch) it is
> present on. The bugs have the creation date.
>
> My understanding of Trac is the relationships are sort of direct and so
> I am not sure there is a way to view the complexity of a bug the way we
> see it in its database. Also I am fine with Trac. I suspect increasing a
> tool's complexity to handle what we want brings its own set of issues.
>
> Maybe it would be helpful to list what I see we need:
>
> 1. View open tickets on any version of RTEMS.
> 2. View closed tickets on any version of RTEMS.
> 3. Machine generated release notes.
> 4. ??
>
> I see viewing open tickets on a version as a query for that version of
> RTEMS for any tickets that are not closed. Viewing closed tickets is a
> pretty simple query. Release note generation is keyed off the milestone.
>
> I am not saying what we have is perfect, optimal etc and it does mean we
> need to do more work cloning tickets when back porting fixes.
>

Thank you for the clarification. This set of requirements (1-3) makes
sense to me. I guess it is not worth the complexity to have the
ability to show when a bug is known to exist.
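For concreteness, the three views could sketch out as stock Trac ticket
queries along these lines (illustrative only; the field values and macro
usage would need checking against our Trac instance):

```
[[TicketQuery(version=4.11,status!=closed)]]     open tickets on 4.11
[[TicketQuery(version=4.11,status=closed)]]      closed tickets on 4.11
[[TicketQuery(milestone=4.11.3,status=closed)]]  input for release notes
```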

Gedare


Re: covoar SIGKILL Investigation

2018-08-21 Thread Chris Johns
On 22/08/2018 09:29, Joel Sherrill wrote:
> On Tue, Aug 21, 2018, 4:05 PM Vijay Kumar Banerjee wrote:
> > On Wed, 22 Aug 2018 at 01:55, Joel Sherrill wrote:
> > >
> > > How long is covoar taking for the entire set?
> > >
> > It works great. this is what `time` says
> >
> > real 17m49.887s
> > user 14m25.620s
> > sys  0m37.847s
> >
> What speed and type of processor do you have?
> 

The program is single threaded so the preprocessing of each executable is
sequential. Memory usage is reasonable so there is no swapping.

Running covoar from the command line on a box with:

 hw.machine: amd64
 hw.model: Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz
 hw.ncpu: 16
 hw.machine_arch: amd64

plus 32G of memory has a time of:

      366.32 real   324.97 user    41.33 sys

The approximate time break down is:

 ELF/DWARF loading : 110s (1m50s)
 Objdump           : 176s (2m56s)
 Processing        :  80s (1m20s)

The DWARF loading is not optimised and I load all source line to address maps
and all functions rather than selectively scanning for specific names at the
DWARF level. It is not clear to me that scanning would be better or faster. My
hope is that moving to Capstone would help lower or remove the objdump
overhead. Then there is threading for the loading.

> I don't recall it taking near this long in the past. I used to run it as
> part of development.

The objdump processing is simpler than before so I suspect the time would have
been at least 4 minutes.

> But we may have more tests and the code has changed.

I think having more tests is the dominant factor.

> Reading dwarf with the file open/closes, etc just may be more expensive
> than parsing the text files.

Reading the DWARF is a cost and at the moment it is not optimised, but it is
only a cost because we still parse the objdump data. I think opening and
closing files is not a factor.

Parsing the objdump output is the largest component of the time. Maybe using
Capstone with the ELF files will help.

> But it is more accurate and lays the groundwork for more types of analysis.

Yes, and I think this is important.

> Eventually we will have to profile this code. Whatever is costly is done for
> each exe so there is a multiplier.
> 
> I suspect this code would parallelize reading info from the exes fairly well. 

Agreed.

> Merging the info and generating the reports not well due to data contention.

Yes.

> But optimizing too early and the wrong way is not smart.

Yes. We need Capstone to be added before this can happen.

Chris

Re: covoar SIGKILL Investigation

2018-08-21 Thread Joel Sherrill
On Tue, Aug 21, 2018, 10:26 PM Chris Johns  wrote:

> [...]
>
> The approximate time break down is:
>
>  ELF/DWARF loading : 110s (1m50s)
>  Objdump           : 176s (2m56s)
>  Processing        :  80s (1m20s)
>

I don't mind this execution time for the near future. It is far from
obscene after building and running 600 tests.

>
> The DWARF loading is not optimised and I load all source line to address
> maps and all functions rather that selectively scanning for specific
> names at the DWARF level. It is not clear to me scanning would be better
> or faster.


I doubt it is worth the effort. There should be few symbols in an exe we
don't care about. Especially once we start to worry about libc and libm.

> My hope is moving to Capstone would help lower or remove the objdump
> overhead. Then there is threading for the loading.
>
> [...]
>
> > But it is more accurate and lays the groundwork.for more types of
> > analysis.
>
> Yes and think this is important.
>

+1

>
> > Eventually we will have to profile this code. Whatever is costly is
> > done for each exe so there is a multiplier.
> >
> > I suspect this code would parallelize reading info from the exes
> > fairly well.
>
> Agreed.
>

Might be a good case for C++11 threads if one of the thread container
classes is a nice pool.

And we might have some locking to account for in core data structures. Are
STL container instances thread safe?

But that is an addition for after we are feature stable relative to the old
output, plus Capstone.

>
> > Merging the info and generating the reports not well due to data
> > contention.
>
> Yes.
>
> > But optimizing too early and the wrong way is not smart.
>
> Yes. We need Capstone to be added before this can happen.
>

+1


I would also like to see gcov support but that will not be a factor in the
performance we have. It will add reading a lot more files (gcno) and
writing a lot of gcda at the end. Again more important to be right than
fast at first. And completely an addition.


Re: covoar SIGKILL Investigation

2018-08-21 Thread Chris Johns
On 22/08/2018 14:41, Joel Sherrill wrote:
> On Tue, Aug 21, 2018, 10:26 PM Chris Johns wrote:
> [...]
>
> I don't mind this execution time for the near future. It is far from
> obscene after building and running 600 tests.

Yeah, there are other things we need to do first.

> [...]
>
> I doubt it is worth the effort. There should be few symbols in an exe we
> don't care about. Especially once we start to worry about libc and libm.

Yeah, this is what I thought at the start.

> [...]
>
> Might be a good case for C++11 threads if one of the thread container
> classes is a nice pool.

Good idea. I think we need to look at some of the global object pointers before
we head down this path.

> And we might have some locking to account for in core data structures. Are STL
> container instances thread safe? 

We need to manage all locking.

> But an addition after feature stable relative to old output plus Capstone.

Agreed.

> > Merging the info and generating the reports not well due to data
> > contention.
>
> Yes.
>
> > But optimizing too early and the wrong way is not smart.
>
> Yes. We need Capstone to be added before this can happen.
>
> +1
> 
> I would also like to see gcov support but that will not be a factor in
> the performance we have. It will add reading a lot more files (gcno) and
> writing a lot of gcda at the end. Again more important to be right than
> fast at first. And completely an addition.

Agreed.

Chris