On 17/09/2018 07:33, Chris Johns wrote:
On 17/09/2018 15:13, Sebastian Huber wrote:
On 17/09/2018 05:23, Chris Johns wrote:
On 17/09/2018 08:46, Joel Sherrill wrote:
On Sat, Sep 15, 2018 at 12:58 AM Chris Johns <chr...@rtems.org> wrote:

      On 14/9/18 11:18 pm, Sebastian Huber wrote:
      > ---
      >  cpukit/Makefile.am                    | 79 ++++++++++++++++++++++++++++++++++-
      >  cpukit/configure.ac                   |  1 -
      >  cpukit/sapi/Makefile.am               | 63 ----------------------------
      >  cpukit/{sapi => }/vc-key.sh           |  0
      >  cpukit/{sapi => }/version-vc-key.h.in |  0
      >  cpukit/wrapup/Makefile.am             |  2 +-
      >  6 files changed, 79 insertions(+), 66 deletions(-)
      >  delete mode 100644 cpukit/sapi/Makefile.am
      >  rename cpukit/{sapi => }/vc-key.sh (100%)
      >  rename cpukit/{sapi => }/version-vc-key.h.in (100%)

      Could you please explain why you are performing this merge?

      I am not against such a change; however, I would like to understand
      what the plan is and where this is going.

      The coverage tool and covoar work uses the internally built libraries
      to group functionality, for example the score. If the cpukit build is
      completely flattened I think that tool may need to change to manage
      the grouping.

The coverage analysis has always assumed that the directories and the
sub-libraries created imply areas of functionality that a developer would
like to know the coverage of independently of other areas. For example,
should the score, fatfs, and shell be lumped into one coverage report, or
would it, IMO, be preferable to generate a separate report for each area?
This makes sense. Any user or organisation wanting coverage information will
only be interested in specific areas, not everything that could be run. We
need to provide what we see as the standard groupings, and I suspect we will
need to allow that data to be specialised where a more accurate and controlled
profile is needed.

We need to recognize that some code analysis needs to know the logical
grouping of code. How do you propose this happen when the information
implicit in the sub-libraries is lost?
We have a growing set of data with RTEMS. Some of the files under this tree in
the tester are an example:

https://git.rtems.org/rtems-tools/tree/tester/rtems

I see the grouping of source as another set of data we need to maintain. The
current use of libraries is implementation specific ...

https://git.rtems.org/rtems-tools/tree/tester/rtems/testing/coverage/symbol-sets.ini


... and we would need to find a workable solution for this patch to be merged.
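
To make the problem concrete, an archive-based grouping entry of the kind
used today might look something like this (the set names and archive paths
here are illustrative, not copied from symbol-sets.ini):

    [symbol-sets]
    sets = score, filesystem

    [libraries]
    ; Illustrative archive paths; the real entries point into the
    ; BSP build tree.
    score      = c/some-bsp/cpukit/score/libscore.a
    filesystem = c/some-bsp/cpukit/libfs/libfs.a

Flattening cpukit into one build removes these per-area archives, which is
why the grouping needs another source of truth.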

I am starting to wonder if this data should be held in the RTEMS repo and
installed?
I was not aware that this set of temporary build system artefacts was used
outside the build tree for some other purposes.
That is understandable.

I think this grouping for
coverage analysis should be done independently of build system internals.
Agreed, it was a means to an end and dates back to the very first
implementations of coverage. I updated covoar recently to use the ELF symbols
directly ...

https://git.rtems.org/rtems-tools/tree/tester/covoar/DesiredSymbols.cc

That again was a small step.

In the
DWARF debug information of an object file you have the DW_TAG_compile_unit tag
https://git.rtems.org/rtems-tools/tree/rtemstoolkit/rld-dwarf.h#n605

and the DW_AT_name attribute:
https://git.rtems.org/rtems-tools/tree/rtemstoolkit/rld-dwarf.h#n627

A path pattern could be used for the grouping, e.g. in

https://git.rtems.org/rtems-tools/tree/tester/rtems/testing/coverage/symbol-sets.ini

replace the [libraries] section with regular expressions for path patterns.
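
A minimal sketch of what that could look like (the [paths] section name and
option layout are an assumption, not an agreed format):

    [symbol-sets]
    sets = score, filesystem, shell

    [paths]
    ; Regular expressions matched against each CU's DW_AT_name.
    score      = cpukit/score/.*
    filesystem = cpukit/libfs/.*
    shell      = cpukit/libmisc/shell/.*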
Yep, I suspect this is the most sensible way forward. What remains is agreeing
on a format for this data and then a framework to access it. We are seeing
repeated patterns with this type of data, and I can see this grouping being
used beyond coverage. Take the BSP data: I think being able to read and
generate XML or JSON data from the INI files would be useful, for example as
an input to a buildbot instance or to something that parses the test result
emails and determines the tier ranks.
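
For the access-framework side, a minimal C++ sketch of the matching step,
assuming the patterns above have already been parsed out of the INI file
(classify_cu and everything else here is hypothetical, not covoar's actual
API):

    #include <iostream>
    #include <regex>
    #include <string>
    #include <utility>
    #include <vector>

    // Map the DW_AT_name of a DW_TAG_compile_unit (the source path as
    // the compiler recorded it) to the first symbol set whose pattern
    // matches it.
    static std::string classify_cu(
      const std::vector<std::pair<std::string, std::regex>>& sets,
      const std::string& cu_name)
    {
      for (const auto& set : sets) {
        if (std::regex_search(cu_name, set.second))
          return set.first;
      }
      return "unclassified";
    }

    int main()
    {
      // Patterns as they might be loaded from the [paths] sketch above.
      const std::vector<std::pair<std::string, std::regex>> sets = {
        { "score",      std::regex("cpukit/score/") },
        { "filesystem", std::regex("cpukit/libfs/") },
        { "shell",      std::regex("cpukit/libmisc/shell/") }
      };

      // Prints "score" then "filesystem".
      std::cout << classify_cu(sets, "../../cpukit/score/src/threadq.c")
                << std::endl;
      std::cout << classify_cu(sets,
                               "../../cpukit/libfs/src/dosfs/msdos_init.c")
                << std::endl;
      return 0;
    }

A real tool would read the name/pattern pairs from the INI file rather than
hard-coding them, but the lookup itself stays this simple.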

I should mention that I think the current approach of using archives for
grouping is an approximation, because inlines are in the archives and this
needs to be managed when working at the DWARF level with CUs. You end up with
questions such as "Is a score inline in a file system call part of the file
system coverage map or part of the score?".

This relies
on the source tree layout, which should be more or less static after our
header file and BSP movement.
This is the reason I raised the idea of moving this data to rtems.git. It could
be installed into the `$prefix/share/rtems/..` when RTEMS is installed.

How do we want to proceed with this? Should I fix the coverage tool to use the
compilation unit source path for the grouping? I don't know how the coverage
tool works, so this would require considerable time for me to get started.

--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.
