On Wed, Feb 19, 2020 at 8:44 AM Jose Valdez <jose.val...@edisoft.pt> wrote:
>
> Hello Gedare,
>
> Thank you for your review.
>
> Please find my answers below.
>
> Best regards
>
> José
>
> -----Original Message-----
> From: Gedare Bloom [mailto:ged...@rtems.org]
> Sent: Wednesday, February 19, 2020 15:05
> To: Jose Valdez
> Cc: sebastian huber; rtems-de...@rtems.org
> Subject: Re: Tool Roadmap for the RTEMS Pre-Qualification
>
> Hi Jose,
>
> Thank you for the detailed information. I have a few questions:
> 1) Will the output of test analysis and report generator tools be
> plaintext, and will it be structured or unstructured (flat file)?
>
>
> [Jose] The final output will be in Sphinx (reST) format.
>

It might be better to have a parseable intermediate output for the
results in a well-structured format (e.g., XML, YAML, etc.). The
report in Sphinx makes sense, but there should be a structured output
file that could be used, for example, by other report generators.
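
For illustration only, one such intermediate record could look like this
(the field names are just a sketch, not a proposal):

  - test: sp01.exe
    target: gr740
    state: passed
    wall-time: 1.25
    log: sp01.log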

> 2) Do you consider the covoar tool as viable for code coverage? That
> is what we have been using in the community. Joel may like to
> weigh-in.
>
>
> [Jose] We don't know yet. Since we are currently analyzing the RTEMS Tester,
> we will also analyze covoar and come back with the conclusions.
> If covoar presents good results, I see no reason not to use it.
>
Great.

> 3) I want to see the Clang Static Analyzer working. If there is any
> possibility you all can help move us in that direction, I think it
> would be a great positive for the community. Coverity is difficult and
> the license we can get freely is limited. I also have plans to work on
> both Coverity and clang-analyzer improvements soon.
>
>
> [Jose] In the past (2018) we were able to run the Clang Static Analyzer on
> RTEMS (we had to make a minor "dirty" modification) and we got interesting
> results.
> Maybe I could try to do it again and send you the steps to run the Clang
> Static Analyzer, if you would like.
>
Yes, I would like that very much. It is something I'd like to work
into our open-source workflow if possible.

Thanks,
Gedare

> Gedare
>
> On Wed, Feb 19, 2020 at 6:05 AM Jose Valdez <jose.val...@edisoft.pt> wrote:
> >
> > Hello,
> >
> > Following the e-mail sent by Sebastian, please find here the missing
> > information regarding the tools for the RTEMS pre-qualification.
> >
> > Best regards
> >
> > José
> >
> > === Test Executor ===
> >
> > The Test Executor will be the software that manages the execution of the
> > RTEMS tests.
> > It will set up the necessary hardware, run the test executables, control
> > the execution, and gather the output.
> > The following capabilities will be available:
> >
> > * Send commands to the target platform.
> >   A subset of commands will be made available, but it shall be possible
> > to add more commands.
> > * Send commands to auxiliary test programs, which run on external devices.
> >   This will be done by a "command interface" that will allow the Test
> > Executor to communicate with external components in a generic way.
> >   This "command interface" is intended to be generic enough to fit any
> > functional need.
> >   Note that it is still to be evaluated whether the implementation
> > complexity of this capability fits the RTEMS SMP project budget.
> > * Load and execute RTEMS executables on the target platform.
> > * Get and store the output (log) from the running executable.
> > * Wait for the end of execution of the RTEMS executable.
> >   This includes both cases of test termination (successful/unsuccessful)
> > as well as stopping the execution with a timeout mechanism.
> >
> > Currently, the RTEMS Tester offers part of these functionalities.
> > An evaluation is under way to assess how the RTEMS Tester fits these
> > capabilities and the possibility of adding the missing features.
> > The functionalities described above will be subject to validation to make
> > sure that the Test Executor is suitable to pre-qualify RTEMS for critical
> > missions.
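> >
> > As a rough illustration of these capabilities (a minimal sketch only; the
> > names below are placeholders and not the actual RTEMS Tester API), the
> > control flow of the Test Executor could look like:
> >
> >   # Sketch of the Test Executor control flow (placeholder names).
> >   def run_test(target, executable, timeout):
> >       target.reset()           # set the board to boot-up conditions
> >       target.load(executable)  # load the RTEMS executable
> >       target.start()           # start the execution
> >       # Gather the log until the end-of-test marker or the timeout.
> >       return target.gather_output(timeout)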
> >
> > In the scope of the RTEMS SMP project, the following specific
> > functionalities are required on top of the generic ones above:
> >
> > * Send a board reset command (to allow setting the board to boot-up
> > conditions).
> > * Load and execute tests on the tsim-leon2, tsim-leon3, gr712rc, and
> > gr740 targets.
> > * Use an auxiliary PC to test the MIL-STD-1553 interface.
> >   The need for this PC arises because the gr712rc and gr740 targets only
> > support a single MIL-STD-1553 interface, which makes loopback tests
> > impossible.
> >   This auxiliary PC will contain a MIL-STD-1553 interface and will run an
> > application which will translate the "command interface" messages into
> > commands to this auxiliary hardware (see the sketch after this list).
> >   Note that this activity depends on the assessment of the possibility to
> > have this command interface.
> > * Receive test output either via UART or via the standard output.
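> >
> > To make the "command interface" idea more concrete, a message to the
> > auxiliary PC could be as simple as the following (purely illustrative;
> > neither the format nor the field names are decided):
> >
> >   device: mil-std-1553
> >   action: transmit
> >   payload: [0x01, 0x02, 0x03]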
> >
> >
> > === Test Report Analyser and Report Generator ===
> >
> > The Test Analyser will receive the reports produced by the Test Executor,
> > which will follow the `Test Framework
> > <https://lists.rtems.org/pipermail/devel/2019-March/025178.html>`_
> > structure, and assess the status of the tests.
> > The possible statuses for each test are presented `here
> > <https://docs.rtems.org/branches/master/user/testing/tests.html#test-controls>`_.
> > For each test report, the Test Report Analyser will scan the requirements 
> > targeted by the test and, depending on the result of the test, will set the 
> > requirement as OK or Not OK.
> > Note also that the result of a requirement covered by several tests will
> > be the logical conjunction of the results of the associated tests (that
> > is, a requirement is considered passed only if all the associated tests
> > produce the expected result for that requirement).
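> >
> > In other words (a minimal sketch with placeholder names):
> >
> >   # A requirement is OK only if every associated test produced the
> >   # expected result.
> >   def requirement_status(test_passed_flags):
> >       return "OK" if all(test_passed_flags) else "Not OK"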
> >
> > The Report Generator will gather the information from the Test Report
> > Analyser and will also check for `other validation
> > <https://docs.rtems.org/branches/master/eng/req-eng.html#requirement-validation>`_
> > performed on the requirements.
> > These validation items are written in Doorstop format for each
> > requirement, containing the result of the validation activity performed
> > on the requirement, including the final assessment (OK or Not OK, as for
> > testing) and the respective justification.
> > The Report Generator will read each of these validation items and, as for
> > the requirements validated by test, set the requirement to OK or Not OK,
> > depending on the result of the validation.
> >
> >
> > === Test Plan Generator ===
> >
> > The Test Plan Generator will read the following information, written in
> > Doorstop format:
> >
> > * `Test Suite Specifications
> > <https://docs.rtems.org/branches/master/eng/req-eng.html#test-suite>`_,
> > which will contain the general description of the test suite.
> > * `Test Procedure Specifications 
> > <https://docs.rtems.org/branches/master/eng/req-eng.html#test-procedure>`_, 
> > which will contain the set-up for each test configuration.
> > * `Test Case Specifications 
> > <https://docs.rtems.org/branches/master/eng/req-eng.html#test-case>`_, 
> > which will contain the steps for each test.
> >
> > The information present in the above specifications will be printed in
> > the Test Plan document, which will contain all the test specifications
> > and the necessary set-ups in order to run the complete test suite for
> > RTEMS SMP.
> > Note that the contents of the above specifications are still to be
> > refined.
> > The Test Procedure Specifications may also have the following additional
> > information:
> >
> > * configurations - software configurations in which the test case shall
> > be executed (e.g., coverage flag set)
> >
> > The Test Case Specifications may also have the following additional
> > information (a sketch of such an item follows the list):
> >
> > * title - title of the validation test case (summary description)
> > * criteria - the criteria to decide whether the test has passed or failed
> > * environment - the exact configuration and set-up of the facility used
> > to execute the test case and the configuration of the software used to
> > support the test conduction
> > * constraints - any special constraints on the test procedures used
> > * dependencies - list of all test cases that must be executed before this
> > test case
> > * type - type of test (e.g., Functional, Performance, etc.)
> > * test procedure - list of all test procedures in which the test case
> > should be executed (see the explanation in the next paragraph)
> > * reset - boolean value to indicate whether a power reset of the target
> > board should be performed before running the test (some tests may require
> > boot-up conditions)
> > * auxiliary software - indicates the auxiliary software test code that
> > should be run on the auxiliary PC (if needed)
> > * timeout - timeout of the test case
> > * executable - the name of the executable file to which the test case
> > belongs (a single test executable can contain several test cases)
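> >
> > Purely as a sketch of how such a Doorstop test case item could look (the
> > attribute names are illustrative, not final):
> >
> >   # spec/test-case/SMP-TC-001.yml (hypothetical item)
> >   title: Clock tick on all processors
> >   type: Functional
> >   executable: ts-smp-clock.exe
> >   timeout: 60
> >   reset: true
> >   dependencies: []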
> >
> >
> > === Code Coverage Analysis ===
> >
> > The C code coverage will be performed by using GCOV to determine the
> > RTEMS C code exercised by each test.
> > The coverage information will be put in the test log, after the
> > end-of-test marker (at the end of the test).
> > Note that this is done automatically by the GCOV instrumentation code
> > added to a test executable, which means that the Test Executor itself
> > does not interfere in gathering coverage (an executable with
> > coverage-instrumented code is seen as a normal executable, that is, the
> > Test Executor is "coverage agnostic").
> > The GCOV information of each executable will be interpreted by the
> > QualificationManager tool, and the coverage results of all tests will be
> > combined to obtain the overall RTEMS source code coverage.
> > Note also that, for user reference, the LCOV tool will be used to
> > transform the GCOV information into a human-readable format: source code
> > files with embedded coverage information.
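> >
> > For reference, once the GCOV counters extracted from the test logs have
> > been written back as .gcda files, a typical LCOV invocation would be (the
> > paths here are just examples):
> >
> >   lcov --capture --directory build --output-file coverage.info
> >   genhtml coverage.info --output-directory coverage-html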
> >
> > The assembly coverage approach is not yet fully defined.
> > One of the approaches could be to use simulators and/or hardware with a
> > trace unit and do the following:
> >
> > * execute the tests that exercise specific assembly code.
> > * trace the assembly code execution (insert a breakpoint at the first
> > assembly instruction and trace until the last assembly instruction).
> > * combine the coverage of all tests to get the overall assembly coverage.
> >
> >
> > === Static Code Analysis ===
> >
> > The static code analysis is an open topic (see
> > https://lists.rtems.org/pipermail/devel/2019-July/026805.html and
> > https://lists.rtems.org/pipermail/devel/2019-July/026796.html).
> > The currently proposed approach is to use the Coverity tool with a
> > European Space Agency license (currently the RTEMS community uses the
> > free version, `Coverity Scan <https://scan.coverity.com/projects/rtems>`_),
> > which supports `these
> > <https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/coverity-misra-standards-ds-ul.pdf>`_
> > rules.
> >
> > Alternatively, the project could consider using open-source tools for
> > the static analysis.
> > The open-source tools considered as possibilities were the `Clang Static
> > Analyzer <https://clang-analyzer.llvm.org/>`_ and `Cppcheck
> > <http://cppcheck.sourceforge.net/>`_.
> > Both of these tools are active projects.
> > The rules covered by the Clang Static Analyzer are available `here
> > <https://clang.llvm.org/docs/ClangStaticAnalyzer.html>`_.
> > The rules covered by Cppcheck are available `here
> > <https://sourceforge.net/p/cppcheck/wiki/ListOfChecks/>`_ and the Cppcheck
> > compliance with the MISRA C 2012 rules is available `here
> > <http://cppcheck.sourceforge.net/misra.php>`_.
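> >
> > For example, the Cppcheck MISRA add-on can be run roughly like this (the
> > source path is just an example):
> >
> >   cppcheck --addon=misra --enable=style cpukit/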
> >
> > The results obtained by the chosen tools will be read by the
> > Qualification Manager and presented in the Software Verification Report
> > document.
> > In addition, the justifications for the violations, written directly
> > inside the source code, will also be read by the tool and presented in
> > the Software Verification Report document.
> >
> >
> > -----Original Message-----
> > From: devel [mailto:devel-boun...@rtems.org] On Behalf Of Sebastian Huber
> > Sent: Tuesday, February 11, 2020 10:29
> > To: rtems-de...@rtems.org
> > Subject: Tool Roadmap for the RTEMS Pre-Qualification
> >
> > Hello,
> >
> > this email gives an overview of the tool roadmap for the RTEMS
> > pre-qualification activity and the things to decide for the RTEMS
> > Project.
> >
> > The tools used for the RTEMS pre-qualification will be command line
> > tools only. We will not use GUIs. New tools will be written in Python.
> > The aim is to reuse and improve existing tools of the RTEMS Project. The
> > tools will be configured via command line options and configuration
> > files. The tool inputs will be command line options, configuration
> > files, source files, output from other tools, and bug tracker databases.
> >
> > The tools fall roughly into three categories. Firstly, there will be
> > compiler-like tools which generate output files from input files. For
> > example, one tool could generate document-specific glossaries from a
> > project-wide glossary. Secondly, there will be client/server tools. For
> > example, there could be a tool which receives test programs, runs them
> > on a target system, and sends back the results. Thirdly, there will be
> > builder tools. An example is the creation of an RTEMS pre-qualification
> > data package for a particular target which can be handed over to an end
> > user.
> >
> > === Important Decisions to Make ===
> >
> > * Where to place the specification items?
> >
> > * What should be in the specification (master data set)?
> >
> > * Which content is generated?
> >
> > * Which generated content is version controlled?
> >
> > * How is the content generation integrated in the build system?
> >
> > * How is the Python development organized?
> >
> > === Repository Overview ===
> >
> > We currently have four main repositories in the RTEMS Project:
> >
> > * rtems
> >
> > * rtems-tools
> >
> > * rtems-docs
> >
> > * rtems-source-builder (RSB)
> >
> > The RSB is a self-contained component and we will use it as is for the
> > pre-qualification activity.
> >
> > There are dependencies between the rtems, rtems-tools, and rtems-docs
> > repositories. Inconsistencies must currently be resolved manually
> > without any tool support.
> >
> > For example, we have documentation of API functions in Doxygen and the
> > Classic API Guide:
> >
> > https://docs.rtems.org/doxygen/branches/master/group__ClassicTasks.html#gabffda1c2301962f0ae5af042ac0bba62
> >
> > https://docs.rtems.org/branches/master/c-user/task_manager.html#task-create-create-a-task
> >
> > One goal of the pre-qualification activity is to introduce a master data set
> > (specification) and use tools to generate content in the right format
> > from it.
> >
> > For example, currently a human needs to edit
> >
> > https://git.rtems.org/rtems/tree/cpukit/include/rtems/rtems/tasks.h
> >
> > and
> >
> > https://git.rtems.org/rtems-docs/tree/c-user/task_manager.rst
> >
> > to declare and document the Classic Tasks API. This should be replaced
> > by an API specification which includes documentation elements and a
> > generator tool which produces the header file with Doxygen markup and
> > the reST files for the Classic API Guide. Specifically for the
> > pre-qualification activity, the API specification can also be used to
> > generate an interface specification document (Interface Control Document
> > in ECSS terms).
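> >
> > A minimal sketch of such a generator (everything below is hypothetical
> > and only illustrates the idea, it is not a proposed design):
> >
> >   # Produce a Doxygen-annotated declaration from one API spec item.
> >   def generate_declaration(item):
> >       lines = ["/**", " * @brief " + item["brief"]]
> >       for param in item["params"]:
> >           lines.append(" * @param " + param["name"] + " " + param["doc"])
> >       lines.append(" */")
> >       params = ", ".join(p["type"] + " " + p["name"]
> >                          for p in item["params"])
> >       lines.append(item["return-type"] + " " + item["name"] +
> >                    "(" + params + ");")
> >       return "\n".join(lines)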
> >
> > For the generated files, we have two options:
> >
> > 1. They are version controlled. Specification changes and the generated
> >     changes are sent for review to the mailing list and then checked in
> >     or not. This means a normal user of the repository has no need to
> >     install the generator tools. Also, in case the generator tools stop
> >     working, we are back to the current situation (with a dead
> >     specification in the repository).
> >
> > 2. They are not version controlled and the build system generates them
> >     on demand.
> >
> > === Location of the Specification ===
> >
> > This seems to be a hot topic. The specification could be located in the
> > rtems, the rtems-docs, or a separate repository. The new build system is
> > based on specification items which are located in the "spec/build"
> > directory in the RTEMS sources. It is desirable not to split up the
> > specification.
> >
> > We could split up the specification and add the specification items more
> > related to the documentation to the rtems-docs repository. This may end
> > up in discussions about whether a particular item goes into rtems or
> > rtems-docs. I don't think this helps or makes things easier.
> >
> > We could add a new repository which contains the specification and the
> > other repositories as Git submodules, along with tools, scripts, and so
> > on. I think we already have more than enough repositories in the RTEMS
> > Project and we should only introduce new repositories when there is a
> > real need. However, I also acknowledge that the impact of the
> > pre-qualification activity should be manageable. Splitting up the
> > specification into a build-related part which is contained in the RTEMS
> > sources repository and the rest could be a way forward to get started.
> > Thanks to Chris for bringing up this approach.
> >
> > What do you think about this:
> >
> > 1. The new build system will remain as is and use the build
> >     specification items located in "spec/build" in the RTEMS sources.
> >
> > 2. We add a new empty rtems-qual repository.
> >
> > 2.1. In this repository we add a "spec" directory for the non-build
> >       specification items.
> >
> > 2.2. In this repository we add all the generator tools.
> >
> > 2.3. In this repository we add things closely related to pre-qualification.
> >
> > 2.4. In this repository we add Git submodules for the other RTEMS
> >       repositories touched by the generator tools. Changes in generated
> >       files in the standard RTEMS repositories go through the normal
> >       patch review process.
> >
> > 2.5. This repository may use a simplified review policy during the initial
> >       pre-qualification activity.
> >
> > Once the pre-qualification activity has produced a mature and usable
> > infrastructure, we can re-evaluate the repository organization and the
> > location of the specification.
> >
> > === Python Development ===
> >
> > The tool development in Python for the pre-qualification is a teamwork
> > activity. Therefore, we should introduce a coding standard, automatic
> > code formatters (black, yapf), static analysis tools (mypy, pylint,
> > flake8), documentation checkers (pydocstyle), and unit/integration tests
> > (unittest and unittest.mock modules). The mentioned Python development
> > tools are just examples. Their use and configuration should be discussed
> > on the RTEMS mailing list.
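> >
> > As one possible starting point (the tool choices and option values below
> > are examples only, not a proposal), a shared setup.cfg could carry the
> > configuration:
> >
> >   [flake8]
> >   max-line-length = 79
> >
> >   [pydocstyle]
> >   convention = pep257
> >
> >   [mypy]
> >   disallow_untyped_defs = True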
> >
> > If we place the specification along with the corresponding tools into a
> > separate repository, we can use it to set up a prototype Python
> > development workflow.
> >
> > === Build System Integration ===
> >
> > This section is only relevant if the specification is not located in a
> > dedicated repository.
> >
> > It is tedious to keep track of which tools, input files, and output
> > files are used in a particular repository and how they are invoked. In
> > each repository, a build system is present. The build system can be used
> > to update the generated content of a particular repository on demand
> > (e.g. if the specification changed). For example, we could add an
> > --enable-maintainer-mode option to the waf configure command.
> >
> > ./waf configure --enable-maintainer-mode --rtems-tools=X --rtems-spec=Y
> >
> > This could check that the tools and specification are available and enable
> > rules to re-generate content based on changes in the specification. The
> > tools
> > could report a version and the build system could check that the right tool
> > version is used to avoid a re-generation with the wrong tools.
> >
> > Having the tool configuration, invocation, and dependencies in the build
> > system is also accurate documentation of how things are set up and makes
> > sure everyone is using them in a defined way.
> >
> > === Howtos ===
> >
> > We will add howtos for common RTEMS maintenance tasks, e.g. how to add a
> > new API function, how to add a glossary term, etc. See also the "8.6
> > Howtos" section of the new build system documentation:
> >
> > https://ftp.rtems.org/pub/rtems/people/sebh/eng.pdf
> >
> > It may make sense to collect all howtos in a dedicated chapter:
> >
> > https://lists.rtems.org/pipermail/devel/2020-January/056849.html
> >
> > === Specification-to-X Tool ===
> >
> > The specification-to-X tool could generate the following content from the
> > specification (controlled by command line options or a configuration file):
> >
> > * document-specific glossaries (VC)
> >
> > * API documentation reST files for the Classic API Guide (VC)
> >
> > * API header files with Doxygen markup (VC)
> >
> > * interface specification document (Interface Control Document in ECSS
> > terms)
> >
> > * software requirements specification document
> >
> > * test plans
> >
> > * configuration files for static analysis tools
> >
> > * configuration file for Doxygen
> >
> > Generated content which is independent of a particular target system and
> > has a low rate of change should be version controlled (VC).
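> >
> > An invocation could then look roughly like this (the tool name and the
> > options are invented for illustration, nothing is decided yet):
> >
> >   ./spec-to-x.py --target c-user-glossary --output build/glossary.rst spec/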
> >
> > === Traceability of Specification Items ===
> >
> > For the traceability of specification items, please have a look at:
> >
> > https://docs.rtems.org/branches/master/eng/req-eng.html#traceability-of-specification-items
> >
> > There are some options available to provide a traceable history of
> > specification items.
> >
> > Standards demand forward and backward traceability between specification
> > items
> > (e.g. requirements). This is achieved through standard Doorstop features.
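> >
> > With Doorstop, each item records its parent links in its YAML file,
> > which gives the backward direction directly and lets the tools derive
> > the forward direction. For example (the UIDs are invented):
> >
> >   # SRS042.yml (hypothetical item)
> >   text: The system shall ...
> >   links:
> >   - SSS012  # parent system-level requirement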
> >
> > The traceability between software requirements, architecture, and design
> > will probably need some iterations to find a good solution.
> >
> > === Additional Tools ===
> >
> > José Valdez from EDISOFT will shortly give you an overview of additional
> > tools which cover the following topics:
> >
> > * static code analysis
> >
> > * test execution
> >
> > * code coverage
> >
> > * test output analysis
> >
> > * test report generation
> >
> > * test plan generation
> >
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
