> -----Original Message-----
> From: Chris Johns <chr...@rtems.org>
> Sent: Monday, August 19, 2019 17:33
> To: Kinsey Moore <kinsey.mo...@oarcorp.com>; devel@rtems.org
> Subject: Re: [PATCH v2] Add JSON log generation
> 
> On 20/8/19 2:13 am, Kinsey Moore wrote:
> > Add log formatter hooks and JSON log formatter to the test
> > infrastructure for consumption by automated processes or report generators.
> > ---
> >  tester/rt/test.py | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 84 insertions(+)
> >
> > diff --git a/tester/rt/test.py b/tester/rt/test.py
> > index da0a11e..0ed799a 100644
> > --- a/tester/rt/test.py
> > +++ b/tester/rt/test.py
> > @@ -38,6 +38,7 @@ import re
> >  import sys
> >  import threading
> >  import time
> > +import json
> 
> The import list is sorted :)

I'll move the import into the formatter to reduce its scope and keep it sorted there.
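
Something like this (rough sketch, untested):

    def generate_json_report(args, reports, start_time, end_time, total,
                             json_file):
        # the formatter is the only consumer of the json module, so the
        # import can live here instead of at the top of test.py
        import json
        json_log = {}
        ...
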
> 
> >  from rtemstoolkit import configuration
> >  from rtemstoolkit import error
> > @@ -217,6 +218,69 @@ def killall(tests):
> >      for test in tests:
> >          test.kill()
> >
> > +def generate_json_report(args, reports, start_time, end_time, total, json_file):
> > +    import sys
> > +    json_log = {}
> > +    json_log['Command Line'] = " ".join(args)
> > +    json_log['Python'] = sys.version.replace('\n', '')
> > +    json_log['test_groups'] = []
> > +    json_log['Host'] = host.label(mode = 'all')
> > +    json_log['summary'] = {}
> > +    json_log['summary']['passed_count'] = reports.passed
> > +    json_log['summary']['failed_count'] = reports.failed
> > +    json_log['summary']['user-input_count'] = reports.user_input
> > +    json_log['summary']['expected-fail_count'] = reports.expected_fail
> > +    json_log['summary']['indeterminate_count'] = reports.indeterminate
> > +    json_log['summary']['benchmark_count'] = reports.benchmark
> > +    json_log['summary']['timeout_count'] = reports.timeouts
> > +    json_log['summary']['invalid_count'] = reports.invalids
> > +    json_log['summary']['wrong-version_count'] = reports.wrong_version
> > +    json_log['summary']['wrong-build_count'] = reports.wrong_build
> > +    json_log['summary']['wrong-tools_count'] = reports.wrong_tools
> > +    json_log['summary']['total_count'] = reports.total
> > +    json_log['summary']['average_test_time'] = str((end_time - start_time) / total)
> > +    json_log['summary']['testing_time'] = str(end_time - start_time)
> > +
> > +    result_types = ['failed', 'user-input', 'expected-fail', 'indeterminate', 'benchmark', 'timeout', 'invalid', 'wrong-version', 'wrong-build', 'wrong-tools']
> 
> There is a soft'ish limit that attempts to use 80 columns in the python code.
> This one is too long.
> 
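Fair point; wrapped to stay inside the limit it would look something like:

    result_types = ['failed', 'user-input', 'expected-fail',
                    'indeterminate', 'benchmark', 'timeout', 'invalid',
                    'wrong-version', 'wrong-build', 'wrong-tools']
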
> > +    json_results = {}
> > +    for result_type in result_types:
> > +        json_log['summary'][result_type] = []
> > +
> > +    # collate results for JSON log
> > +    for name in reports.results:
> > +        result_type = reports.results[name]['result']
> > +        test_parts = name.split("/")
> > +        test_category = test_parts[-2]
> > +        test_name = test_parts[-1]
> > +        if result_type != 'passed':
> > +            json_log['summary'][result_type].append(test_name)
> > +        if test_category not in json_results:
> > +            json_results[test_category] = []
> > +        json_result = {}
> > +        # remove the file extension
> > +        json_result["name"] = test_name.split('.')[0]
> > +        json_result["result"] = result_type
> > +        if result_type == "failed" or result_type == "timeout":
> > +            json_result["output"] = reports.results[name]["output"]
> > +        json_results[test_category].append(json_result)
> > +
> > +    # convert results to a better format for report generation
> > +    sorted_keys = sorted(json_results.keys())
> > +    for i in range(len(sorted_keys)):
> > +        results_log = {}
> > +        results_log["index"] = i + 1
> > +        results_log["name"] = sorted_keys[i]
> > +        results_log["results"] = json_results[sorted_keys[i]]
> > +        json_log["test_groups"].append(results_log)
> > +
> > +    # write out JSON log
> > +    with open(json_file, 'w') as outfile:
> > +        json.dump(json_log, outfile, sort_keys=True, indent=4)
> > +
> > +report_formatters = {
> > +        'json': generate_json_report
> > +}
> > +
> >  def run(args, command_path = None):
> >      import sys
> >      tests = []
> > @@ -227,6 +291,8 @@ def run(args, command_path = None):
> >          optargs = { '--rtems-tools':    'The path to the RTEMS tools',
> >                      '--rtems-bsp':      'The RTEMS BSP to run the test on',
> >                     '--user-config':    'Path to your local user configuration INI file',
> > +                    '--report-format':  'Formats in which to report test results in addition to txt: json',
> > +                    '--log':            'Log output location',
> 
> Is this option already taken by the options.py module which imports the
> rtemstoolkit's options? Would --report work?

The thought here was to reuse the log location already being passed via --log.
The options framework in use wouldn't let me read the value of --log unless I
also defined it in optargs. It may be better to separate reports from logs a
bit more.
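
For example (sketch only; '--report-path' is a made-up option name, and I'm
assuming the opts.find_arg() pattern already used for the other options):

    optargs['--report-path'] = 'Report output location'
    ...
    report_path = opts.find_arg('--report-path')
    if report_path is not None:
        json_file = report_path[1]
    else:
        # fall back to naming the report after the txt log
        json_file = opts.find_arg('--log')[1] + '.json'

That would keep --log meaning the plain-text log and give the report its own
destination.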

> 
> >                     '--report-mode':    'Reporting modes, failures (default),all,none',
> 
> I wonder if this is now looking a bit confusing?

I agree. If we're differentiating between reports and logs, this is more of a 
log mode than a report mode. Unfortunately, the terminology is already mixed 
and they're both delivering the same content in different formats.
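
For what it's worth, the selection plumbing in run() stays small either way
(sketch; this part isn't in the quoted hunks, and I'm assuming the usual
opts.find_arg() and error.general() patterns from test.py):

    report_format = opts.find_arg('--report-format')
    if report_format is not None:
        if report_format[1] not in report_formatters:
            raise error.general('invalid report format: %s' % (report_format[1]))
        report_formatters[report_format[1]](args, reports, start_time,
                                            end_time, total, json_file)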

Kinsey
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
