Hi guys,

First of all, thanks again for your help. I really appreciate it.

I noticed an interesting behavior: with loglevel=info in /etc/avocado/avocado.conf, no content is written to stderr.expected and stdout.expected. With loglevel=debug, it works as it should. I don't mind running in debug mode, but I am not sure this behavior should depend on the log level.

Anyway, my question is about using --output-check-record when multiplexing. I noticed that the files stdout.expected and stderr.expected get overwritten on each variant. I assume there is a way to save each variant's results and then use them for checking, but I went through the documentation and didn't find anything about it.

Thanks again.

BTW, is the whole development team Brazilian?


On 09/08/2016 08:05 PM, Lucas Meneghel Rodrigues wrote:
Yep, that was the problem. The pipe won't work if shell=False (default).

Now, on the rationale for why shell=False is the default: running commands through the shell is considered unsafe (shell injection), see:

https://docs.python.org/2/library/subprocess.html

With shell=True, the expected standard output starts to appear.
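To illustrate the difference with plain subprocess (which is what sits underneath), here is a minimal sketch; the commands are arbitrary examples, not from the thread:

```python
import subprocess

# With shell=False (the default), every element of the argument list is
# passed verbatim to the program -- the '|' is just another argument to
# echo, so no pipeline is ever set up:
literal = subprocess.check_output(["echo", "hello", "|", "tr", "a-z", "A-Z"])
print(literal)   # b'hello | tr a-z A-Z\n'

# With shell=True the whole string is handed to /bin/sh, which does
# understand pipes:
piped = subprocess.check_output("echo hello | tr a-z A-Z", shell=True)
print(piped)     # b'HELLO\n'
```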

On Thu, Sep 8, 2016 at 4:01 PM Lucas Meneghel Rodrigues <[email protected]> wrote:

    By the way, you *must* set shell=True to use pipes (which I did
    forget). Otherwise the pipes won't work.

        On Thu, Sep 8, 2016 at 4:00 PM Lucas Meneghel Rodrigues <[email protected]> wrote:

        I think you guys are missing the point here. I just tried
        Marcos's test, and this is what I get with Avocado LTS 36.2:

        $ avocado run xentest.py --output-check-record all
        JOB ID     : c44d95698bc6a20329691a9c10370c3b63c8944f
        JOB LOG    : /home/lmr/avocado/job-results/job-2016-09-08T15.54-c44d956/job.log
        TESTS      : 1
         (1/1) xentest.py:xentest.test_xen_start_stop: ERROR (0.01 s)
        RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
        JOB HTML   : /home/lmr/avocado/job-results/job-2016-09-08T15.54-c44d956/html/results.html
        TESTS TIME : 0.01 s
        $ ls
        xentest.py  xentest.pyc  xentest.py.data
        $ cat xentest.py.data/stdout.expected
        $ cat xentest.py.data/stderr.expected
        xc: error: Could not obtain handle on privileged command
        interface (2 = No such file or directory): Internal error
        libxl: error: libxl.c:116:libxl_ctx_alloc: cannot open libxc
        handle: No such file or directory
        cannot init xl context

        Keep in mind that I have installed only the basic xen
        packages, so my 'xl' doesn't work. Avocado does its job and
        creates xentest.py.data, with the expected outputs.

        So no, you don't need to avoid the pipes, and everything
        should work. The caveat is that avocado creates
        xentest.py.data, with the recorded files, in the same dir as
        your test file. You have to test that with your version of
        avocado and verify whether the directory/files are created.
        If not, it's a bug, although I think we have a functional test
        to prevent such a bug.

        So for me here, everything works as designed. I modified
        xentest.py to use the avocado libs:

        #!/usr/bin/env python
        # vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4

        import time
        import logging

        from avocado import Test
        from avocado import main
        from avocado.utils import process

        progress_log = logging.getLogger("progress")

        class xentest(Test):
            """
            Start and stop a Xen guest with 'xl'.
            """
            def exec_xl(self):
                """
                Run the configured 'xl' action (create/shutdown).
                """
                sleeptime = self.params.get('sleep_time')
                guestdef = self.params.get('guest_cfg')
                runaction = self.params.get('run_action')
                domname = self.params.get('domain_name')
                progress_log.info('%s: Running action %s', self.name, runaction)
                time.sleep(sleeptime)
                if runaction == 'create':
                    cmd = '/usr/sbin/xl %s %s' % (runaction, guestdef)
                else:
                    cmd = '/usr/sbin/xl %s %s' % (runaction, domname)
                # process.system() returns an integer exit status, not a string
                return process.system(cmd, ignore_status=False) == 0

            def test_xen_start_stop(self):
                """
                Get the Xen version, then run the configured 'xl' action.
                """
                cmd = "xl info|grep xen_version| awk '{print $3}'"
                # shell=True is needed so the pipes are interpreted by the shell
                xenversion = process.system_output(cmd, shell=True,
                                                   ignore_status=False)
                process.system('ls -la /usr/share/avocado/tests 2>&1',
                               shell=True)
                print('Xen Version = %s.\n' % xenversion)
                with open('/tmp/xenver.log', 'w') as fd:
                    fd.write('### %s\n' % time.time())
                    fd.write('Xen Version = %s.\n' % xenversion)
                    fd.write('### %s\n' % time.time())
                # returning False from a test method does not fail it;
                # use an assertion instead
                self.assertTrue(self.exec_xl())

        if __name__ == "__main__":
            main()

        I hope this helps,

        Lucas

        On Thu, Sep 8, 2016 at 3:55 PM Marcos E. Matsunaga <[email protected]> wrote:

            Thanks Amador,

            I assumed it would have the same behavior as utils.system
            and completely ignored the pipe.



            On 09/08/2016 07:39 PM, Amador Segundo wrote:

            On Sep 8, 2016 8:19 PM, "Marcos E. Matsunaga" <[email protected]> wrote:
            >
            > Hi Cleber,
            >
            > Sorry for being a pain. I've tried to use
            process.system to execute commands, but I got an error,
            that's why I kept the autotest libraries (I actually
            copied from one of the files in io-github-autotest-libvirt).
            >
            > The error I'm getting is:
            >
            > 2016-09-08 13:49:25,844 sysinfo L0371 INFO | Profiler disabled
            > 2016-09-08 13:49:25,845 test  L0214 INFO | START 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
            > 2016-09-08 13:49:25,846 process L0334 INFO | Running 'xl info|grep xen_version| awk '{print $3}''
            > 2016-09-08 13:49:25,863 process L0435 INFO | Command 'xl info|grep xen_version| awk '{print $3}'' finished with 1 after 0.00377798080444s

            The problem here is the pipe. You can collect the output
            and parse it inside the Python code. Something like:

            output = process.system_output("xl info")
            for line in output.splitlines():
                if 'xen_version' in line:
                    var = line.split()[2]  # awk's $3 is index 2 in Python
                    break
            ...
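Expanding the suggestion above into a self-contained sketch; the `xen_version` helper name and the sample `xl info` text are made up for illustration:

```python
def xen_version(info_text):
    # Pure-Python equivalent of: xl info | grep xen_version | awk '{print $3}'
    for line in info_text.splitlines():
        if 'xen_version' in line:
            # awk's $3 is the third whitespace-separated field,
            # i.e. index 2 in Python
            return line.split()[2]
    return None

# Sample 'xl info' output (shortened, illustrative values):
sample = (
    "host                   : xenhost\n"
    "xen_version            : 4.7.0\n"
    "xen_caps               : xen-3.0-x86_64\n"
)
print(xen_version(sample))   # 4.7.0
```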

            > 2016-09-08 13:49:25,863 stacktrace       L0038 ERROR|
            > 2016-09-08 13:49:25,863 stacktrace  L0041 ERROR| Reproduced traceback from: /usr/lib/python2.7/site-packages/avocado/core/test.py:448
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR| Traceback (most recent call last):
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR|   File "/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py", line 52, in test_xen_start_stop
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR| xenversion = process.system_output(cmd, ignore_status=False)
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR|   File "/usr/lib/python2.7/site-packages/avocado/utils/process.py", line 1139, in system_output
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR| sudo=sudo)
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR|   File "/usr/lib/python2.7/site-packages/avocado/utils/process.py", line 1046, in run
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR| raise CmdError(cmd, sp.result)
            > 2016-09-08 13:49:25,864 stacktrace  L0044 ERROR| CmdError: Command 'xl info|grep xen_version| awk '{print $3}'' failed (rc=1)
            > 2016-09-08 13:49:25,864 stacktrace  L0045 ERROR|
            > 2016-09-08 13:49:25,865 test  L0563 ERROR| Traceback (most recent call last):
            >
            > 2016-09-08 13:49:25,865 test  L0563 ERROR|   File "/usr/lib/python2.7/site-packages/avocado/core/test.py", line 503, in _run_avocado
            >     raise test_exception
            >
            > 2016-09-08 13:49:25,865 test  L0563 ERROR| CmdError: Command 'xl info|grep xen_version| awk '{print $3}'' failed (rc=1)
            >
            > I looked at "/usr/lib/python2.7/site-packages/avocado/utils/process.py", line 1139 and didn't see anything that could be causing the problem. I even tried adding the "sudo=False" argument to process.system, but I get the same error.
            >
            > Thanks again.
            >
            >
            > On 09/08/2016 06:36 PM, Cleber Rosa wrote:
            >>
            >> On 09/08/2016 01:50 PM, Marcos E. Matsunaga wrote:
            >>>
            >>> On 09/08/2016 05:44 PM, Cleber Rosa wrote:
            >>>>
            >>>> On 09/08/2016 11:34 AM, Marcos E. Matsunaga wrote:
            >>>>>
            >>>>> Hi Cleber,
            >>>>>
            >>>>> Thanks for your quick reply. That's exactly what I understood, but here is what is happening.
            >>>>>
            >>>>> I have a directory ~/avocado/xen/tests where I have the xentest.py script. When I execute it, it does create the directory ~/avocado/xen/tests/xentest.py.data with stderr.expected and stdout.expected (empty). It also creates the two files (stdout and stderr) in the job-results/latest directory, but also empty.
            >>>>>
            >>>>> The weird thing is that instead of saving, it reports to the job.log as an error "L0151 ERROR| [stderr] Parsing config from /VM/guest1/vm.cfg".
            >>>>>
            >>>>> That's why I think I am missing something.
            >>>>
            >>>> Can you post the full test code and the resulting `job.log` file?
            >>>
            >>> Sure.. It is attached.
            >>> And the multiplex file I am using is:
            >>>
            >>> xentest:
            >>>      guest1:
            >>>          action: !mux
            >>>              start:
            >>>                  run_action: "create"
            >>>                  domain_name: "perf1"
            >>>                  sleep_time: 1
            >>>              stop:
            >>>                  run_action: "shutdown"
            >>>                  domain_name: "perf1"
            >>>                  sleep_time: 60
            >>>          guest_cfg: /Repo/VM/perf1/vm.cfg
            >>>
            >>>>> Thanks again for your help.
            >>>>>
            >>>>> On 09/08/2016 02:59 PM, Cleber Rosa wrote:
            >>>>>>
            >>>>>> On 09/08/2016 10:25 AM, Marcos E. Matsunaga wrote:
            >>>>>>>
            >>>>>>> Hi All,
            >>>>>>>
            >>>>>>> I am new to avocado and have just started to look into it.
            >>>>>>>
            >>>>>>> I have been playing with avocado on Fedora 24 for a few weeks. I wrote a small script to run commands and was exploring the option "--output-check-record", but it never populates the files stderr.expected and stdout.expected. Instead, it prints an error with "[stderr]" in the job.log file. My understanding is that the output (stderr and stdout) of commands/scripts executed by avocado would be captured and saved in those files (like in the synctest.py example), but it doesn't. I want to know if I am doing something wrong or if it is a bug.
            >>>>>>>
            >>>>>> Hi Marcos,
            >>>>>>
            >>>>>> Avocado creates the `stdout` and `stderr` files in the test result directory. In the synctest example, for instance, my `stdout` contains:
            >>>>>>
            >>>>>> $ avocado run examples/tests/synctest.py
            >>>>>> $ cat ~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stdout
            >>>>>>
            >>>>>> PAR : waiting
            >>>>>> PASS : sync interrupted
            >>>>>>
            >>>>>> `stderr` is actually empty for that test:
            >>>>>>
            >>>>>> $ wc -l ~/avocado/job-results/latest/test-results/1-examples_tests_synctest.py\:SyncTest.test/stderr
            >>>>>>
            >>>>>> 0 /home/cleber/avocado/job-results/latest/test-results/1-examples_tests_synctest.py:SyncTest.test/stderr
            >>>>>>
            >>>>>>
            >>>>>>
            >>>>>> What you have to do is, once you're satisfied with those outputs and they're considered "the gold standard", move them to the test *data directory*.
            >>>>>>
            >>>>>> So, if your test is hosted at `/tests/xl.py`, you'd create `/tests/xl.py.data` and put those files there, named `stdout.expected` and `stderr.expected`.
            >>>>>>
            >>>>>> Whenever you run `avocado run --output-check-record all /tests/xl.py`, those files will be used and the output of the *current* test execution will be compared to those "gold standards".
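Conceptually, the check boils down to comparing the current run's output against the recorded files. A minimal illustrative sketch follows; this is not Avocado's actual implementation, and `output_matches` is a hypothetical helper name:

```python
import os

def output_matches(data_dir, actual_stdout, actual_stderr):
    """Compare a run's output against recorded 'gold standard' files.

    Illustrative sketch only -- Avocado performs an equivalent check
    internally when output checking is enabled.
    """
    checks = {'stdout.expected': actual_stdout,
              'stderr.expected': actual_stderr}
    for name, actual in checks.items():
        path = os.path.join(data_dir, name)
        with open(path) as recorded:
            if recorded.read() != actual:
                return False
    return True
```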
            >>>>>>
            >>>>>>> The script is very simple and the way I execute the command is:
            >>>>>>>
            >>>>>>> cmd = ('/usr/sbin/xl create /VM/guest1/vm.cfg')
            >>>>>>> if utils.system(cmd) == "0":
            >>
            >> The issue seems to be related to the fact that you're using old autotest libraries to execute your external commands.
            >>
            >> The output record/check support is built into Avocado's libraries, namely `avocado.utils.process`.
            >>
            >> Try to replace your code with:
            >>
            >>     from avocado.utils import process
            >>     ...
            >>     process.system(cmd)
            >>
            >> Also, mixing and matching autotest and avocado libraries in the same test code is not something I'd recommend. Not that it's guaranteed to break, but I'd recommend (trying to) move to the avocado libraries, or keep an eye out for strange behavior like this.
            >>
            >>>>>>>          pass
            >>>>>>> else:
            >>>>>>>          return False
            >>>>>>>
            >>>>>>> The command sends this to stdout:
            >>>>>>>
            >>>>>>> Parsing config from /VM/guest1/vm.cfg
            >>>>>>>
            >>>>>>> I run the test as:
            >>>>>>>
            >>>>>>> avocado run --output-check-record all xentest.py
            >>>>>>>
            >>>>>>> The job.log file contains:
            >>>>>>>
            >>>>>>> 2016-09-07 13:04:48,015 test             L0214 INFO | START 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
            >>>>>>>
            >>>>>>> 2016-09-07 13:04:48,051 xentest          L0033 INFO | 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1: Running action create
            >>>>>>> 2016-09-07 13:04:49,067 utils            L0151 ERROR| [stderr] Parsing config from /VM/guest1/vm.cfg
            >>>>>>> 2016-09-07 13:04:49,523 test             L0586 INFO | PASS 1-/root/avocado-vt/io-fs-autotest-xen/xen/tests/xentest.py:xentest.test_xen_start_stop;1
            >>>>>>>
            >>>>>>>
            >>>>>>>
            >>>>>>>
            >>>>>>> Thanks for your time and help.
            >>>>>>
            >>>>>> Let me know if it's clear now! And thanks for trying Avocado out!
            >>>>>>
            >




--

Regards,

Marcos Eduardo Matsunaga

Oracle USA
Linux Engineering

“The statements and opinions expressed here are my own and do not
necessarily represent those of Oracle Corporation.”
