On 14/09/16 17:40, Lukáš Doktor wrote:
Dne 14.9.2016 v 13:32 Marcos E. Matsunaga napsal(a):
Hi Folks,
Hello Marcos


I have some questions about how avocado works.

1. If I run avocado and give it a directory that has all the tests, is
there a way to specify the order of execution? I mean, if I name the
files 001-xxx.py and 010-aa.py, will it execute 001-xxx.py before
010-aa.py, or does it not follow alphabetical order?
The current implementation does not sort the discovered tests, so the order is whatever `os.walk` returns, which is not defined. On my system it appears to be the order in which the files were created in the directory, but one can't rely on that. Do you find this important?
I would not say it's important, but it would be nice.
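To make the point concrete, here is a minimal sketch of what a deterministic discovery pass could look like. This is not Avocado's actual loader code, just an illustration of how wrapping `os.walk` results in `sorted()` turns the undefined filesystem order into the alphabetical order Marcos is asking about:

```python
import os


def discover_tests(root):
    """Yield .py test files under `root` in a deterministic order.

    Plain os.walk() returns entries in an undefined, filesystem-dependent
    order, so both the subdirectory list and the file names are sorted
    here to guarantee that 001-xxx.py is yielded before 010-aa.py.
    (A sketch only -- Avocado's real loader works differently.)
    """
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # in-place sort makes os.walk descend alphabetically
        for name in sorted(filenames):
            if name.endswith(".py"):
                yield os.path.join(dirpath, name)
```

With files named 001-xxx.py and 010-aa.py in the same directory, this yields 001-xxx.py first regardless of creation order.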


2. Let's take into consideration that same directory. Some of the
scripts will have multiplex configuration files. Does avocado
automatically look in some specific directory for those multiplex
configuration files? I've tried to add them to the data, cfg and even
the <script>.data directories, but it seems that it doesn't look for
them automatically, only when I specify the --multiplex option; but
then the file is used by all scripts, and I was only able to specify a
single multiplex file.
No, the current runner allows only a single global multiplex configuration for all tests, and there are no default locations (there was a proposal for this, but it was never fully defined or implemented). There is also a Job API, which should allow one to create free-form jobs where one can assign different attributes, including multiplex variants, to individual tests, but it's not yet implemented (although it's a work in progress).


3. I tried to find out whether it was possible to start multiple test
processes in parallel, but it seems that avocado doesn't have anything
like that. Let's say I have 4 guests and I want to execute performance
tests while loading the 4 guests, and I want to start the tests on all
4 guests at the same time. It doesn't have a feature that will do
that, right?
Currently this is not possible. There was an attempt to allow multi-stream tests, which would allow a test to spread across multiple machines (not exactly what you asked for, but it could be one of the solutions), but we haven't reached a conclusion, so we're waiting for other proposals.

Another solution could be to run the jobs separately, which would produce multiple job results.
I do run performance tests often, and in a VM environment it is really interesting to see the results of jobs executed in parallel (time-synchronized) to get an idea of how the engine works. I think the biggest challenge when running them separately is keeping them all in sync, starting them at exactly the same time, and then collecting the results. That's why I think multi-stream tests would be a good feature.
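Until something like multi-stream tests exists, the "run the jobs separately" workaround can be scripted. The sketch below starts all commands at (almost) the same time and waits for them all; the `avocado run` invocations named in the comment are hypothetical placeholders, and each job would still keep its own results directory, which has to be collected afterwards:

```python
import subprocess


def run_jobs_in_parallel(commands):
    """Launch all commands near-simultaneously and wait for every one.

    In the scenario above, `commands` would hold one `avocado run ...`
    invocation per guest (exact flags are an assumption, not shown here).
    Returns the exit code of each job, in the same order as `commands`.
    """
    procs = [subprocess.Popen(cmd) for cmd in commands]  # all started first
    return [p.wait() for p in procs]                     # then all awaited
```

This only synchronizes the start times roughly (process launch overhead still skews them by milliseconds), so it is a stopgap rather than a substitute for real multi-stream support.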

Last but not least, this is again a nice candidate for the Job API, where you should be able (Cleber, correct me if I'm wrong) to define different remote runners for different tests. Note that this might not be available in the first version, as remote execution currently supports only full jobs, but at least the way I understood the Job API RFC, this is one of its use cases.


Thanks for your time and help.



Thank you for the nice questions. I'm afraid the answers are not satisfying, but improvements are on the way. The basic runner (`avocado run`) was supposed to be simple, so that developers could use it right away. The Job API is something that should allow free-form execution in the way Autotest users used control files (hopefully even better).

Regards,
Lukáš


--

Regards,

Marcos Eduardo Matsunaga

Oracle USA
Linux Engineering

“The statements and opinions expressed here are my own and do not
necessarily represent those of Oracle Corporation.”
