Nir Soffer <[email protected]> writes:
> On Thu, Dec 16, 2021 at 10:14 PM Milan Zamazal <[email protected]> wrote:
>>
>> when I run Vdsm tests as a non-root user in vdsm-test-centos-8
>> container, they systematically fail on several storage tests. I run the
>> tests as
>>
>> podman run -it -v $HOME:$HOME --userns=keep-id vdsm-test-centos-8
>> $HOME/test.sh
>
> Why do you need the --userns option?
Because otherwise it runs as a fake root user and fails on the tests
that require real root permissions.
> You run this in a container because your development environment
> is not compatible - right?
Yes, I need to run the tests in a supported and predictable environment.
>> where test.sh is
>>
>> #!/bin/sh
>>
>> export TRAVIS_CI=1
>> cd .../vdsm
>> ./autogen.sh --system
>> make clean
>> make
>
> I never run autogen and make during the tests, this can be done once
> after checkout or when modifying the makefiles.
It can, but it's safer to run the tests in a clean environment;
otherwise they can fail after switching branches or rebasing, and one
may waste time investigating the errors. This part doesn't take much
running time compared with the other parts, so why not keep it there.
>> make lint
>> make tests
>
> You are missing:
>
> make storage
>
> Without this, a lot of tests will be skipped or xfailed.
I see. Why does tests-storage not depend on this target? E.g.

.PHONY: tests-storage
tests-storage: tox tests-storage-setup tests-storage-run

.PHONY: tests-storage-run
tests-storage-run:
	tox -e "storage"

.PHONY: tests-storage-setup
tests-storage-setup:
	python3 tests/storage/userstorage.py setup

But perhaps it is better the way it is because `make storage' requires
root access.
>> make tests-storage
>>
>> The failing tests are in devicemapper_test.py, outofprocess_test.py and
>> qemuimg_test.py. I have also seen a test failure in nbd_test.py but not
>> always. Is it a problem of the tests or of my environment?
>
> nbd_test.py should be skipped unless you run as root, or have a running
> supervdsm serving your user.
Not all tests there are marked as requiring root privileges (and they
pass without them).
>> > export TRAVIS_CI=1
>>
>
> This is not correct for your local container - in travis we run a
> privileged container
> as root and we create loop devices before the test.
It is needed to skip tests in common/systemctl_test.py that require
systemd. Perhaps we should use a different mechanism to disable them
there.
Other than that, there is no difference between having this variable set
or not when running without root privileges.
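
Instead of keying off TRAVIS_CI, the skip condition could detect
directly whether systemd is usable. A minimal sketch (the helper name
is hypothetical, not existing vdsm API):

```python
def running_with_systemd():
    """Return True when systemd is PID 1, i.e. systemctl can work.

    Hypothetical helper, not part of vdsm; a pytest skipif marker
    built on it could replace the TRAVIS_CI check in
    common/systemctl_test.py.
    """
    try:
        with open("/proc/1/comm") as f:
            return f.read().strip() == "systemd"
    except OSError:
        # /proc not available; assume systemd is not usable.
        return False
```

Tests could then be marked with something like
@pytest.mark.skipif(not running_with_systemd(), reason="requires
systemd") instead of checking a CI environment variable.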
> I tried this:
>
> podman run --rm -it -v `pwd`:/src:Z --userns=keep-id vdsm-test-centos-8
> cd src
> make tests-storage
>
> Got 2 test failures:
>
>
> ============================================================= FAILURES
> ==============================================================
> ______________________________________________________
> test_block_device_name
> _______________________________________________________
>
> def test_block_device_name():
> devs = glob.glob("/sys/block/*/dev")
> dev_name = os.path.basename(os.path.dirname(devs[0]))
> with open(devs[0], 'r') as f:
> major_minor = f.readline().rstrip()
>> assert devicemapper.device_name(major_minor) == dev_name
> E AssertionError: assert '7:1' == 'loop1'
> E - loop1
> E + 7:1
>
> This likely failed because there are no loop devices in the container:
>
> bash-4.4$ ls /dev/
> console core fd full mqueue null ptmx pts random shm stderr
> stdin stdout tty urandom zero
>
> And there is no way to create them, since you run as a regular user
> and sudo does not work. Even if it did work, I think you would not be
> able to create the loop devices since the container is not privileged.
>
> It may be possible to map the loop devices from the host to the container
> but I never tried.
>
>
> storage/devicemapper_test.py:243: AssertionError
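For reference, the lookup this test exercises can be reproduced with a
few lines of sysfs walking. This is a simplified sketch, not the actual
devicemapper.device_name() implementation, but it shows why an empty
/dev in the container makes the raw "major:minor" string leak through,
matching the assertion failure above:

```python
import glob
import os

def device_name(major_minor):
    # Simplified sketch of resolving "MAJOR:MINOR" to a kernel block
    # device name via sysfs (not the actual vdsm implementation).
    for dev in glob.glob("/sys/block/*/dev"):
        try:
            with open(dev) as f:
                if f.read().strip() == major_minor:
                    return os.path.basename(os.path.dirname(dev))
        except OSError:
            continue
    # Nothing matched - e.g. no loop devices visible in the container -
    # so the raw numbers come back, which is what the assertion shows.
    return major_minor
```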
> ___________________________________________________
> test_stop_server_not_running
> ____________________________________________________
>
> @broken_on_ci
> def test_stop_server_not_running():
> # Stopping non-existing server should succeed.
>> nbd.stop_server("no-such-server-uuid")
>
> storage/nbd_test.py:806:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ../lib/vdsm/storage/nbd.py:179: in stop_server
> info = systemctl.show(service, properties=("LoadState",))
> ../lib/vdsm/common/systemctl.py:74: in show
> out = commands.run(cmd).decode("utf-8")
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> args = ['/usr/bin/systemctl', 'show', '--property=LoadState',
> 'vdsm-nbd-no-such-server-uuid.service'], input = None, cwd = None
> env = None, sudo = False, setsid = False, nice = None, ioclass = None,
> ioclassdata = None, reset_cpu_affinity = True
>
> def run(args, input=None, cwd=None, env=None, sudo=False, setsid=False,
> nice=None, ioclass=None, ioclassdata=None,
> reset_cpu_affinity=True):
> """
> Starts a command communicate with it, and wait until the command
> terminates. Ensures that the command is killed if an unexpected
> error is
> raised.
>
> args are logged when command starts, and are included in the
> exception if a
> command has failed. If args contain sensitive information that
> should not
> be logged, such as passwords, they must be wrapped with
> ProtectedPassword.
>
> The child process stdout and stderr are always buffered. If you have
> special needs, such as running the command without buffering
> stdout, or
> create a pipeline of several commands, use the lower level start()
> function.
>
> Arguments:
> args (list): Command arguments
> input (bytes): Data to send to the command via stdin.
> cwd (str): working directory for the child process
> env (dict): environment of the new child process
> sudo (bool): if set to True, run the command via sudo
> nice (int): if not None, run the command via nice command with
> the
> specified nice value
> ioclass (int): if not None, run the command with the ionice
> command
> using specified ioclass value.
> ioclassdata (int): if ioclass is set, the scheduling class
> data. 0-7
> are valid data (priority levels).
> reset_cpu_affinity (bool): Run the command via the taskset
> command,
> allowing the child process to run on all cpus (default
> True).
>
> Returns:
> The command output (bytes)
>
> Raises:
> OSError if the command could not start.
> cmdutils.Error if the command terminated with a non-zero exit
> code.
> utils.TerminatingFailure if command could not be terminated.
> """
> p = start(args,
> stdin=subprocess.PIPE if input else None,
> stdout=subprocess.PIPE,
> stderr=subprocess.PIPE,
> cwd=cwd,
> env=env,
> sudo=sudo,
> setsid=setsid,
> nice=nice,
> ioclass=ioclass,
> ioclassdata=ioclassdata,
> reset_cpu_affinity=reset_cpu_affinity)
>
> with terminating(p):
> out, err = p.communicate(input)
>
> log.debug(cmdutils.retcode_log_line(p.returncode, err))
>
> if p.returncode != 0:
>> raise cmdutils.Error(args, p.returncode, out, err)
> E vdsm.common.cmdutils.Error: Command ['/usr/bin/systemctl',
> 'show', '--property=LoadState', 'vdsm-nbd-no-such-server-uuid.service']
> failed with rc=1 out=b'' err=b"System has not been booted with systemd as
> init system (PID 1). Can't operate.\nFailed to connect to bus: Host is
> down\n"
>
>
> This fails because we have systemd in the container, but we did not
> start the container in the right way to make it happy.
>
> I'm not sure why the test was not skipped, probably a bug in the skip
> condition.
These are the tests where TRAVIS_CI=1 is needed.
But these are not storage tests. Do storage tests pass for you?
I always get the following error:
=====================================================================================
FAILURES
======================================================================================
_______________________________________________________________________
test_write_file_direct_true_unaligned
_______________________________________________________________________
oop_cleanup = None, tmpdir =
local('/var/tmp/vdsm/test_write_file_direct_true_un0')
def test_write_file_direct_true_unaligned(oop_cleanup, tmpdir):
iop = oop.getProcessPool("test")
path = str(tmpdir.join("file"))
with pytest.raises(OSError) as e:
> iop.writeFile(path, b"1\n2\n3\n", direct=True)
E Failed: DID NOT RAISE <class 'OSError'>
storage/outofprocess_test.py:1011: Failed
Any idea what could be wrong?
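One way to narrow this down would be to probe whether the filesystem
backing /var/tmp in the container enforces O_DIRECT alignment at all;
tmpfs and overlay mounts often do not behave like a regular block
filesystem here, which could explain the DID NOT RAISE failure. A
hedged sketch (not vdsm code, function name is made up):

```python
import errno
import os
import tempfile

def unaligned_direct_write_fails(dirpath):
    # Probe whether an unaligned O_DIRECT write raises OSError on the
    # filesystem behind dirpath. On ext4/xfs it should; on tmpfs or
    # some overlay setups it may not. Sketch only, not vdsm code.
    if not hasattr(os, "O_DIRECT"):
        return False  # platform without O_DIRECT
    fd, path = tempfile.mkstemp(dir=dirpath)
    os.close(fd)
    try:
        try:
            fd = os.open(path, os.O_WRONLY | os.O_DIRECT)
        except OSError:
            return True  # O_DIRECT rejected outright at open time
        try:
            os.write(fd, b"1\n2\n3\n")  # unaligned length and buffer
        except OSError as e:
            return e.errno == errno.EINVAL
        finally:
            os.close(fd)
        return False
    finally:
        os.unlink(path)
```

Running this against the directory the test uses inside the container
versus on the host should show whether the filesystem, rather than the
oop code, is what changed behavior.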
And in storage/qemuimg_test.py, the whole test suite always crashes on a
timeout in convert_to_qcow2, hiding the previous failures:
storage/qemuimg_test.py
.................................xxxxxxxxxxxxxxxxxxxxxxxxxxxx................................................F..F..
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Timeout
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Stack
of MainThread (139916756958080)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File "/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/bin/pytest", line 8, in
<module>
sys.exit(console_main())
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/config/__init__.py",
line 185, in console_main
code = main()
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/config/__init__.py",
line 163, in main
config=config
File "/usr/local/lib/python3.6/site-packages/pluggy/_hooks.py", line 265,
in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs,
firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_manager.py", line 80,
in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_callers.py", line 39,
in _multicall
res = hook_impl.function(*args)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/main.py",
line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/main.py",
line 269, in wrap_session
session.exitstatus = doit(config, session) or 0
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/main.py",
line 323, in _main
config.hook.pytest_runtestloop(session=session)
File "/usr/local/lib/python3.6/site-packages/pluggy/_hooks.py", line 265,
in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs,
firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_manager.py", line 80,
in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_callers.py", line 39,
in _multicall
res = hook_impl.function(*args)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/main.py",
line 348, in pytest_runtestloop
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
File "/usr/local/lib/python3.6/site-packages/pluggy/_hooks.py", line 265,
in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs,
firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_manager.py", line 80,
in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_callers.py", line 39,
in _multicall
res = hook_impl.function(*args)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 109, in pytest_runtest_protocol
runtestprotocol(item, nextitem=nextitem)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 126, in runtestprotocol
reports.append(call_and_report(item, "call", log))
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 215, in call_and_report
call = call_runtest_hook(item, when, **kwds)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 255, in call_runtest_hook
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 311, in from_call
result: Optional[TResult] = func()
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 255, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File "/usr/local/lib/python3.6/site-packages/pluggy/_hooks.py", line 265,
in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs,
firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_manager.py", line 80,
in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_callers.py", line 39,
in _multicall
res = hook_impl.function(*args)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/runner.py",
line 162, in pytest_runtest_call
item.runtest()
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/python.py",
line 1641, in runtest
self.ihook.pytest_pyfunc_call(pyfuncitem=self)
File "/usr/local/lib/python3.6/site-packages/pluggy/_hooks.py", line 265,
in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs,
firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_manager.py", line 80,
in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/usr/local/lib/python3.6/site-packages/pluggy/_callers.py", line 39,
in _multicall
res = hook_impl.function(*args)
File
"/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/lib/python3.6/site-packages/_pytest/python.py",
line 183, in pytest_pyfunc_call
result = testfunction(**testargs)
File "/home/pdm/ovirt/vdsm/vdsm-work/tests/storage/qemuimg_test.py", line
1481, in test_empty
self.check_measure(filename, compat, format, compressed)
File "/home/pdm/ovirt/vdsm/vdsm-work/tests/storage/qemuimg_test.py", line
1615, in check_measure
actual_size = converted_size(filename, compat=compat)
File "/home/pdm/ovirt/vdsm/vdsm-work/tests/storage/qemuimg_test.py", line
1815, in converted_size
converted = convert_to_qcow2(filename, compat=compat)
File "/home/pdm/ovirt/vdsm/vdsm-work/tests/storage/qemuimg_test.py", line
1827, in convert_to_qcow2
convert_cmd.run()
File "/home/pdm/ovirt/vdsm/vdsm-work/lib/vdsm/storage/qemuimg.py", line
356, in run
for data in self._operation.watch():
File "/home/pdm/ovirt/vdsm/vdsm-work/lib/vdsm/storage/operation.py", line
101, in watch
for src, data in cmdutils.receive(self._proc):
File "/home/pdm/ovirt/vdsm/vdsm-work/lib/vdsm/common/cmdutils.py", line
234, in receive
ready = poller.poll(remaining_msec)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Timeout
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PROFILE {"command": ["pytest", "-m", "not (integration or slow or stress)",
"--durations=20", "--cov=vdsm.storage", "--cov-report=html:htmlcov-storage",
"--cov-fail-under=68", "storage/qemuimg_test.py"], "cpu": 14.761214244462916,
"elapsed": 44.645256757736206, "idrss": 0, "inblock": 288, "isrss": 0, "ixrss":
0, "majflt": 2, "maxrss": 50532, "minflt": 540141, "msgrcv": 0, "msgsnd": 0,
"name": "storage", "nivcsw": 174, "nsignals": 0, "nswap": 0, "nvcsw": 283100,
"oublock": 144784, "start": 1639746712.4788334, "status": 1, "stime": 3.146725,
"utime": 3.443457}
ERROR: InvocationError for command
/home/pdm/ovirt/vdsm/vdsm-work/.tox/storage/bin/python profile storage pytest
-m 'not (integration or slow or stress)' --durations=20 --cov=vdsm.storage
--cov-report=html:htmlcov-storage --cov-fail-under=68 storage/qemuimg_test.py
(exited with code 1)
______________________________________________________________________________________
summary
______________________________________________________________________________________
ERROR: storage: commands failed
_______________________________________________
Devel mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/[email protected]/message/5Z5OR2BYEBA6TAM2BSLQIHGX5FHSVJWK/