On Mon, Sep 24, 2012 at 12:51:39PM +0200, Michael Stapelberg wrote:
> Package: systemd
> Version: 44-4
> Severity: important
> 
> 
> Hi,
> 
> I noticed that libvirt virtual machines are placed in the user’s session
> by default, so I installed a proper libvirtd.service:
> 
> $ cat <<EOF >/etc/systemd/system/libvirtd.service
> # NB we don't use socket activation. When libvirtd starts it will
> # spawn any virtual machines registered for autostart. We want this
> # to occur on every boot, regardless of whether any client connects
> # to a socket. Thus socket activation doesn't have any benefit
> 
> [Unit]
> Description=Virtualization daemon
> After=udev.target
> After=avahi.target
> After=dbus.target
> Before=libvirt-guests.service
> 
> [Service]
> KillMode=process
> ExecStart=/usr/sbin/libvirtd
> ExecReload=/bin/kill -HUP $MAINPID
> # Override the maximum number of opened files
> #LimitNOFILE=2048
> 
> [Install]
> WantedBy=multi-user.target
> EOF
> 
> $ systemctl stop libvirt-bin.service
> $ systemctl stop libvirt-guests.service
> $ killall libvirtd
> $ systemctl mask libvirt-bin.service
> $ systemctl mask libvirt-guests.service
> $ systemctl daemon-reload
> $ systemctl enable libvirtd.service
> $ systemctl start libvirtd.service
> 
> Now, when I create a VM, it will be placed in the cgroup of
> libvirtd.service, as expected:
> 
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: active (running) since Mon, 24 Sep 2012 12:25:14 +0200; 1s ago
>         Main PID: 24130 (libvirtd)
>           CGroup: name=systemd:/system/libvirtd.service
>                   └ 24130 /usr/sbin/libvirtd
> 
> $ virsh -c qemu:///system create /etc/libvirt/qemu/test2.xml
> Domain test2 created from /etc/libvirt/qemu/test2.xml
> 
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: active (running) since Mon, 24 Sep 2012 12:25:14 +0200; 33s ago
>         Main PID: 24130 (libvirtd)
>           CGroup: name=systemd:/system/libvirtd.service
>                   ├ 24130 /usr/sbin/libvirtd
>                   └ 24253 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name test2 -...
> 
> However, when I stop and then start libvirtd.service, the VMs are killed
> because systemd cleans up the cgroup on start:
> 
> $ systemctl stop libvirtd.service
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: inactive (dead) since Mon, 24 Sep 2012 12:26:54 +0200; 700ms ago
>          Process: 24326 ExecStart=/usr/sbin/libvirtd (code=exited, status=0/SUCCESS)
>           CGroup: name=systemd:/system/libvirtd.service
>                   └ 24456 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name test2 -...
> 
> $ ps auxf | grep 24456
> root     24492  20   0  0.0  0.0          |           \_ grep 24456
> 121      24456  20   0  2.0  0.1 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name test2 -uuid 7ed8b939-8e56-3083-c63a-8b7a6ba15182 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
> $ systemctl start libvirtd.service
> $ ps auxf | grep 24456
> root     24610  20   0  0.0  0.0          |           \_ grep 24456
> 
> See also https://bugzilla.redhat.com/show_bug.cgi?id=805942. From the
> Fedora packaging, I extracted their patch for it and applied it to our
> package (find it attached). With that patch, the problem no longer
> occurs:
> 
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: active (running) since Mon, 24 Sep 2012 12:27:03 +0200; 11min ago
>         Main PID: 24502 (libvirtd)
>           CGroup: name=systemd:/system/libvirtd.service
>                   └ 24502 /usr/sbin/libvirtd
> 
> $ virsh -c qemu:///system create /etc/libvirt/qemu/test2.xml
> Domain test2 created from /etc/libvirt/qemu/test2.xml
> 
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: active (running) since Mon, 24 Sep 2012 12:27:03 +0200; 11min ago
>         Main PID: 24502 (libvirtd)
>           CGroup: name=systemd:/system/libvirtd.service
>                   ├ 19256 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name test2 -...
>                   └ 24502 /usr/sbin/libvirtd
> 
> $ systemctl stop libvirtd.service
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: inactive (dead) since Mon, 24 Sep 2012 12:38:17 +0200; 916ms ago
>         Main PID: 24502 (code=exited, status=0/SUCCESS)
>           CGroup: name=systemd:/system/libvirtd.service
>                   └ 19256 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name test2 -...
> 
> $ systemctl status libvirtd.service
> libvirtd.service - Virtualization daemon
>           Loaded: loaded (/etc/systemd/system/libvirtd.service; enabled)
>           Active: active (running) since Mon, 24 Sep 2012 12:38:21 +0200; 2s ago
>         Main PID: 19293 (libvirtd)
>           CGroup: name=systemd:/system/libvirtd.service
>                   ├ 19256 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name test2 -...
>                   └ 19293 /usr/sbin/libvirtd
> 
> 
> I have been running systemd with this patch for over a month now and
> have not found any negative side effects. I suggest we apply the patch
> and request a freeze exception.
> 
> I’ve set the severity to important because running into this issue is
> very unpleasant on production systems and might lead to severe data
> loss.
> 
> Thanks!
> 

> Description: disable killing on entering START_PRE, START
>  The killing worked fine with the added "control" sub-cgroup, but that
>  brought other problems:
>  https://bugzilla.redhat.com/show_bug.cgi?id=816842
>  The "control" sub-cgroup had to be removed. In order not to reintroduce
>  https://bugzilla.redhat.com/show_bug.cgi?id=805942, comment out the
>  killing for F17 GA. Hopefully we'll get a proper fix later.
>  
>  Almost a revert of commit 8f53a7b8ea9ba505f8fefe4df4aaa5a8aab1e2eb
>  "service: brutally slaughter processes that are running in the cgroup
>  when we enter START_PRE and START"
> Author: Michal Schmidt <mschm...@redhat.com>
> Origin: vendor
> Bug: https://bugzilla.redhat.com/show_bug.cgi?id=816842
> Forwarded: not-needed
> Reviewed-by: Michael Stapelberg <stapelb...@debian.org>
> Last-Update: 2012-09-24
> ---
> This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
> Index: systemd-44/src/service.c
> ===================================================================
> --- systemd-44.orig/src/service.c     2012-03-12 21:49:16.000000000 +0100
> +++ systemd-44/src/service.c  2012-09-24 12:30:27.238541632 +0200
> @@ -2094,7 +2094,8 @@
>          /* We want to ensure that nobody leaks processes from
>           * START_PRE here, so let's go on a killing spree, People
>           * should not spawn long running processes from START_PRE. */
> -        cgroup_bonding_kill_list(UNIT(s)->cgroup_bondings, SIGKILL, true, NULL);
> +        // F17, bz816842, bz805942
> +        //cgroup_bonding_kill_list(UNIT(s)->cgroup_bondings, SIGKILL, true, NULL);
>  
>          if (s->type == SERVICE_FORKING) {
>                  s->control_command_id = SERVICE_EXEC_START;
> @@ -2168,7 +2169,8 @@
>  
>                  /* Before we start anything, let's clear up what might
>                   * be left from previous runs. */
> -                cgroup_bonding_kill_list(UNIT(s)->cgroup_bondings, SIGKILL, true, NULL);
> +                // F17, bz816842, bz805942
> +                //cgroup_bonding_kill_list(UNIT(s)->cgroup_bondings, SIGKILL, true, NULL);
>  
>                  s->control_command_id = SERVICE_EXEC_START_PRE;
>  

It seems this is the fix that went into upstream git to resolve this:

        http://cgit.freedesktop.org/systemd/systemd/commit/?id=ecedd90fcdf647f9a7b56b4934b65e30b2979b04

It would be great to have this fixed: newer libvirt ships service
files by default, and so currently all VMs get killed during a package
upgrade, which doesn't happen with sysvinit.
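As a side note, one can double-check (independently of systemctl) whether a
kvm process survived a daemon restart in its cgroup by reading
/proc/<pid>/cgroup directly. A minimal sketch in Python; the helper name is
my own and purely illustrative:

```python
# Sketch: list the cgroup memberships of a process by reading
# /proc/<pid>/cgroup -- the same data "systemctl status" shows in
# its "CGroup:" line. Works on any Linux with /proc mounted.
def cgroups_of(pid):
    entries = []
    with open("/proc/%d/cgroup" % pid) as f:
        for line in f:
            # Each line has the form "<hierarchy-id>:<controllers>:<path>".
            hier, controllers, path = line.rstrip("\n").split(":", 2)
            entries.append((hier, controllers, path))
    return entries

if __name__ == "__main__":
    import os
    # Inspect the current process as a demonstration; substitute the
    # PID of a kvm process (e.g. from pgrep) to check a guest.
    for hier, controllers, path in cgroups_of(os.getpid()):
        print("%s:%s:%s" % (hier, controllers, path))
```

A guest that stayed put will still show a path under the libvirtd service
cgroup after a restart; with the unpatched systemd the process is simply
gone because it was killed on service start.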
Cheers,
 -- Guido

> 
> 
> -- 
> Best regards,
> Michael

