** Description changed:

- On linux-image-2.6.38-11-generic, destroying a container causes a kernel
- OOPS and an immediate reboot. This is totally repeatable. This is on
- XUbuntu (but I doubt that makes any difference as we've done it on a
- headless Ubuntu server too).
+ On linux-image-2.6.38-11-generic and linux-image-3.0.0-10-server,
+ destroying a container causes a kernel OOPS and hang. This is totally
+ repeatable.
  
  Procedure to repeat:
-   lxc-create -n foo
-   lxc-start -n foo
- Press ^C
  
- This happens with containers created otherwise than using lxc, so it is
- not a bug in lxc.
+ Use the attached perl program.
  
- The oops is in general not possible to catch as the reboot is immediate.
- However, I have attached an Oops from a marginally different kernel
- (2.6.38-10-server on Lucid) which is created in a different way, but
- Oopses at the same time and I believe is the same bug.
+ The perl program:
+ a) sets up a veth device
+ b) forks
+ c) does clone(CLONE_NEWNET) on the child
+ d) moves one end of the veth device into the child's network namespace
+ e) pings between the parent and the child and runs conntrack -L
+ f) kills the child after a while.
+ 
+ [NB: this section used to mention lxc - this is a red herring caused by
+ some surprising semantics of lxc, and in fact is nothing to do with the
+ bug]
+ 
+ The oops is in general not possible to capture except via the console,
+ as the reboot/hang is immediate. However, I have attached an Oops from
+ a marginally different kernel (2.6.38-10-server on Lucid) which is
+ triggered in a marginally different way, but has the same call stack.
  
  Bug information as required
  
  1. System information.
  
  lsb_release -rd gives:
  
  Description:  Ubuntu 11.04
  Release:      11.04
  
+ or, on another machine showing the same issue:
+ 
+ $ lsb_release -rd
+ Description:  Ubuntu oneiric (development branch)
+ Release:      11.10
+ 
  2. apt-cache policy linux-image-2.6.38-11-generic
  
  linux-image-2.6.38-11-generic:
-  Installed: 2.6.38-11.49
-  Candidate: 2.6.38-11.49
-  Version table:
+  Installed: 2.6.38-11.49
+  Candidate: 2.6.38-11.49
+  Version table:
  *** 2.6.38-11.49 0
-        500 http://gb.archive.ubuntu.com/ubuntu/ natty-proposed/main amd64 Packages
-        100 /var/lib/dpkg/status
-     2.6.38-11.48 0
-        500 http://gb.archive.ubuntu.com/ubuntu/ natty-updates/main amd64 Packages
-        500 http://security.ubuntu.com/ubuntu/ natty-security/main amd64 Packages
+        500 http://gb.archive.ubuntu.com/ubuntu/ natty-proposed/main amd64 Packages
+        100 /var/lib/dpkg/status
+     2.6.38-11.48 0
+        500 http://gb.archive.ubuntu.com/ubuntu/ natty-updates/main amd64 Packages
+        500 http://security.ubuntu.com/ubuntu/ natty-security/main amd64 Packages
+ 
+ or on the second machine:
+ 
+ $ apt-cache policy linux-image-3.0.0-10-server
+ linux-image-3.0.0-10-server:
+   Installed: 3.0.0-10.16
+   Candidate: 3.0.0-10.16
+   Version table:
+  *** 3.0.0-10.16 0
+         500 http://gb.archive.ubuntu.com/ubuntu/ oneiric/main amd64 Packages
+         100 /var/lib/dpkg/status
+ 
  
  3) What I expected to happen:
  
- Container deleted, command prompt returns.
+ The test program continues to run, showing ICMP traffic flowing periodically.
  
  4) What actually happened:
  
- Immediate machine reboot, all data lost
+ Kernel hang within 10-20 seconds, Oops on the console, data lost.
  
  5) We currently do not believe this to be a security vulnerability, as
  containers cannot be created by non-root users.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/843892

Title:
  Repeatable kernel oops on container delete

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-lts-backport-natty/+bug/843892/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
