This appears to be a bug in Ubuntu, not Kubernetes. Kube-proxy is
responsible for managing these rules.

The rules look fine from inside the kube-proxy container, even though the
same rules show `[unsupported revision]` on the host.


On the host:

```
root@docker1:~# iptables-save | grep AAAREDACTED1
:KUBE-SEP-AAAREDACTED1 - [0:0]
-A KUBE-SEP-AAAREDACTED1 -s 10.99.99.190/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-AAAREDACTED1 -p tcp -m tcp -j DNAT [unsupported revision]
-A KUBE-SVC-123REDACTEDABC -j KUBE-SEP-AAAREDACTED1
root@docker1:/#
```

Inside the container:

```
root@docker1:~# docker exec -it kube-proxy bash
root@docker1:/# iptables-save | grep AAAREDACTED1
:KUBE-SEP-AAAREDACTED1 - [0:0]
-A KUBE-SEP-AAAREDACTED1 -s 10.99.99.190/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-AAAREDACTED1 -p tcp -m tcp -j DNAT --to-destination 10.99.99.190:24231
-A KUBE-SVC-123REDACTEDABC -j KUBE-SEP-AAAREDACTED1
root@docker1:/# 
```
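
A quick way to confirm the mismatch is to compare the iptables version on
the host with the one bundled in the kube-proxy image (a minimal sketch; it
assumes the container is named `kube-proxy`, as in the session above):

```
# Host iptables (Ubuntu 18.04 ships 1.6.1)
iptables --version

# iptables inside the kube-proxy container (container name assumed)
docker exec kube-proxy iptables --version
```

If the container reports a newer release (1.8.x), it can install rule
revisions that the host's 1.6.1 binary does not understand, which would
match the `[unsupported revision]` output shown above.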

** Description changed:

  **What happened**:
  
- My Kubernetes 1.17 cluster (and Kubernetes 1.16 cluster) have 40-50 of
+ I'm running Kubernetes on my Ubuntu 18.04 systems. Kubernetes is
+ installed via Rancher (RKE).
+ 
+ My Kubernetes 1.17 cluster (and Kubernetes 1.16 cluster) has 40-50
  firewall rules that say `[unsupported revision]`. I am concerned this
  will cause problems:
  
  ```
  # iptables-save
  ...
  -A KUBE-SEP-AAAREDACTED1 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED1 -p tcp -m tcp -j DNAT [unsupported revision]
  -A KUBE-SEP-AAAREDACTED2 -s 10.99.99.40/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED2 -p tcp -m tcp -j DNAT [unsupported revision]
  -A KUBE-SEP-AAAREDACTED3 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED3 -p tcp -m tcp -j DNAT [unsupported revision]
  ```
  
  **What you expected to happen**:
  
  I expect these firewall rules to be valid, like so:
  
  ```
  # iptables-save
  ...
  -A KUBE-SEP-AAAREDACTED1 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED1 -p tcp -m tcp -j DNAT --to-destination 10.99.99.27:9153
  -A KUBE-SEP-AAAREDACTED2 -s 10.99.99.40/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED2 -p tcp -m tcp -j DNAT --to-destination 10.99.99.40:3306
  -A KUBE-SEP-AAAREDACTED3 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED3 -p tcp -m tcp -j DNAT --to-destination 10.99.99.27:53
  ```
  
  **How to reproduce it (as minimally and precisely as possible)**:
  
  1. Allocate a worker node
- 1. Install Ubuntu 18.04.5 
+ 1. Install Ubuntu 18.04.5
  2. Install Kubernetes with Canal
-   * I'm using Rancher & RKE, and I assume this happens with vanilla versions 
of Kubernetes as well.
+   * I'm using Rancher & RKE, and I assume this happens with vanilla versions 
of Kubernetes as well.
  3. Install the Ubuntu LTS Hardware Enablement (HWE) kernel via 
https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Server
  4. Reboot
  5. When the system comes back online & Docker is running, look for invalid iptables rules as shown above (see the quick check after this list).
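
  A minimal post-reboot check (a sketch using stock commands; the grep pattern simply matches the text that iptables-save prints for rules it cannot decode):

  ```
  # Confirm the HWE kernel is the one actually booted
  uname -r

  # Count rules that the host's iptables-save cannot decode
  iptables-save | grep -c '\[unsupported revision\]'
  ```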
  
  **Environment**
  
  - 18.04.5 LTS (Bionic Beaver)
  - Kernel - Linux cntest13 5.4.0-48-generic #52~18.04.1-Ubuntu SMP Thu Sep 10 12:50:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  - Kubernetes 1.17.11 - Installed via RKE
  - Canal version: rancher/calico-node:v3.13.4 (Not sure how to tell the other 
version numbers)
  - Docker version: 18.9.9
- 
  
  Happens on both bare-metal and VM systems:
  - Bare metal nodes - AMD EPYC 7452 32-Core Processor, large memory, multiple 
NICs to different networks
  - VMs on VMware vSphere
  
  Default iptables version:
  
  ```
  # iptables --version
  iptables v1.6.1
  # ls -ld `which iptables`
  lrwxrwxrwx 1 root root 13 Nov 12  2017 /sbin/iptables -> xtables-multi
  #
  ```
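
  For reference, the package that owns the host binary can be confirmed with dpkg (stock commands only; nothing bug-specific is assumed):

  ```
  # Which package ships /sbin/iptables
  dpkg -S /sbin/iptables

  # Exact installed version of that package
  dpkg -l iptables
  ```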
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: linux-generic-hwe-18.04 5.4.0.48.52~18.04.42
  ProcVersionSignature: Ubuntu 5.4.0-48.52~18.04.1-generic 5.4.60
  Uname: Linux 5.4.0-48-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.18
  Architecture: amd64
  Date: Tue Oct 13 19:19:20 2020
  ProcEnviron:
-  TERM=xterm-256color
-  PATH=(custom, no user)
-  XDG_RUNTIME_DIR=<set>
-  LANG=en_US
-  SHELL=/bin/bash
+  TERM=xterm-256color
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=<set>
+  LANG=en_US
+  SHELL=/bin/bash
  SourcePackage: linux-meta-hwe-5.4
  UpgradeStatus: No upgrade log present (probably fresh install)

** Description changed:

+ This is a follow-up to a bug that I filed against Kubernetes:
+ https://github.com/kubernetes/kubernetes/issues/95409
+ 
  **What happened**:
  
  I'm running Kubernetes on my Ubuntu 18.04 systems. Kubernetes is
  installed via Rancher (RKE).
  
  My Kubernetes 1.17 cluster (and Kubernetes 1.16 cluster) has 40-50
  firewall rules that say `[unsupported revision]`. I am concerned this
  will cause problems:
  
  ```
  # iptables-save
  ...
  -A KUBE-SEP-AAAREDACTED1 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED1 -p tcp -m tcp -j DNAT [unsupported revision]
  -A KUBE-SEP-AAAREDACTED2 -s 10.99.99.40/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED2 -p tcp -m tcp -j DNAT [unsupported revision]
  -A KUBE-SEP-AAAREDACTED3 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED3 -p tcp -m tcp -j DNAT [unsupported revision]
  ```
  
  **What you expected to happen**:
  
  I expect these firewall rules to be valid, like so:
  
  ```
  # iptables-save
  ...
  -A KUBE-SEP-AAAREDACTED1 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED1 -p tcp -m tcp -j DNAT --to-destination 10.99.99.27:9153
  -A KUBE-SEP-AAAREDACTED2 -s 10.99.99.40/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED2 -p tcp -m tcp -j DNAT --to-destination 10.99.99.40:3306
  -A KUBE-SEP-AAAREDACTED3 -s 10.99.99.27/32 -j KUBE-MARK-MASQ
  -A KUBE-SEP-AAAREDACTED3 -p tcp -m tcp -j DNAT --to-destination 10.99.99.27:53
  ```
  
  **How to reproduce it (as minimally and precisely as possible)**:
  
  1. Allocate a worker node
  1. Install Ubuntu 18.04.5
  2. Install Kubernetes with Canal
    * I'm using Rancher & RKE, and I assume this happens with vanilla versions 
of Kubernetes as well.
  3. Install the Ubuntu LTS Hardware Enablement (HWE) kernel via 
https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Server
  4. Reboot
  5. When the system comes back online & Docker is running, look for invalid 
iptables rules as shown above.
  
  **Environment**
  
  - 18.04.5 LTS (Bionic Beaver)
  - Kernel - Linux cntest13 5.4.0-48-generic #52~18.04.1-Ubuntu SMP Thu Sep 10 12:50:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  - Kubernetes 1.17.11 - Installed via RKE
  - Canal version: rancher/calico-node:v3.13.4 (Not sure how to tell the other 
version numbers)
  - Docker version: 18.9.9
  
  Happens on both bare-metal and VM systems:
  - Bare metal nodes - AMD EPYC 7452 32-Core Processor, large memory, multiple 
NICs to different networks
  - VMs on VMware vSphere
  
  Default iptables version:
  
  ```
  # iptables --version
  iptables v1.6.1
  # ls -ld `which iptables`
  lrwxrwxrwx 1 root root 13 Nov 12  2017 /sbin/iptables -> xtables-multi
  #
  ```
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: linux-generic-hwe-18.04 5.4.0.48.52~18.04.42
  ProcVersionSignature: Ubuntu 5.4.0-48.52~18.04.1-generic 5.4.60
  Uname: Linux 5.4.0-48-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.18
  Architecture: amd64
  Date: Tue Oct 13 19:19:20 2020
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=<set>
   LANG=en_US
   SHELL=/bin/bash
  SourcePackage: linux-meta-hwe-5.4
  UpgradeStatus: No upgrade log present (probably fresh install)

https://bugs.launchpad.net/bugs/1899690

Title:
  HWE Kernel causes incompatible behavior with Kubernetes
