** Description changed:

  [Impact]
  
  When an isolated network uses provider networks for tenants (that is,
  without virtual routers: no DVR or network node), metadata access
  occurs in the qdhcp ip netns rather than the qrouter netns.
  
  The following options are set in the dhcp_agent.ini file:
  force_metadata = True
  enable_isolated_metadata = True
  
  VMs on the provider tenant network are unable to access metadata
  because packets are dropped due to invalid checksums.
  
  [Test Plan]
  
  1. Create an OpenStack deployment with DPDK options enabled and 'enable-
  local-dhcp-and-metadata: true' in neutron-openvswitch. A sample, simple
  3 node bundle can be found here[1].
  
  2. Create an external flat network and subnet:
  
  openstack network show dpdk_net || \
-   openstack network create --provider-network-type flat \
-                            --provider-physical-network physnet1 dpdk_net \
-                            --external
+   openstack network create --provider-network-type flat \
+                            --provider-physical-network physnet1 dpdk_net \
+                            --external
  
  openstack subnet show dpdk_net || \
-     openstack subnet create --allocation-pool start=10.230.58.100,end=10.230.58.200 \
-                             --subnet-range 10.230.56.0/21 --dhcp --gateway 10.230.56.1 \
-                             --dns-nameserver 10.230.56.2 \
-                             --ip-version 4 --network dpdk_net dpdk_subnet
+     openstack subnet create --allocation-pool start=10.230.58.100,end=10.230.58.200 \
+                             --subnet-range 10.230.56.0/21 --dhcp --gateway 10.230.56.1 \
+                             --dns-nameserver 10.230.56.2 \
+                             --ip-version 4 --network dpdk_net dpdk_subnet
  
- 
- 3. Create an instance attached to that network. The instance must have a 
flavor that uses huge pages.
+ 3. Create an instance attached to that network. The instance must have a
+ flavor that uses huge pages.
  
  openstack flavor create --ram 8192 --disk 50 --vcpus 4 m1.dpdk
  openstack flavor set m1.dpdk --property hw:mem_page_size=large
  
  openstack server create --wait --image xenial --flavor m1.dpdk --key-
  name testkey --network dpdk_net i1
  
  4. Log into the instance host and check the instance console. The
  instance will hang during boot and show the following message:
  
  2020-11-20 09:43:26,790 - openstack.py[DEBUG]: Failed reading optional
  path http://169.254.169.254/openstack/2015-10-15/user_data due to:
  HTTPConnectionPool(host='169.254.169.254', port=80): Read timed out.
  (read timeout=10.0)
  
  5. Apply the fix on all compute nodes, restart the DHCP agents, and
  create the instance again.
  
  6. No errors should be shown and the instance should boot quickly.
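  As an extra sanity check, metadata can be fetched directly from inside the
  guest; this is just a sketch, with the URL taken from the step-4 log above
  (it only succeeds on a deployment with the fix applied):

```shell
# Run inside the guest. With the fix applied this should return JSON
# metadata instead of timing out after 10 seconds as in step 4.
curl --max-time 10 http://169.254.169.254/openstack/2015-10-15/meta_data.json
```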
  
- 
  [Where problems could occur]
  
  * This change only takes effect when datapath_type and ovs_use_veth are
  set; those settings are mostly used in DPDK environments. The core of
  the fix is to disable checksum offload on the DHCP namespace
  interfaces. This has the drawback of adding some overhead to packet
  processing for DHCP traffic, but since DHCP does not move much data,
  this should be a minor problem.
  
  * Future changes to the syntax of the ethtool command could cause
  regressions.
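  For context, the fix boils down to the DHCP agent running an ethtool
  command of roughly the following shape inside the qdhcp namespace; the
  network UUID and tap device below are placeholders, not values from this
  bug:

```shell
# Sketch only: disable rx/tx checksum offload on the DHCP port inside
# its namespace. <network-uuid> and <tap-device> are placeholders.
ip netns exec qdhcp-<network-uuid> ethtool --offload <tap-device> rx off tx off
```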
  
- 
  [Other Info]
  
-  * None
+  * None
  
  
+ QUEENS VERIFICATION DONE
+ 
+ 1 - Followed the process above and confirmed that installing the
+ package solves the problem.
+ 2 - Testing output can be seen here: https://paste.ubuntu.com/p/mM22nBsSG2/
+ 
  [1] https://gist.github.com/sombrafam/e0741138773e444960eb4aeace6e3e79

** Tags removed: verification-needed-bionic
** Tags added: verification-bionic-done

** Description changed:

  [Impact]
  
  When an isolated network uses provider networks for tenants (that is,
  without virtual routers: no DVR or network node), metadata access
  occurs in the qdhcp ip netns rather than the qrouter netns.
  
  The following options are set in the dhcp_agent.ini file:
  force_metadata = True
  enable_isolated_metadata = True
  
  VMs on the provider tenant network are unable to access metadata
  because packets are dropped due to invalid checksums.
  
  [Test Plan]
  
  1. Create an OpenStack deployment with DPDK options enabled and 'enable-
  local-dhcp-and-metadata: true' in neutron-openvswitch. A sample, simple
  3 node bundle can be found here[1].
  
  2. Create an external flat network and subnet:
  
  openstack network show dpdk_net || \
    openstack network create --provider-network-type flat \
                             --provider-physical-network physnet1 dpdk_net \
                             --external
  
  openstack subnet show dpdk_net || \
      openstack subnet create --allocation-pool start=10.230.58.100,end=10.230.58.200 \
                              --subnet-range 10.230.56.0/21 --dhcp --gateway 10.230.56.1 \
                              --dns-nameserver 10.230.56.2 \
                              --ip-version 4 --network dpdk_net dpdk_subnet
  
  3. Create an instance attached to that network. The instance must have a
  flavor that uses huge pages.
  
  openstack flavor create --ram 8192 --disk 50 --vcpus 4 m1.dpdk
  openstack flavor set m1.dpdk --property hw:mem_page_size=large
  
  openstack server create --wait --image xenial --flavor m1.dpdk --key-
  name testkey --network dpdk_net i1
  
  4. Log into the instance host and check the instance console. The
  instance will hang during boot and show the following message:
  
  2020-11-20 09:43:26,790 - openstack.py[DEBUG]: Failed reading optional
  path http://169.254.169.254/openstack/2015-10-15/user_data due to:
  HTTPConnectionPool(host='169.254.169.254', port=80): Read timed out.
  (read timeout=10.0)
  
  5. Apply the fix on all compute nodes, restart the DHCP agents, and
  create the instance again.
  
  6. No errors should be shown and the instance should boot quickly.
  
  [Where problems could occur]
  
  * This change only takes effect when datapath_type and ovs_use_veth are
  set; those settings are mostly used in DPDK environments. The core of
  the fix is to disable checksum offload on the DHCP namespace
  interfaces. This has the drawback of adding some overhead to packet
  processing for DHCP traffic, but since DHCP does not move much data,
  this should be a minor problem.
  
  * Future changes to the syntax of the ethtool command could cause
  regressions.
  
  [Other Info]
  
   * None
  
- 
- QUEENS VERIFICATION DONE
+ BIONIC VERIFICATION DONE
  
  1 - Followed the process above and confirmed that installing the
  package solves the problem.
  2 - Testing output can be seen here: https://paste.ubuntu.com/p/mM22nBsSG2/
  
  [1] https://gist.github.com/sombrafam/e0741138773e444960eb4aeace6e3e79

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1832021

Title:
  Checksum drop of metadata traffic on isolated networks with DPDK

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1832021/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
