** Description changed:

  [Impact]
- [Test Case]
- I have a test Ceph cluster as an object storage with both Swift and S3 protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an image upload completes quickly enough. But with S3 backend Glance, it takes much more time to upload an image and it seems to rise exponentially.
+
+ Glance with the S3 backend cannot accept image uploads in a realistic
+ time frame. For example, a 1GB image upload takes ~60 minutes, although
+ other backends such as Swift can complete it within 10 seconds.
+
+ [Test Plan]
+
+ 1. Deploy a partial OpenStack with multiple Glance backends including S3
+    (the zaza test bundles can be used with "ceph", which will set up the "rbd", "swift", and "s3" backends - https://opendev.org/openstack/charm-glance/src/branch/master/tests/tests.yaml)
+ 2. Upload multiple images with a variety of sizes
+ 3. Confirm that the image upload durations are shorter in general after applying the updated package
+    (the expected duration for a 1GB image drops from ~60 minutes to 1-3 minutes)
+
+ for backend in ceph swift s3; do
+     echo "[$backend]"
+     for i in {0,3,5,9,10,128,512,1024}; do
+         dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
+         echo "${i}MiB"
+         time glance image-create \
+             --store $backend \
+             --file my-image.img --name "my-image-${backend}-${i}MiB" \
+             --disk-format raw --container-format bare \
+             --progress
+     done
+ done
+
+ [Where problems could occur]
+
+ Since we bump WRITE_CHUNKSIZE from 64KiB to 5MiB, there might be a case
+ where image uploads fail if the size of the image is less than
+ WRITE_CHUNKSIZE, or there might be unexpected latency in the worst-case
+ scenario. We will try to address these concerns by testing multiple
+ image uploads at multiple sizes, including the following corner cases:
+ - 0 - zero
+ - 3MiB - less than the new WRITE_CHUNKSIZE (5MiB)
+ - 5MiB - exactly the same as the new WRITE_CHUNKSIZE (5MiB)
+ - 9MiB - bigger than the new WRITE_CHUNKSIZE (5MiB) but less than twice it
+ - 10MiB - exactly twice the new WRITE_CHUNKSIZE (5MiB)
+ - 128MiB, 512MiB, 1024MiB - some large images
+
+ ====
+
+ I have a test Ceph cluster as object storage with both Swift and S3
+ protocols enabled for Glance (Ussuri). When I use the Swift backend with
+ Glance, an image upload completes quickly enough. But with the S3
+ backend, it takes much more time to upload an image and the time seems
+ to rise exponentially.

It's worth noting that when uploading an image with the S3 backend, a
single core is consumed 100% by the glance-api process.

-
- for backend in swift s3; do
-     for i in {8,16,32,64,128,512}; do
-         dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
-         time glance image-create \
-             --store $backend \
-             --file my-image.img --name my-image \
-             --disk-format raw --container-format bare \
-             --progress
-     done
- done

[swift]
8MB - 2.4s
16MB - 2.8s
32MB - 2.6s
64MB - 2.7s
128MB - 3.1s
...
512MB - 5.9s

[s3]
8MB - 2.2s
16MB - 2.9s
32MB - 5.5s
64MB - 16.3s
128MB - 54.9s
...
512MB - 14m26s

By the way, downloading a 512MB image with the S3 backend completes in
less than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real 0m5.673s

$ du -h downloaded.img
512M downloaded.img

[/etc/glance/glance-api.conf]
enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

- ProblemType: Bug
- DistroRelease: Ubuntu 20.04
+ ProblemType: Bug
+ DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
- NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag nft_masq nft_chain_nat bridge stp llc vhost_vsock vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul cirrus ghash_clmulni_intel drm_kms_helper virtio_net syscopyarea aesni_intel sysfillrect sysimgblt fb_sys_fops crypto_simd cryptd drm virtio_blk glue_helper net_failover psmouse failover floppy i2c_piix4 pata_acpi
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul 7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
- SHELL=/bin/bash
- SourcePackage: python-glance-store
+ SHELL=/bin/bash
+ SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)
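The [Where problems could occur] section above notes that WRITE_CHUNKSIZE is
being bumped from 64KiB to 5MiB. One plausible explanation for the
super-linear upload times and the single core pegged at 100% is that the
store accumulates the incoming image in memory by repeatedly concatenating
fixed-size chunks, which costs a full copy of the buffer on every chunk; a
larger chunk size cuts the number of copies (and the bytes copied) by
roughly 80x. The standalone sketch below is NOT glance_store code - it only
illustrates that effect under the stated assumption, timing both chunk
sizes against a linear BytesIO baseline (the totals, function names, and
chunk handling are made up for demonstration).

#!/usr/bin/env python3
# Illustrative micro-benchmark only; this is NOT the glance_store S3 driver.
# It times in-memory accumulation of an "image" when the data arrives in
# 64KiB chunks versus 5MiB chunks, assuming the store concatenates chunks
# into an immutable bytes buffer (quadratic) versus writing into BytesIO
# (linear).
import io
import time

MiB = 1024 * 1024
TOTAL = 64 * MiB        # kept small so the demo finishes quickly
OLD_CHUNK = 64 * 1024   # old WRITE_CHUNKSIZE (64KiB)
NEW_CHUNK = 5 * MiB     # new WRITE_CHUNKSIZE (5MiB)

def accumulate_by_concat(total, chunk_size):
    """bytes += chunk re-copies the whole buffer on every iteration."""
    chunk = b"\x00" * chunk_size
    data = b""
    while len(data) < total:
        data += chunk
    return len(data)

def accumulate_by_bytesio(total, chunk_size):
    """Linear-time baseline: append into an in-memory file object."""
    chunk = b"\x00" * chunk_size
    buf = io.BytesIO()
    while buf.tell() < total:
        buf.write(chunk)
    return buf.tell()

def timed(label, func, total, chunk_size):
    start = time.perf_counter()
    size = func(total, chunk_size)
    print("%-32s %4d MiB in %6.2fs" % (label, size // MiB, time.perf_counter() - start))

if __name__ == "__main__":
    timed("bytes += with 64KiB chunks", accumulate_by_concat, TOTAL, OLD_CHUNK)
    timed("bytes += with 5MiB chunks", accumulate_by_concat, TOTAL, NEW_CHUNK)
    timed("BytesIO.write, 64KiB chunks", accumulate_by_bytesio, TOTAL, OLD_CHUNK)

As a side note, 5MiB also happens to be the minimum part size S3 accepts
for multipart uploads, which may be one reason that value was chosen and
why the test plan exercises images below, at, and just above 5MiB.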
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934849

Title:
  s3 backend takes time exponentially

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1934849/+subscriptions