** Description changed:

  The "radosgw-admin bucket limit check" command has a bug in octopus.
  
  Since we do not clear the bucket list in RGWRadosUser::list_buckets()
  before asking for the next "max_entries", new results are appended to
  the existing list and we end up counting the earlier entries again.
  This causes duplicated entries in the output of "radosgw-admin bucket
  limit check".
  
  This bug is triggered if bucket count exceeds 1000 (default
  max_entries).
  
  ------
  
  $ dpkg -l | grep ceph
  ii ceph 15.2.12-0ubuntu0.20.04.1 amd64 distributed storage and file system
  ii ceph-base 15.2.12-0ubuntu0.20.04.1 amd64 common ceph daemon libraries and management tools
  ii ceph-common 15.2.12-0ubuntu0.20.04.1 amd64 common utilities to mount and interact with a ceph storage cluster
  ii ceph-mds 15.2.12-0ubuntu0.20.04.1 amd64 metadata server for the ceph distributed file system
  ii ceph-mgr 15.2.12-0ubuntu0.20.04.1 amd64 manager for the ceph distributed file system
  ii ceph-mgr-modules-core 15.2.12-0ubuntu0.20.04.1 all ceph manager modules which are always enabled
  ii ceph-mon 15.2.12-0ubuntu0.20.04.1 amd64 monitor server for the ceph storage system
  ii ceph-osd 15.2.12-0ubuntu0.20.04.1 amd64 OSD server for the ceph storage system
  ii libcephfs2 15.2.12-0ubuntu0.20.04.1 amd64 Ceph distributed file system client library
  ii python3-ceph-argparse 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 utility libraries for Ceph CLI
  ii python3-ceph-common 15.2.12-0ubuntu0.20.04.1 all Python 3 utility libraries for Ceph
  ii python3-cephfs 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 libraries for the Ceph libcephfs library
  
  $ sudo radosgw-admin bucket list | jq .[] | wc -l
  5572
  $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
  20572
  $ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | select(.bucket=="bucket_1095")'
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
  
  ------------------------------------------------------------------------------
  
  Fix proposed through https://github.com/ceph/ceph/pull/43381
  
  diff --git a/src/rgw/rgw_sal.cc b/src/rgw/rgw_sal.cc
  index 2b7a313ed91..65880a4757f 100644
  --- a/src/rgw/rgw_sal.cc
  +++ b/src/rgw/rgw_sal.cc
  @@ -35,6 +35,7 @@ int RGWRadosUser::list_buckets(const string& marker, const string& end_marker,
     RGWUserBuckets ulist;
     bool is_truncated = false;
     int ret;
  +  buckets.clear();
  
     ret = store->ctl()->user->list_buckets(info.user_id, marker, end_marker, max,
                                           need_stats, &ulist, &is_truncated);
  
  ------------------------------------------------------------------------------
  
  Tested and verified that the fix works:
  
  $ sudo dpkg -l | grep ceph
  ii ceph 15.2.14-0ubuntu0.20.04.3 amd64 distributed storage and file system
  ii ceph-base 15.2.14-0ubuntu0.20.04.3 amd64 common ceph daemon libraries and management tools
  ii ceph-common 15.2.14-0ubuntu0.20.04.3 amd64 common utilities to mount and interact with a ceph storage cluster
  ii ceph-mds 15.2.14-0ubuntu0.20.04.3 amd64 metadata server for the ceph distributed file system
  ii ceph-mgr 15.2.14-0ubuntu0.20.04.3 amd64 manager for the ceph distributed file system
  ii ceph-mgr-modules-core 15.2.14-0ubuntu0.20.04.3 all ceph manager modules which are always enabled
  ii ceph-mon 15.2.14-0ubuntu0.20.04.3 amd64 monitor server for the ceph storage system
  ii ceph-osd 15.2.14-0ubuntu0.20.04.3 amd64 OSD server for the ceph storage system
  ii libcephfs2 15.2.14-0ubuntu0.20.04.3 amd64 Ceph distributed file system client library
  ii python3-ceph-argparse 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 utility libraries for Ceph CLI
  ii python3-ceph-common 15.2.14-0ubuntu0.20.04.3 all Python 3 utility libraries for Ceph
  ii python3-cephfs 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 libraries for the Ceph libcephfs library
  ubuntu@crush-ceph-rgw01:~$ sudo apt-cache policy ceph
  ceph:
  Installed: 15.2.14-0ubuntu0.20.04.3
  Candidate: 15.2.14-0ubuntu0.20.04.3
  
  $ sudo radosgw-admin bucket list | jq .[] | wc -l
  5572
  $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
  5572
  $ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | select(.bucket=="bucket_1095")'
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
+ 
+ ----------
+ 
+ [Impact]
+ 
+ Duplicated bucket name entries appear in customers' output when they
+ script the `radosgw-admin bucket limit check` command.
+ 
+ To reproduce:
+ 
+ Create more than 1000 buckets (the default value of max_entries) in a
+ cluster, then run 'radosgw-admin bucket limit check'.
+ 
+ Duplicated entries are seen in the output on Octopus. For example,
+ 
+ $ sudo radosgw-admin bucket list | jq .[] | wc -l
+ 5572
+ 
+ $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
+ 20572
+ 
+ [Test case]
+ 
+ Create more than 1000 buckets in a cluster, then run the 'radosgw-admin
+ bucket limit check' command. There should be no duplicated entries in
+ the output. Below is the correct output, where the two counts match.
+ 
+ $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
+ 5572
+ 
+ $ sudo radosgw-admin bucket list | jq .[] | wc -l
+ 5572
+ 
+ [Where problems could occur]
+ 
+ The fix only adds a buckets.clear() call at the start of
+ RGWRadosUser::list_buckets(), so problems could occur if any caller
+ relied on the previous behaviour of accumulating results across calls.
+ Without the fix, the duplicate entries could cause admins or scripts to
+ assume that there are more buckets than there really are.
+ 
+ [Other Info]
+ - The patch was provided by Nikhil Kshirsagar (attached here)
+ - Upstream tracker: https://tracker.ceph.com/issues/52813
+ - Upstream PR: https://github.com/ceph/ceph/pull/43381
+ - Patched into Octopus upstream release.

** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions

