DieterDP-ng commented on code in PR #6506:
URL: https://github.com/apache/hbase/pull/6506#discussion_r2026487501
##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java:
##########
@@ -420,7 +421,8 @@ public void registerBulkLoad(TableName tableName, byte[] region,
    * @param rows the row keys of the entries to be deleted
    */
   public void deleteBulkLoadedRows(List<byte[]> rows) throws IOException {
-    try (BufferedMutator bufferedMutator = connection.getBufferedMutator(bulkLoadTableName)) {
+    try (BufferedMutator bufferedMutator = connection
+      .getBufferedMutator(new BufferedMutatorParams(bulkLoadTableName).setMaxMutations(1000))) {
Review Comment:
This pointer led me to conclude that specifying batching here isn't needed at all, because it is already in place.
Our method uses `Connection#getBufferedMutator`, for which the only non-delegating implementation is `ConnectionOverAsyncConnection#getBufferedMutator`. That one calls `AsyncConnectionImpl#getBufferedMutatorBuilder(TableName)`, which in turn uses an `AsyncConnectionConfiguration` object that has already resolved a batching value:
```
AsyncConnectionConfiguration(Configuration conf) {
  ...
  this.bufferedMutatorMaxMutations =
    conf.getInt(BUFFERED_MUTATOR_MAX_MUTATIONS_KEY,
      conf.getInt(HConstants.BATCH_ROWS_THRESHOLD_NAME,
        BUFFERED_MUTATOR_MAX_MUTATIONS_DEFAULT));
}
```
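The fallback chain in that constructor can be sketched in isolation. This is only an illustration, not HBase code: a plain `Map` stands in for Hadoop's `Configuration`, and the key strings and default value below are assumptions chosen for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the nested conf.getInt(...) fallback quoted above.
// A Map plays the role of Configuration; the key names and default are
// hypothetical stand-ins, not values copied from HBase.
public class MaxMutationsFallback {

  static final String BUFFERED_MUTATOR_MAX_MUTATIONS_KEY =
      "hbase.client.bufferedmutator.maxmutations"; // assumed key name
  static final String BATCH_ROWS_THRESHOLD_NAME =
      "hbase.rpc.rows.warning.threshold"; // assumed key name
  static final int BUFFERED_MUTATOR_MAX_MUTATIONS_DEFAULT = 5000; // assumed default

  // The dedicated buffered-mutator key wins; otherwise the generic batch-rows
  // threshold applies; otherwise the compiled-in default is used.
  static int resolveMaxMutations(Map<String, Integer> conf) {
    return conf.getOrDefault(BUFFERED_MUTATOR_MAX_MUTATIONS_KEY,
        conf.getOrDefault(BATCH_ROWS_THRESHOLD_NAME,
            BUFFERED_MUTATOR_MAX_MUTATIONS_DEFAULT));
  }

  public static void main(String[] args) {
    Map<String, Integer> conf = new HashMap<>();
    System.out.println(resolveMaxMutations(conf)); // no keys set: default applies
    conf.put(BATCH_ROWS_THRESHOLD_NAME, 1000);
    System.out.println(resolveMaxMutations(conf)); // picked up from the fallback key
  }
}
```

The point of the sketch is the precedence order: a caller that passes plain `getBufferedMutator(tableName)` still gets a batching limit, because every step of the chain yields a value.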
So I'll remove my change that introduced the batch size.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]