This is an automated email from the ASF dual-hosted git repository.

aswinshakil pushed a commit to branch HDDS-10239-container-reconciliation
in repository https://gitbox.apache.org/repos/asf/ozone.git

commit 3c4b3bec36e7f8c8b2bb4793d2e1088b60fc148f
Merge: 0a53c73de2 da53b5b4d6
Author: Aswin Shakil Balasubramanian <[email protected]>
AuthorDate: Fri Jun 20 13:58:33 2025 +0530

    Merge branch 'master' of https://github.com/apache/ozone into HDDS-10239-container-reconciliation
    
    Commits: 62
    
    da53b5b4d6 HDDS-13299. Fix failures related to delete (#8665)
    8c1b439d51 HDDS-13296. Integration check always passes due to missing output (#8662)
    732985958d HDDS-13023. Container checksum is missing after container import (#8459)
    a0af93e210 HDDS-13292. Change `<? extends KeyValue>` to `<KeyValue>` in test (#8657)
    f3050cffff HDDS-13276. Use KEY_ONLY/VALUE_ONLY iterator in SCM/Datanode. (#8638)
    e9c0a45888 HDDS-13262. Simplify key name validation (#8619)
    f713e57b19 HDDS-12482. Avoid using CommonConfigurationKeys (#8647)
    b574709dd6 HDDS-12924. datanode used space calculation optimization (#8365)
    de683aad88 HDDS-13263. Refactor DB Checkpoint Utilities. (#8620)
    97262aa6d6 HDDS-13256. Updated OM Snapshot Grafana Dashboard to reflect metric updates from HDDS-13181. (#8639)
    9d2b4158e7 HDDS-13234. Expired secret key can abort leader OM startup. (#8601)
    d9049a2aea HDDS-13220. Change Recon 'Negative usedBytes' message loglevel to DEBUG (#8648)
    6df3077fe1 HDDS-9223. Use protobuf for SnapshotDiffJobCodec (#8503)
    a7fc290c20 HDDS-13236. Change Table methods not to throw IOException. (#8645)
    9958f5bff0 HDDS-13287. Upgrade commons-beanutils to 1.11.0 due to CVE-2025-48734 (#8646)
    48aefeaad0 HDDS-13277. [Docs] Native C/C++ Ozone clients (#8630)
    052d912444 HDDS-13037. Let container create command support STANDALONE, RATIS and EC containers (#8559)
    90ed60b7c4 HDDS-13279. Skip verifying Apache Ranger binaries in CI (#8633)
    9bc53b21eb HDDS-11513. All deletion configurations should be configurable without restart (#8003)
    ac511ac4ea HDDS-13259. Deletion Progress - Grafana Dashboard (#8617)
    3370f42015 HDDS-13246. Change `<? extend KeyValue>` to `<KeyValue>` in hadoop-hdds (#8631)
    7af8c44009 HDDS-11454. Ranger integration for Docker Compose environment (#8575)
    5a3e4e79c3 HDDS-13273. Bump awssdk to 2.31.63 (#8626)
    77138b884a HDDS-13254. Change table iterator to optionally read key or value. (#8621)
    ce288b6ed0 HDDS-13265. Simplify the page Access Ozone using HTTPFS REST API (#8629)
    36fe8880fb HDDS-13275. Improve CheckNative implementation (#8628)
    d38484ef31 HDDS-13274. Bump sqlite-jdbc to 3.50.1.0 (#8627)
    3f3ec43ec0 HDDS-13266. `ozone debug checknative` to show OpenSSL lib (#8623)
    8983a63374 HDDS-13272. Bump junit to 5.13.1 (#8625)
    a9271131c7 HDDS-13271. [Docs] Minor text updates, reference links. (#8624)
    7e770586bd HDDS-13112. [Docs] OM Bootstrap can also happen when follower falls behind too much. (#8600)
    fd1330072f HDDS-10775. Support bucket ownership verification (#8558)
    3ecf3450b3 HDDS-13207. [Docs] Third party systems compatible with Ozone S3. (#8584)
    ad5a507dfa HDDS-13035. SnapshotDeletingService should hold write locks while purging deleted snapshots (#8554)
    38a9186d61 HDDS-12637. Increase max buffer size for tar entry read/write (#8618)
    f31c264e38 HDDS-13045. Implement Immediate Triggering of Heartbeat when Volume Full (#8590)
    0701d6a20a HDDS-13248. Remove `ozone debug replicas verify` option --output-dir (#8612)
    ca1afe8519 HDDS-13257. Remove separate split for shell integration tests (#8616)
    5d6fe94891 HDDS-13216. Standardize Container[Replica]NotFoundException messages (#8599)
    1e472174f7 HDDS-13168. Fix error response format in CheckUploadContentTypeFilter (#8614)
    6d4d423814 HDDS-13181. Added metrics for internal Snapshot Operations. (#8606)
    4a461b2418 HDDS-10490. Intermittent NPE in TestSnapshotDiffManager#testLoadJobsOnStartUp (#8596)
    bf29f7ffb7 HDDS-13235. The equals/hashCode methods in anonymous KeyValue classes may not work. (#8607)
    6ff3ad6624 HDDS-12873. Improve ContainerData statistics synchronization. (#8305)
    09d3b2757d HDDS-13244. TestSnapshotDeletingServiceIntegrationTest should close snapshots after deleting them (#8611)
    931bc2d8a9 HDDS-13243. copy-rename-maven-plugin version is missing (#8605)
    3b5985c29c HDDS-13244. Disable TestSnapshotDeletingServiceIntegrationTest
    6bf009c202 HDDS-12927. metrics and log to indicate datanode crossing disk limits (#8573)
    752da2be72 HDDS-12760. Intermittent Timeout in testImportedContainerIsClosed (#8349)
    8c32363072 HDDS-13050. Update StartFromDockerHub.md. (#8586)
    ba1887ca9a HDDS-13241. Fix some potential resource leaks (#8602)
    bbaf71e71e HDDS-13130. Rename all instances of Disk Usage to Namespace usage (#8571)
    06283866b4 HDDS-13142. Correct SCMPerformanceMetrics for delete operation. (#8592)
    516bc9659b HDDS-13148. [Docs] Update Transparent Data Encryption doc. (#8530)
    5787135483 HDDS-13229. [Doc] Fix incorrect CLI argument order in OM upgrade docs (#8598)
    ba950741b3 HDDS-13107. Support limiting output of `ozone admin datanode list` (#8595)
    e7f554497b HDDS-13171. Replace pipelineID if nodes are changed (#8562)
    3c9d4d875f HDDS-13103. Correct transaction metrics in SCMBlockDeletingService. (#8516)
    f62eb8a466 HDDS-13160. Remove SnapshotDirectoryCleaningService and refactor AbstractDeletingService (#8547)
    b46e6b2686 HDDS-13150. Fixed SnapshotLimitCheck when failures occur. (#8532)
    203c1d35f0 HDDS-13206. Update documentation for Apache Ranger (#8583)
    2072ef09f6 HDDS-13214. populate-cache fails due to unused dependency (#8594)
    
    Conflicts:
            hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
            hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
            hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java
            hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingTask.java

 .github/workflows/ci.yml                           |    2 +-
 .../hadoop/hdds/scm/storage/BlockInputStream.java  |    8 +-
 .../hdds/scm/client/TestHddsClientUtils.java       |    1 +
 .../apache/hadoop/hdds/scm/client/ScmClient.java   |    3 +
 .../hdds/scm/container/ContainerException.java     |    9 -
 .../scm/container/ContainerNotFoundException.java  |   23 +-
 .../ContainerReplicaNotFoundException.java         |   18 +-
 .../apache/hadoop/hdds/scm/pipeline/Pipeline.java  |   58 +-
 .../protocol/StorageContainerLocationProtocol.java |    2 +
 .../hadoop/hdds/utils/BackgroundService.java       |   17 +-
 .../common/src/main/resources/ozone-default.xml    |   10 -
 .../hadoop/hdds/scm/pipeline/TestPipeline.java     |   44 +-
 .../ozone/HddsDatanodeClientProtocolServer.java    |    4 +-
 .../apache/hadoop/ozone/HddsDatanodeService.java   |   21 +-
 .../common/impl/BlockDeletingService.java          |   37 +
 .../ozone/container/common/impl/ContainerData.java |  319 ++--
 .../container/common/impl/HddsDispatcher.java      |   36 +-
 .../ContainerDeletionChoosingPolicyTemplate.java   |    6 +-
 .../commandhandler/DeleteBlocksCommandHandler.java |    6 +-
 .../transport/server/ratis/XceiverServerRatis.java |    2 +-
 .../ozone/container/common/volume/HddsVolume.java  |   71 +
 .../container/common/volume/MutableVolumeSet.java  |   11 +
 .../container/common/volume/StorageVolume.java     |   28 +-
 .../container/common/volume/VolumeInfoMetrics.java |   29 +
 .../ozone/container/common/volume/VolumeUsage.java |   11 +-
 .../container/keyvalue/KeyValueContainer.java      |   56 +-
 .../container/keyvalue/KeyValueContainerData.java  |   77 +-
 .../KeyValueContainerMetadataInspector.java        |   12 +-
 .../ozone/container/keyvalue/KeyValueHandler.java  |    6 +-
 .../container/keyvalue/TarContainerPacker.java     |   15 +
 .../keyvalue/helpers/KeyValueContainerUtil.java    |   96 +-
 .../container/keyvalue/impl/BlockManagerImpl.java  |   16 +-
 .../keyvalue/impl/ChunkManagerDispatcher.java      |    2 +-
 .../keyvalue/impl/FilePerBlockStrategy.java        |    2 +-
 .../statemachine/background/BlockDeletingTask.java |   45 +-
 .../container/metadata/AbstractDatanodeStore.java  |   18 +-
 .../ozone/container/metadata/DatanodeTable.java    |   54 +-
 .../metadata/SchemaOneDeletedBlocksTable.java      |   69 +-
 .../container/ozoneimpl/ContainerController.java   |    3 +-
 .../ozone/container/ozoneimpl/OzoneContainer.java  |   20 +-
 .../container/common/TestBlockDeletingService.java |   12 +-
 .../common/TestKeyValueContainerData.java          |   32 +-
 .../TestSchemaOneBackwardsCompatibility.java       |   10 +-
 .../common/impl/TestContainerPersistence.java      |   20 +-
 .../container/common/impl/TestHddsDispatcher.java  |   64 +-
 .../TestCloseContainerCommandHandler.java          |   12 +-
 .../container/common/volume/TestHddsVolume.java    |   54 +
 .../container/keyvalue/TestKeyValueContainer.java  |    2 +
 .../TestKeyValueContainerIntegrityChecks.java      |    4 +-
 .../container/keyvalue/TestTarContainerPacker.java |    4 +-
 .../keyvalue/impl/TestFilePerBlockStrategy.java    |    6 +-
 .../container/ozoneimpl/TestOzoneContainer.java    |    8 +-
 .../replication/TestReplicationSupervisor.java     |    3 +-
 hadoop-hdds/dev-support/checkstyle/checkstyle.xml  |    3 +-
 hadoop-hdds/docs/content/_index.md                 |    5 +-
 .../content/design/dn-usedspace-calculation.md     |   91 ++
 hadoop-hdds/docs/content/design/omprepare.md       |   11 +-
 .../docs/content/design/upgrade-dev-primer.md      |    8 +-
 hadoop-hdds/docs/content/feature/OM-HA.md          |   30 +
 hadoop-hdds/docs/content/interface/ReconApi.md     |   10 +-
 hadoop-hdds/docs/content/interface/ReconApi.zh.md  |    4 +-
 hadoop-hdds/docs/content/interface/S3.md           |   10 +
 hadoop-hdds/docs/content/interface/native-cpp.md   |  138 ++
 .../content/recipe/PythonRequestsOzoneHttpFS.md    |   26 +-
 hadoop-hdds/docs/content/security/SecuringTDE.md   |  150 +-
 .../docs/content/security/SecurityWithRanger.md    |    4 +-
 hadoop-hdds/docs/content/start/OnPrem.md           |    7 +
 .../docs/content/start/StartFromDockerHub.md       |  124 +-
 hadoop-hdds/docs/content/start/_index.md           |    2 +-
 hadoop-hdds/docs/content/start/ozone-recon.png     |  Bin 0 -> 408165 bytes
 hadoop-hdds/docs/content/start/ozone-scm.png       |  Bin 0 -> 322764 bytes
 .../static/swagger-resources/recon-api.yaml        |   10 +-
 .../main/java/org/apache/hadoop/hdds/fs/DU.java    |   30 +-
 .../org/apache/hadoop/hdds/fs/DUOptimized.java     |   66 +
 .../apache/hadoop/hdds/fs/DUOptimizedFactory.java  |   58 +
 .../hadoop/hdds/fs/SpaceUsageCheckFactory.java     |   14 +-
 .../hadoop/hdds/fs/SpaceUsageCheckParams.java      |    9 +
 ...inerLocationProtocolClientSideTranslatorPB.java |   31 +-
 .../hadoop/hdds/server/http/HttpServer2.java       |   16 +-
 .../org/apache/hadoop/hdds/utils/Archiver.java     |   34 +-
 .../hadoop/hdds/utils/DBCheckpointServlet.java     |   50 +-
 .../apache/hadoop/hdds/utils/HddsServerUtil.java   |    8 +-
 .../hadoop/hdds/utils/db/RDBBatchOperation.java    |   24 +-
 .../org/apache/hadoop/hdds/utils/db/RDBStore.java  |    7 +-
 .../hdds/utils/db/RDBStoreAbstractIterator.java    |   19 +-
 .../hdds/utils/db/RDBStoreByteArrayIterator.java   |   18 +-
 .../hdds/utils/db/RDBStoreCodecBufferIterator.java |   15 +-
 .../org/apache/hadoop/hdds/utils/db/RDBTable.java  |   89 +-
 .../apache/hadoop/hdds/utils/db/RawKeyValue.java   |   85 -
 .../org/apache/hadoop/hdds/utils/db/Table.java     |  234 ++-
 .../apache/hadoop/hdds/utils/db/TableIterator.java |    9 +-
 .../apache/hadoop/hdds/utils/db/TypedTable.java    |  204 +--
 .../org/apache/hadoop/hdds/fs/TestDUOptimized.java |   71 +
 .../hadoop/hdds/fs/TestDUOptimizedFactory.java     |   55 +
 .../hadoop/hdds/utils/MapBackedTableIterator.java  |   10 +-
 .../org/apache/hadoop/hdds/utils/TestArchiver.java |   49 +
 .../hadoop/hdds/utils/TestRDBSnapshotProvider.java |    8 +-
 .../hadoop/hdds/utils/db/InMemoryTestTable.java    |   15 +-
 .../utils/db/TestRDBStoreByteArrayIterator.java    |   33 +-
 .../utils/db/TestRDBStoreCodecBufferIterator.java  |   17 +-
 .../hadoop/hdds/utils/db/TestRDBTableStore.java    |   27 +-
 .../hdds/utils/db/TestTypedRDBTableStore.java      |    7 +-
 .../hadoop/hdds/utils/db/TestTypedTable.java       |   45 +
 .../src/main/proto/ScmAdminProtocol.proto          |    3 +-
 hadoop-hdds/server-scm/pom.xml                     |    4 +
 .../hadoop/hdds/scm/block/DeletedBlockLogImpl.java |   16 +-
 .../scm/block/DeletedBlockLogStateManager.java     |    5 +-
 .../scm/block/DeletedBlockLogStateManagerImpl.java |   31 +-
 .../hdds/scm/block/SCMBlockDeletingService.java    |    9 +-
 .../SCMDeletedBlockTransactionStatusManager.java   |    4 +-
 .../scm/block/ScmBlockDeletingServiceMetrics.java  |  113 +-
 .../hdds/scm/container/ContainerManagerImpl.java   |   38 +-
 .../scm/container/ContainerStateManagerImpl.java   |    7 +-
 .../hdds/scm/container/balancer/MoveManager.java   |    2 +-
 .../algorithms/SCMContainerPlacementRackAware.java |    8 +
 .../placement/metrics/SCMPerformanceMetrics.java   |   26 +-
 .../hdds/scm/ha/SCMDBCheckpointProvider.java       |    5 +-
 .../hadoop/hdds/scm/ha/SequenceIdGenerator.java    |   21 +-
 .../scm/pipeline/PipelineStateManagerImpl.java     |    7 +-
 .../hadoop/hdds/scm/pipeline/PipelineStateMap.java |    2 +-
 ...inerLocationProtocolServerSideTranslatorPB.java |    8 +-
 .../hdds/scm/server/SCMBlockProtocolServer.java    |   22 +-
 .../hadoop/hdds/scm/server/SCMCertStore.java       |   22 +-
 .../hdds/scm/server/SCMClientProtocolServer.java   |   18 +-
 .../hdds/scm/server/SCMDatanodeProtocolServer.java |    4 +-
 .../hadoop/hdds/scm/block/TestDeletedBlockLog.java |    4 +-
 .../scm/block/TestSCMBlockDeletingService.java     |    2 +-
 .../scm/container/TestUnknownContainerReport.java  |    2 +-
 .../TestContainerBalancerDatanodeNodeLimit.java    |    2 +-
 .../scm/container/balancer/TestMoveManager.java    |    2 +-
 .../TestSCMContainerPlacementRackAware.java        |   12 +
 .../scm/pipeline/TestPipelineStateManagerImpl.java |    2 +-
 .../pipeline/TestWritableECContainerProvider.java  |    2 +-
 .../hdds/scm/cli/ContainerOperationClient.java     |   13 +-
 .../hdds/scm/cli/container/CreateSubcommand.java   |   23 +-
 .../hdds/scm/cli/datanode/ListInfoSubcommand.java  |    8 +
 .../ozone/admin/nssummary/DiskUsageSubCommand.java |    2 +-
 .../ozone/admin/om/lease/LeaseRecoverer.java       |   14 +-
 .../hdds/scm/cli/container/TestInfoSubCommand.java |    4 +-
 .../scm/cli/datanode/TestListInfoSubcommand.java   |    2 +
 .../cli/pipeline/TestClosePipelinesSubCommand.java |    2 +-
 .../cli/pipeline/TestListPipelinesSubCommand.java  |    2 +-
 .../ozone/shell/volume/DeleteVolumeHandler.java    |    7 +-
 .../client/checksum/ECFileChecksumHelper.java      |    2 +-
 .../checksum/ReplicatedFileChecksumHelper.java     |   12 +-
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |   14 +-
 .../org/apache/hadoop/ozone/om/OMConfigKeys.java   |    5 -
 .../java/org/apache/hadoop/ozone/om/OmConfig.java  |   20 +
 .../hadoop/ozone/om/helpers/SnapshotDiffJob.java   |   13 +-
 .../org/apache/hadoop/ozone/om/TestOmConfig.java   |    2 +
 .../helpers/OldSnapshotDiffJobCodecForTesting.java |   56 +
 .../om/helpers/TestOmSnapshotDiffJobCodec.java     |   73 +
 hadoop-ozone/dev-support/checks/_post_process.sh   |   10 +-
 hadoop-ozone/dev-support/checks/junit.sh           |    4 +-
 hadoop-ozone/dev-support/checks/rat.sh             |    3 +-
 .../dashboards/Ozone - DeleteKey Metrics.json      |    6 +-
 .../Ozone - DeleteKeyProgress Metrics.json         | 1712 ++++++++++++++++++++
 .../grafana/dashboards/Ozone - OM Snapshot.json    |  295 +++-
 .../dist/src/main/compose/common/ranger.yaml       |   47 +
 .../dist/src/main/compose/ozonesecure-ha/.env      |    5 +
 .../src/main/compose/ozonesecure-ha/ranger.yaml    |   54 +
 .../src/main/compose/ozonesecure-ha/test-ranger.sh |   61 +
 hadoop-ozone/dist/src/main/compose/testlib.sh      |   32 +-
 .../main/smoketest/debug/ozone-debug-tests.robot   |    2 +-
 .../src/main/smoketest/debug/ozone-debug.robot     |    2 +-
 .../src/main/smoketest/recon/recon-nssummary.robot |    4 +-
 .../hadoop/ozone/freon/DNRPCLoadGenerator.java     |    8 +-
 .../hadoop/ozone/freon/RandomKeyGenerator.java     |   26 +-
 .../http/server/CheckUploadContentTypeFilter.java  |   21 +-
 .../TestReconInsightsForDeletedDirectories.java    |   12 +-
 .../ozone/recon/TestReconWithOzoneManagerHA.java   |   15 +-
 .../dev-support/findbugsExcludeFile.xml            |    4 +
 .../ozone/s3/awssdk/v2/AbstractS3SDKV2Tests.java   |  616 +++++++
 hadoop-ozone/integration-test/pom.xml              |    5 +
 .../java/org/apache/hadoop/fs/ozone/TestHSync.java |   14 +-
 .../java/org/apache/hadoop/hdds/TestRemoteEx.java  |   55 +
 .../hadoop/hdds/scm/TestAllocateContainer.java     |   23 +
 .../hadoop/hdds/scm/TestContainerOperations.java   |   20 +
 .../hdds/scm/TestSCMDbCheckpointServlet.java       |   17 +-
 .../java/org/apache/hadoop/ozone/TestDataUtil.java |    5 +-
 .../ozone/client/rpc/TestSecureOzoneRpcClient.java |    8 +-
 .../ozone/container/TestContainerReplication.java  |    1 -
 .../commandhandler/TestBlockDeletion.java          |    8 +-
 .../commandhandler/TestDeleteContainerHandler.java |    5 +-
 .../TestRefreshVolumeUsageHandler.java             |    7 +
 .../hadoop/ozone/om/TestOMDbCheckpointServlet.java |   19 +-
 .../hadoop/ozone/om/TestOMRatisSnapshots.java      |    3 +
 .../ozone/om/TestObjectStoreWithLegacyFS.java      |    3 +-
 .../org/apache/hadoop/ozone/om/TestOmMetrics.java  |    3 +-
 .../om/TestOzoneManagerHAWithStoppedNodes.java     |    3 +-
 .../om/TestOzoneManagerListVolumesSecure.java      |    7 +
 .../TestDirectoryDeletingServiceWithFSO.java       |   16 +-
 .../om/service}/TestRootedDDSWithFSO.java          |    5 +-
 ...TestSnapshotDeletingServiceIntegrationTest.java |  539 +++---
 .../ozone/om/snapshot/TestOMDBCheckpointUtils.java |  100 ++
 .../om/snapshot/TestOzoneManagerHASnapshot.java    |    8 +-
 .../snapshot/TestSnapshotBackgroundServices.java   |    6 +-
 .../TestSnapshotDirectoryCleaningService.java      |    3 +-
 .../reconfig/TestDatanodeReconfiguration.java      |   18 +-
 .../ozone/reconfig/TestOmReconfiguration.java      |    2 +
 .../hadoop/ozone/repair/om/TestFSORepairTool.java  |    7 +-
 .../hadoop/ozone/shell/TestOzoneDebugShell.java    |    2 +-
 .../hadoop/ozone/shell/TestReconfigShell.java      |    2 +-
 .../tools/contract/AbstractContractDistCpTest.java |    3 +-
 hadoop-ozone/native-client/libo3fs/o3fs.c          |    2 +-
 hadoop-ozone/native-client/libo3fs/o3fs.h          |    4 +-
 .../apache/hadoop/ozone/common/BekInfoUtils.java   |    4 +-
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |    2 +-
 .../hadoop/ozone/om/OMDBCheckpointServlet.java     |   74 +-
 .../java/org/apache/hadoop/ozone/om/OMMetrics.java |   36 -
 .../hadoop/ozone/om/OmSnapshotInternalMetrics.java |  118 ++
 .../org/apache/hadoop/ozone/om/OzoneManager.java   |   20 +-
 .../om/request/file/OMDirectoryCreateRequest.java  |    6 +-
 .../ozone/om/request/file/OMFileCreateRequest.java |   13 +-
 .../ozone/om/request/key/OMKeyCommitRequest.java   |    9 +-
 .../ozone/om/request/key/OMKeyCreateRequest.java   |   10 +-
 .../ozone/om/request/key/OMKeyRenameRequest.java   |    8 +-
 .../hadoop/ozone/om/request/key/OMKeyRequest.java  |   75 -
 .../request/snapshot/OMSnapshotCreateRequest.java  |    3 +-
 .../snapshot/OMSnapshotMoveTableKeysRequest.java   |    5 +
 .../request/snapshot/OMSnapshotPurgeRequest.java   |    8 +-
 .../snapshot/OMSnapshotSetPropertyRequest.java     |    8 +-
 .../om/service/AbstractKeyDeletingService.java     |  586 +------
 .../ozone/om/service/DirectoryDeletingService.java |  308 +++-
 .../ozone/om/service/KeyDeletingService.java       |  213 ++-
 .../hadoop/ozone/om/service/QuotaRepairTask.java   |   38 +-
 .../ozone/om/service/SnapshotDeletingService.java  |   74 +-
 .../service/SnapshotDirectoryCleaningService.java  |  484 ------
 .../ozone/om/snapshot/OMDBCheckpointUtils.java     |   80 +
 .../ozone/om/snapshot/SnapshotDiffManager.java     |    2 +-
 .../OzoneDelegationTokenSecretManager.java         |   16 +-
 .../apache/hadoop/ozone/om/TestKeyManagerImpl.java |   14 +-
 .../hadoop/ozone/om/TestOmSnapshotManager.java     |   47 +-
 .../ozone/om/request/key/TestOMKeyRequest.java     |   22 -
 .../snapshot/TestOMSnapshotCreateRequest.java      |   46 +
 .../TestOMSnapshotMoveTableKeysRequest.java        |   16 +
 .../TestOMSnapshotPurgeRequestAndResponse.java     |   19 +-
 ...estOMSnapshotSetPropertyRequestAndResponse.java |   24 +-
 .../snapshot/TestOMSnapshotCreateResponse.java     |    5 +-
 .../snapshot/TestOMSnapshotDeleteResponse.java     |    6 +-
 .../TestOMSnapshotMoveTableKeysResponse.java       |   56 +-
 .../ozone/om/service/TestKeyDeletingService.java   |   23 +-
 .../om/service/TestOpenKeyCleanupService.java      |    3 +-
 .../om/snapshot/TestFSODirectoryPathResolver.java  |   19 +-
 .../ozone/om/snapshot/TestSnapshotDiffManager.java |    3 -
 .../snapshot/TestSnapshotRequestAndResponse.java   |    9 +
 .../filter/TestReclaimableRenameEntryFilter.java   |    5 +-
 .../TestOzoneDelegationTokenSecretManager.java     |   41 +
 .../apache/hadoop/ozone/recon/ReconConstants.java  |    2 +-
 .../org/apache/hadoop/ozone/recon/ReconUtils.java  |    2 +-
 .../hadoop/ozone/recon/api/NSSummaryEndpoint.java  |    2 +-
 .../scm/ReconStorageContainerManagerFacade.java    |    7 +-
 .../recon/spi/ReconContainerMetadataManager.java   |    6 -
 .../impl/ReconContainerMetadataManagerImpl.java    |    5 +-
 .../ozone/recon/tasks/ContainerSizeCountTask.java  |    2 +-
 .../webapps/recon/ozone-recon-web/api/routes.json  |   38 +-
 .../src/components/navBar/navBar.tsx               |   10 +-
 .../src/constants/breadcrumbs.constants.tsx        |    2 +-
 .../src/v2/components/duMetadata/duMetadata.tsx    |    2 +-
 .../src/v2/components/navBar/navBar.tsx            |   14 +-
 .../src/v2/constants/breadcrumbs.constants.tsx     |    2 +-
 .../src/v2/pages/diskUsage/diskUsage.tsx           |   30 +-
 .../src/views/diskUsage/diskUsage.tsx              |   10 +-
 .../hadoop/ozone/recon/api/TestEndpoints.java      |    7 +-
 .../ozone/recon/tasks/TestFileSizeCountTask.java   |    9 +-
 .../ozone/recon/tasks/TestOmTableInsightTask.java  |    3 +-
 .../hadoop/ozone/s3/endpoint/BucketEndpoint.java   |   42 +-
 .../hadoop/ozone/s3/endpoint/ObjectEndpoint.java   |   25 +-
 .../apache/hadoop/ozone/s3/endpoint/S3Owner.java   |   60 +
 .../hadoop/ozone/s3/exception/S3ErrorTable.java    |    4 +
 .../org/apache/hadoop/ozone/s3/util/S3Consts.java  |    5 +
 .../ozone/s3/endpoint/BucketEndpointBuilder.java   |    1 +
 .../hadoop/ozone/s3/endpoint/TestBucketAcl.java    |   25 +-
 .../hadoop/ozone/s3/endpoint/TestBucketDelete.java |    2 -
 .../hadoop/ozone/s3/endpoint/TestBucketList.java   |   53 +-
 .../hadoop/ozone/s3/endpoint/TestBucketPut.java    |    8 +-
 .../ozone/s3/endpoint/TestPermissionCheck.java     |   13 +-
 .../hadoop/ozone/s3/endpoint/TestS3Owner.java      |  133 ++
 .../ozone/s3/metrics/TestS3GatewayMetrics.java     |   17 +-
 .../org/apache/hadoop/ozone/debug/CheckNative.java |   63 +-
 .../debug/replicas/BlockExistenceVerifier.java     |   14 +-
 .../ozone/debug/replicas/ChecksumVerifier.java     |   14 +-
 .../debug/replicas/ContainerStateVerifier.java     |   18 +-
 .../ozone/debug/replicas/ReplicaVerifier.java      |    2 +-
 .../ozone/debug/replicas/ReplicasVerify.java       |    7 +-
 .../debug/replicas/chunk/ChunkKeyHandler.java      |    7 +-
 .../apache/hadoop/ozone/debug/TestCheckNative.java |    1 +
 pom.xml                                            |   36 +-
 288 files changed, 7428 insertions(+), 3860 deletions(-)

diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
index 949849f7a3,f7175bd95c..11092fe1f7
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
@@@ -111,7 -107,7 +111,7 @@@ public abstract class ContainerData 
    private transient Optional<Instant> lastDataScanTime = Optional.empty();
  
    public static final Charset CHARSET_ENCODING = StandardCharsets.UTF_8;
-   private static final String ZERO_CHECKSUM = new String(new byte[64],
 -  public static final String DUMMY_CHECKSUM = new String(new byte[64],
++  public static final String ZERO_CHECKSUM = new String(new byte[64],
        CHARSET_ENCODING);
  
    // Common Fields need to be stored in .container file.
@@@ -569,15 -454,11 +465,7 @@@
      this.isEmpty = true;
    }
  
-   /**
-    * Set's number of blocks in the container.
-    * @param count
-    */
-   public void setBlockCount(long count) {
-     this.blockCount.set(count);
 -  public void setChecksumTo0ByteArray() {
 -    this.checksum = DUMMY_CHECKSUM;
--  }
--
 -  public void setChecksum(String checkSum) {
 +  public void setContainerFileChecksum(String checkSum) {
      this.checksum = checkSum;
    }
  
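[Editor's note] The hunk above keeps master's `public` visibility with the branch's `ZERO_CHECKSUM` name, and renames `setChecksum` to `setContainerFileChecksum`. A minimal, self-contained sketch of the resulting shape; the class body and the getter below are invented for illustration and are not Ozone's actual `ContainerData`:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Illustrative stand-in for the merged ContainerData fields; not the real class.
public class ContainerDataSketch {
  public static final Charset CHARSET_ENCODING = StandardCharsets.UTF_8;

  // 64 NUL characters: a placeholder the same width as a SHA-256 hex digest.
  public static final String ZERO_CHECKSUM = new String(new byte[64], CHARSET_ENCODING);

  private String checksum = ZERO_CHECKSUM;

  // Renamed from setChecksum to make clear this is the .container file checksum.
  public void setContainerFileChecksum(String checkSum) {
    this.checksum = checkSum;
  }

  // Hypothetical accessor, added here only so the sketch is testable.
  public String getContainerFileChecksum() {
    return checksum;
  }
}
```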
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java
index 4628c3dca1,8a7758cd84..13603640b4
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java
@@@ -276,6 -261,39 +262,40 @@@ public class KeyValueContainerData exte
      return deleteTransactionId;
    }
  
+   ContainerReplicaProto buildContainerReplicaProto() throws StorageContainerException {
+     return getStatistics().setContainerReplicaProto(ContainerReplicaProto.newBuilder())
+         .setContainerID(getContainerID())
+         .setState(getContainerReplicaProtoState(getState()))
+         .setIsEmpty(isEmpty())
+         .setOriginNodeId(getOriginNodeId())
+         .setReplicaIndex(getReplicaIndex())
+         .setBlockCommitSequenceId(getBlockCommitSequenceId())
+         .setDeleteTransactionId(getDeleteTransactionId())
++        .setDataChecksum(getDataChecksum())
+         .build();
+   }
+ 
+   // TODO remove one of the State from proto
+   static ContainerReplicaProto.State getContainerReplicaProtoState(ContainerDataProto.State state)
+       throws StorageContainerException {
+     switch (state) {
+     case OPEN:
+       return ContainerReplicaProto.State.OPEN;
+     case CLOSING:
+       return ContainerReplicaProto.State.CLOSING;
+     case QUASI_CLOSED:
+       return ContainerReplicaProto.State.QUASI_CLOSED;
+     case CLOSED:
+       return ContainerReplicaProto.State.CLOSED;
+     case UNHEALTHY:
+       return ContainerReplicaProto.State.UNHEALTHY;
+     case DELETED:
+       return ContainerReplicaProto.State.DELETED;
+     default:
+       throw new StorageContainerException("Invalid container state: " + state, INVALID_CONTAINER_STATE);
+     }
+   }
+ 
    /**
     * Add the given localID of a block to the finalizedBlockSet.
     */
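[Editor's note] The switch added in the hunk above maps datanode-local container states to replica-report states. A self-contained sketch of the pattern; `LocalState` and `ReplicaState` are hypothetical stand-ins for the protobuf-generated `ContainerDataProto.State` and `ContainerReplicaProto.State`:

```java
// Sketch only: translate a local state into the replica-report equivalent,
// rejecting states with no replica-side representation rather than guessing.
public class StateMappingSketch {
  enum LocalState { OPEN, CLOSING, QUASI_CLOSED, CLOSED, UNHEALTHY, DELETED, RECOVERING }
  enum ReplicaState { OPEN, CLOSING, QUASI_CLOSED, CLOSED, UNHEALTHY, DELETED }

  static ReplicaState toReplicaState(LocalState state) {
    switch (state) {
    case OPEN:         return ReplicaState.OPEN;
    case CLOSING:      return ReplicaState.CLOSING;
    case QUASI_CLOSED: return ReplicaState.QUASI_CLOSED;
    case CLOSED:       return ReplicaState.CLOSED;
    case UNHEALTHY:    return ReplicaState.UNHEALTHY;
    case DELETED:      return ReplicaState.DELETED;
    default:
      // Mirrors the StorageContainerException thrown in the real code.
      throw new IllegalArgumentException("Invalid container state: " + state);
    }
  }
}
```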
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
index 46a2a94975,bc12d8f067..d42eb208e1
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
@@@ -93,6 -99,15 +99,15 @@@ public class TarContainerPacke
        Files.createDirectories(destContainerDir);
      }
      if (FileUtils.isEmptyDirectory(destContainerDir.toFile())) {
+ 
+       //before state change to RECOVERING, we need to verify the checksum for untarContainerData.
+       if (descriptorFileContent != null) {
+         KeyValueContainerData untarContainerData =
+             (KeyValueContainerData) ContainerDataYaml
+                 .readContainer(descriptorFileContent);
 -        ContainerUtils.verifyChecksum(untarContainerData, conf);
++        ContainerUtils.verifyContainerFileChecksum(untarContainerData, conf);
+       }
+ 
        // Before the atomic move, the destination dir is empty and doesn't have a metadata directory.
        // Writing the .container file will fail as the metadata dir doesn't exist.
        // So we instead save the container file to the containerUntarDir.
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java
index fc16e0c71b,e927536ba6..884f16f77a
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java
@@@ -34,9 -32,9 +34,10 @@@ import org.apache.hadoop.hdds.protocol.
  import org.apache.hadoop.hdds.utils.MetadataKeyFilters;
  import org.apache.hadoop.hdds.utils.db.Table;
  import org.apache.hadoop.ozone.OzoneConsts;
 +import org.apache.hadoop.ozone.container.checksum.ContainerChecksumTreeManager;
  import org.apache.hadoop.ozone.container.common.helpers.BlockData;
  import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
+ import org.apache.hadoop.ozone.container.common.impl.ContainerData;
  import org.apache.hadoop.ozone.container.common.interfaces.BlockIterator;
  import org.apache.hadoop.ozone.container.common.interfaces.DBHandle;
  import org.apache.hadoop.ozone.container.common.statemachine.DatanodeConfiguration;
@@@ -208,7 -219,10 +222,10 @@@ public final class KeyValueContainerUti
      long containerID = kvContainerData.getContainerID();
  
      // Verify Checksum
-     ContainerUtils.verifyContainerFileChecksum(kvContainerData, config);
+     // skip verify checksum if the state has changed to RECOVERING during container import
+     if (!skipVerifyChecksum) {
 -      ContainerUtils.verifyChecksum(kvContainerData, config);
++      ContainerUtils.verifyContainerFileChecksum(kvContainerData, config);
+     }
  
      if (kvContainerData.getSchemaVersion() == null) {
      // If this container has not specified a schema version, it is in the old
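[Editor's note] The conditional above skips descriptor-checksum verification when the container moved to RECOVERING during import. A sketch of that flow under stated assumptions: the helper names and the SHA-256-over-descriptor-bytes scheme below are illustrative, not Ozone's actual `ContainerUtils.verifyContainerFileChecksum`:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch only: hash the descriptor bytes and compare against the
// stored value, with an escape hatch mirroring the skipVerifyChecksum flag above.
public class ChecksumVerifySketch {
  static String hexChecksum(byte[] descriptorBytes) {
    try {
      MessageDigest md = MessageDigest.getInstance("SHA-256");
      StringBuilder sb = new StringBuilder();
      for (byte b : md.digest(descriptorBytes)) {
        sb.append(String.format("%02x", b));
      }
      return sb.toString(); // 64 hex characters
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e); // SHA-256 is always available on the JVM
    }
  }

  static void verify(byte[] descriptorBytes, String storedChecksum, boolean skipVerifyChecksum) {
    if (skipVerifyChecksum) {
      return; // e.g. state changed to RECOVERING during container import
    }
    if (!hexChecksum(descriptorBytes).equals(storedChecksum)) {
      throw new IllegalStateException("Container checksum mismatch");
    }
  }
}
```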
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
index 395f79fb5e,1cf3421c0b..0c89ce91ac
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
@@@ -98,78 -99,6 +99,78 @@@ public class BlockManagerImpl implement
          data, endOfBlock);
    }
  
 +  /**
 +   * {@inheritDoc}
 +   */
 +  @Override
 +  public long putBlockForClosedContainer(Container container, BlockData data, boolean overwriteBcsId)
 +          throws IOException {
 +    Preconditions.checkNotNull(data, "BlockData cannot be null for put operation.");
 +    Preconditions.checkState(data.getContainerID() >= 0, "Container Id cannot be negative");
 +
 +    KeyValueContainerData containerData = (KeyValueContainerData) container.getContainerData();
 +
 +    // We are not locking the key manager since RocksDB serializes all actions
 +    // against a single DB. We rely on DB level locking to avoid conflicts.
 +    try (DBHandle db = BlockUtils.getDB(containerData, config)) {
 +      Preconditions.checkNotNull(db, DB_NULL_ERR_MSG);
 +
 +      long blockBcsID = data.getBlockCommitSequenceId();
 +      long containerBcsID = containerData.getBlockCommitSequenceId();
 +
 +      // Check if the block is already present in the DB of the container to determine whether
 +      // the blockCount is already incremented for this block in the DB or not.
 +      long localID = data.getLocalID();
 +      boolean incrBlockCount = false;
 +
 +      // update the blockData as well as BlockCommitSequenceId here
 +      try (BatchOperation batch = db.getStore().getBatchHandler()
 +          .initBatchOperation()) {
 +        // If block already exists in the DB, blockCount should not be incremented.
 +        if (db.getStore().getBlockDataTable().get(containerData.getBlockKey(localID)) == null) {
 +          incrBlockCount = true;
 +        }
 +
 +        db.getStore().getBlockDataTable().putWithBatch(batch, containerData.getBlockKey(localID), data);
 +        if (overwriteBcsId && blockBcsID > containerBcsID) {
 +          db.getStore().getMetadataTable().putWithBatch(batch, containerData.getBcsIdKey(), blockBcsID);
 +        }
 +        }
 +
 +        // Set Bytes used, this bytes used will be updated for every write and
 +        // only get committed for every put block. In this way, when datanode
 +        // is up, for computation of disk space by container only committed
 +        // block length is used, And also on restart the blocks committed to DB
 +        // is only used to compute the bytes used. This is done to keep the
 +        // current behavior and avoid DB write during write chunk operation.
 +        db.getStore().getMetadataTable().putWithBatch(batch, containerData.getBytesUsedKey(),
 +            containerData.getBytesUsed());
 +
 +        // Set Block Count for a container.
 +        if (incrBlockCount) {
 +          db.getStore().getMetadataTable().putWithBatch(batch, containerData.getBlockCountKey(),
 +              containerData.getBlockCount() + 1);
 +        }
 +
 +        db.getStore().getBatchHandler().commitBatchOperation(batch);
 +      }
 +
 +      if (overwriteBcsId && blockBcsID > containerBcsID) {
 +        container.updateBlockCommitSequenceId(blockBcsID);
 +      }
 +
 +      // Increment block count in-memory after the DB update.
 +      if (incrBlockCount) {
-         containerData.incrBlockCount();
++        containerData.getStatistics().incrementBlockCount();
 +      }
 +
 +      if (LOG.isDebugEnabled()) {
 +        LOG.debug("Block {} successfully persisted for closed container {} with bcsId {} chunk size {}",
 +            data.getBlockID(), containerData.getContainerID(), blockBcsID, data.getChunks().size());
 +      }
 +      return data.getSize();
 +    }
 +  }
 +
    public long persistPutBlock(KeyValueContainer container,
        BlockData data, boolean endOfBlock)
        throws IOException {
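The putBlockForClosedContainer hunk above increments the container's block count only when the block key is not yet present in the RocksDB table, so replaying the same put is idempotent, and it only moves the container's blockCommitSequenceId forward. A minimal standalone sketch of that bookkeeping, with a plain HashMap standing in for the RocksDB table (all names here are illustrative, not the Ozone API):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the put-block bookkeeping: count a block only on first
// insert, and only advance the container bcsId when overwriteBcsId is set.
public class PutBlockSketch {
  private final Map<String, Long> blockDataTable = new HashMap<>(); // stands in for RocksDB
  private long blockCount;
  private long containerBcsId;

  public void putBlock(String blockKey, long size, long blockBcsId, boolean overwriteBcsId) {
    // A re-put of an existing block must not double count it.
    boolean isNewBlock = !blockDataTable.containsKey(blockKey);
    blockDataTable.put(blockKey, size);
    if (overwriteBcsId && blockBcsId > containerBcsId) {
      containerBcsId = blockBcsId; // bcsId never moves backwards
    }
    if (isNewBlock) {
      blockCount++;
    }
  }

  public long getBlockCount() {
    return blockCount;
  }

  public long getContainerBcsId() {
    return containerBcsId;
  }
}
```

This mirrors the test expectations later in the patch, where repeated puts of the same block leave the block count at 1 while the bcsId advances.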
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/FilePerBlockStrategy.java
index fd4ce8c9bf,a402c7a2b3..4047b2535c
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/FilePerBlockStrategy.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/FilePerBlockStrategy.java
@@@ -171,27 -170,10 +171,27 @@@ public class FilePerBlockStrategy imple
        ChunkUtils.validateChunkSize(channel, info, chunkFile.getName());
      }
  
 -    ChunkUtils
 -        .writeData(channel, chunkFile.getName(), data, offset, len, volume);
 +    long fileLengthBeforeWrite;
 +    try {
 +      fileLengthBeforeWrite = channel.size();
 +    } catch (IOException e) {
 +      throw new StorageContainerException("Encountered an error while getting the file size for "
 +          + chunkFile.getName(), CHUNK_FILE_INCONSISTENCY);
 +    }
 +
 +    ChunkUtils.writeData(channel, chunkFile.getName(), data, offset, chunkLength, volume);
 +
 +    // When overwriting, update the bytes used if the new length is greater than the old length
 +    // This is to ensure that the bytes used is updated correctly when overwriting a smaller chunk
 +    // with a larger chunk at the end of the block.
 +    if (overwrite) {
 +      long fileLengthAfterWrite = offset + chunkLength;
 +      if (fileLengthAfterWrite > fileLengthBeforeWrite) {
-         containerData.incrBytesUsed(fileLengthAfterWrite - fileLengthBeforeWrite);
++        containerData.getStatistics().updateWrite(fileLengthAfterWrite - fileLengthBeforeWrite, false);
 +      }
 +    }
  
 -    containerData.updateWriteStats(len, overwrite);
 +    containerData.updateWriteStats(chunkLength, overwrite);
    }
  
    @Override
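The overwrite branch above grows bytesUsed only by the amount the chunk file actually lengthens, so an overwrite fully inside the old file length adds nothing. A hedged sketch of just that arithmetic (class and method names are illustrative, not part of the patch):

```java
// Accounting used when overwriting a chunk: bytesUsed grows only by the
// increase in file length, never by the full chunk length.
public class OverwriteAccounting {
  public static long bytesUsedDelta(long fileLengthBeforeWrite, long offset, long chunkLength) {
    long fileLengthAfterWrite = offset + chunkLength;
    // Negative or zero delta means the write stayed within the old length.
    return Math.max(0L, fileLengthAfterWrite - fileLengthBeforeWrite);
  }
}
```

This matches the TestFilePerBlockStrategy scenario later in the patch: two 20-byte chunks give bytesUsed 40, then a 30-byte overwrite of the last chunk lengthens the file by 10, yielding 50.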
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingTask.java
index c39cd3e9ef,411aab97b1..c2776b3033
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingTask.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingTask.java
@@@ -38,8 -40,6 +39,7 @@@ import org.apache.hadoop.hdds.utils.Bac
  import org.apache.hadoop.hdds.utils.MetadataKeyFilters.KeyPrefixFilter;
  import org.apache.hadoop.hdds.utils.db.BatchOperation;
  import org.apache.hadoop.hdds.utils.db.Table;
- import org.apache.hadoop.hdds.utils.db.TableIterator;
 +import org.apache.hadoop.ozone.container.checksum.ContainerChecksumTreeManager;
  import org.apache.hadoop.ozone.container.common.helpers.BlockData;
  import org.apache.hadoop.ozone.container.common.helpers.BlockDeletingServiceMetrics;
  import org.apache.hadoop.ozone.container.common.impl.BlockDeletingService;
@@@ -195,10 -191,8 +194,9 @@@ public class BlockDeletingTask implemen
          return crr;
        }
  
 -      List<String> succeedBlocks = new LinkedList<>();
 +      List<Long> succeedBlockIDs = new LinkedList<>();
 +      List<String> succeedBlockDBKeys = new LinkedList<>();
-       LOG.debug("Container : {}, To-Delete blocks : {}",
-           containerData.getContainerID(), toDeleteBlocks.size());
+       LOG.debug("{}, toDeleteBlocks: {}", containerData, toDeleteBlocks.size());
  
        Handler handler = Objects.requireNonNull(ozoneContainer.getDispatcher()
            .getHandler(container.getContainerType()));
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
index 9f328fee4d,e315e1bf4a..671cf6448b
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
@@@ -28,9 -29,8 +28,10 @@@ import org.apache.hadoop.hdds.protocol.
  import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerDataProto.State;
  import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerType;
  import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
+ import org.apache.hadoop.hdds.scm.container.ContainerID;
  import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
 +import org.apache.hadoop.ozone.container.checksum.ContainerMerkleTreeWriter;
 +import org.apache.hadoop.ozone.container.checksum.DNContainerOperationClient;
  import org.apache.hadoop.ozone.container.common.impl.ContainerData;
  import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
  import org.apache.hadoop.ozone.container.common.interfaces.Container;
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
index e8a25aae1d,4448d127ef..33495fab90
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
@@@ -64,9 -66,7 +66,8 @@@ import org.apache.hadoop.hdds.security.
  import org.apache.hadoop.hdds.utils.HddsServerUtil;
  import org.apache.hadoop.hdds.utils.IOUtils;
  import org.apache.hadoop.hdds.utils.db.Table;
- import org.apache.hadoop.hdds.utils.db.TableIterator;
  import org.apache.hadoop.ozone.HddsDatanodeService;
 +import org.apache.hadoop.ozone.container.checksum.ContainerChecksumTreeManager;
  import org.apache.hadoop.ozone.container.common.DatanodeLayoutStorage;
  import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics;
  import org.apache.hadoop.ozone.container.common.impl.BlockDeletingService;
diff --cc hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestKeyValueContainerData.java
index 639f3793a3,97798f2bb4..6af2e00ade
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestKeyValueContainerData.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestKeyValueContainerData.java
@@@ -75,14 -74,11 +74,12 @@@ public class TestKeyValueContainerData 
          .getState());
      assertEquals(0, kvData.getMetadata().size());
      assertEquals(0, kvData.getNumPendingDeletionBlocks());
-     assertEquals(val.get(), kvData.getReadBytes());
-     assertEquals(val.get(), kvData.getWriteBytes());
-     assertEquals(val.get(), kvData.getReadCount());
-     assertEquals(val.get(), kvData.getWriteCount());
-     assertEquals(val.get(), kvData.getBlockCount());
-     assertEquals(val.get(), kvData.getNumPendingDeletionBlocks());
+     final ContainerData.Statistics statistics = kvData.getStatistics();
+     statistics.assertRead(0, 0);
+     statistics.assertWrite(0, 0);
+     statistics.assertBlock(0, 0, 0);
      assertEquals(MAXSIZE, kvData.getMaxSize());
 +    assertEquals(0, kvData.getDataChecksum());
  
      kvData.setState(state);
      kvData.setContainerDBType(containerDBType);
diff --cc hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestHddsDispatcher.java
index bd72c358f6,3d8399f283..106ea9ac2e
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestHddsDispatcher.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestHddsDispatcher.java
@@@ -155,11 -173,12 +174,12 @@@ public class TestHddsDispatcher 
        ContainerMetrics metrics = ContainerMetrics.create(conf);
        Map<ContainerType, Handler> handlers = Maps.newHashMap();
        for (ContainerType containerType : ContainerType.values()) {
 -        handlers.put(containerType,
 -            Handler.getHandlerForContainerType(containerType, conf,
 -                context.getParent().getDatanodeDetails().getUuidString(),
 -                containerSet, volumeSet, volumeChoosingPolicy, metrics, NO_OP_ICR_SENDER));
 +        handlers.put(containerType, Handler.getHandlerForContainerType(containerType, conf,
 +            context.getParent().getDatanodeDetails().getUuidString(),
 +            containerSet, volumeSet, volumeChoosingPolicy, metrics, NO_OP_ICR_SENDER,
 +            new ContainerChecksumTreeManager(conf)));
        }
+       // write successfully to first container
        HddsDispatcher hddsDispatcher = new HddsDispatcher(
            conf, containerSet, volumeSet, handlers, context, metrics, null);
        hddsDispatcher.setClusterId(scmId.toString());
diff --cc hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
index 51a949e496,216d748b4a..7ba72f937e
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
@@@ -387,6 -387,8 +387,8 @@@ public class TestKeyValueContainer 
            containerData.getMaxSize());
        assertEquals(keyValueContainerData.getBytesUsed(),
            containerData.getBytesUsed());
 -      assertNotNull(containerData.getChecksum());
 -      assertNotEquals(containerData.DUMMY_CHECKSUM, container.getContainerData().getChecksum());
++      assertNotNull(containerData.getContainerFileChecksum());
++      assertNotEquals(containerData.ZERO_CHECKSUM, container.getContainerData().getContainerFileChecksum());
  
        //Can't overwrite existing container
        KeyValueContainer finalContainer = container;
diff --cc hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/impl/TestFilePerBlockStrategy.java
index 243fe218c5,075dc6e865..3a95af7f70
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/impl/TestFilePerBlockStrategy.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/impl/TestFilePerBlockStrategy.java
@@@ -162,179 -131,6 +162,179 @@@ public class TestFilePerBlockStrategy e
      assertEquals(data.rewind().toByteString().substring(start, start + length), readData2.toByteString());
    }
  
 +  @ParameterizedTest
 +  @MethodSource("getNonClosedStates")
 +  public void testWriteChunkAndPutBlockFailureForNonClosedContainer(
 +      ContainerProtos.ContainerDataProto.State state) throws IOException {
 +    KeyValueContainer keyValueContainer = getKeyValueContainer();
 +    keyValueContainer.getContainerData().setState(state);
 +    ContainerSet containerSet = newContainerSet();
 +    containerSet.addContainer(keyValueContainer);
 +    KeyValueHandler keyValueHandler = createKeyValueHandler(containerSet);
 +    ChunkBuffer.wrap(getData());
 +    Assertions.assertThrows(IOException.class, () -> keyValueHandler.writeChunkForClosedContainer(
 +        getChunkInfo(), getBlockID(), ChunkBuffer.wrap(getData()), keyValueContainer));
 +    Assertions.assertThrows(IOException.class, () -> keyValueHandler.putBlockForClosedContainer(keyValueContainer,
 +            new BlockData(getBlockID()), 0L, true));
 +  }
 +
 +  @Test
 +  public void testWriteChunkForClosedContainer()
 +      throws IOException {
 +    ChunkBuffer writeChunkData = ChunkBuffer.wrap(getData());
 +    KeyValueContainer kvContainer = getKeyValueContainer();
 +    KeyValueContainerData containerData = kvContainer.getContainerData();
 +    closedKeyValueContainer();
 +    ContainerSet containerSet = newContainerSet();
 +    containerSet.addContainer(kvContainer);
 +    KeyValueHandler keyValueHandler = createKeyValueHandler(containerSet);
 +    keyValueHandler.writeChunkForClosedContainer(getChunkInfo(), getBlockID(), writeChunkData, kvContainer);
 +    ChunkBufferToByteString readChunkData = keyValueHandler.getChunkManager().readChunk(kvContainer,
 +        getBlockID(), getChunkInfo(), WRITE_STAGE);
 +    rewindBufferToDataStart();
 +    Assertions.assertEquals(writeChunkData, readChunkData);
-     Assertions.assertEquals(containerData.getWriteBytes(), writeChunkData.remaining());
++    Assertions.assertEquals(containerData.getStatistics().getWriteBytes(), writeChunkData.remaining());
 +    Assertions.assertEquals(containerData.getBytesUsed(), writeChunkData.remaining());
 +
 +    // Test Overwrite
 +    keyValueHandler.writeChunkForClosedContainer(getChunkInfo(), getBlockID(),
 +        writeChunkData, kvContainer);
 +    readChunkData = keyValueHandler.getChunkManager().readChunk(kvContainer,
 +        getBlockID(), getChunkInfo(), WRITE_STAGE);
 +    rewindBufferToDataStart();
 +    Assertions.assertEquals(writeChunkData, readChunkData);
-     Assertions.assertEquals(containerData.getWriteBytes(), 2L * writeChunkData.remaining());
++    Assertions.assertEquals(containerData.getStatistics().getWriteBytes(), 2L * writeChunkData.remaining());
 +    // Overwrites won't increase the bytesUsed of a Container
 +    Assertions.assertEquals(containerData.getBytesUsed(), writeChunkData.remaining());
 +
 +    // Test new write chunk after overwrite
 +    byte[] bytes = "testing write chunks with after overwrite".getBytes(UTF_8);
 +    ChunkBuffer newWriteChunkData = ChunkBuffer.allocate(bytes.length).put(bytes);
 +    newWriteChunkData.rewind();
 +
 +    // Write chunk after the previous overwrite chunk.
 +    ChunkInfo newChunkInfo = new ChunkInfo(String.format("%d.data.%d", getBlockID()
 +        .getLocalID(), writeChunkData.remaining()), writeChunkData.remaining(), bytes.length);
 +    keyValueHandler.writeChunkForClosedContainer(newChunkInfo, getBlockID(),
 +        newWriteChunkData, kvContainer);
 +    readChunkData = keyValueHandler.getChunkManager().readChunk(kvContainer,
 +        getBlockID(), newChunkInfo, WRITE_STAGE);
 +    newWriteChunkData.rewind();
 +    Assertions.assertEquals(newWriteChunkData, readChunkData);
-     Assertions.assertEquals(containerData.getWriteBytes(), 2L * writeChunkData.remaining()
++    Assertions.assertEquals(containerData.getStatistics().getWriteBytes(), 2L * writeChunkData.remaining()
 +        + newWriteChunkData.remaining());
 +    Assertions.assertEquals(containerData.getBytesUsed(), writeChunkData.remaining() + newWriteChunkData.remaining());
 +  }
 +
 +  @Test
 +  public void testPutBlockForClosedContainer() throws IOException {
 +    KeyValueContainer kvContainer = getKeyValueContainer();
 +    KeyValueContainerData containerData = kvContainer.getContainerData();
 +    closedKeyValueContainer();
 +    ContainerSet containerSet = newContainerSet();
 +    containerSet.addContainer(kvContainer);
 +    KeyValueHandler keyValueHandler = createKeyValueHandler(containerSet);
 +    List<ContainerProtos.ChunkInfo> chunkInfoList = new ArrayList<>();
 +    ChunkInfo info = new ChunkInfo(String.format("%d.data.%d", getBlockID().getLocalID(), 0), 0L, 20L);
 +
 +    chunkInfoList.add(info.getProtoBufMessage());
 +    BlockData putBlockData = new BlockData(getBlockID());
 +    putBlockData.setChunks(chunkInfoList);
 +
 +    ChunkBuffer chunkData = ContainerTestHelper.getData(20);
 +    keyValueHandler.writeChunkForClosedContainer(info, getBlockID(), chunkData, kvContainer);
 +    keyValueHandler.putBlockForClosedContainer(kvContainer, putBlockData, 1L, true);
 +    assertEquals(1L, containerData.getBlockCommitSequenceId());
 +    assertEquals(1L, containerData.getBlockCount());
 +    assertEquals(20L, containerData.getBytesUsed());
 +
 +    try (DBHandle dbHandle = BlockUtils.getDB(containerData, new OzoneConfiguration())) {
 +      long localID = putBlockData.getLocalID();
 +      BlockData getBlockData = dbHandle.getStore().getBlockDataTable()
 +          .get(containerData.getBlockKey(localID));
 +      Assertions.assertTrue(blockDataEquals(putBlockData, getBlockData));
 +      assertEquals(20L, dbHandle.getStore().getMetadataTable().get(containerData.getBytesUsedKey()));
 +    }
 +
 +    // Add another chunk and check the put block data
 +    ChunkInfo newChunkInfo = new ChunkInfo(String.format("%d.data.%d", getBlockID().getLocalID(), 1L), 20L, 20L);
 +    chunkInfoList.add(newChunkInfo.getProtoBufMessage());
 +    putBlockData.setChunks(chunkInfoList);
 +
 +    chunkData = ContainerTestHelper.getData(20);
 +    keyValueHandler.writeChunkForClosedContainer(newChunkInfo, getBlockID(), chunkData, kvContainer);
 +    keyValueHandler.putBlockForClosedContainer(kvContainer, putBlockData, 2L, true);
 +    assertEquals(2L, containerData.getBlockCommitSequenceId());
 +    assertEquals(1L, containerData.getBlockCount());
 +    assertEquals(40L, containerData.getBytesUsed());
 +
 +    try (DBHandle dbHandle = BlockUtils.getDB(containerData, new OzoneConfiguration())) {
 +      long localID = putBlockData.getLocalID();
 +      BlockData getBlockData = dbHandle.getStore().getBlockDataTable()
 +          .get(containerData.getBlockKey(localID));
 +      Assertions.assertTrue(blockDataEquals(putBlockData, getBlockData));
 +      assertEquals(40L, dbHandle.getStore().getMetadataTable().get(containerData.getBytesUsedKey()));
 +    }
 +
 +    // Replace the last chunk with a chunk of greater size, This should only update the bytesUsed with
 +    // difference in length between the old last chunk and new last chunk
 +    newChunkInfo = new ChunkInfo(String.format("%d.data.%d", getBlockID().getLocalID(), 1L), 20L, 30L);
 +    chunkInfoList.remove(chunkInfoList.size() - 1);
 +    chunkInfoList.add(newChunkInfo.getProtoBufMessage());
 +    putBlockData.setChunks(chunkInfoList);
 +
 +    chunkData = ContainerTestHelper.getData(30);
 +    keyValueHandler.writeChunkForClosedContainer(newChunkInfo, getBlockID(), chunkData, kvContainer);
 +    keyValueHandler.putBlockForClosedContainer(kvContainer, putBlockData, 2L, true);
 +    assertEquals(2L, containerData.getBlockCommitSequenceId());
 +    assertEquals(1L, containerData.getBlockCount());
 +    // Old chunk size 20, new chunk size 30, difference 10. So bytesUsed should be 40 + 10 = 50
 +    assertEquals(50L, containerData.getBytesUsed());
 +
 +    try (DBHandle dbHandle = BlockUtils.getDB(containerData, new OzoneConfiguration())) {
 +      long localID = putBlockData.getLocalID();
 +      BlockData getBlockData = dbHandle.getStore().getBlockDataTable()
 +          .get(containerData.getBlockKey(localID));
 +      Assertions.assertTrue(blockDataEquals(putBlockData, getBlockData));
 +      assertEquals(50L, dbHandle.getStore().getMetadataTable().get(containerData.getBytesUsedKey()));
 +    }
 +
 +    keyValueHandler.putBlockForClosedContainer(kvContainer, putBlockData, 2L, true);
 +    assertEquals(2L, containerData.getBlockCommitSequenceId());
 +  }
 +
 +  private boolean blockDataEquals(BlockData putBlockData, BlockData getBlockData) {
 +    return getBlockData.getSize() == putBlockData.getSize() &&
 +        Objects.equals(getBlockData.getBlockID(), putBlockData.getBlockID()) &&
 +        Objects.equals(getBlockData.getMetadata(), putBlockData.getMetadata()) &&
 +        Objects.equals(getBlockData.getChunks(), putBlockData.getChunks());
 +  }
 +
 +  private static Stream<Arguments> getNonClosedStates() {
 +    return Stream.of(
 +        Arguments.of(ContainerProtos.ContainerDataProto.State.OPEN),
 +        Arguments.of(ContainerProtos.ContainerDataProto.State.RECOVERING),
 +        Arguments.of(ContainerProtos.ContainerDataProto.State.CLOSING),
 +        Arguments.of(ContainerProtos.ContainerDataProto.State.INVALID));
 +  }
 +
 +  public KeyValueHandler createKeyValueHandler(ContainerSet containerSet)
 +      throws IOException {
 +    OzoneConfiguration conf = new OzoneConfiguration();
 +    String dnUuid = UUID.randomUUID().toString();
 +    Path dataVolume = Paths.get(tempDir.toString(), "data");
 +    Path metadataVolume = Paths.get(tempDir.toString(), "metadata");
 +    conf.set(HDDS_DATANODE_DIR_KEY, dataVolume.toString());
 +    conf.set(OZONE_METADATA_DIRS, metadataVolume.toString());
 +    MutableVolumeSet volumeSet = new MutableVolumeSet(dnUuid, conf,
 +        null, StorageVolume.VolumeType.DATA_VOLUME, null);
 +    return ContainerTestUtils.getKeyValueHandler(conf, dnUuid, containerSet, volumeSet);
 +  }
 +
 +  public void closedKeyValueContainer() {
 +    getKeyValueContainer().getContainerData().setState(ContainerProtos.ContainerDataProto.State.CLOSED);
 +  }
 +
    @Override
    protected ContainerLayoutTestInfo getStrategy() {
      return ContainerLayoutTestInfo.FILE_PER_BLOCK;
diff --cc hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/components/navBar/navBar.tsx
index 51a6c67f4e,0957f059d0..b0bdf187cb
--- a/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/components/navBar/navBar.tsx
+++ b/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/components/navBar/navBar.tsx
@@@ -18,7 -18,7 +18,7 @@@
  
  import React from 'react';
  import axios from 'axios';
--import { Layout, Menu } from 'antd';
++import {Layout, Menu} from 'antd';
  import {
    BarChartOutlined,
    ClusterOutlined,
@@@ -31,12 -31,12 +31,12 @@@
    LayoutOutlined,
    PieChartOutlined
  } from '@ant-design/icons';
--import { withRouter, Link } from 'react-router-dom';
--import { RouteComponentProps } from 'react-router';
++import {Link, withRouter} from 'react-router-dom';
++import {RouteComponentProps} from 'react-router';
  
  
  import logo from '../../logo.png';
--import { showDataFetchError } from '@/utils/common';
++import {showDataFetchError} from '@/utils/common';
  import './navBar.less';
  
  const { Sider } = Layout;
diff --cc hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/v2/components/navBar/navBar.tsx
index 1dd1ede48d,03341fd9cd..badca682ef
--- a/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/v2/components/navBar/navBar.tsx
+++ b/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/v2/components/navBar/navBar.tsx
@@@ -16,9 -16,9 +16,9 @@@
   * limitations under the License.
   */
  
--import React, { useState, useEffect, useRef } from 'react';
--import axios, { AxiosResponse } from 'axios';
--import { Layout, Menu, Spin } from 'antd';
++import React, {useEffect, useRef, useState} from 'react';
++import {AxiosResponse} from 'axios';
++import {Layout, Menu} from 'antd';
  import {
    BarChartOutlined,
    ClusterOutlined,
@@@ -31,12 -31,12 +31,12 @@@
    LayoutOutlined,
    PieChartOutlined
  } from '@ant-design/icons';
--import { useLocation, Link } from 'react-router-dom';
++import {Link, useLocation} from 'react-router-dom';
  
  
  import logo from '@/logo.png';
--import { showDataFetchError } from '@/utils/common';
--import { AxiosGetHelper, cancelRequests } from '@/utils/axiosRequestHelper';
++import {showDataFetchError} from '@/utils/common';
++import {AxiosGetHelper, cancelRequests} from '@/utils/axiosRequestHelper';
  
  import './navBar.less';
  
diff --cc hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/v2/pages/diskUsage/diskUsage.tsx
index ee6f8c6c6c,b826ea469c..7240cda156
--- a/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/v2/pages/diskUsage/diskUsage.tsx
+++ b/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/v2/pages/diskUsage/diskUsage.tsx
@@@ -16,24 -16,24 +16,20 @@@
   * limitations under the License.
   */
  
--import React, { useRef, useState } from 'react';
--import { AxiosError } from 'axios';
--import {
--  Alert, Button, Tooltip
--} from 'antd';
--import {
--  InfoCircleFilled, ReloadOutlined,
--} from '@ant-design/icons';
--import { ValueType } from 'react-select';
++import React, {useRef, useState} from 'react';
++import {AxiosError} from 'axios';
++import {Alert, Button, Tooltip} from 'antd';
++import {InfoCircleFilled, ReloadOutlined,} from '@ant-design/icons';
++import {ValueType} from 'react-select';
  
  import DUMetadata from '@/v2/components/duMetadata/duMetadata';
  import DUPieChart from '@/v2/components/plots/duPieChart';
--import SingleSelect, { Option } from '@/v2/components/select/singleSelect';
++import SingleSelect, {Option} from '@/v2/components/select/singleSelect';
  import DUBreadcrumbNav from '@/v2/components/duBreadcrumbNav/duBreadcrumbNav';
--import { showDataFetchError } from '@/utils/common';
--import { AxiosGetHelper, cancelRequests } from '@/utils/axiosRequestHelper';
++import {showDataFetchError} from '@/utils/common';
++import {AxiosGetHelper, cancelRequests} from '@/utils/axiosRequestHelper';
  
--import { DUResponse } from '@/v2/types/diskUsage.types';
++import {DUResponse} from '@/v2/types/diskUsage.types';
  
  import './diskUsage.less';
  

