This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/ozone-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3afbe9da [auto] Generated docs from Apache Ozone master 
a5e1cd0a6923d965441b1e69be9744099e045e56
3afbe9da is described below

commit 3afbe9da4bdf4b7c503ebaaf16883c407dac5c2a
Author: Github Actions <[email protected]>
AuthorDate: Wed Sep 3 12:46:32 2025 +0000

    [auto] Generated docs from Apache Ozone master 
a5e1cd0a6923d965441b1e69be9744099e045e56
---
 docs/edge/en/sitemap.xml            |   4 +-
 docs/edge/feature/topology.html     | 124 ++++++++++++++++--------------------
 docs/edge/security/securingtde.html |  12 ++--
 docs/edge/sitemap.xml               |   2 +-
 4 files changed, 63 insertions(+), 79 deletions(-)

diff --git a/docs/edge/en/sitemap.xml b/docs/edge/en/sitemap.xml
index 3c93820b..fe268710 100644
--- a/docs/edge/en/sitemap.xml
+++ b/docs/edge/en/sitemap.xml
@@ -142,7 +142,7 @@
     <lastmod>2025-08-12T20:20:41+05:30</lastmod>
   </url><url>
     <loc>/security/securingtde.html</loc>
-    <lastmod>2025-07-02T11:15:53-07:00</lastmod>
+    <lastmod>2025-09-03T00:34:09-04:00</lastmod>
     <xhtml:link
                 rel="alternate"
                 hreflang="zh"
@@ -207,7 +207,7 @@
                 />
   </url><url>
     <loc>/feature/topology.html</loc>
-    <lastmod>2025-08-13T18:57:49-07:00</lastmod>
+    <lastmod>2025-09-02T11:42:12-07:00</lastmod>
     <xhtml:link
                 rel="alternate"
                 hreflang="zh"
diff --git a/docs/edge/feature/topology.html b/docs/edge/feature/topology.html
index 21afafbd..972b100f 100644
--- a/docs/edge/feature/topology.html
+++ b/docs/edge/feature/topology.html
@@ -613,10 +613,12 @@ s=d.getElementsByTagName('script')[0];
 <li>Prioritized reads from topologically closest DataNodes (read path).</li>
 </ol>
 <h2 id="applicability-to-container-types">Applicability to Container Types</h2>
-<p>Ozone&rsquo;s topology-aware placement strategies vary by container 
replication type and state:</p>
+<p>Ozone&rsquo;s topology-aware strategies apply differently depending on the 
operation:</p>
 <ul>
-<li><strong>RATIS Replicated Containers:</strong> Ozone uses RAFT replication 
for Open containers (write), and an async replication for closed, immutable 
containers (cold data). Topology awareness placement is implemented for both 
open and closed RATIS containers, ensuring rack diversity and fault tolerance 
during both write and re-replication operations. See the <a 
href="../concept/containers.html">page about Containers</a> for more 
information related to Open vs Closed containers.</li>
+<li><strong>Write Path (Open Containers):</strong> When a client writes data, 
topology awareness is used during <strong>pipeline creation</strong> to ensure 
that the datanodes forming the pipeline are on different racks. This provides 
fault tolerance for the initial write.</li>
+<li><strong>Re-replication Path (Closed Containers):</strong> When a replica 
of a <strong>closed</strong> container is needed (due to node failure, 
decommissioning, or balancing), a topology-aware policy is used to select the 
best datanode for the new replica.</li>
 </ul>
+<p>See the <a href="../concept/containers.html">page about Containers</a> for 
more information related to Open vs Closed containers.</p>
 <h2 id="configuring-topology-hierarchy">Configuring Topology Hierarchy</h2>
 <p>Ozone determines DataNode network locations (e.g., racks) using 
Hadoop&rsquo;s rack awareness, configured via 
<code>net.topology.node.switch.mapping.impl</code> in 
<code>ozone-site.xml</code>. This key specifies a 
<code>org.apache.hadoop.net.CachedDNSToSwitchMapping</code> implementation. 
[1]</p>
 <p>Two primary methods exist:</p>
@@ -644,8 +646,7 @@ datanode103.example.com /rack2
 <h3 id="2-dynamic-list-scriptbasedmapping">2. Dynamic List: 
<code>ScriptBasedMapping</code></h3>
 <p>Uses an external script to resolve rack locations for IPs.</p>
 <ul>
-<li>
-<p><strong>Configuration:</strong> Set 
<code>net.topology.node.switch.mapping.impl</code> to 
<code>org.apache.hadoop.net.ScriptBasedMapping</code> and 
<code>net.topology.script.file.name</code> to the script&rsquo;s path. [1]</p>
+<li><strong>Configuration:</strong> Set 
<code>net.topology.node.switch.mapping.impl</code> to 
<code>org.apache.hadoop.net.ScriptBasedMapping</code> and 
<code>net.topology.script.file.name</code> to the script&rsquo;s path. [1]
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-xml" data-lang="xml"><span style="display:flex;"><span><span 
style="color:#f92672">&lt;property&gt;</span>
 </span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;name&gt;</span>net.topology.node.switch.mapping.impl<span
 style="color:#f92672">&lt;/name&gt;</span>
 </span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;value&gt;</span>org.apache.hadoop.net.ScriptBasedMapping<span
 style="color:#f92672">&lt;/value&gt;</span>
@@ -655,9 +656,8 @@ datanode103.example.com /rack2
 </span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;value&gt;</span>/etc/ozone/determine_rack.sh<span 
style="color:#f92672">&lt;/value&gt;</span>
 </span></span><span style="display:flex;"><span><span 
style="color:#f92672">&lt;/property&gt;</span>
 </span></span></code></pre></div></li>
-<li>
-<p><strong>Script:</strong> Admin-provided, executable script. Ozone passes 
IPs (up to <code>net.topology.script.number.args</code>, default 100) as 
arguments; script outputs rack paths (one per line).
-Example <code>determine_rack.sh</code>:</p>
+<li><strong>Script:</strong> Admin-provided, executable script. Ozone passes 
IPs (up to <code>net.topology.script.number.args</code>, default 100) as 
arguments; script outputs rack paths (one per line).
+Example <code>determine_rack.sh</code>:
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-bash" data-lang="bash"><span style="display:flex;"><span><span 
style="color:#75715e">#!/bin/bash
 </span></span></span><span style="display:flex;"><span><span 
style="color:#75715e"></span><span style="color:#75715e"># This is a simplified 
example. A real script might query a CMDB or use other logic.</span>
 </span></span><span style="display:flex;"><span><span 
style="color:#66d9ef">while</span> <span style="color:#f92672">[</span> $# -gt 
<span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span> ; 
<span style="color:#66d9ef">do</span>
@@ -671,75 +671,66 @@ Example <code>determine_rack.sh</code>:</p>
 </span></span><span style="display:flex;"><span>  <span 
style="color:#66d9ef">fi</span>
 </span></span><span style="display:flex;"><span>  shift
 </span></span><span style="display:flex;"><span><span 
style="color:#66d9ef">done</span>
-</span></span></code></pre></div><p>Ensure the script is executable 
(<code>chmod +x /etc/ozone/determine_rack.sh</code>).</p>
-<p><strong>Note:</strong> For production environments, implement robust error 
handling and validation in your script. This should include handling network 
timeouts, invalid inputs, CMDB query failures, and logging errors 
appropriately. The example above is simplified for illustration purposes 
only.</p>
-</li>
+</span></span></code></pre></div></li>
 </ul>
+<p>Ensure the script is executable (<code>chmod +x 
/etc/ozone/determine_rack.sh</code>).</p>
+<p><strong>Note:</strong> For production environments, implement robust error 
handling and validation in your script. This should include handling network 
timeouts, invalid inputs, CMDB query failures, and logging errors 
appropriately. The example above is simplified for illustration purposes 
only.</p>
 <p><strong>Topology Mapping Best Practices:</strong></p>
 <ul>
 <li><strong>Accuracy:</strong> Mappings must be accurate and current.</li>
 <li><strong>Static Mapping:</strong> Simpler for small, stable clusters; 
requires manual updates.</li>
 <li><strong>Dynamic Mapping:</strong> Flexible for large/dynamic clusters. 
Script performance, correctness, and reliability are vital; ensure it&rsquo;s 
idempotent and handles batch lookups efficiently.</li>
 </ul>
-<h2 id="pipeline-choosing-policies">Pipeline Choosing Policies</h2>
-<p>Ozone supports several policies for selecting a pipeline when placing 
containers. The policy for Ratis containers is configured by the property 
<code>hdds.scm.pipeline.choose.policy.impl</code> for SCM. The policy for EC 
(Erasure Coded) containers is configured by the property 
<code>hdds.scm.ec.pipeline.choose.policy.impl</code>. For both, the default 
value is 
<code>org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy</code>.</p>
-<p>These policies help optimize for different goals such as load balancing, 
health, or simplicity:</p>
+<h2 id="placement-and-selection-policies">Placement and Selection Policies</h2>
+<p>Ozone uses three distinct types of policies to manage how and where data is 
written.</p>
+<h3 id="1-pipeline-creation-policy">1. Pipeline Creation Policy</h3>
+<p>This policy selects a set of datanodes to form a new pipeline. Its purpose 
is to ensure new pipelines are internally fault-tolerant by spreading their 
nodes across racks, while also balancing the number of pipelines across the 
datanodes. This is the primary mechanism for topology awareness on the write 
path for open containers.</p>
+<p>The policy is configured by the 
<code>ozone.scm.pipeline.placement.impl</code> property in 
<code>ozone-site.xml</code>.</p>
 <ul>
-<li>
-<p><strong>RandomPipelineChoosePolicy</strong> (Default): Selects a pipeline 
at random from the available list, without considering utilization or health. 
This policy is simple and does not optimize for any particular metric.</p>
-</li>
-<li>
-<p><strong>CapacityPipelineChoosePolicy</strong>: Picks two random pipelines 
and selects the one with lower utilization, favoring pipelines with more 
available capacity and helping to balance the load across the cluster.</p>
-</li>
-<li>
-<p><strong>RoundRobinPipelineChoosePolicy</strong>: Selects pipelines in a 
round-robin order. This policy is mainly used for debugging and testing, 
ensuring even distribution but not considering health or capacity.</p>
-</li>
-<li>
-<p><strong>HealthyPipelineChoosePolicy</strong>: Randomly selects pipelines 
but only returns a healthy one. If no healthy pipeline is found, it returns the 
last tried pipeline as a fallback.</p>
+<li><strong><code>PipelinePlacementPolicy</code> (Default)</strong>
+<ul>
+<li><strong>Function:</strong> This is the default and only supported policy 
for pipeline creation. It chooses datanodes based on load balancing (pipeline 
count per node) and network topology. It filters out nodes that are too heavily 
engaged in other pipelines and then selects nodes to ensure rack diversity. 
This policy is recommended for most production environments.</li>
+<li><strong>Use Cases:</strong> General purpose pipeline creation in a 
rack-aware cluster.</li>
+</ul>
 </li>
 </ul>
-<p>These policies can be configured to suit different deployment needs and 
workloads.</p>
-<h2 
id="container-placement-policies-for-replicated-ratis-containers">Container 
Placement Policies for Replicated (RATIS) Containers</h2>
-<p>SCM uses a pluggable policy to place additional replicas of <em>closed</em> 
RATIS-replicated containers. This is configured using the 
<code>ozone.scm.container.placement.impl</code> property in 
<code>ozone-site.xml</code>. Available policies are found in the 
<code>org.apache.hadoop.hdds.scm.container.placement.algorithms</code> package 
[1, 3].</p>
-<p>These policies are applied when SCM needs to re-replicate containers, such 
as during container balancing.</p>
-<h3 id="1-scmcontainerplacementrackaware-default">1. 
<code>SCMContainerPlacementRackAware</code> (Default)</h3>
+<h3 id="2-pipeline-selection-load-balancing-policy">2. Pipeline Selection 
(Load Balancing) Policy</h3>
+<p>After a pool of healthy, open, and rack-aware pipelines has been created, 
this policy is used to <strong>select one</strong> of them to handle a 
client&rsquo;s write request. Its purpose is <strong>load balancing</strong>, 
not topology awareness, as the topology has already been handled during 
pipeline creation.</p>
+<p>The policy is configured by 
<code>hdds.scm.pipeline.choose.policy.impl</code> in 
<code>ozone-site.xml</code>.</p>
+<ul>
+<li><strong><code>RandomPipelineChoosePolicy</code> (Default):</strong> 
Selects a pipeline at random from the available list. This policy is simple and 
distributes load without considering other metrics.</li>
+<li><strong><code>CapacityPipelineChoosePolicy</code>:</strong> Picks two 
random pipelines and selects the one with lower utilization, favoring pipelines 
with more available capacity.</li>
+<li><strong><code>RoundRobinPipelineChoosePolicy</code>:</strong> Selects 
pipelines in a round-robin order. This is mainly for debugging and testing.</li>
+<li><strong><code>HealthyPipelineChoosePolicy</code>:</strong> Randomly 
selects pipelines but only returns a healthy one.</li>
+</ul>
+<p>Note: When configuring these values, include the full class name prefix: 
for example, 
<code>org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.CapacityPipelineChoosePolicy</code>.</p>
+<h3 id="3-closed-container-replication-policy">3. Closed Container Replication 
Policy</h3>
+<p>This policy selects the datanode that receives a new replica of a 
<strong>closed</strong> container, for example after a node failure or during 
balancing. It is configured using the 
<code>ozone.scm.container.placement.impl</code> property in 
<code>ozone-site.xml</code>. The available policies are:</p>
+<ul>
+<li>
+<p><strong><code>SCMContainerPlacementRackAware</code> (Default)</strong></p>
 <ul>
-<li><strong>Function:</strong> Distributes replicas across racks for fault 
tolerance (e.g., for 3 replicas, aims for at least two racks). Similar to HDFS 
placement. [1]</li>
+<li><strong>Function:</strong> Distributes the replicas of a container across 
racks for fault tolerance (e.g., for 3 replicas, it aims for at least two 
racks). Similar to HDFS placement. [1]</li>
 <li><strong>Use Cases:</strong> Production clusters needing rack-level fault 
tolerance.</li>
-<li><strong>Configuration:</strong>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-xml" data-lang="xml"><span style="display:flex;"><span><span 
style="color:#f92672">&lt;property&gt;</span>
-</span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;name&gt;</span>ozone.scm.container.placement.impl<span
 style="color:#f92672">&lt;/name&gt;</span>
-</span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;value&gt;</span>org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware<span
 style="color:#f92672">&lt;/value&gt;</span>
-</span></span><span style="display:flex;"><span><span 
style="color:#f92672">&lt;/property&gt;</span>
-</span></span></code></pre></div></li>
-<li><strong>Best Practices:</strong> Requires accurate topology mapping.</li>
 <li><strong>Limitations:</strong> Designed for single-layer rack topologies 
(e.g., <code>/rack/node</code>). Not recommended for multi-layer hierarchies 
(e.g., <code>/dc/row/rack/node</code>) as it may not interpret deeper levels 
correctly. [1]</li>
 </ul>
-<h3 id="2-scmcontainerplacementrandom">2. 
<code>SCMContainerPlacementRandom</code></h3>
+</li>
+<li>
+<p><strong><code>SCMContainerPlacementRandom</code></strong></p>
 <ul>
-<li><strong>Function:</strong> Randomly selects healthy, available DataNodes 
meeting basic criteria (space, no existing replica), ignoring rack topology. 
[1, 4]</li>
-<li><strong>Use Cases:</strong> Small/dev/test clusters, or if rack fault 
tolerance for closed replicas isn&rsquo;t critical.</li>
-<li><strong>Configuration:</strong>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-xml" data-lang="xml"><span style="display:flex;"><span><span 
style="color:#f92672">&lt;property&gt;</span>
-</span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;name&gt;</span>ozone.scm.container.placement.impl<span
 style="color:#f92672">&lt;/name&gt;</span>
-</span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;value&gt;</span>org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRandom<span
 style="color:#f92672">&lt;/value&gt;</span>
-</span></span><span style="display:flex;"><span><span 
style="color:#f92672">&lt;/property&gt;</span>
-</span></span></code></pre></div></li>
-<li><strong>Best Practices:</strong> Not for production needing rack failure 
resilience.</li>
+<li><strong>Function:</strong> Randomly selects healthy, available DataNodes, 
ignoring rack topology. [3]</li>
+<li><strong>Use Cases:</strong> Small/dev/test clusters where rack fault 
tolerance is not critical.</li>
 </ul>
-<h3 id="3-scmcontainerplacementcapacity">3. 
<code>SCMContainerPlacementCapacity</code></h3>
+</li>
+<li>
+<p><strong><code>SCMContainerPlacementCapacity</code></strong></p>
 <ul>
-<li><strong>Function:</strong> Selects DataNodes by available capacity (favors 
lower disk utilization) to balance disk usage. [5, 6]</li>
+<li><strong>Function:</strong> Selects DataNodes by available capacity (favors 
lower disk utilization) to balance disk usage across the cluster. [4]</li>
 <li><strong>Use Cases:</strong> Heterogeneous storage clusters or where even 
disk utilization is key.</li>
-<li><strong>Configuration:</strong>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-xml" data-lang="xml"><span style="display:flex;"><span><span 
style="color:#f92672">&lt;property&gt;</span>
-</span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;name&gt;</span>ozone.scm.container.placement.impl<span
 style="color:#f92672">&lt;/name&gt;</span>
-</span></span><span style="display:flex;"><span>  <span 
style="color:#f92672">&lt;value&gt;</span>org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementCapacity<span
 style="color:#f92672">&lt;/value&gt;</span>
-</span></span><span style="display:flex;"><span><span 
style="color:#f92672">&lt;/property&gt;</span>
-</span></span></code></pre></div></li>
-<li><strong>Best Practices:</strong> Prevents uneven node filling.</li>
-<li><strong>Interaction:</strong> This container placement policy selects 
datanodes by randomly picking two nodes from a pool of healthy, available nodes 
and then choosing the one with lower utilization (more free space). This 
approach aims to distribute containers more evenly across the cluster over 
time, favoring less utilized nodes without overwhelming newly added nodes.</li>
 </ul>
+</li>
+</ul>
+<p>Note: When configuring these values, include the full class name prefix: 
for example, 
<code>org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementCapacity</code>.</p>
 <h2 id="optimizing-read-paths">Optimizing Read Paths</h2>
 <p>Enable by setting <code>ozone.network.topology.aware.read</code> to 
<code>true</code> in <code>ozone-site.xml</code>. [1]</p>
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-xml" data-lang="xml"><span style="display:flex;"><span><span 
style="color:#f92672">&lt;property&gt;</span>
@@ -749,18 +740,11 @@ Example <code>determine_rack.sh</code>:</p>
 </span></span></code></pre></div><p>This directs clients (replicated data) to 
read from topologically closest DataNodes, reducing latency and cross-rack 
traffic. Recommended with accurate topology.</p>
 <h2 id="summary-of-best-practices">Summary of Best Practices</h2>
 <ul>
-<li>
-<p><strong>Accurate Topology:</strong> Maintain an accurate, up-to-date 
topology map (static or dynamic script); this is foundational.</p>
-</li>
-<li>
-<p><strong>Replicated (RATIS) Containers:</strong> For production rack fault 
tolerance, use <code>SCMContainerPlacementRackAware</code> (mindful of its 
single-layer topology limitation) or <code>SCMContainerPlacementCapacity</code> 
(verify rack interaction) over <code>SCMContainerPlacementRandom</code>.</p>
-</li>
-<li>
-<p><strong>Read Operations:</strong> Enable 
<code>ozone.network.topology.aware.read</code> with accurate topology.</p>
-</li>
-<li>
-<p><strong>Monitor &amp; Validate:</strong> Regularly monitor placement and 
balance; use tools like Recon to verify topology awareness.</p>
-</li>
+<li><strong>Accurate Topology:</strong> Maintain an accurate, up-to-date 
topology map (static or dynamic script); this is foundational.</li>
+<li><strong>Pipeline Creation:</strong> For production environments, use the 
default <code>PipelinePlacementPolicy</code> for 
<code>ozone.scm.pipeline.placement.impl</code> to ensure both rack fault 
tolerance and pipeline load balancing.</li>
+<li><strong>Pipeline Selection:</strong> The default 
<code>RandomPipelineChoosePolicy</code> for 
<code>hdds.scm.pipeline.choose.policy.impl</code> is suitable for general load 
balancing.</li>
+<li><strong>Read Operations:</strong> Enable 
<code>ozone.network.topology.aware.read</code> with accurate topology.</li>
+<li><strong>Monitor &amp; Validate:</strong> Regularly monitor placement and 
balance; use tools like Recon to verify topology awareness.</li>
 </ul>
 <h2 id="references">References</h2>
 <ol>
@@ -788,7 +772,7 @@ Example <code>determine_rack.sh</code>:</p>
 <footer class="footer">
   <div class="container">
     <span class="small text-muted">
-      Version: 2.1.0-SNAPSHOT, Last Modified: August 13, 2025 <a 
class="hide-child link primary-color" 
href="https://github.com/apache/ozone/commit/cc2a42d80cabcf29ae60a87096fe59ef369a5d4a";>cc2a42d80c</a>
+      Version: 2.1.0-SNAPSHOT, Last Modified: September 2, 2025 <a 
class="hide-child link primary-color" 
href="https://github.com/apache/ozone/commit/3948ca052d32071da8f10765fefdc39824d94342";>3948ca052d</a>
     </span>
   </div>
 </footer>
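For reference, the rack-mapping script described in the topology.html changes 
above follows a simple contract: Ozone passes datanode IPs as arguments (up to 
`net.topology.script.number.args` per invocation) and expects one rack path 
per line of output. A minimal sketch of that contract as a shell function; the 
subnets and rack names here are illustrative assumptions, and a real script 
would query a CMDB or inventory system instead:

```shell
# Hypothetical stand-in for /etc/ozone/determine_rack.sh.
# Contract: one rack path printed per input IP, in argument order;
# unknown hosts fall back to /default-rack.
determine_rack() {
  for ip in "$@"; do
    case "$ip" in
      10.0.1.*) echo /rack1 ;;
      10.0.2.*) echo /rack2 ;;
      *)        echo /default-rack ;;
    esac
  done
}
```

Because the mapping result is cached by `CachedDNSToSwitchMapping`, the 
function must be deterministic for a given IP; emitting fewer lines than 
inputs would misalign the mapping.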
diff --git a/docs/edge/security/securingtde.html 
b/docs/edge/security/securingtde.html
index dc38281f..ad0e6472 100644
--- a/docs/edge/security/securingtde.html
+++ b/docs/edge/security/securingtde.html
@@ -641,10 +641,10 @@ Ranger KMS supports both 128 and 256 bits. Hadoop KMS is 
also commonly used with
 <p>For example:</p>
 <div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
hadoop key create enckey -size <span style="color:#ae81ff">256</span> -cipher 
AES/CTR/NoPadding -description <span style="color:#e6db74">&#34;Encryption key 
for my_bucket&#34;</span>
 </span></span></code></pre></div><h3 
id="creating-an-encrypted-bucket">Creating an Encrypted Bucket</h3>
-<p>Use the Ozone shell <code>ozone sh bucket create</code> command with the 
<code>-k</code> (or <code>--key</code>) option to specify the encryption 
key:</p>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --key &lt;key_name&gt; 
/&lt;volume_name&gt;/&lt;bucket_name&gt;
+<p>Use the Ozone shell <code>ozone sh bucket create</code> command with the 
<code>-k</code> (or <code>--bucketkey</code>) option to specify the encryption 
key:</p>
+<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --bucketkey &lt;key_name&gt; 
/&lt;volume_name&gt;/&lt;bucket_name&gt;
 </span></span></code></pre></div><p>For example:</p>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --key enckey /vol1/encrypted_bucket
+<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --bucketkey enckey /vol1/encrypted_bucket
 </span></span></code></pre></div><p>Now, all data written to 
<code>/vol1/encrypted_bucket</code> will be encrypted at rest. As long as the 
client is configured correctly to use the key, such encryption is completely 
transparent to the end users.</p>
 <h3 id="performance-optimization-for-tde">Performance Optimization for TDE</h3>
 <p>Since Ozone leverages Hadoop&rsquo;s encryption library, performance 
optimization strategies similar to HDFS encryption apply:</p>
@@ -703,11 +703,11 @@ at 
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.&lt;init&gt;(OpensslAesCtrC
 <li><strong>Create the bucket under the <code>/s3v</code> volume:</strong>
 The <code>/s3v</code> volume is the default volume for S3 buckets.</li>
 </ol>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --key &lt;key_name&gt; /s3v/&lt;bucket_name&gt; 
--layout<span style="color:#f92672">=</span>OBJECT_STORE
+<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --bucketkey &lt;key_name&gt; /s3v/&lt;bucket_name&gt; 
--layout<span style="color:#f92672">=</span>OBJECT_STORE
 </span></span></code></pre></div><ol start="2">
 <li><strong>Alternatively, create an encrypted bucket elsewhere and link 
it:</strong></li>
 </ol>
-<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --key &lt;key_name&gt; 
/&lt;volume_name&gt;/&lt;bucket_name&gt; --layout<span 
style="color:#f92672">=</span>OBJECT_STORE
+<div class="highlight"><pre tabindex="0" 
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
 class="language-shell" data-lang="shell"><span style="display:flex;"><span>  
ozone sh bucket create --bucketkey &lt;key_name&gt; 
/&lt;volume_name&gt;/&lt;bucket_name&gt; --layout<span 
style="color:#f92672">=</span>OBJECT_STORE
 </span></span><span style="display:flex;"><span>  ozone sh bucket link 
/&lt;volume_name&gt;/&lt;bucket_name&gt; /s3v/&lt;link_name&gt;
 </span></span></code></pre></div><p>Note 1: An encrypted bucket cannot be 
created via S3 APIs. It must be done using Ozone shell commands as shown above.
 After creating an encrypted bucket, all the keys added to this bucket using 
s3g will be encrypted.</p>
@@ -799,7 +799,7 @@ See <a href="../feature/prefixfso.html">Prefix based File 
System Optimization</a
 <footer class="footer">
   <div class="container">
     <span class="small text-muted">
-      Version: 2.1.0-SNAPSHOT, Last Modified: July 2, 2025 <a 
class="hide-child link primary-color" 
href="https://github.com/apache/ozone/commit/f2ddbf6a474e0e0f2f16e4f79053ec5e985733c8";>f2ddbf6a47</a>
+      Version: 2.1.0-SNAPSHOT, Last Modified: September 3, 2025 <a 
class="hide-child link primary-color" 
href="https://github.com/apache/ozone/commit/696978cc1e4639b98d2ad3f08f297999509bb5a6";>696978cc1e</a>
     </span>
   </div>
 </footer>
diff --git a/docs/edge/sitemap.xml b/docs/edge/sitemap.xml
index 458f80c7..5ce29858 100644
--- a/docs/edge/sitemap.xml
+++ b/docs/edge/sitemap.xml
@@ -4,7 +4,7 @@
   <sitemap>
     <loc>/en/sitemap.xml</loc>
     
-      <lastmod>2025-08-13T18:57:49-07:00</lastmod>
+      <lastmod>2025-09-03T00:34:09-04:00</lastmod>
     
   </sitemap>
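Taken together, the policy properties cited in the topology.html changes above 
all live in `ozone-site.xml`. A consolidated sketch using the default values 
the generated doc names; treat the exact defaults as release-dependent 
assumptions rather than a definitive configuration:

```xml
<!-- Sketch only: property names and defaults as cited in the doc above. -->
<property>
  <name>hdds.scm.pipeline.choose.policy.impl</name>
  <value>org.apache.hadoop.hdds.scm.pipeline.choose.algorithms.RandomPipelineChoosePolicy</value>
</property>
<property>
  <name>ozone.scm.container.placement.impl</name>
  <value>org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware</value>
</property>
<property>
  <name>ozone.network.topology.aware.read</name>
  <value>true</value>
</property>
```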
   


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
