ChenSammi commented on code in PR #4912:
URL: https://github.com/apache/hadoop/pull/4912#discussion_r1145661765


##########
hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md:
##########
@@ -251,9 +251,52 @@ please raise your issues with them.
       </description>
     </property>
 
+    <property>
+      <name>fs.oss.fast.upload.buffer</name>
+      <value>disk</value>
+      <description>
+        The buffering mechanism to use.
+        Values: disk, array, bytebuffer, array_disk, bytebuffer_disk.
+
+        "disk" will use the directories listed in fs.oss.buffer.dir as
+        the location(s) to save data prior to being uploaded.
+
+        "array" uses arrays in the JVM heap
+
+        "bytebuffer" uses off-heap memory within the JVM.
+
+        Both "array" and "bytebuffer" will consume memory in a single stream
+        up to the number of blocks set by:
+
+            fs.oss.multipart.upload.size * fs.oss.upload.active.blocks.
+
+        If using either of these mechanisms, keep this value low.
+
+        The total number of threads performing work across all threads is set by
+        fs.oss.multipart.download.threads, with fs.oss.max.total.tasks values
+        setting the number of queued

Review Comment:
   It's fine to share the same thread pool between download and upload. The
   explanation here just doesn't carry that message clearly; I suggest
   improving it.
   
   Here is an example:
   Currently, fast upload shares the same thread pool with download. The thread
   pool size is specified in "fs.oss.multipart.download.threads".
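
   To make the per-stream memory bound in the quoted description concrete, here is a rough sketch. The property values are illustrative assumptions, not Hadoop defaults; check the actual cluster configuration.
   
   ```java
   public class OssBufferMath {
       public static void main(String[] args) {
           // Illustrative values only; actual settings come from
           // fs.oss.multipart.upload.size and fs.oss.upload.active.blocks.
           long multipartUploadSize = 10L * 1024 * 1024; // assumed 10 MB part size
           int activeBlocks = 4;                         // assumed active blocks per stream
   
           // Upper bound on memory a single output stream may hold when
           // using the "array" or "bytebuffer" buffering mechanisms.
           long maxBufferedBytes = multipartUploadSize * activeBlocks;
           System.out.println(maxBufferedBytes + " bytes");
       }
   }
   ```
   
   With these assumed values a single stream may buffer up to 40 MB, which is why the description advises keeping the block count low when buffering in memory.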



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

