smlHao opened a new issue, #1026: URL: https://github.com/apache/incubator-uniffle/issues/1026
### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

### Search before asking

- [X] I have searched in the [issues](https://github.com/apache/incubator-uniffle/issues?q=is%3Aissue) and found no similar issues.

### Describe the bug

Hi, when joining a huge table with another huge table, the shuffle server has blocked threads. Is this expected?

Server conf:

```
rss.rpc.server.port 20000
rss.jetty.http.port 20001
rss.storage.basePath /app/rss-0.7.1/data
rss.storage.type MEMORY_LOCALFILE_HDFS
rss.coordinator.quorum 172.100.3.70:19999,172.100.3.71:19999,172.100.3.72:19999
rss.server.disk.capacity 50g
rss.server.flush.thread.alive 30
rss.server.flush.threadPool.size 10
rss.server.buffer.capacity 40g
rss.server.read.buffer.capacity 20g
rss.server.heartbeat.interval 10000
rss.rpc.message.max.size 1073741824
rss.server.preAllocation.expired 120000
rss.server.commit.timeout 600000
rss.server.app.expired.withoutHeartbeat 120000
rss.server.flush.cold.storage.threshold.size 512m
```

RSS client conf:

```
spark.shuffle.manager=org.apache.spark.shuffle.RssShuffleManager
spark.rss.coordinator.quorum=172.100.3.70:19999,172.100.3.71:19999,172.100.3.72:19999
# in production need to change to MEMORY_LOCALFILE_HDFS
spark.rss.storage.type=MEMORY_LOCALFILE_HDFS
spark.rss.remote.storage.path=hdfs://ns1/rss/sml
```

The executors have no daemon threads holding, and there is no error log.

### Affects Version(s)

0.7.1

### Uniffle Server Log Output

_No response_

### Uniffle Engine Log Output

_No response_

### Uniffle Server Configurations

_No response_

### Uniffle Engine Configurations

_No response_

### Additional context

_No response_

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!
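One way to turn a report like this into actionable evidence is to attach a JVM thread dump of the shuffle server taken with the standard JDK `jstack` tool, then count how many threads are actually in the `BLOCKED` state. The sketch below fabricates a minimal two-thread sample dump purely for illustration (the thread names and ids are made up, not taken from a real Uniffle server); in practice you would replace the heredoc with `jstack <shuffle-server-pid> > /tmp/ss.dump`:

```shell
#!/bin/sh
# Real usage (requires the shuffle server's JVM PID):
#   jstack <shuffle-server-pid> > /tmp/ss.dump
# For illustration, write a minimal fabricated jstack-style dump instead:
cat > /tmp/ss.dump <<'EOF'
"Grpc-0" #42 daemon prio=5 tid=0x1 nid=0x2 waiting for monitor entry
   java.lang.Thread.State: BLOCKED (on object monitor)
"FlushEventThreadPool-1" #43 daemon prio=5 tid=0x3 nid=0x4 runnable
   java.lang.Thread.State: RUNNABLE
EOF

# Count threads stuck in BLOCKED (waiting to enter a synchronized block):
grep -c 'Thread.State: BLOCKED' /tmp/ss.dump   # prints 1 for this sample
```

Pasting the full dump (or the `BLOCKED` stack traces and the monitors they wait on) into the "Uniffle Server Log Output" section would let maintainers tell ordinary lock contention apart from a genuine deadlock.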
