Try increasing the heap size of the client via HADOOP_CLIENT_OPTS. The default
is 128M IIRC. You can bump it up to 1G; that might improve the performance.
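Something like the following, for example. The -Xmx1g value and the path/ACL
spec are just placeholders; adjust them for your cluster.

```shell
# Raise the client-side JVM heap for the shell command (default is small,
# ~128M). This only affects the client process, not the NameNode/DataNodes.
export HADOOP_CLIENT_OPTS="-Xmx1g"

# Then re-run the recursive setfacl. The path and ACL spec below are
# hypothetical examples, not taken from your setup.
hdfs dfs -setfacl -R -m user:someuser:r-x /data/big-dir
```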

On Tue, Jan 16, 2018 at 10:03 PM, ping wang <[email protected]> wrote:

> Hi advisers,
> We use "hdfs setfacl -R" for file ACL control. As the data directory is
> large, with 60,000+ sub-directories and files, the command is very
> time-consuming. It does not finish within hours, and we did not expect
> it to take several days.
> Are there any settings that can help improve this?
> Thanks a lot for any help!
>
>
