Hi,

I don't have much knowledge about Hadoop/HDFS, so my question may be simple,
or not...

I have a Hadoop/HDFS environment, but my disks are not very big.

One application is writing to files, and sometimes large files fill up the
disk.

So, my question is:

Is there any way to limit the maximum size of files written to HDFS?

I was thinking of something like:
When a file reaches a size of >= 1 GB, new data written to the file would
cause the earliest data in the file to be deleted. That way the file size
would always stay bounded, like a rolled log file.
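To make the behaviour I mean concrete, here is a small local-filesystem sketch in Python (the tiny byte limit and file name are just for illustration; in my case the limit would be 1 GB, and I don't know whether this maps to HDFS at all, since HDFS files are append-only):

```python
import os

def rolling_append(path, data, max_bytes):
    """Append data to the file at `path`, but if the result would exceed
    max_bytes, discard the oldest bytes so only the newest max_bytes remain."""
    old = b""
    if os.path.exists(path):
        with open(path, "rb") as f:
            old = f.read()
    combined = old + data
    # keep only the newest max_bytes, dropping the earliest-written data
    combined = combined[-max_bytes:]
    with open(path, "wb") as f:
        f.write(combined)
```

For example, appending b"0123456789" and then b"abcdefghij" with a 16-byte limit would leave only the last 16 bytes in the file, the oldest 4 having been dropped.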

How could I do this?

Regards,
Cesar Jorge
