Martin Meredith wrote:
> I've had a problem where somehow, I've managed to end up with
> approximately 1,000,000 session files on my server.
That is definitely a very large number of files all in one place!

> Due to the large amount of files, the current crontab to clear them
> was unable to deal with it (xargs would fail to take in the HUGE
> list of files).
>
> It seems that rather than using xargs (even with the limit), that
> using the -exec option of find might be a little bit more sane?

I realize that the cron script has already changed in later releases,
but just the same this shouldn't have been a problem with the previous
version that used xargs. Using xargs is an older but quite acceptable
way to deal with a very large list of filenames. I think there was some
different problem related to this that wasn't diagnosed.

In other words, while 'find . -exec command {} +' is more efficient
than 'find . -print0 | xargs -0 command', the two are equivalent in
functionality. And 'find . -conditions -delete' is better still for
safety and efficiency, but the 'xargs -0' version remains functionally
correct. Today it is best to use -delete since it is now available in
find. But before the -delete and "{} +" forms existed, the
'find . -print0 | xargs -0 ...' form was the standard of excellence,
and it will handle directories with millions of files in them.

In yet other words, I feel certain that the actual problem was
something else and not the use of 'find . ... -print0 | xargs -0 ...'
there. For example, perhaps memory was so limited that the Linux kernel
out-of-memory killer (the OOM killer) came into play? That is just an
example, not necessarily the specific problem you hit.
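To make the comparison concrete, here is a rough sketch of the three
equivalent forms. The directory, the 'sess_*' name pattern, and the
1440-minute (24-hour) cutoff are only illustrative assumptions, not
taken from your actual cron job, so adjust them to match:

  # Oldest portable form: NUL-separated names batched by xargs.
  # xargs splits the huge list into many rm invocations, so the
  # kernel's argument-length limit is never hit.
  find /var/lib/php/sessions -type f -name 'sess_*' -mmin +1440 -print0 \
      | xargs -0 -r rm -f

  # Same thing, letting find do the batching itself ("{} +"):
  find /var/lib/php/sessions -type f -name 'sess_*' -mmin +1440 -exec rm -f {} +

  # Simplest and most efficient where find supports -delete:
  find /var/lib/php/sessions -type f -name 'sess_*' -mmin +1440 -delete

And if you want to test the OOM killer theory, the kernel logs the kill;
something along the lines of 'dmesg | grep -i "out of memory"' (the exact
message wording varies by kernel version) would show whether it fired.

Bob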