On Mon, Mar 20, 2017 at 12:17:39PM -0300, Noilson Caio wrote:
> 1 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - ( 5 levels ) No
> problems
10 to the 5th power (100,000) strings generated.  Sloppy, but viable on
today's computers.  You're relying on your operating system to allow an
extraordinarily large set of arguments to processes.  I'm guessing Linux.

> 2 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - ( 6 levels )
> We have a problem - "Argument list too long".

You have two problems.  The first is that you are generating 10^6
(1 million) strings in memory, all at once.  The second is that you are
attempting to pass all of those strings as arguments to a single mkdir
process.  Apparently even your system won't permit that.

> 3 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - ( 7
> levels ) - Oops, no more "Argument list too long"; now we have "Cannot
> allocate memory".

10 million strings, all at once.  Each one is ~15 bytes (counting the
slashes and the trailing NUL), so you're looking at something like 150
megabytes.

This is not a bash bug.  It's a problem with your approach.  You wouldn't
call it a bug in C if you wrote a C program that tried to allocate 150
megabytes of variables and got an "out of memory" error as a result.  The
same applies to any other programming language.

What you need to do is think about how big a chunk of memory (and
argument list) you can actually handle in a single call to mkdir -p, and
only do that many at once.  Call mkdir multiple times, in order to get
the full task done.  Don't assume bash will handle that for you.
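For example, something along these lines (a minimal, untested sketch;
adjust how many levels you expand per call to taste) keeps each mkdir
invocation down to 10^4 (10,000) arguments, which is only on the order
of 150 kilobytes of argument list:

    # Loop over the first three levels in the shell; let brace
    # expansion generate the remaining four levels (10,000 paths)
    # for each mkdir -p call.
    for a in {0..9}; do
      for b in {0..9}; do
        for c in {0..9}; do
          mkdir -p "$a/$b/$c/"{0..9}/{0..9}/{0..9}/{0..9}
        done
      done
    done

That's 1,000 modest mkdir calls instead of one impossibly large one, and
mkdir -p creates all of the intermediate directories along the way.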