Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-redhat-linux-gnu'
-DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib -D_GNU_SOURCE
-DRECYCLES_PIDS -DDEFAULT_PATH_VALUE='/usr/local/bin:/usr/bin' -O2 -g
-pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
uname output: Linux SPFBL-POC-CENTOS-7 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue
Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-redhat-linux-gnu
Bash Version: 4.2
Patch Level: 46
Release Status: release
Description:
Hello bash crew.
My name is Noilson Caio, and I believe there is something strange going on
in bash. I work with huge folder structures and a large number of files
every day, and the best way to describe the 'problem' is with an example.
Example task: build a folder structure using the digits 0-9 at 2 levels.
Something like this:
.
|-- 0
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 1
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 2
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 3
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 4
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 5
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 6
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 7
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
|-- 8
| |-- 0
| |-- 1
| |-- 2
| |-- 3
| |-- 4
| |-- 5
| |-- 6
| |-- 7
| |-- 8
| `-- 9
`-- 9
|-- 0
|-- 1
|-- 2
|-- 3
|-- 4
|-- 5
|-- 6
|-- 7
|-- 8
`-- 9
110 directories, 0 files
For this kind of job I've been using 'curly braces' '{}' (brace expansion)
for almost 10 years. The answer to the task above is: mkdir -p {0..9}/{0..9}/
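As a rough sanity check (the printf lines below are only an illustration;
they count the expanded words, nothing is created), each extra {0..9} level
multiplies the expansion by 10:
printf '%s\n' {0..9}/{0..9}/ | wc -l                # 2 levels -> 100 words
printf '%s\n' {0..9}/{0..9}/{0..9}/{0..9}/ | wc -l  # 4 levels -> 10000 words
# So 6 levels is 1,000,000 words and 7 levels is 10,000,000 words, all built
# by bash in memory before mkdir is ever executed.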
Well, so far so good. But when I grow the argument list (folder levels),
strange things happen =]. Let me show some examples and facts.
1 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - (5 levels): no
problems.
2 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - (6 levels):
now we have a problem, "Argument list too long". Not really a problem for
bash itself, but a problem for the system operator. I know that is an
ARG_MAX limitation, and when it happens the operator can work around it with
other tools or by splitting the job (one such split is sketched below). It
doesn't make much sense to keep adding arguments to this task, but let's go
on;
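A possible split (a sketch, assuming the goal is the same 6-level tree):
iterate over the first two levels so each mkdir call receives only 10,000
arguments, well under ARG_MAX:
for a in {0..9}; do
    for b in {0..9}; do
        # brace expansion here yields 10^4 paths per call instead of 10^6
        mkdir -p "$a/$b"/{0..9}/{0..9}/{0..9}/{0..9}/
    done
done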
3 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - (7
levels): oops, we no longer get "Argument list too long"; now we get "Cannot
allocate memory".
Strace sample:
access("/usr/bin/mkdir", X_OK) = 0 stat("/usr/bin/mkdir",
{st_mode=S_IFREG|0755, st_size=39680, ...}) = 0 geteuid() = 0 getegid() = 0
getuid() = 0 getgid() = 0 access("/usr/bin/mkdir", R_OK) = 0
stat("/usr/bin/mkdir", {st_mode=S_IFREG|0755, st_size=39680, ...}) = 0
stat("/usr/bin/mkdir", {st_mode=S_IFREG|0755, st_size=39680, ...}) = 0
geteuid() = 0 getegid() = 0 getuid() = 0 getgid() = 0
access("/usr/bin/mkdir", X_OK) = 0 stat("/usr/bin/mkdir",
{st_mode=S_IFREG|0755, st_size=39680, ...}) = 0 geteuid() = 0 getegid() = 0
getuid() = 0 getgid() = 0 access("/usr/bin/mkdir", R_OK) = 0
rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 clone(child_stack=0,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7fc12637e9d0) = -1 ENOMEM (Cannot allocate memory) write(2,
"-bash: fork: Cannot allocate mem"..., 36) = 36
Basically all RAM is eaten; after that, bash is no longer able to fork.
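A rough back-of-the-envelope (my assumption about sizes, not a measurement):
7 levels expand to 10^7 words of the form "0/1/2/3/4/5/6/", about 15 bytes
of string each, before counting the argv pointer array and bash's internal
word-list nodes:
echo $(( 10**7 * 15 )) bytes   # 150000000, roughly 143 MB of path text alone
# The per-word list structures and malloc overhead for each of the
# 10,000,000 words presumably account for the rest of the RAM consumed
# before the clone() above fails with ENOMEM.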
4 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/
- (8 levels or more): in this case all RAM and swap (if there is any) get
consumed, and it only stops when the kernel OOM killer sends a signal to
kill the bash process. strace shows exhaustive brk() calls. Maybe the limit
being assumed here is virtual memory, which by default is unlimited.
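One way to contain this (a sketch; the size below is only illustrative): cap
the shell's virtual memory in a subshell before running the command, so the
expansion fails with an allocation error instead of dragging the machine
through swap and into the OOM killer:
( ulimit -v 1048576   # cap the subshell at ~1 GB of address space (KB units)
  mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ )
# expected: the expansion aborts with an allocation error inside the subshell
# long before swap fills up or the OOM killer gets involved.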
PS:
Maximum RAM tested: 16 GB
Other bash versions tested:
--