On 6/7/22 7:57 AM, Gergely wrote:
>> Because you haven't forced bash to write outside its own address space or
>> corrupt another area on the stack. This is a resource exhaustion issue,
>> no more.
>
> I did force it to write out of bounds, hence the segfault.
That's backwards. You got a SIGSEGV, but that doesn't mean you forced bash
to write beyond its address space. You get SIGSEGV when you exceed your
stack or VM resource limits. Given the nature of the original script, it's
probably the former.

> Not really, a programmer can't know how large the stack is and how many
> more recursions bash can take. This is also kernel/distro/platform
> dependent. I get that it's a hard limit to hit, but to say the programmer
> has complete control is not quite true.

True, the programmer can't know the stack size. But in a scenario where you
really need to recurse hundreds or thousands of times (is there one?), the
programmer can try to increase the stack size with `ulimit -s' and warn the
user if that fails.

> Sure, for unmitigated disasters of code like infinite recursions, I agree
> with you. This problem is not about that though. It's about a bounded -
> albeit large - number of recursions.

This is not an example of a bounded number of recursions, since the second
process sends a continuous stream of SIGUSR1s.

> For the sake of example, consider a program with a somewhat slow signal
> handler. This program might be forced to segfault by another program that
> can send it large amounts of signals in quick succession.

This is another example of recursive execution that results in a stack size
resource limit failure, and wouldn't be helped by any of the things we're
talking about -- though there is an EVALNEST_MAX define that could.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
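[Editor's note] The `ulimit -s' suggestion in the reply above can be sketched as a short guard before deep recursion. This is a hedged illustration, not code from the thread; the 65536 kB value is arbitrary, and whether the raise succeeds depends on the hard limit set by the kernel/distro.

```shell
#!/usr/bin/env bash
# Sketch: try to raise the soft stack limit before recursing deeply,
# and warn the user if that fails. 65536 (kB) is illustrative only.
if ! ulimit -s 65536 2>/dev/null; then
    echo "warning: could not raise stack limit; deep recursion may fail" >&2
fi
# ... deep recursion would follow here ...
```

The script degrades gracefully either way: the recursion still runs, but the user has been told the stack headroom could not be increased.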
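[Editor's note] Besides the compile-time EVALNEST_MAX define mentioned above, bash also has a documented runtime knob, the FUNCNEST variable, which bounds function-call nesting so runaway recursion aborts with a shell error instead of growing until the stack resource limit delivers SIGSEGV. A minimal sketch (my addition, not from the thread):

```shell
#!/usr/bin/env bash
# FUNCNEST (documented in the bash manual) sets a maximum function
# nesting level; calls past it abort the current command with
# "maximum function nesting level exceeded" instead of recursing
# until the stack limit is hit.
recurse() { recurse; }
FUNCNEST=100
recurse                 # aborted by bash once the nesting limit is hit
echo "recursion stopped, shell still alive"
```

Note this bounds shell *function* nesting, not the trap-handler nesting driven by a stream of signals, so it is an analogy to EVALNEST_MAX rather than a fix for the scenario in the thread.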