Date: Wed, 6 Jul 2022 01:34:53 +0300
From: Yair Lenga <yair.le...@gmail.com>
Message-ID: <6b7e3c85-5fcd-459e-a41c-e2803b0e7...@gmail.com>
| Function main () {
| Local x. # x is local
| For x in a b ; do process $x ; done
| }
|
| Vs.
| # x is global, all function will see it.
| For x in a b ; do process $x ; done

That is not how local variables work...   Consider:

cat /tmp/script; echo ===========================
func1() {
        echo In func1 x=$x
}
func2() {
        local x=foo
        echo In func2 x=$x
}
func3() {
        local x=bar
        echo In func3 x=$x
        func1
}
x=anything
echo Globally x=$x
func1
func2
func3
echo Globally x=$x
===========================

Then

bash /tmp/script
Globally x=anything
In func1 x=anything
In func2 x=foo
In func3 x=bar
In func1 x=bar
Globally x=anything

Note the call of func1 from func3, and which x value it sees.

The (ideal) shell model (IMO) for variables is that all variables are
global, always.  If we don't have that, then it is hard to see how to
make things like OPTIND or IFS (PATH as well) local and have them work
as expected, or to explain how those work differently from the way you
would expect other variables to work.

What local should do is arrange for any changes to a variable listed as
local to be undone as soon as the function that declared it local is no
longer active (however that happens).  So when func3() makes x local,
the old value of x is saved somewhere, x is set (to bar in the above
script) and remains that way while func1 runs, and when func3 is done,
x gets restored to its previous value (the effect of the local ends).
(There is a small sketch below of why this model is exactly what a
local IFS needs.)  The model bash uses isn't exactly that - it is much
harder to explain (and to my mind irrational, but that's neither here
nor there) - but it isn't all that far away either.  In neither of
these are local variables in bash anything like local variables in
languages designed as programming languages.

Remember that bash (and all of the other posix style shells, there are
quite a few) is primarily a shell - an environment for users to type
commands into.  It has its script ability from the genius of Ken
Thompson, who (about 50 years ago, when this idea was revolutionary)
recognised that if users could put the commands they habitually entered
into a script, and have the shell simply run the script, they wouldn't
need to type the same command sequences over and over again.  That
seems simple, and obvious, today; it wasn't then - that's not how other
contemporary OSes worked.

Users typing commands make decisions based upon what happened in
earlier commands, so scripts needed to be able to do that as well;
hence we gained if (and later the looping commands - originally it was
just if and goto).  Those handle any anticipated decisions that users
need to make, but users also deal with unanticipated problems - things
that can go wrong which weren't planned for, that no-one had considered
might happen.  The typical user reaction to one of those, when it
happens interactively, is to just stop, and either think or seek help -
certainly not to just plow on with whatever was supposed to happen
next, which is what a script would do.  That's where -e (errexit) comes
from: when something unanticipated fails, stop.  Note the
"unanticipated" - when something that we expect might fail does fail,
we have already provided the alternate code sequence for that, so we
don't stop.
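To make the IFS point above concrete, here is a minimal sketch of why
the save-and-restore model is what a local IFS needs (the function
words() is just something invented for illustration):

words() {
        local IFS=:     # in effect here, and in anything called from
                        # here; undone again when words() returns
        set -- $1       # unquoted on purpose: split $1 at each ':'
        echo "$# fields:" "$@"
}

words /usr/bin:/bin     # prints: 2 fields: /usr/bin /bin
set -- /usr/bin:/bin
echo "$# word(s) outside"       # prints: 1 word(s) outside, as IFS
                                # was restored when words() returned

OPTIND works the same way: a function can declare "local OPTIND" and
run its own getopts loop without clobbering its caller's option
parsing state.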
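Back to -e: a sketch of the anticipated/unanticipated distinction (the
pattern, file, and directory names here are all invented, standing in
for whatever commands actually matter):

set -e

# Anticipated: we expect that grep might find nothing, and have
# provided the alternative, so a failure here does not trigger -e.
if grep -q somepattern somefile
then
        echo found
else
        echo not found, carrying on
fi

# Also anticipated: the || supplies the handling for any failure.
mkdir -p /tmp/workdir || { echo cannot make workdir >&2; exit 1; }

# Unanticipated: nothing here expects cd to fail, so if it does,
# -e stops the script.
cd /tmp/workdir

Once every command is handled in one of the first two styles, -e has
nothing left to do, which is rather the point made below.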
This all sounds simple, but when (as you are discovering now, I think)
you start to look at all of the weird cases, it isn't so simple.  The
effect was that the implementation was a bit strange, and didn't do
things quite the way that people often thought it did - but that was
the implementation, and hence the standard, and now it is more or less
locked in.  Other than in the simplest cases, -e is useless (the pps
below has an example).

But the solution to this isn't your idea, which will (almost
inevitably) end up being just as convoluted as -e, and just as weird
(though differently), as you try to cope with all the odd cases and
make them behave as people expect.  Instead (if you're writing a shell
script, for bash or any other shell), the solution is, as (I think)
Greg suggested, simply to anticipate that anything might fail, and to
decide what should happen in each case if it does fail (which is
anything from "I don't care" to "abandon the entire script", with a
whole range of possibilities in between) - the if and || styles
sketched earlier.  If you do that, then neither -e nor your proposed
option has anything to do; they're both no-ops, and the code always
behaves as implemented by its author.

But since you seem to be attempting to emulate features of some other
languages, languages not primarily designed to be shells, the better
question is why you're not simply programming in one of those other
languages.  There are lots, fully compiled, interpreted, and
semi-compiled, to choose from.  Pick the one best suited for your
application, and use that.  From the way this application has been
described, that's very unlikely to be any kind of shell script.

Basically, if a program is complex enough that you need to start out by
writing it as an application, then the shell (and its language) is not
going to be the right solution.  On the other hand, if you start out
giving commands on the command line, and find that you're repeating a
similar sequence multiple times, that is a perfect place for a shell
script.  Often, once it exists, it will grow, gain more features, and
become more complex than anything you ever would have typed - but
everything in it (except perhaps some of the control logic) would be
things that you would, or could, have entered on the command line by
hand.

Rather than trying to force bash to provide the solution that you need,
just use something which is already suitable.  Long term, you'll be
happier.  So will everyone else, who won't need to cope with attempting
to understand what your changes mean, or how they might work, or not.

kre

ps: note how I avoid showing any prompts, or a #! line (and hence the
path to bash), to spare people here from discovering that the quaint
notion that /bin/bash is something they can expect to work isn't
correct.
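pps: for anyone who wants one concrete example of the weird cases
referred to above, here is a small sketch.  The behaviour shown is what
POSIX requires: -e is ignored for every command of a function (or other
compound command) executed as the condition of an if (among other
contexts), however deeply buried the failing command is:

set -e
f() {
        false           # one might expect the script to stop here
        echo still here
}
if f; then echo f succeeded; fi # prints both messages: within the if
                                # condition, -e is off for all of f
f                               # but here the same false is fatal
echo never reached

The same false is harmless or fatal depending upon how f's caller
happened to invoke it, which is the kind of thing that makes -e so
hard to use for anything non-trivial.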