Re: [BUG] Associative array initial reference name is made available in another context
> > What makes you think so?  Variables are always visible in invoked
> > functions, unless you shadow them using local/declare/typeset.

Thank you very much for this information, I didn't know invoked functions
inherited their parent functions' variables.
I understand better now the use of declare -n "reference", to make a copy
of the referenced variable.

Best,

On Sat, Jul 1, 2023 at 10:38 PM Lawrence Velázquez wrote:
> On Sat, Jul 1, 2023, at 3:55 PM, Top Dawn wrote:
> > I believe there is a bug with associative arrays, when once referenced in
> > another function through the -n option, both the new reference name and
> > the old one are made available.
> >
> > ```bash
> > #!/bin/bash
> > function my_function(){
> >     declare -A my_array
> >     my_array=(["one"]="one")
> >     other_function "my_array"
> > }
> >
> > function other_function(){
> >     declare -n other_array="${1-}"
> >     echo "${other_array["one"]}"
> >     echo "${my_array["one"]}"
> > }
> >
> > my_function
> > ```
> >
> > will output:
> >
> > ```bash
> > one
> > one
> > ```
> >
> > I believe this to be a bug.
>
> What makes you think so?  Variables are always visible in invoked
> functions, unless you shadow them using local/declare/typeset.
>
> % cat /tmp/foo.bash; echo
> function my_function(){
>     declare -A my_array
>     my_array=(["one"]="one")
>     other_function
> }
>
> function other_function(){
>     echo "${my_array["one"]}"
> }
>
> my_function
>
> % bash /tmp/foo.bash
> one
>
> --
> vq
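To make vq's point concrete, here is a minimal sketch (the function and
variable names are mine, not from the thread) showing that a called function
sees its caller's variables through bash's dynamic scoping unless it declares
a local of the same name:

```bash
#!/bin/bash

outer() {
    local greeting="from outer"
    sees_it
    shadows_it
}

# No local declaration here, so name resolution falls through to the
# caller's "greeting".
sees_it() {
    echo "sees_it:    $greeting"
}

# Declaring a local of the same name shadows the caller's variable.
shadows_it() {
    local greeting="from shadows_it"
    echo "shadows_it: $greeting"
}

outer
# Output:
#   sees_it:    from outer
#   shadows_it: from shadows_it
```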
Re: Enable compgen even when programmable completions are not available?
Chet Ramey writes:
> On 6/29/23 11:16 PM, Eli Schwartz wrote:
>
>> I assume that, given the option exists in the configure script, there
>> are people who will want to use the option made available to them. One
>> reason might be because they are configuring it for a system that isn't
>> fussed about using bash for an interactive shell (either it is
>> administrated via non-interactive means, or simply because the preferred
>> interactive site shell is e.g. zsh). In such cases, a rationale that
>> readily comes to mind is "this user wanted a smaller, leaner bash binary
>> by disabling unimportant bits that they do not use".
>
> Maybe. I don't think it's that big a win.
>
>> And because this is conditional on readline, which is usually an
>> external library dependency (a global system policy decision), reducing
>> the number of shared libraries the dynamic loader has to deal with might
>> be especially interesting.
>
> The dynamic loader has to know where the library is. If you don't call
> readline, it shouldn't ever have to actually map it into the process.

I think it still has to map it even with lazy binding, but I'm not really
worried about this point much.

>> (This is all theorizing -- I quite like bash as an interactive shell and
>> have no intention of building systems with readline disabled. It is
>> nonetheless true that the topic came up because there are Gentoo users
>> who apparently decided to try to do so.)
>
> Yes, but the question is whether or not that makes sense in the modern age,
> and whether there should be extra features to accommodate that decision.
>
>> The thing is, does it really matter? I think there's a larger issue
>> here, which I mentioned in the Gentoo bug report but probably makes
>> sense to copy/paste here:
>>
>>> The problem with compgen is that it is only available for use when
>>> bash is configured with --enable-progcomp / --enable-readline, which
>>> feels like a powerful argument that script authors are not allowed to
>>> assume that it will exist, regardless of how useful it may be in
>>> non-programmable-completion contexts.
>>>
>>> Maybe the answer is to ask that it always be available as a builtin,
>>> even when the programmable completion system isn't enabled.
>
> But this isn't right. You have to explicitly disable those configuration
> options -- they're on by default. You don't have to do anything to get
> readline support compiled into bash. You have to do things to disable it.
> If you take that extra configuration step to disable it, there are going
> to be consequences.

There are a few open questions here:

1. Is the inconsistency between functions and variables desirable here?
   (i.e. functions being something we can handle via `declare`, but
   variables needing `compgen`?)

2. Is something on-by-default enough of a signal that you should rely on
   it being available? Can I suggest editing the configure help text to
   make clear you strongly recommend it be enabled?

3. On the Gentoo side, why do we need to turn it off while bootstrapping
   prefix? I strongly suspect we can at least fall back to the bundled
   bash readline for bootstrapping purposes and this isn't a problem in
   reality anymore, but I'll check.

>> So: can I? Are my bash scripts valid scripts if they use compgen, or
>> should I be concerned that when I publish them for wide adoption, people
>> are going to report bugs to me that it is broken on their niche system
>> configuration which they are positive they have a good reason for?
>
> You can always check whether compgen is available and fall back to some
> other solution if it's not.
>
> compgen -v >/dev/null 2>&1 && have_compgen=1

>> Should I document in the project readme that in addition to needing a
>> certain minimum version of bash, "you also need to make sure that
>> programmable completions are enabled when compiling your bash binary"?
>
> No. You need to say that users should make sure they haven't disabled
> them when compiling their bash binary.

Then the configure help text should reflect that. But needing compgen for
this is still odd, in any case. Or it feels that way to me.

>> Should I eschew compgen and rely on eval-using hacks like the one Kerin
>> described?
>
> It's your call, of course. You just have to decide whether or not it's
> worth the effort to accommodate non-default option choices. What about
> aliases? Arrays? Brace expansion? Process substitution? Extglobs? All of
> those can be compiled out. What's the `bash core' you're going to assume?

Only one of these requires an external (modulo the bundled copy) library.
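Chet's `compgen -v` probe can be wrapped into a small helper with a fallback
for builds where compgen was compiled out. A sketch under those assumptions;
the `declare -p` parsing is my own fallback, not the eval-based hack Kerin
described:

```bash
#!/bin/bash
# Probe once: does this bash build have the compgen builtin?
if compgen -v >/dev/null 2>&1; then
    have_compgen=1
else
    have_compgen=
fi

# List variable names that start with a prefix. Uses compgen when
# available; otherwise parses `declare -p` output (this fallback is an
# assumption of the sketch, not something from the thread).
list_vars() {
    local prefix=$1 line name
    if [[ $have_compgen ]]; then
        compgen -v "$prefix"
    else
        declare -p | while IFS= read -r line; do
            [[ $line == "declare "* ]] || continue   # skip continuation lines
            line=${line#declare }                    # drop the builtin name
            line=${line#* }                          # drop the attribute flags
            name=${line%%=*}                         # keep the variable name
            [[ $name == "$prefix"* ]] && printf '%s\n' "$name"
        done
    fi
}

list_vars BASH_
```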
Re: [BUG] Associative array initial reference name is made available in another context
On Sun, Jul 02, 2023 at 03:04:25PM +0200, Top Dawn wrote:
> > > What makes you think so?  Variables are always visible in invoked
> > > functions, unless you shadow them using local/declare/typeset.
>
> Thank you very much for this information, I didn't know invoked functions
> inherited their parent functions' variables.
> I understand better now the use of declare -n "reference", to make a copy
> of the referenced variable.

I wouldn't call it a copy. It's more like a symbolic link. It's a reference
to another variable, by name instead of by memory location.

Given this function:

f1() {
    local v=in_f1
    declare -n nameref=v
    f2
}

"nameref" refers to "v" by name, so anything that tries to use "nameref"
will refer to "v" instead. It will use the same name resolution rules that
are always used.

If we have f2 defined like this:

f2() {
    printf '%s\n' "$nameref"
}

Then f2 will try to resolve "nameref". There isn't a local variable by that
name, so it will look in the caller's scope, and so on. If we run f1 at
this point, we get "in_f1" as output.

Now, what if we define f2 like this:

f2() {
    local v=in_f2
    printf '%s\n' "$nameref"
}

Now, when we run f1, we get "in_f2" as output. Why?

Bash resolves "nameref" by searching for a variable by that name, first in
f2's local variables, then in the caller's local variables, then in the
caller's caller's, and so on up to the global scope, until it finds a
matching variable name, or concludes that there isn't one.

Bash resolves nameref as a name reference to "v". So now it begins a search
for a variable named "v". Once again, it uses the local variables of f2
first, then the local variables of f1, and so on. Since we have a local
variable named "v" in f2, the resolution stops there, and that's the one
it uses.

unicorn:~$ f2() { local v=in_f2; printf '%s\n' "$nameref"; }
unicorn:~$ f1
in_f2

Bash does not offer a way to refer to an instance of a variable at a given
scope. There is no way to say, inside of f1, that the nameref should refer
to f1's instance of v, and no other instance. Likewise, there is no way to
say "I want you, the function I am calling, to return a value inside this
variable vvv, which is my variable, and whose name I am providing to you
as an argument" without the function potentially choosing a different
instance of vvv.

That's why it's so important to (try to) use function-specifically-named
variables any time namerefs are in the picture.

f1() {
    local _f1_v=in_f1
    declare -n _f1_nameref=_f1_v
}

And so on. Yes, it's ugly as sin, but it's the only reasonable way to
avoid disaster.
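Carrying that prefixed-name convention through to a complete example: a
minimal runnable sketch (the function names and the hostname example are
mine, not from the thread) in which the callee returns a value through a
nameref, and all of its locals carry a function-specific prefix so the
reference cannot accidentally resolve to one of them:

```bash
#!/bin/bash

# Callee: stores its result in whatever variable name the caller passed.
# Every local here carries a _get_host_ prefix, so the nameref can only
# resolve to a variable in the caller's scope (or beyond).
_get_host() {
    declare -n _get_host_out=$1    # nameref to the caller's variable
    local _get_host_tmp
    _get_host_tmp=$(hostname)
    _get_host_out=$_get_host_tmp
}

# Caller: its variable is prefixed too, so nothing here can shadow a
# name used inside _get_host.
main() {
    local _main_result
    _get_host _main_result
    printf 'host is %s\n' "$_main_result"
}

main
```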
Re: maybe a bug in bash?
On 2023-06-30 at 15:49 +0200, Sebastian Luhnburg wrote:
> First, in my LPIC-1 course the lecturer tell me it is better (not
> binding) to deny SSH login for root users (especially for the user with
> the name root). The reason is simple: decrease the attack surface. Yes,
> a secure password needs a lot of time to be cracked via brute force, but
> if the attacker did not know the username, which is needed to login, the
> attacker must get two things. For my opinion, the decrease the attack
> surface is a good approach.

It's not a bad approach. But with "PermitRootLogin prohibit-password" it's
not even possible to attempt guessing the root password (with a random
password like you use, it won't be guessed, but it will produce cleaner
logs). SSH keys are really the way to go for ssh connections.

> If I use SSH keys, it is a decentral approach. Every user must manage
> his keys, which allows to connect to the servers.

Every user creates his own key. If Bob loses his laptop ssh key, only that
key needs to be replaced: no change for Alice, and no need to change the
passwords for all the servers in the company.

What you should have is a process to change the keys (new employee,
reinstalled computer, lost laptop, employee leaves the company...). This
could be an automated system that propagates the changes to all servers
(usual systems are ansible, chef, puppet...), or the servers could be
fetching the keys on the fly from a centralized place (generally LDAP)
through an AuthorizedKeysCommand script.
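For concreteness, here is a hypothetical AuthorizedKeysCommand helper along
the lines described above. The LDAP host, base DN, attribute name, and
install path are all assumptions of this sketch, not recommendations from
the thread:

```bash
#!/bin/bash
# Print the SSH public keys for the user sshd passes as $1, fetched from
# a central LDAP directory.  Adjust host, base DN, and attribute to the
# local schema; this is only an illustrative sketch.
#
# Wired into sshd_config roughly as:
#   PermitRootLogin prohibit-password
#   AuthorizedKeysCommand /usr/local/bin/ldap-authorized-keys %u
#   AuthorizedKeysCommandUser nobody

user=$1

ldapsearch -x -LLL \
    -H ldaps://ldap.example.com \
    -b "ou=people,dc=example,dc=com" \
    "(uid=${user})" sshPublicKey \
  | sed -n 's/^sshPublicKey: //p'
```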
Document m=1 m=2; echo $m result
man page says:

       A variable may be assigned to by a statement of the form

              name=[value]

       If value is not given, the variable is assigned the null string.
       All values undergo tilde expansion, parameter and variable
       expansion...

OK, but do please mention somewhere that "if the variable is set more
than once within the same statement, the final value is used."

$ m=1 m=2; echo $m
#(Yes, acts the same as:)
$ m=1; m=2; echo $m

version 5.2.15(1)-release
Re: Document m=1 m=2; echo $m result
On Sun, Jul 2, 2023, at 8:30 PM, Dan Jacobson wrote:
> man page says:
>
>        A variable may be assigned to by a statement of the form
>
>               name=[value]
>
>        If value is not given, the variable is assigned the null string.
>        All values undergo tilde expansion, parameter and variable
>        expansion...
>
> OK, but do please mention somewhere that "if the variable is set more
> than once within the same statement, the final value is used."
>
> $ m=1 m=2; echo $m
> #(Yes, acts the same as:)
> $ m=1; m=2; echo $m

This is stated under "Simple Command Expansion".

    When a simple command is executed, the shell performs the
    following expansions, assignments, and redirections, from
    left to right, in the following order.

    [...]

    4. The text after the = in each variable assignment undergoes
       tilde expansion, parameter expansion, command substitution,
       arithmetic expansion, and quote removal before being
       assigned to the variable.

https://www.gnu.org/software/bash/manual/html_node/Simple-Command-Expansion.html

--
vq
Re: Document m=1 m=2; echo $m result
On Sun, Jul 02, 2023 at 08:56:08PM -0400, Lawrence Velázquez wrote:
> On Sun, Jul 2, 2023, at 8:30 PM, Dan Jacobson wrote:
> > OK, but do please mention somewhere that "if the variable is set more
> > than once within the same statement, the final value is used."
> >
> > $ m=1 m=2; echo $m
> > #(Yes, acts the same as:)
> > $ m=1; m=2; echo $m

It's more than this.

> This is stated under "Simple Command Expansion".
>
>     When a simple command is executed, the shell performs the
>     following expansions, assignments, and redirections, from
>     left to right, in the following order.
>
>     [...]
>
>     4. The text after the = in each variable assignment undergoes
>        tilde expansion, parameter expansion, command substitution,
>        arithmetic expansion, and quote removal before being
>        assigned to the variable.
>
> https://www.gnu.org/software/bash/manual/html_node/Simple-Command-Expansion.html

Each variable assignment within the simple command is done, one at a time,
left to right. It's not "the last one wins".

unicorn:~$ m=1 m=$((m+3)); echo "$m"
4

The first assignment is done before the value of "m" is used in the second
assignment. They can build upon each other. This feature is commonly used
in the following constructs:

extract=${input#*<} extract=${extract%>*}

data=$(cat file; printf x) data=${data%x}
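A small self-contained illustration of the same left-to-right behaviour
(the sample input string is mine, not from the thread):

```bash
#!/bin/bash
# Both assignments belong to one simple command; the second assignment
# already sees the value produced by the first.
input='<hello>'
extract=${input#*<} extract=${extract%>*}
echo "$extract"    # prints: hello

# The same pattern preserves trailing newlines that $(...) would strip:
data=$(printf 'two\nlines\n\n'; printf x) data=${data%x}
printf '%s' "$data"
```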
Re: Document m=1 m=2; echo $m result
> "LV" == Lawrence Velázquez writes: LV> This is stated under "Simple Command Expansion". OK good. No more issue.