Hi,
I have command completion in my bash command line, but I need to input
a literal tab at the command line. Is there a way to do so?
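One approach, a minimal sketch assuming the default readline bindings: press
Ctrl-V (quoted-insert) and then the Tab key to insert a literal tab, or use
ANSI-C quoting in the command itself:
$ grep $'\t' file.txt    # file.txt is hypothetical; $'\t' expands to a literal tab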
--
Regards,
Peng
Hi,
I have a directory named '\E' (two letters, rather than a single
special character). I have the following $PS1 variable.
$ echo $PS1
${debian_chroot:+($debian_chroot)}\u@\h:\w\$
When my current directory is '\E', the prompt shows a special
character (I think that it should be the special ch
On Sat, Jul 10, 2010 at 9:52 PM, Chet Ramey wrote:
> On 7/10/10 9:57 PM, Peng Yu wrote:
>> Hi,
>>
>> I have a directory named '\E' (two letters, rather than a single
>> special character). I have the following $PS1 variable.
>>
>> $ echo $P
On Sat, Jul 10, 2010 at 10:00 PM, Peng Yu wrote:
> On Sat, Jul 10, 2010 at 9:52 PM, Chet Ramey wrote:
>> On 7/10/10 9:57 PM, Peng Yu wrote:
>>> Hi,
>>>
>>> I have a directory named '\E' (two letters, rather than a single
>>> special charact
The character $ is treated specially between a pair of double quotes.
echo "$PATH"
If I really want to print $, I need to say
echo "\$PATH"
Could anybody let me know the complete set of characters that need to
be escaped (prepended with a backslash) between a pair of double quotes if
I really w
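For reference, a minimal sketch: inside double quotes only $, `, \, and "
keep a special meaning (plus ! when history expansion is active in an
interactive shell), so those are the ones to backslash-escape:
$ printf '%s\n' "\$PATH \` \\ \""
$PATH ` \ "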
Hi,
I think that this may not be a unique naming convention for bash
script filenames, but I use the following.
For an executable bash script I use the suffix .sh. For a bash script
that is only source-able but not directly runnable, I use the suffix .bashrc.
People may use different conventions. I just want
Hi,
I have some filenames that contain the character ^J. I cannot figure out
a way to input such a character. Does anybody know if it is possible
to input ^J?
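A minimal sketch using ANSI-C quoting, where $'\n' produces the ^J (newline)
character (the filename foo/bar is hypothetical):
$ touch -- $'foo\nbar'
$ ls -- $'foo\nbar'
Interactively, typing Ctrl-V and then Ctrl-J should also insert the character
literally, assuming the default quoted-insert binding.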
--
Regards,
Peng
Hi,
Although echo is sufficient most of the time, my understanding is that
printf may be better for certain situations (for example, formatting
the width of a number). The manual doesn't explicitly mention in what
situations it is better to use printf than to use echo. I think that
this might have
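For instance, a zero-padded fixed width is straightforward with printf but
not with echo (a minimal sketch):
$ printf '%05d\n' 42
00042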
Hi,
The variable f keeps its last value after the for loop finishes. Is
there a way to declare it as a local variable, so that it disappears
after the for loop is finished? (I could unset it, but I want to know if
it can be a local variable.)
$ for f in a b; do echo $f; done
a
b
$ echo $f
b
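One workaround sketch: wrap the loop in a function and declare the variable
local (the function name g is hypothetical; in a fresh shell f is reported as
unset afterwards):
$ g() { local f; for f in a b; do echo "$f"; done; }
$ g
a
b
$ echo "${f-unset}"
unset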
--
Reg
Hello All,
I have the following script and output. The man page says "Return a
status of 0 or 1 depending on the evaluation of the conditional
expression expression." Therefore, I thought that the two printf
statements should print 1 and 0 respectively. But both of them print
0. I'm wondering w
Hi,
The following example returns the exit status of the last command in a
pipe. I'm wondering if there is a way to propagate a non-zero exit status
through a pipe. That is, if any command in a pipe returns a
non-zero status, I'd like the whole pipe to return a non-zero status.
$ cat main.sh
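A minimal sketch of two common approaches, set -o pipefail and the
PIPESTATUS array:
$ set -o pipefail
$ false | cat; echo "$?"
1
$ set +o pipefail
$ false | cat; echo "${PIPESTATUS[0]}"
1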
Here,
I have the following working sql script, which takes /dev/stdin as
input. Then I want to convert it to a here document. But it doesn't
work, as shown below.
I think that this may not be a sqlite3 problem. Rather, it may be
because I am not using here documents and pipes correctly. Could any bash
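For comparison, a minimal here-document sketch that feeds SQL to sqlite3
directly (test.db and the table name are hypothetical):
$ sqlite3 test.db <<'EOF'
CREATE TABLE IF NOT EXISTS t (x INTEGER);
INSERT INTO t VALUES (1);
SELECT * FROM t;
EOF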
Hi,
I'm wondering if there is a widely accepted coding style of bash scripts.
lug.fh-swf.de/vim/vim-bash/StyleGuideShell.en.pdf
I've seen the following two styles. Which one is more widely accepted?
if [ -f "$file" ]; then
do something
fi
if [ -f "$file" ];
then
do something
fi
--
Regards,
Pen
Hi,
stat --printf "%y %n\n" `find . -type f -print`
I could use the following trick to stat each file separately. But I
would prefer to stat all the files at once. I'm wondering if there is an
easy way to convert the strings returned by find if there are
special characters such as spaces by adding '\
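A sketch that avoids word splitting entirely by letting find run stat itself
(GNU stat assumed for --printf):
$ find . -type f -exec stat --printf '%y %n\n' {} +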
> I suppose the first thing needed to make that work, and maybe the only
> thing needed to make that work, is agreement on the name of a search path
> environment variable that enable can use to find loadable builtins.
Why not just use an environment variable such as LOADABLES_PATH (just
like the
The dirname loadable gives the following error. I think that coreutils'
dirname's convention is better. Should this be considered a bug to
fix?
$ dirname -- -a
dirname: usage: dirname string
$(type -P dirname) -- -a
.
--
Regards,
Peng
sleep 10
#echo "$?"
)
$ ./main_INT.sh
^C
$ source enable.sh
$ enable sleep
$ help sleep
sleep: sleep seconds[.fraction]
Suspend execution for specified period.
sleep suspends execution for a minimum of SECONDS[.FRACTION] seconds.
On Mon, Dec 17, 2018 at 1:57 PM Peng Yu wrote:
>
Hi,
I cannot mkdir -p . in /tmp/ via the loadable mkdir. What is the
difference between /tmp/ and other directories? I am on Mac OS X. Is
this a bug in mkdir?
$ cd /tmp
$ mkdir -p -- .
-bash: mkdir: .: Operation not permitted
$ cd ~/
$ mkdir -p -- .
--
Regards,
Peng
Hi,
I'd like to compile hashlib.c to try its main(). But I got the
following error. What are the correct commands to compile it? Thanks.
$ gcc -DPROGRAM='"bash"' -DCONF_HOSTTYPE='"x86_64"'
-DCONF_OSTYPE='"darwin17.7.0"'
-DCONF_MACHTYPE='"x86_64-apple-darwin17.7.0"' -DCONF_VENDOR='"apple"'
-DLOCAL
Hi,
[[ -v 1 ]] does not work for $1.
$ [ -v 1 ]; echo "$?"
1
$ set -- a
$ [ -v 1 ]; echo "$?"
1
Although [[ -z ${1+s} ]] and (($#)) work for testing whether $1 is set,
neither of them is uniformly better performance-wise. In this case,
should [[ -v 1 ]] be supported?
set -- $(seq 1)
time for ((i=
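For reference, a minimal sketch of the two tests (the function names f and g
are hypothetical):
f() { if (( $# )); then echo "has args"; fi; }
g() { if [[ -n ${1+s} ]]; then echo "has args"; fi; }
f a    # prints "has args"
g      # prints nothing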
On Thu, Dec 27, 2018 at 3:19 PM Martijn Dekker wrote:
>
> Op 27-12-18 om 19:22 schreef Chet Ramey:
> > On 12/26/18 10:49 PM, Peng Yu wrote:
> >
> >> Although [[ -z ${1+s} ]] and (($#)) works for testing if $1 is set,
> >> neither of them are uniformly b
On Thu, Dec 27, 2018 at 12:27 PM Chet Ramey wrote:
>
> On 12/26/18 4:31 PM, Peng Yu wrote:
> > Hi,
> >
> > I'd like to compile hashlib.c to try its main(). But I got the
> > following error. What is the correct commands to compile it? Thanks.
>
> Think a
> I don't believe that at all. The number of positional parameters is kept
> anyway. It's not recalculated when you compare it to another number, so
> it's just as fast as a simple comparison of two integers.
Getting the number $# is slow.
> And even if it weren't -- if performance is *that* impo
On Thu, Dec 27, 2018 at 7:37 PM G. Branden Robinson
wrote:
>
> At 2018-12-27T18:39:26-0600, Peng Yu wrote:
> > What I meant in my original email is that I want something for testing
> > if there is a command line argument (one or more, the exact number
> > does not mat
We are talking about unit testing in the bash C source code, not bash scripts.
On Thu, Dec 27, 2018 at 8:03 PM G. Branden Robinson
wrote:
>
> At 2018-12-27T17:34:49-0800, Eduardo Bustamante wrote:
> > On Thu, Dec 27, 2018 at 5:15 PM Peng Yu wrote:
> > (...)
> > > S
> You're whacking moles. Use a profiler. That's what they're for.
I've already shown that $() is a major cause of slowdown, and I have
reduced its usage in my code and significantly improved the
performance. Nevertheless, that doesn't mean it is not
necessary to systematical
> A profiler is exactly what you need here. You should profile your
> script and understand the stuff that actually matters for your goals.
> Otherwise you're just chasing unimportant things.
Again, my goal is not to profile a specific bash script. The goal is
to see what features make bash only f
> That code hasn't really changed in almost twenty years. All the testing
> was done long ago.
Do you keep all the testing code in the bash repository? Or do you keep
the testing code separately from the bash source? Given the frugal
testing code that is in the bash source, it doesn't seem that the
Hi,
I see things like `cd builtins && $(MAKE) ...` in the Makefiles in the
bash source code. GNU Make has the -C option for entering a
directory and running make. Is the reason to cd and then make for
compatibility with other make implementations that don't support -C? Thanks.
--
Regards,
Peng
On Wed, Dec 26, 2018 at 11:35 AM Chet Ramey wrote:
>
> On 12/24/18 10:35 PM, Peng Yu wrote:
> > dirname loadable gives the following error. I think the coreutils'
> > direname's convention is better. Should it be considered as a bug to
> > fix?
> >
> >
> Have you tried 'make test'?
No, I didn't. I didn't know it was a target. I just followed the
README in that directory.
--
Regards,
Peng
The following test cases show that the variable length can
significantly affect the runtime in bash. But the variable length
doesn't seem to have a significant effect in some other interpreted
languages, such as Python.
I can understand that variable length will slow down the runtime, as
bash is
Hi,
It is not uncommon to see the same name used to define functions
in different .c files in the bash source code.
For example, sh_single_quote is defined in both lib/readline/shell.c
and lib/sh/shquote.c with the exact same signature. The two pieces of
code are slightly different. Do they do th
> What would you say the "suggested improvement" is here?
This is implied. If the majority of bash developers agree that identical
function names are not good, then what I found could be
turned into an explicit suggestion.
Since there may be a good reason, I don't want to pretend that
> There is probably no easy regex to match strings bash will tolerate as
> a function name without error. The accepted names vary in several
> contexts.
>
> http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_05
>
> "The function is named fname; the application shall
> "Not uncommon" is stretching it, since it happens in only one place:
> lib/readline/shell.c.
No, it is not uncommon. See the analysis of duplicated function/macro
names and where they appear. There are around ~100 of them. Note that
this analysis is not very accurate, but the ballpark estimate s
>
>
> > https://pastebin.com/cV1jP41Y
>
> Really? What is your analysis? There are 100 duplicate global symbols
> shared between bash and other libraries? Or is it your assertion that
> one should never use the same symbol names, unconditionally? You're not
> making much of a point here.
It is th
See the following for the difference. I'd consider the behavior of
4.4.23 to be correct.
How was this bug introduced? Should there be a test case to cover this case?
$ cat main_debug.sh
#!/usr/bin/env bash
# vim: set noexpandtab tabstop=2:
echo $BASH_VERSION
declare -- null="@()"
declare --
> The bash-4.4 code only worked the way you want it by chance. There was a
bug that was fixed in January, 2017, the result of
> http://lists.gnu.org/archive/html/bug-bash/2017-01/msg00018.html
> that uncovered the behavior you're complaining about.
This only explains where the change of behavior
Hi,
I see these global or static variables (1st column) used by only one
function (2nd column). Some are from bash, some are from the libraries
that bash depends on.
It seems problematic to declare variables global/static but use
them in only one function. Should these variables be made loc
When I use the loadable cat, I may get the following error. The input
is a fifo in this specific case.
cat: cannot open /tmp/tmp.VXkbqFlPtH: Interrupted system call
So far, I cannot make a minimal script that demonstrates the problem.
But if I replace it with coreutils cat in my code, the problem i
Hi,
GLOBAL_COMMAND is mentioned as a global variable. But I don't find it.
Is it renamed to something else?
eval.c
276-/* Call the YACC-generated parser and return the status of the parse.
277- Input is read from the current input stream (bash_input). yyparse
278: leaves the parsed command i
> grep global_command *.?
GLOBAL_COMMAND is uppercase. But the actual variable name
global_command is in lowercase.
I think that GLOBAL_COMMAND should be changed to global_command in the comment.
--
Regards,
Peng
> That's a documentation convention - the all-caps in the docstring calls
> your attention to the need to search case-insensitively for the actual
> variable, while spelling it case-sensitively would make it blend into
> the sentence and make it harder to realize that the sentence is indeed
> point
Hi,
I see many variables declared with the "register" keyword. I know
its purpose is to tell the compiler to always access the corresponding
memory without assuming that previously accessed values are preserved.
This is usually to deal with some external devices.
But I don't understand why it is usefu
> No, that is what volatile means. The register keyword is just an
> optimisation hint, and is mostly ignored by the compiler.
If it is ignored anyway, why is "register" used in many places in the
code? Thanks.
--
Regards,
Peng
Hi,
I deleted the file parser-built, and bash still compiles; an empty
parser-built file is generated upon compilation. What is the
purpose of this file? Should it be deleted? Thanks.
--
Regards,
Peng
Hi,
yacc_EOF is mentioned in parse.y in something like this
%left '&' ';' '\n' yacc_EOF
| error yacc_EOF
But I don't find where it is defined similarly to other tokens like BAR_AND.
%token GREATER_BAR BAR_AND
Where is yacc_EOF defined?
(y.tab.c and y.tab.h are files generated by bison. so y
On Wed, Feb 6, 2019 at 4:49 PM Eric Blake wrote:
>
> On 2/6/19 4:18 PM, Peng Yu wrote:
> > Hi,
> >
> > I deleted the file parser-built, and bash still compiles and an empty
> > parser-built file will be generated upon compilation. What is the
> > purpose of thi
Hi,
I don't understand the purpose of wdcache and wlcache. The "nc" field
seems to always be 0 (as initialized in ocache_create()), and I don't
find where it is increased. But `ocache_alloc()` just calls xmalloc
without using the cache since nc is 0. So wdcache and wlcache seem to
be useless.
Do I
> Yes: ocache_free.
Could you please help explain what wdcache and wlcache actually do?
Why is it essential to have them? Why not just alloc and free them
without the caches? Thanks.
--
Regards,
Peng
On Fri, Feb 8, 2019 at 9:42 AM Chet Ramey wrote:
>
> On 2/8/19 10:39 AM, Peng Yu wrote:
> >> Yes: ocache_free.
> >
> > Could you please help explain what wdcache and wlcache actually do.
> > Why is it essential to have them? Why not just alloc and free them
>
On Fri, Feb 8, 2019 at 10:50 AM Chet Ramey wrote:
>
> On 2/8/19 10:52 AM, Peng Yu wrote:
> > On Fri, Feb 8, 2019 at 9:42 AM Chet Ramey wrote:
> >>
> >> On 2/8/19 10:39 AM, Peng Yu wrote:
> >>>> Yes: ocache_free.
> >>>
> >>>
Hi,
I know that ASSIGNMENT_WORD in parse.y is for assignments like x=10.
But in the grammar rules I don't see any difference between it and
WORD in terms of the actions taken. Where is the code that deals with them
differently?
Also, why is x=10 parsed as a single token? Why not parse it as three
tokens, "x"
Hi,
`echo {` treats `{` as WORD.
`{ echo; }` treats `{` as a token of `{`.
`{a` treats `{a` as a WORD.
I don't see why yylex() treats `{` context-dependently.
Wouldn't it be better to just treat a bare `{` as the token `{`?
What is the reasoning behind the current design of the syntax?
-
Hi,
yylex() still gives the token ARITH_CMD for the following command. The
error seems to be raised at the parsing stage. Shouldn't the error be
caught in the lexical analysis stage?
$ ((x = 10 + 5; ++x; echo $x))
bash: ((: x = 10 + 5; ++x: syntax error: invalid arithmetic operator
(error token i
Hi,
Bash uses the low 16 bits for $RANDOM.
https://git.savannah.gnu.org/cgit/bash.git/tree/variables.c#n1321
https://git.savannah.gnu.org/cgit/bash.git/tree/variables.c#n1356
It seems that the high bits should be more random. If so, maybe the
high 16 bits should be kept if $RANDOM must stay in 1
See the following run time comparison. {1..100} is slower than
$(seq 100).
Since seq involves an external program, I'd expect the latter to be
slower. But the comparison shows the opposite.
I guess seq did some optimization?
Can the performance of {1..100} be improved so that it is f
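A minimal timing sketch of the comparison (the iteration count is arbitrary):
$ time for i in {1..100000}; do :; done
$ time for i in $(seq 100000); do :; done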
[[ $x ]] just tests whether the variable $x is of length 0 or not. So
its performance should not depend on how long the variable is.
But the following test case shows that the run time does depend on the
length of the variable.
Should this be considered a performance bug in bash?
$ x=$(printf '
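A sketch of the kind of benchmark intended, assuming a long string built with
printf (the string length and loop count are arbitrary):
$ x=$(printf 'a%.0s' {1..100000})
$ time for ((i = 0; i < 10000; i++)); do [[ $x ]]; done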
Could you show me how you do the profiling for this specific case?
Based on what evidence can you conclude that it is not a `[[`
performance problem?
On 3/7/20, Chris Down wrote:
> Peng Yu writes:
>>[[ $x ]] just tests whether the variable $x is of length 0 or not. So
>>
My OS is Mac OS X. I don't have perf. Is it only available on Linux? Could you
show me the output of your perf?
On 3/7/20, Chris Down wrote:
> Peng Yu writes:
>>Could you show me how you do the profiling for this specific case?
>>Based on what proof that you can conclud
Hi, I use unset to remove x from the shell once the for loop is
finished. Is this the best way to do it in bash? Thanks.
for x in a b c
do
echo "$x"
done
unset x
--
Regards,
Peng
Hi,
The following code works in bash.
for x in a b c; { echo $x; }
But I only find the following in bash man page. Does anybody know
where the above usage is documented? Thanks.
"for name [ [ in [ word ... ] ] ; ] do list ; done"
--
Regards,
Peng
Hi,
http://mywiki.wooledge.org/ProcessSubstitution
The above webpage says the following.
commandA <(commandB; [commandB's exit code is available here from $?])
[commandB's exit code cannot be obtained from here. $? holds
commandA's exit code]
But I am wondering if there is a workaround to deal
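One workaround sketch: have commandB write its own exit status to a temporary
file from inside the process substitution (the commands and path shown are
illustrative):
$ st=$(mktemp)
$ wc -l < <(grep -c root /etc/passwd; echo "$?" > "$st")
$ echo "commandB exited with $(< "$st")"; rm -f -- "$st"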
On Mon, Mar 9, 2015 at 2:07 PM, Chet Ramey wrote:
> On 3/8/15 6:05 PM, Stephane Chazelas wrote:
>
>> Are bash questions no longer on topic here? bash-bug used to be
>> the place to discuss bash (before help-bash was created). It maps to the
>> gnu.bash.bug newsgroup. I don't think help-bash maps t
Hi,
The -i option obviously works with set. But it is missing in the man
page. Should this be added?
~$ echo $-
himBH
~$ set +i
~$ echo $-
hmBH
The following lines are from the man page.
set [--abefhkmnptuvxBCEHPT] [-o option-name] [arg ...]
set [+abefhkmnptuvxBCEHPT] [+o option-n
>> The -i option obviously works with set. But it is missing in the man
>> page. Should this be added?
>
> No. It's really only there for completeness, so things like `set $-'
> work as expected without error.
But if something is in the implementation, it should also be in the
documentation, righ
On Thu, Mar 12, 2015 at 1:29 PM, Greg Wooledge wrote:
> On Thu, Mar 12, 2015 at 01:13:18PM -0500, Peng Yu wrote:
>> One may want to manually set -i option in a bash script for whatever
>> reason. (In this case, it is to check COLUMNS.)
>
> http://mywiki.wooledge.org/BashF
On Friday, March 13, 2015, Chet Ramey wrote:
> On 3/12/15 2:13 PM, Peng Yu wrote:
> >>> The -i option obviously works with set. But it is missing in the man
> >>> page. Should this be added?
> >>
> >> No. It's really only there for completen
On Sat, Mar 14, 2015 at 8:46 AM, Linda Walsh wrote:
>
>
> Peng Yu wrote:
>>
>> Hi,
>>
>> http://mywiki.wooledge.org/ProcessSubstitution
>>
>> The above webpage says the following.
>>
>> commandA <(commandB; [commandB's exit co
Hi Chet,
>> Eduardo A. Bustamante López wrote:
>>> Well, if your scripts are so simple, why use local functions at all?
>> ---
>> Cleanliness, Hygiene...
>
> Please, let's not have this argument again. I think you're all using the
> term `local function' to mean different things.
>
> You seem
Hi Chet,
>>> That's the difference: if you're careful with naming and rigorous about
>>> your calling conventions, your one-time-use functions are about as close
>>> as you can get to local functions in bash, but you have to pay attention
>>> to the declaration's side effects.
>>
>> There is at le
Hi, I have been checking the bash source code, but it is not clear to me
how bash does tokenization, as I don't find a lex file.
Could anybody point me to where I should look for information
about tokenization in the bash source code?
--
Regards,
Peng
OK, I see it; it checks mail and prints the prompt.
What factors do people need to consider when deciding whether to use
flex to perform tokenization or to write a customized tokenizer?
Checking mail and printing the prompt are, strictly speaking, not related
to tokenization. Is there an alternative way to org
Hi, The following example shows that bash uses xmalloc. But it seems
that using xmalloc is not a good practice. Is it better to use malloc
instead of xmalloc? In this test case, after `./main 100` fails I
still want to run the rest of the commands. So it sounds like malloc is
better.
http://stackove
The artificial ulimit is to trigger the error.
My point is: why does bash terminate when it runs an external command that
requires a large amount of memory? Shouldn't bash return an exit code on
behalf of the failed command and continue to the next command?
On Sunday, November 6, 2016, Eduardo Bustamante wrote:
On Tue, May 14, 2024 at 3:35 AM Koichi Murase wrote:
>
> On Tue, May 14, 2024 at 15:09 Martin D Kealey wrote:
> > 1. I therefore propose that where a relative path appears in
> > BASH_SOURCE_PATH, it should be taken as relative to the directory
> > containing $0 (after resolving symlinks), rather than relative t