On Fri, 15 Jun 2018 15:15:23 +0200, Davide Brini <dave...@gmx.com> wrote:
> So it looks like the only "reliable" way to detect NULs in the input is to
> read one character at a time.

Your explanation got me thinking, and I've come up with the following code,
which seems to be slightly more efficient than reading one byte at a time,
as it allows reading bigger chunks of data in the parts where there are no
delimiters (NULs). I'm posting it in case it helps someone. Works For Me (TM).

----------------------------------
to_read=512   # how many bytes to read at a time

while true; do

  IFS= read -d '' -r -n $to_read data
  status=$?
  length=${#data}

  # do whatever with what we got
  for ((i=0; i < length; i++)); do
    printf "Read character %02x\n" "'${data:i:1}"
  done

  # if we read less than we wanted, and it's not EOF, it means we also
  # read a delimiter
  if [ $length -lt $to_read ] && [ $status -eq 0 ]; then
    printf "Read a NUL\n"
  fi

  # exit if EOF
  if [ $status -ne 0 ]; then
    break
  fi

done
----------------------------------

Tests:

$ printf '\x00' | ./test.sh
Read a NUL

$ printf '\x00\x00' | ./test.sh
Read a NUL
Read a NUL

$ printf '\x00a\x00' | ./test.sh
Read a NUL
Read character 61
Read a NUL

$ printf '' | ./test.sh

$ printf 'abcd' | ./test.sh
Read character 61
Read character 62
Read character 63
Read character 64

$ printf '\x001\x00\n' | ./test.sh
Read a NUL
Read character 31
Read a NUL
Read character 0a

-- 
D.
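
P.S. For comparison, the one-character-at-a-time baseline mentioned in the
quoted text could be sketched as below. This is a minimal sketch, not code
from the thread (the function name read_bytes is mine); it uses the same
trick of telling a NUL delimiter apart from EOF via read's exit status:

```shell
#!/bin/bash
# Baseline: read NUL-delimited input one character at a time.
# With -d '' the delimiter is NUL; read returns non-zero only at EOF.
read_bytes() {
  local ch status
  while true; do
    IFS= read -d '' -r -n 1 ch
    status=$?
    if [ ${#ch} -eq 1 ]; then
      # Got a real character (bash variables can never hold a NUL).
      printf "Read character %02x\n" "'$ch"
    elif [ $status -eq 0 ]; then
      # Read zero characters but status is 0: we hit the NUL delimiter.
      printf "Read a NUL\n"
    fi
    # Non-zero status means EOF: stop.
    if [ $status -ne 0 ]; then
      break
    fi
  done
}

printf '\x00a\x00' | read_bytes
```

The chunked version above should produce the same output as this, just
with fewer read calls on long delimiter-free stretches.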