The way I have created a queue in shell is:

(1) create a directory to hold the queue entries.
(2) file names in that directory are high-precision timer values.
(3) file contents are the command lines to run.
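
For illustration (the file names and the command here are hypothetical),
a queue directory built this way might look like:

$ ls $HOME/q
1460815145.30793  1460815201.11482
$ cat $HOME/q/1460815145.30793
rsync -a /data backup:/data

Since the names are timestamps, the entries sort in arrival order.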

Then you need something to service the queue.

In my case, I also decided that I could tolerate one retry of a queued
command, but that in general queued commands needed to be designed to
"catch up" when falling behind on work (by taking bigger bites out of
the workload: more efficient, but higher latency).
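
To make that concrete, here is a minimal sketch of a service loop,
assuming the queue directory is $HOME/q with one command line per file
(the poll interval and the inline retry are just one way to do it):

#!/bin/sh
# Sketch of a queue servicer: run entries oldest-first, tolerating
# one retry per command, then move on to the next entry.
while :; do
        for f in "$HOME"/q/*; do
                [ -f "$f" ] || continue    # glob matched nothing
                cmd=$(cat "$f")
                rm -f "$f"                 # claim the entry before running
                sh -c "$cmd" || sh -c "$cmd"    # tolerate a single retry
        done
        sleep 5    # idle poll interval
done

Because the timestamp names are all the same width, lexical glob order
is chronological order, so the loop hands back entries oldest-first.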

Also, in my case, code written for the machine that had the command
queue occasionally wound up running on machines which did not have it.
That was pure sloppiness, but I decided that I wanted those queue
attempts to fail. If that fits how you are working, the details of how
you detect whether the command queue service routine is available
should relate to whatever you have servicing the queue... So, if your
queue directory is $HOME/q/, that gives you a shell script something
like this:

#!/bin/sh
set -e

# Refuse to queue on machines that do not service the queue.
if [ -r FIXME ]; then
        # Allow at most two pending entries (the original plus one
        # retry) holding this exact command line.
        if [ "$(fgrep -l "$*" "$HOME"/q/* 2>/dev/null | wc -l)" -lt 2 ]; then
                # The file name is a high-precision timestamp, so
                # entries sort in arrival order.
                echo "$*" >"$HOME/q/$(perl -MTime::HiRes -e 'print Time::HiRes::time')"
        else
                echo "$* is backed up, not adding another retry"
        fi
else
        echo FIXME
        exit 1
fi
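
Saved as, say, $HOME/bin/enqueue (the name is just an example) and made
executable, queueing a command looks like:

enqueue rsync -a /data backup:/data

The script joins its arguments with "$*", so the whole command line
serves both as the file contents and as the key the fgrep duplicate
check matches against.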

Replace the FIXME bits with something appropriate and/or redesign it
to your own specifications. Just remember that when you push the
limits of whatever resources you have available, things can break, and
you will need to isolate and address those problems.

I hope this helps,

-- 
Raul


On Sat, Apr 16, 2016 at 8:59 AM, andrew fabbro <[email protected]> wrote:
> On Sat, Apr 16, 2016 at 4:32 AM, Craig Skinner <[email protected]>
> wrote:
>
>> A bloated way to do that is with an SQLite database, with a table's
>> unique primary key being some (job number) attribute. Another column
>> could auto timestamp on row insertion, so you could query on job number
>> or time added. Unless you've other data to retain, it is rather bloated.
>>
>
> Not sure I agree - sqlite is pretty lightweight.  I have a job system that
> runs hundreds of jobs on many systems, each dumping results into local
> daily sqlite files which are then scp'd back and consolidated for
> reporting.  This gives us the ease of standardized job results and
> reporting without the need to have an HA DB every system can report to,
> load DB clients all over the place, DB security with remote access, etc.
>  (We need to gather results somehow, so rather than write some custom
> format or something like XML, sqlite is an easy format to use).  You can
> access sqlite on the command line in shell scripts if need be.  DB sizes
> are in MB.
>
> You might be saying bloated because it's writing SQL, etc. and for a
> sysadmin who's focused on systems and is not a code-writer, that's totally
> fair - SQLite is much more pleasant when you have perl or python and can
> properly bind variables, etc.
>
> I'd say the OP is crossing into programming rather than scripting.  I'm
> making an artificial distinction (since shell scripts are certainly
> programs) but in my experience, once you start needing more complex data
> structures, you've outgrown the shell and should look at something like
> perl, python, etc.  Not saying there aren't ways to do queues in
> bash/ksh/etc., just...why would you?
>
> --
> andrew fabbro
> [email protected]
