On Mar 1, 2007, at 6:50 AM, Ryan Novosielski wrote:
>>
>> On Thursday 01 March 2007 10:32:49 Kern Sibbald wrote:
>>
>> this item does indeed work very stably. I've been using this in
>> production for exporting large Oracle databases (100+ GBytes each)
>> once per week for about two years. I've never encountered stability
>> issues with the FIFOs. :-)
>
> I would love, if you have the time, to write a brief mention about how
> this is done. Perhaps it could be posted somewhere (if it isn't
> already)?
My experience with FIFO database dumps:
Basics: On backup, when Bacula encounters an explicit FIFO (with
readfifo=yes), it reads data from that FIFO until end-of-file. On
restore, if the FIFO doesn't exist, Bacula creates it. It then waits
(one-minute timeout) for something else to read from the FIFO. Once a
reader connects, it pours all the data it saved back into the FIFO
and moves on.
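That handshake can be demonstrated in a few lines of Python (throwaway paths; this just imitates the producer/consumer contract, it is not Bacula code). The background writer plays the role of the dump process, and the foreground reader plays the role of Bacula with readfifo=yes, reading until end-of-file:

```python
import os
import tempfile
import threading

# Create a FIFO in a throwaway directory (hypothetical path).
fifo_dir = tempfile.mkdtemp()
fifo_path = os.path.join(fifo_dir, "demo.fifo")
os.mkfifo(fifo_path)

def producer():
    # Stands in for pg_dump/exp writing a dump into the FIFO.
    # open() blocks here until a reader opens the other end.
    with open(fifo_path, "wb") as f:
        f.write(b"fake database dump\n")
    # Closing the write end delivers EOF to the reader.

t = threading.Thread(target=producer)
t.start()

# Stands in for Bacula's readfifo=yes: read until end-of-file.
with open(fifo_path, "rb") as f:
    data = f.read()
t.join()
print(data)  # prints b'fake database dump\n'
```

Both open() calls block until the other side shows up, which is exactly why Bacula needs its timeout: if nothing ever opens the far end, the job would hang forever.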
1. A gotcha: your fileset must explicitly list the FIFOs as
individual files; they cannot be something Bacula comes across via
recursion or wildcards. One simple way of doing this for all FIFOs in
a directory is to run a command on the client that lists them.
Example fileset (replace the directory name as needed):
FileSet {
  Name = "DB FIFOs"
  Include {
    Options {
      readfifo = yes
    }
    # Documentation says FIFOs must be "explicitly"
    # mentioned. You can't just specify the directory
    # that they'll all be created in, or "readfifo"
    # won't apply.
    # This is a workaround for that limitation: run a
    # script to list them.
    File = "\\|bash -c \"find /tmp/bacula_fifos/ -maxdepth 1 -type p\""
  }
}
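Of course, something has to create those FIFOs on the client before the job runs. A minimal sketch with Python's os.mkfifo (the directory matches the find path above; the database names are placeholders):

```python
import os

fifo_dir = "/tmp/bacula_fifos"      # must match the FileSet's find path
databases = ["sales", "inventory"]  # placeholder database names

os.makedirs(fifo_dir, exist_ok=True)
for db in databases:
    path = os.path.join(fifo_dir, db + ".dump")
    if not os.path.exists(path):
        os.mkfifo(path)             # "-type p" in find will match these
```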
2. When Bacula hits the FIFOs, it will wait ~60 seconds on each one
for someone on the other end to start supplying data (when the job is
a backup) or consuming it (when it's a restore). Thus you must start
something in the background to feed data into them or drain data out
of them.
RunScript {
  # Uncomment the appropriate command for the desired effect: backing
  # up, restoring to the DB, or restoring to real files.
  #Command = "/etc/bacula/launchwrapper.sh restoretodb /tmp/bacula_fifos/"
  #Command = "/etc/bacula/launchwrapper.sh restoretofolder /var/eas_restore/tmp/bacula_fifos/ /var/eas_restore/bacula_fifodump/"
  #Command = "/etc/bacula/launchwrapper.sh backup /tmp/bacula_fifos/"
  RunsWhen = Before
  AbortJobOnError = yes
  # Bug in early versions of Bacula 2.01: you cannot use an uppercase
  # Y in "Yes" for RunsOnClient.
  RunsOnClient = yes
}
RunScript {
  # Clean out our FIFOs
  Command = "/etc/bacula/launchwrapper.sh clean /tmp/bacula_fifos/"
  RunsWhen = After
  RunsOnSuccess = Yes
  RunsOnFailure = Yes
  RunsOnClient = yes
}
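launchwrapper.sh itself is just glue; a rough Python sketch of what its "backup" mode amounts to (the pg_dump invocation and the FIFO-name-to-database mapping here are illustrative, not the real script):

```python
import os
import stat
import subprocess
import sys

def backup(fifo_dir):
    """Spawn one background dumper per FIFO so data is waiting for Bacula."""
    procs = []
    for name in sorted(os.listdir(fifo_dir)):
        path = os.path.join(fifo_dir, name)
        if not stat.S_ISFIFO(os.stat(path).st_mode):
            continue  # only feed actual FIFOs
        # Illustrative command: dump the database named after the FIFO.
        db = os.path.splitext(name)[0]
        procs.append(subprocess.Popen(["pg_dump", "--file", path, db]))
    return procs

if __name__ == "__main__":
    backup(sys.argv[1] if len(sys.argv) > 1 else "/tmp/bacula_fifos/")
```

Each dumper blocks on the FIFO until Bacula opens the read side, so the script returns immediately and the job proceeds. Note that spawning every dumper at once like this is exactly what runs into the connection-pool limit described in point 3 below.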
3. I had a bit of a problem with our setup, because we have many
databases per server which I wanted to handle individually. Spawning
lots of pg_dump processes writing to FIFOs (and waiting for Bacula to
come along) doesn't work, because you exhaust the connection pool and
only the first X databases will dump successfully.
So if anyone's interested, I have a Python script which does
non-blocking IO (gasp, shock) over the FIFOs, waiting until any one
of them can accept data and only *then* connecting to the database to
fill the FIFO. It also does the converse for restoring data back to
the databases, noticing newly created FIFOs (created by a Bacula
restore job) in the directory and trying to grab data from them. Note
that this is definite overkill if you're okay with doing one dump per
server of all active databases.
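The key trick that script relies on is easy to show in isolation: a non-blocking write-open of a FIFO fails with ENXIO until someone (i.e. Bacula) has opened the read side, so you can poll cheaply and defer the expensive database connection until a reader is actually attached. A sketch of just that part (the polling loop and names are mine, not the actual script):

```python
import errno
import os
import time

def wait_for_reader(fifo_path, poll_interval=0.5):
    """Return a writable fd only once something has opened the read side."""
    while True:
        try:
            # A non-blocking write-open raises ENXIO while no reader exists.
            return os.open(fifo_path, os.O_WRONLY | os.O_NONBLOCK)
        except OSError as e:
            if e.errno != errno.ENXIO:
                raise
            time.sleep(poll_interval)  # nobody listening yet; keep polling
```

Once the fd comes back, you can clear O_NONBLOCK with fcntl (or hand the fd to pg_dump as its stdout) and start the dump, opening database connections one at a time instead of all at once.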
--
--Darien A. Hager
[EMAIL PROTECTED]
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users