On Dec 4, 2007 2:43 PM, Blake Dunlap <[EMAIL PROTECTED]> wrote:
> >> After watching it run now for a few days, it definitely appears to not be
> >> starting new jobs until all of the jobs running on the sd finish. Is this
> >> how it is supposed to work?
> >>
> >No. Can you post your configs? I am using bacula-2.3.6 for the
> >director and storage, and several different versions for the clients,
> >and I do not have this problem.
>
> >John
>
> Sure, I'll just pick a random client for brevity, and show the relevant
> config.
>
> bacula-dir.conf:
>
> Director { # Define the Bacula Director Server
> Name = nrepbak01-dir
> DIRport = 9101 # where we listen for UA connections
> QueryFile = "/etc/bacula/query.sql"
> WorkingDirectory = "/var/bacula/working"
> PidDirectory = "/var/run"
> Maximum Concurrent Jobs = 20
> Password = "REDACTED" # Console password
> Messages = Daemon
> FD Connect Timeout = 10 min
> }
>
> Storage {
> Name = nrepbak-sd
> Address = 172.30.0.1 # N.B. Use a fully qualified name here
> Maximum Concurrent Jobs = 5
> SDPort = 9103
> Password = "REDACTED" # password for Storage daemon
> Device = Autochanger # must be same as Device in Storage daemon
> Media Type = LTO2 # must be same as MediaType in Storage daemon
> Autochanger = yes # enable for autochanger device
> }
>
> JobDefs {
> Name = "NrepNightlyFullGeneric" # standard weekly backup defaults for NREP
> Spool Data = yes
> Type = Backup
> Level = Incremental
> Schedule = "WeeklyCycle"
> Storage = nrepbak-sd
> Messages = Standard
> # Max Start Delay = 22 hours ; disabled until it can be overridden in the Schedules
> Rerun Failed Levels = yes
> Reschedule On Error = yes
> Reschedule Interval = 6 hours
> Reschedule Times = 1
> Prefer Mounted Volumes = yes
> Pool = OnsiteFull
> Incremental Backup Pool = OnsiteIncremental
> Write Bootstrap = "/var/bacula/working/%c_%n.bsr"
> # Priority = 6
> }
>
> Job {
> Name = "filemonster"
> Client = filemonster-fd
> FileSet = "filemonster"
> JobDefs = "NrepNightlyFullGeneric"
> }
>
> (all clients on that SD have same jobdef, just different client/name/filesets)
>
> Client {
> Name = filemonster-fd
> Address = 172.30.0.25
> FDPort = 9102
> Catalog = MyCatalog
> Password = "REDACTED" # password for FileDaemon 2
> File Retention = 30 days
> Job Retention = 3 years
> AutoPrune = yes # Prune expired Jobs/Files
> }
>
>
> bacula-sd.conf:
> Storage { # definition of myself
> Name = nrepbak01-sd
> SDPort = 9103 # where the SD listens for Director connections
> WorkingDirectory = "/var/bacula/working"
> Pid Directory = "/var/run"
> Maximum Concurrent Jobs = 20
> Heartbeat Interval = 15 seconds
> }
>
> Autochanger {
> Name = Autochanger
> Device = DriveA
> Device = DriveB
> Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
> Changer Device = /dev/sg0
> }
>
> Device {
> Name = DriveA #
> Drive Index = 0
> Media Type = LTO2
> Archive Device = /dev/nst0
> AutomaticMount = yes; # when device opened, read it
> AlwaysOpen = yes;
> Spool Directory = /staging/backups/
> RemovableMedia = yes;
> RandomAccess = no;
> Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
> Changer Device = /dev/sg0
> Offline On Unmount = Yes
> AutoChanger = yes
> # Enable the Alert command only if you have the mtx package loaded
> Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
> }
>
> (DriveB is the same as DriveA except it uses /dev/nst1)
>
I do not see anything that looks wrong. Are you using spooling? I use
spooling with most clients except a few jobs that originate on the
director or the storage machines.
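
For reference, data spooling is controlled in two places: "Spool Data = yes"
in the director-side Job/JobDefs resource (which the posted config already
has), and the Device resource in bacula-sd.conf, where the spool location
and size caps live. A minimal sketch, with sizes that are purely
illustrative and not taken from the posted configs:

# bacula-dir.conf -- Job or JobDefs resource
Spool Data = yes

# bacula-sd.conf -- Device resource
Spool Directory = /staging/backups/
Maximum Spool Size = 200G        # illustrative: total spool space for this device
Maximum Job Spool Size = 50G     # illustrative: per-job cap; job despools when reached

With spooling, concurrent jobs write to their spool files independently and
only contend for the drive while despooling, which is why spooling setups
normally show jobs running in parallel rather than serially.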
John
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users