I am planning on mounting the share as the TopDir.
I'm going to start small and see where it fails, then deal with it.
This part sounds like it would be a lot simpler and more effective to
tar-archive everything we want to store and then copy the results to tape.

Alternatively, there's a way to create a tar archive and stream it to the
NFS share using BackupPC - if I can manage that, it might solve the
problem. Whatever happens, it has to go straight to the NFS share, since
that's the only storage I'll have that's big enough to take anything.
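A minimal sketch of that tar-stream approach, assuming a hypothetical host
name, share name, and NFS mount point (the BackupPC_tarCreate install path
also varies by distro):

```shell
# BackupPC_tarCreate writes a tar stream of a completed backup to stdout.
# -h = host, -n -1 = most recent backup, -s = share name; the trailing
# '.' means "everything under the share". Host, paths, and mount point
# here are hypothetical stand-ins for the real ones.
/usr/share/backuppc/bin/BackupPC_tarCreate -h dbserver -n -1 -s / . \
    > /mnt/nfs/backups/dbserver-$(date +%Y%m%d).tar
```

Run it as the backuppc user so it can read the pool; the resulting tar
file lands directly on the NFS share with no hardlinks involved.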

Rick


On Mon, Nov 7, 2011 at 11:48 AM, Les Mikesell <[email protected]> wrote:

> On Mon, Nov 7, 2011 at 12:22 PM, Rick Bastedo <[email protected]> wrote:
> > The Systems Admin told me they are setting up a 5TB NFS share for me
> > to use.
> > Any gotchas anyone can think of before I go ahead with configuration?
>
> Are you planning to mount the NFS share as the top of the backuppc
> archive directory, or use it as a target to export tar archives,
> either with scripted BackupPC_tarCreate commands or 'archive host'
> configurations?
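For the 'archive host' route, the relevant settings live in that archive
host's per-host config file; a minimal sketch pointing the archive output
at the NFS mount (the path is a hypothetical stand-in):

```perl
# In the archive host's config (e.g. archive.pl); the destination is a
# hypothetical NFS mount point.
$Conf{ArchiveDest}  = '/mnt/nfs/backups';  # where the tar files are written
$Conf{ArchiveComp}  = 'gzip';              # 'none', 'gzip', or 'bzip2'
$Conf{ArchiveSplit} = 0;                   # don't split into fixed-size parts
```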
>
> > They will be having me back up more Linux systems after I
> > successfully take care of the big sore point they have currently.
>
> If the system ends up doing some type of file-oriented copy from the
> NFS share to tape, and that share holds your whole backuppc archive,
> it will likely fail at some point due to the large number of
> hardlinked files it will accumulate.
>
> > I've been asked if there's a way to get only things that are newer
> > than NN hours, which I don't know - I said I'd get back to them on
> > that.
>
> Maybe - if you modify the command issued to collect the copies.
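One way to make such a modification, assuming the copy is collected with
GNU find/cp (all paths below are hypothetical; the sketch uses temp dirs
in place of the real NFS share and tape-staging area):

```shell
# Self-contained sketch: select only files modified within the last NN
# hours. Temp dirs stand in for the site-specific NFS share and the
# staging directory handed to the tape job.
SRC=$(mktemp -d)   # stands in for the NFS share
DST=$(mktemp -d)   # stands in for the tape staging area
HOURS=24

# One recent file and one old file to demonstrate the selection.
mkdir -p "$SRC/db"
echo fresh > "$SRC/db/backup-new.dmp"
echo stale > "$SRC/db/backup-old.dmp"
touch -d '48 hours ago' "$SRC/db/backup-old.dmp"

# Copy only files modified within the last $HOURS hours, preserving the
# directory layout (-mmin takes minutes; --parents keeps the paths).
find "$SRC" -type f -mmin "-$((HOURS * 60))" \
    -exec cp --parents -t "$DST" {} +
```

The same find expression could equally feed a `tar -c` instead of `cp`
if a single archive per run is preferred.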
>
> > So the current thinking is that the databases are backed up using
> > whatever our vendor uses to do backups; this is their responsibility
> > according to their SLA.
> > In order to provide disaster recovery (partially our responsibility),
> > we will move those resulting database backup files to our NFS share
> > via BackupPC and then back up that NFS share to our DPM tape
> > repository, which gets sent out to Iron Mountain weekly.
> > The restore process for disaster recovery purposes should be that of
> > letting the vendor rebuild the system according to their SLA; then we
> > will supply the database backup file set they request and they will
> > perform the database restore.
>
> I think you can make this work, but it isn't the sort of thing
> backuppc does best.  On the other hand if you don't actually have a
> site disaster, it may be handy to be able to grab the copy that
> backuppc still has online.
>
> > After we demonstrate success with this, we will identify other
> > clients and add more to our backups, but first things first.
>
> You'll see much more benefit from backuppc's features from targets
> where you have duplication across machines, or directories where only
> a few files change per run.
>
> --
>   Les Mikesell
>     [email protected]
>
_______________________________________________
BackupPC-users mailing list
[email protected]
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
