On Fri, 24 Jun 2011, ted wrote:

> I have an issue when trying a restore of a large data set consisting of
> about 6.5TB and 35,000,000 files: the console takes an extremely long
> time to build the directory tree, over an hour. After the tree was built
> I typed "mark *" and this command ran for about 18 hours before I hit
> ctrl-c; the MySQL server showed no activity and neither bconsole nor
> bacula-dir was using any CPU, so I believe the process just petered out.
> I was wondering if anyone has been successful in restoring large TB data
> sets, either ones containing a large number of individual files, or ones
> that are 6TB or larger but contain only a small number of large files?
You are not alone. I have one client in particular for whom building the
file tree takes about 25-30 minutes. Like you, I'm using MySQL.

There have been suggestions from a number of quarters that Postgresql
doesn't have this issue. This isn't necessarily a criticism of MySQL so
much as a reflection of the fact that the developers spend more of their
time working with Postgresql, so Bacula tends to be better optimised for
it. There is a discussion here:

http://old.nabble.com/Re%3A-Tuning-for-large-%28millions-of-files%29-backups--tt30099042.html#a30259098

We plan to migrate to Postgresql, as that's the only solution I know of
right now.

Gavin
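In the meantime, a workaround often suggested for slow restore tree builds
on MySQL is to add a composite index covering the columns the tree query
walks. The sketch below is a hedged suggestion rather than anything
confirmed in this thread: it assumes a stock Bacula 5.x MySQL catalog in
which the File table still has JobId, PathId and FilenameId columns, and
the index name file_jpf_idx is arbitrary.

    -- See which indexes the File table already carries.
    SHOW INDEX FROM File;

    -- Composite index commonly suggested to speed up building the restore
    -- tree (assumes a pre-11 catalog that still has a FilenameId column).
    CREATE INDEX file_jpf_idx ON File (JobId, PathId, FilenameId);

    -- The index can be dropped again once the restore is finished.
    DROP INDEX file_jpf_idx ON File;

Whether this helps depends on where the time actually goes: if the
director is CPU-bound assembling the tree rather than waiting on the
catalog, an extra index won't change much, and the Postgresql migration
remains the longer-term fix.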
