On Tuesday 21 June 2005 10:31, Sebastian Stark wrote:
> Is there a way to speed up the creation of the directory tree when
> restoring files? For some clients this takes more than an hour for us.
>
> Our MySQL catalog has grown quite large (~5G) and I think this is the
> reason. But maybe there's another way to speed this up other than splitting
> up the catalog? Maybe play around with indexes?
>

I suspect it is more a question of how many files you are trying to load 
into the tree at one time than an SQL question.  In general, the size 
of the database matters much less than the number of files backed up per 
job.  

You didn't mention how many files per job you have.  If it is more than about 
500K, then I can understand the problem.  

Some ideas:
- Split your jobs to keep the files/job smaller.
- Find some *really* good algorithm for building a file tree.  The current one 
is reasonably well optimized, but I suspect it could be better.
- Find some algorithm for keeping the file tree on disk rather than in memory, 
since I suspect the in-memory tree (lots of virtual memory) is what costs so 
much. It would need good caching and paging.

-- 
Best regards,

Kern

  (">
  /\
  V_V


_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
