Nico Kadel-Garcia wrote:
I've got some colleagues with a rather large Subversion repository
whose trunk includes over 10,000 files and over 500 MB of actual
content, for various reasons. What we're finding is that checking it
out on a Windows client to a local hard drive takes perhaps 3
minutes. Downloading it to a mounted Windows (CIFS) share takes
roughly half an hour.

What's the server on the CIFS side? If it's Linux/Samba, it may be the overhead of making a case-sensitive filesystem look case-insensitive (consider what has to happen when you create a new file in a large directory: every create has to check whether the name already exists under case-insensitive matching).
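
One quick way to check whether it's the per-file create cost: time file creation into a single growing directory on the local disk versus the share. A rough sketch; the C:\temp and Z:\temp paths are just placeholders for a local directory and the CIFS mount, and both must already exist:

import os
import tempfile
import time

def time_creates(base, n=2000):
    # Create n files in one directory under 'base' and return the
    # elapsed seconds.  On a Samba share backed by a case-sensitive
    # filesystem, each create can force a case-insensitive scan of
    # the existing entries, so the per-file cost grows with
    # directory size.
    d = tempfile.mkdtemp(dir=base)
    start = time.time()
    for i in range(n):
        with open(os.path.join(d, "f%05d" % i), "w"):
            pass
    return time.time() - start

# Placeholder paths: local disk first, then the CIFS mount.
for base in (r"C:\temp", r"Z:\temp"):
    print(base, "%.1fs" % time_creates(base))

If the share's time per file climbs as the directory fills while the local time stays flat, that points at the lookup overhead rather than raw network throughput.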

* Is this poor CIFS performance normal for large repositories being
checked out?

It doesn't sound normal to me.

* How bad are the risks of screwing up my checkouts if I use a
post-commit to keep a central working copy updated, and have people
simply copy that over instead of checking out the trunk directly?

Copying a checked-out directory isn't bad by itself, but you can't have anything modifying the source during the copy, which sounds likely to happen here.

My concern is that the checkout process isn't really designed for
that, and may fail to leave the checkout in a clean and atomic
state; the checked-out copy may therefore be corrupted by being
copied in the midst of an update operation.

Yes, I'd expect things to break. Does everyone really need a complete copy, or could you break it into components that each person updates? Or can everyone just check out once (or copy a workspace that doesn't update automatically while people are working) and subsequently do updates, or are those just as bad?
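
Purely as a sketch of what I mean by protecting the copy: if you do go the post-commit route, the hook and the copiers need to agree on a lock so nobody copies mid-update. Something along these lines, where /srv/svn-mirror is a hypothetical location for the central working copy (the hook runs on the server, so POSIX locking is assumed):

#!/usr/bin/env python
# post-commit hook sketch: keep a central working copy current.
# (The REPOS/REV arguments svn passes to the hook are ignored here.)
import fcntl
import subprocess

MIRROR = "/srv/svn-mirror"     # hypothetical always-current working copy
LOCK = MIRROR + ".lock"

with open(LOCK, "w") as lockfile:
    # Exclusive lock: wait for any in-progress copy to finish,
    # and keep copiers out while the update runs.
    fcntl.flock(lockfile, fcntl.LOCK_EX)
    subprocess.check_call(["svn", "update", "--quiet", MIRROR])

The copy side would take a shared lock (fcntl.LOCK_SH) on the same lock file around its copy, so it never starts while an update is in flight, and updates never start while a copy is running.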

--
  Les Mikesell
   lesmikes...@gmail.com
