On Tue, 2010-07-13 at 15:57 -0500, Les Mikesell wrote:
> On 7/13/2010 2:50 PM, David Brodbeck wrote:
> >
> > On Jul 13, 2010, at 5:50 AM, Les Mikesell wrote:
> >
> >> Nico Kadel-Garcia wrote:
> >>> I've got some colleagues with a rather large Subversion repository
> >>> whose trunk includes over 10,000 files and over 500 Meg of actual
> >>> content for various reasons. What we're finding is that checking it
> >>> out it on a Windows client to a local hard drive takes perhaps 3
> >>> minutes. Downloading it to a mounted Windows (CIFS) share takes
> >>> roughly half an hour.
> >>
> >> What's the server on the CIFS side?  If it is Linux/samba, it may be the 
> >> overhead of making a case sensitive filesystem look case insensitive 
> >> (consider what has to happen when you create a new file in a large 
> >> directory and have to check if the name already exists).
> >
> > This could be a lot of it if a substantial number of files are in one flat 
> > subdirectory.  CIFS really, really does not deal with large directories 
> > well.  Neither does NFS, but the way Windows handles directories tends to 
> > make it worse.
> 
> I think CIFS is just the network protocol.  The real issue is on the 
> physical filesystem side.  When you open a file for writing, the 
> underlying system has to determine if that file name already exists and 
> if not, find or create a new filename slot to create it. And this has to 
> be done atomically, since other processes might be trying to create the 
> same file at the same time and only one can succeed.  This is bad enough 
> in a large directory when you let the OS deal with exact matches, but if 
> you are faking case insensitivity you have to do much more work in user 
> space to find the potential collisions with everything locked for longer 
> times.

You're right, CIFS is just the protocol, and Samba implements it
efficiently. Windows Explorer (the XP version) often transfers more
slowly than the Linux "smbclient" command line (measured on a single
large file). And Windows is often running anti-virus software, which
adds its own overhead on every file operation.
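One rough way to check this (server, share, credentials and file name below are placeholders, not from the thread) is to time the same single-file transfer with smbclient, bypassing Explorer entirely:

```shell
# Time a large-file download over plain SMB/CIFS with smbclient.
# Replace //server/share, the user%password pair and the file name.
time smbclient //server/share -U myuser%mypass \
    -c 'get bigfile.bin /tmp/bigfile.bin'
```

If smbclient is markedly faster than Explorer for the same file, the bottleneck is on the Windows client side rather than in Samba.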

Nico, try checking out onto an "smbmount" (CIFS) mount point on Linux,
to compare against NFS.
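A quick sketch of that comparison (all mount points, server names and the repository URL are placeholders): mount the same server both ways, then time an identical checkout on each.

```shell
# Mount the server over CIFS and over NFS side by side.
sudo mount -t cifs //server/share /mnt/cifs -o username=myuser
sudo mount -t nfs  server:/export /mnt/nfs

# Time the same checkout onto each mount.
time svn checkout http://svnhost/repo/trunk /mnt/cifs/wc
time svn checkout http://svnhost/repo/trunk /mnt/nfs/wc
```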

I do not understand the option of publishing a "ready-to-use" checkout:
- authentication information such as a username may be embedded in the
  repository URL, so the URL must be changed before anyone can commit;
- such a working copy occupies twice its content's volume, since each
  file also has a pristine copy under the .svn metadata.
To publish a clean state, you should prefer an "export" in that case.
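The difference between the two, sketched with a placeholder repository URL:

```shell
# A checkout produces a working copy: .svn metadata, pristine file
# copies (roughly doubling disk use) and the URL it came from.
svn checkout http://svnhost/repo/trunk wc/

# An export produces just the plain files: no .svn directories,
# no embedded URL, nothing to commit back. Good for publishing.
svn export http://svnhost/repo/trunk release/
```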

I have seen a 1 GB working copy checked out properly onto a local disk.
Once the working copy is there, just use "update" and "switch" to limit
network transfer and disk writes. Why do a fresh checkout each time?
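For example (repository URLs are placeholders): both commands reuse the existing working copy and only transfer and rewrite what has changed.

```shell
# Bring an existing working copy up to date incrementally.
svn update wc/

# Re-point the same working copy at a branch; Subversion fetches
# and rewrites only the files that differ, not the whole tree.
svn switch http://svnhost/repo/branches/1.2 wc/
```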

Older TortoiseSVN versions have trouble (and known bugs) with large
working-copy checkouts. You should use the latest version available, or
the native win32 "svn.exe" command-line binaries.
