Hi Andy,

Thanks for your response! Running local copies of the environment doesn't seem
practical in this case: my guys work on 10+ projects at a time, all of which
can be in different states and need different sets of Apache and PHP modules
in place, which would take forever to set up (let alone support) across
multiple workstations. Not having to worry about client-machine consistency
keeps things sane (in my opinion). The central model works well when commits
are fast; I'm just trying to figure out how to keep them fast.

The network *shouldn't* be an issue; traffic is fairly light and it's gigabit
to all of the machines. (I'm not sure how to test this beyond some pings.) We
do have a very small commit hook that updates an Apache web root with the
changed file so that developers can see their changes in-browser. (How could I
check the speed of this?) We don't commit *large* binaries, but our web
projects do contain lots of common .jpg and .png files for our interfaces,
each weighing in at no more than 100k. We're not updating these often, though.
Could these be the big problem?
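
For what it's worth, here's roughly how I was planning to measure those two
things. The hostname and repository path below are made up for our setup, and
I'm going from the svn book on how post-commit hooks get invoked (repository
path plus a revision number):

    # Raw throughput to the server, with Subversion out of the picture:
    scp some-1mb-file.bin user@svnserver:/tmp/

    # End-to-end commit time, after editing one small file:
    time svn commit -m "timing test"

    # The hook on its own, run on the server against a recent revision:
    time /Volumes/Repos/ourrepo/hooks/post-commit /Volumes/Repos/ourrepo 1234

If the hook accounts for most of the two minutes, at least we'll know where to
look first.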

Yes, we are serving the repo via Apache. I will try a restart to see if things
feel faster. (Should I be watching Apache's memory usage in this case?)
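
If memory is the thing to watch, I'm assuming something like this on the
server would show whether the httpd workers bloat between restarts (BSD-style
ps, since this is OS X):

    # Resident size (KB) per Apache process; sample before and after commits:
    ps axo pid,rss,command | grep '[h]ttpd'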

--
Brendan Farr-Gaynor

On 2010-05-05, at 9:30 AM, Andy Levy wrote:

> On Wed, May 5, 2010 at 09:21, Brendan Farr-Gaynor
> <bren...@resolutionim.com> wrote:
>> I run a small team of web developers (6) who all work from an in-house
>> repository. We make lots of commits, and performance often gets pretty bad,
>> sometimes taking as long as 2 minutes to commit a change. We rely a lot on
>> server-side software, so we often need to commit to see what our code will
>> do; this can get painful during a debug process, where as a developer you'll
>> often want to commit many little changes quickly within the span of 5-10
>> minutes.
> 
> Set up each developer's workstation so that it can run the
> application. Test locally; don't commit code when you don't know
> whether it's broken. Local debugging will be faster, too.
> 
>> We're using SVN on a newer quad-core Xserve with about 4GB of RAM and a
>> standard 7200 RPM disk (eSATA). Is there something we can do on the hardware
>> side that would help? A solid-state drive? More RAM? Is there something in
>> our SVN config that I should be playing with? We're currently using the
>> stock install and config that Apple ships with OS X Server 10.6 (Snow
>> Leopard).
> 
> Before you throw hardware at the problem, you need to determine what
> the bottleneck is. Is it your network? Do you have long-running hook
> scripts? Are you committing large binary files, which Subversion
> struggles to diff or has to write out in full every time? Do you have
> a lot of path-based authorization rules? How do you serve the
> repository (I'm guessing Apache, but you don't say)? What else is
> running on the server? Is performance better after an Apache restart
> (assuming you serve with Apache), degrading again over time?
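> 
> For instance, one cheap way to split "network/Apache" from "repository
> I/O" (the path here is a placeholder): make the same one-line change in
> an http:// working copy and in a file:// checkout made on the server
> itself, then time both commits:
> 
>     time svn commit -m "net test"    # over http://
>     time svn commit -m "local test"  # over file:///Volumes/Repos/ourrepo
> 
> If the file:// commit is just as slow, the network isn't your problem;
> look at the hooks and the disk instead.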
> 
> Apply the same sort of troubleshooting & optimization you apply to
> your software development - observe, measure, then address the worst
> offender first. Don't just throw random "fixes" at it hoping that
> something works.
