On Thu, Jan 31, 2013 at 9:18 PM, Jason Keltz <j...@cse.yorku.ca> wrote:
> On 31/01/2013 9:13 PM, Ryan Schmidt wrote:
>> Subversion is not a software distribution tool; it is a document and
>> revision management system. Use a different tool. As someone else said,
>> rsync seems like a good tool for this job; I didn't understand why you
>> think using rsync directly between your file server and your clients
>> won't work.
>
> See my email to Les... If only the rsync server could save a copy of the
> file checksums when it runs, it would probably decrease the sync time by
> half and save a whole lot of disk activity...

This... sounds like someone wants to use the same screwdriver for every screw in this birdhouse.

It's theoretically possible to designate a canonical Subversion repository and auto-propagate changes to it, either from the "file server" directly or from an rsynced copy of the file server with a local working copy on the Subversion master. But it's going to be bulky and slow. If that 60 GBytes has a lot of churn due to rapidly changing binaries or extensive static database files, it's going to get awkward indeed. And because the "file server" you're propagating these changes from is neither a Subversion server nor a Subversion client, it's much harder still.

Moreover, this doesn't seem to be the kind of "roll back the changes to a well-defined date" workload that Subversion does so well. The changes from the master get fed to a trunk and will then have to be propagated to branches, and each machine will need a different branch. This... gets tricky. One can differentiate among the slightly different environments by maintaining a trunk and merging the changes out to the branches, but that can get awkward. Is it possible to set up tags that have "svn:externals" settings pointing to sets of software from the master, and then configure the individual hosts locally and propagate their changes back to the branches on the master?

And you know, this sounds like an absolute flipping deployment disaster I dealt with about 12 years ago.
The site architect thought the clever thing to do was to make a complete tarball bundle for all deployments: the whole compressed tarball had to be pushed *every time*, and releases could only happen with the complete tarball. Various forms of chaos ensued. I taught them to use packages, and to deploy kernels in particular as a separate object, so that kernels could be deployed, and rolled back, separately from the rest of the system. That fixed the ongoing problem that any one component that failed would stop the *whole* deployment and push back even the smallest fixes by as much as six months.

So while I've offered some hints, I'm going to really suggest that Jason think hard about modularizing the components of this set of packages before he even starts this project.