Re: Discrepancies in svn mirror created with svnsync

2013-02-08 Thread Andreas Krey
On Thu, 07 Feb 2013 23:00:33 +, Marius Gedminas wrote:
...
> The cron script runs svnsync every 5 minutes. 

Do you make sure svnsync isn't started anew when the previous instance
hasn't terminated yet? (I don't know if that matters.)

Andreas

-- 
"Totally trivial. Famous last words."
From: Linus Torvalds 
Date: Fri, 22 Jan 2010 07:29:21 -0800


Re: Discrepancies in svn mirror created with svnsync

2013-02-08 Thread Marius Gedminas
On Fri, Feb 08, 2013 at 11:44:40AM +0100, Andreas Krey wrote:
> On Thu, 07 Feb 2013 23:00:33 +, Marius Gedminas wrote:
> ...
> > The cron script runs svnsync every 5 minutes. 
> 
> Do you make sure svnsync isn't started anew when the previous instance
> hasn't terminated yet? (I don't know if that matters.)

No.  Here's my /etc/cron.d/zope-svn-mirror:

  # Mirror the Zope repository
  */5 * * * * root /usr/local/sbin/update-zope-mirror > /dev/null

and here's my /usr/local/sbin/update-zope-mirror:

  #!/bin/sh
  /usr/bin/sudo -u syncuser /usr/bin/svnsync sync file:///stuff/zope-mirror

It's possible that a temporal overlap happened.

Marius Gedminas
-- 
MCSE == Minesweeper Consultant / Solitaire Expert




Re: Feasibility question

2013-02-08 Thread Thorsten Schöning
Good day Dermot,
am Freitag, 8. Februar 2013 um 13:31 schrieben Sie:

> Has anyone used subversion for this type of tracking?

I use it to track binary software: MSI installers, preconfigured
application packages that don't need installation, and images. Depending
on your client you can even see diffs for images; TortoiseSVN provides
this at least for some image formats by presenting both versions side by
side, with zoom capabilities etc.

> Does what I'm
> proposing sound feasible?  Any thoughts would be appreciated.

Removing parts of the history is nothing I would consider, but I don't
need as much space as you. When it comes to repo layout, what's the
kind or purpose of your submissions? Do they logically share some
purpose, e.g. 10 submissions of images of different car models, or
could they be grouped by customer? What about filenames? What about
the processes that run after you modify your files and "add them as
records"? It sounds like you shouldn't need to worry much about your
repo layout: if nothing automated processes the layout, and only you
and your co-workers have to deal with it, you can always efficiently
rearrange it later without losing history or wasting any space. The
only thing I would consider is using directories wherever possible.
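
Purely as a hedged illustration of the directory idea (the customer
names, dates, and filenames below are made up), such a layout might
look like:

  repo/
    customerA/
      2013-02-08-submission/
        img0001.tif
        img0002.tif
    customerB/
      2013-02-01-submission/
        img0001.tif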

Kind regards,

Thorsten Schöning

-- 
Thorsten Schöning   E-Mail:thorsten.schoen...@am-soft.de
AM-SoFT IT-Systeme  http://www.AM-SoFT.de/

Telefon...05151-  9468- 55
Fax...05151-  9468- 88
Mobil..0178-8 9468- 04

AM-SoFT GmbH IT-Systeme, Brandenburger Str. 7c, 31789 Hameln
AG Hannover HRB 207 694 - Geschäftsführer: Andreas Muchow



Re: Discrepancies in svn mirror created with svnsync

2013-02-08 Thread Thorsten Schöning
Good day Marius Gedminas,
am Freitag, 8. Februar 2013 um 14:45 schrieben Sie:

> It's possible that a temporal overlap happened.

That shouldn't be a problem: by default, svnsync acquires an exclusive
lock on the destination repository it mirrors the data to, and
subsequent svnsync invocations will fail with an error about the
previously created lock.
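
As a quick illustration (hedged; the mirror URL is the one quoted
earlier in this thread): svnsync records that lock as a revision
property on revision 0 of the mirror, so you can check for an active or
stale lock with

  # Empty output means no sync is currently holding the lock
  svn propget svn:sync-lock --revprop -r 0 file:///stuff/zope-mirror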

Kind regards,

Thorsten Schöning

-- 
Thorsten Schöning   E-Mail:thorsten.schoen...@am-soft.de
AM-SoFT IT-Systeme  http://www.AM-SoFT.de/

Telefon...05151-  9468- 55
Fax...05151-  9468- 88
Mobil..0178-8 9468- 04

AM-SoFT GmbH IT-Systeme, Brandenburger Str. 7c, 31789 Hameln
AG Hannover HRB 207 694 - Geschäftsführer: Andreas Muchow



Re: Discrepancies in svn mirror created with svnsync

2013-02-08 Thread Stefan Sperling
On Fri, Feb 08, 2013 at 03:45:16PM +0200, Marius Gedminas wrote:
> On Fri, Feb 08, 2013 at 11:44:40AM +0100, Andreas Krey wrote:
> > On Thu, 07 Feb 2013 23:00:33 +, Marius Gedminas wrote:
> > ...
> > > The cron script runs svnsync every 5 minutes. 
> > 
> > Do you make sure svnsync isn't started anew when the previous instance
> > hasn't terminated yet? (I don't know if that matters.)
> 
> No.  Here's my /etc/cron.d/zope-svn-mirror:
> 
>   # Mirror the Zope repository
>   */5 * * * * root /usr/local/sbin/update-zope-mirror > /dev/null
> 
> and here's my /usr/local/sbin/update-zope-mirror:
> 
>   #!/bin/sh
>   /usr/bin/sudo -u syncuser /usr/bin/svnsync sync file:///stuff/zope-mirror
> 
> It's possible that a temporal overlap happened.
> 
> Marius Gedminas

I cannot tell you what happened here or why the revisions in the
mirror are empty. That is certainly concerning.

However, there are known race conditions in svnsync in Subversion 1.6.
See http://subversion.apache.org/docs/release-notes/1.7.html#atomic-revprops

So you should definitely wrap svnsync in a tool like lockfile (part of
procmail), or upgrade to 1.7.
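
For example, a minimal sketch of such a wrapper for the
update-zope-mirror script quoted above, using flock(1) from util-linux
instead of procmail's lockfile (the lock file path is an arbitrary
choice):

  #!/bin/sh
  # Exit immediately (-n) if a previous sync still holds the lock, so
  # overlapping cron invocations skip the run instead of racing svnsync.
  exec /usr/bin/flock -n /var/lock/zope-mirror.lock \
      /usr/bin/sudo -u syncuser /usr/bin/svnsync sync file:///stuff/zope-mirror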

It would be interesting to know if this problem also appears with
svnsync from 1.7. Note that with file:// URLs the svnsync you are running
is both client and server from the mirror repository's point of view.
So the version running on svn.zope.org doesn't really matter (though I
checked, and it appears to be severely outdated...)


Re: Discrepancies in svn mirror created with svnsync

2013-02-08 Thread Zachary Burnham
unsubscribe

Z



Zachary Burnham | Web Developer
Energy Federation Inc | 1 Willow Street | Southborough, MA 01772
508.870.2277 x4467 | f 888.440.4219
zburn...@efi.org | efi.org





Re: Discrepancies in svn mirror created with svnsync

2013-02-08 Thread Lathan Bidwell

  
  
Zach, the email to unsubscribe is:

  users-unsubscr...@subversion.apache.org

On 02/08/2013 09:47 AM, Zachary Burnham wrote:

> unsubscribe
>
> Z



RE: Could not read chunk size: connection was closed by server on Windows 7

2013-02-08 Thread Michael Zender
> -Original Message-
> From: Michael Zender [mailto:michael.zen...@mos-tangram.com]
> Sent: Thursday, 7 February 2013 17:34
> To: users@subversion.apache.org
> Subject: RE: Could not read chunk size: connection was closed by
> server on Windows 7
> 
> -Original Message-
> From: Mark Phippard [mailto:markp...@gmail.com]
> Sent: Thursday, 7 February 2013 15:26
> To: Michael Zender
> Cc: users@subversion.apache.org
> Subject: Re: Could not read chunk size: connection was closed by
> server on Windows 7
> 
> Sorry for top posting, but see this FAQ:
> 
> http://subversion.apache.org/faq.html#secure-connection-truncated
> 
> It is the problem you are having.  The error message just varies
> between SSL and plain HTTP but the cause is the same.  The client gets
> busy doing something and the server thinks the client has gone away
> and it kills the connection.
> 
> Are you using a Subversion 1.7 client?  While I do not believe it
> eliminates the problem, it manages the working copy radically
> differently from SVN 1.6, and I would expect it to be less prone to
> this problem.
> 
> 
> 
> On Thu, Feb 7, 2013 at 9:18 AM, Michael Zender
>  wrote:
> > Hello everyone,
> >
> > a couple of days ago, I configured our Apache webserver to serve our
> > internal Subversion repositories over plain old http. Before that,
> > the repositories had only been accessible using https. Everything
> > seemed to work pretty smoothly, but after a couple of hours I got
> > more and more complaints about problems that occurred while working
> > (svn co, svn ls -v) with directories containing a large number of
> > files (> 3000).
> >
> > The error message reported was something like the following:
> >
> >   svn ls http: -v
> >   svn: PROPFIND of '//!svn/bc//': Could not read
> >   chunk size: connection was closed by server (http://)
> >
> >   svn co http:
> >   svn: REPORT of '//!svn/vcc/default': Could not read chunk size:
> >   connection was closed by server (http://)
> >
> > An interesting detail about the checkout is that the working copy was
> > created successfully, and so far I've had no problem with it.
> >
> > I started investigating and was able to reproduce the problem first
> > in the live environment, before replicating it on a completely
> > different server. The specs are:
> >
> >   OS: Debian GNU/Linux 6.0.6 (squeeze)
> >   Webserver: Apache/2.2.16
> >   Subversion: svn, version 1.6.12 (r955767)
> >
> > I created a test repository with the following script:
> >
> >   svnadmin create project_Test
> >   chown www-data:www-data project_Test -R
> >   svn co file:///var/lib/subversion/project_Test wc_project_Test
> >   cd wc_project_Test
> >   mkdir src
> >   for i in {1..1000}
> >   do
> >   head -c 10K < /dev/urandom > src/testfile$i.dat
> >   done
> >   svn add src
> >   svn ci -m "test commit"
> >
> > The script creates a new repository "project_Test" containing a src
> > directory with 1000 10kB files with random content.
> >
> > I made it available via Apache using the following VirtualHost
> > configuration:
> >
> > <VirtualHost *:80>
> > ServerName svntest
> >
> > <Location />
> > DAV svn
> > SVNParentPath /var/lib/subversion/
> > SVNListParentPath On
> > </Location>
> > </VirtualHost>
> >
> > With this setup I was able to reproduce the checkout problem. I
> > increased the number of files in the directory to up to 4500, but so
> > far I've not been able to reproduce the svn ls -v problem.
> >
> > I did a lot of analysis, and my conclusion is that the problems only
> > occur on Windows 7, using any client software we have in use (the
> > Eclipse-integrated client, TortoiseSVN, the svn command line client).
> > On Windows XP as well as on Linux, there's no problem at all working
> > with the repository using http communication. When I execute the svn
> > co operation on a Debian system installed in a VirtualBox VM hosted
> > on my Windows 7 machine, I have the same problem as directly on the
> > Windows 7 client.
> >
> > I know that there are a lot of emails on this list describing the
> > same error message, and I spent quite some time scanning through
> > them and following the links, but so far none of them contained a
> > solution for my particular problem.
> >
> > I'll also gladly provide any further information that you find
> > useful for analyzing the problem further.
> >
> > Thank you very much in advance for your help!
> >
> > Michael
> 
> 
> 
> --
> Thanks
> 
> Mark Phippard
> http://markphip.blogspot.com/
> 
> 
> Hi Mark,
> 
> and thank you for your quick answer!
> 
> I tried increasing the Timeout value in the Apache configuration
> (doubling it from 300 to 600 to test this), but that did not resolve
> the problem.
> 
> The checkout on my Windows 7 client takes about 21 seconds
> (TortoiseSVN shows this summary when the checkout is complete). On the
> Windows XP box (where the checkout does not produce any error
> message), the whole process takes about 50 seconds.
> 
> But you are right about the

Re: Feasibility question

2013-02-08 Thread Les Mikesell
On Fri, Feb 8, 2013 at 6:31 AM, Dermot  wrote:

> In my $work, we manage thousands of binary files (tiffs). We may modify a
> file once or twice before eventually entering the file as a record. Files
> arrive in groups (a submission) and I would like to track changes and the
> history of a file. Once the file is entered as a record, I could remove much
> of the history.
>
> I've used subversion for software version control and I am wondering if I
> would be stretching its features by versioning thousands of binary files
> (currently 13,000 since the start of 2013) at about 60MB per file.
>
> Apart from the size of the diffs/deltas, I am struggling to envisage a way
> to organise the repo. Making a new project for each submission would
> make the whole repo unwieldy.
>
> Has anyone used subversion for this type of tracking? Does what I'm
> proposing sound feasible?  Any thoughts would be appreciated.

I don't believe there is a reasonable way to ever remove anything from
a subversion repository such that it releases the space used by the
thing you removed.  So I wouldn't consider this with subversion unless
you can work out a way to make separate repositories for one or a few
files, so that it would be feasible to just remove the whole repository
when you no longer need it, or to use 'svnadmin dump/filter/load' to
restructure them.
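
For reference, a hedged sketch of the dump/filter/load route (the
repository paths and the excluded directory are hypothetical):

  # Rebuild the repository without one submission's history
  svnadmin dump /srv/svn/repo > repo.dump
  svndumpfilter exclude submissions/2013-01 < repo.dump > filtered.dump
  svnadmin create /srv/svn/repo-filtered
  svnadmin load /srv/svn/repo-filtered < filtered.dump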

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Unexpected conflicts merging updates from trunk to a branch

2013-02-08 Thread Matthew Pounsett

On 2013/02/07, at 10:09, Stefan Sperling wrote:

> On Thu, Feb 07, 2013 at 09:19:47AM -0500, Matthew Pounsett wrote:
>> 
>> I've been running into unexpected tree conflicts when updating branches from
>> the trunk, after reintegrating to the trunk where file adds or removes
>> were involved in the reintegrate.  I expect I'm doing something wrong here,
>> but I haven't been able to figure out what.  Can someone point me in the
>> right direction?  Here's a simple example that demonstrates my issue.
> 
> You cannot keep using the branch as-is after it has been reintegrated.

Ah, of course.  Thanks very much for the references. I'll go have another
re-read.

> 
> Please review the entire "Reintegrating a Branch" section again:
> http://svnbook.red-bean.com/en/1.7/svn.branchmerge.basicmerging.html#svn.branchemerge.basicmerging.reintegrate
> and also see the "Keeping a Reintegrated Branch Alive" section, which discusses
> the problem in detail:
> http://svnbook.red-bean.com/en/1.7/svn.branchmerge.advanced.html#svn.branchmerge.advanced.reintegratetwice
> 
> This will be much easier in 1.8, see:
> http://subversion.apache.org/docs/release-notes/1.8.html#auto-merge
> 
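
For reference, the keep-alive recipe from that chapter boils down to a
record-only merge; a hedged sketch, where N stands for the trunk
revision created by the reintegrate commit, run in a working copy of
the branch:

  # Mark the reintegrate revision as already merged, so it is never
  # merged back into the branch (N is hypothetical here).
  svn merge --record-only -c N ^/trunk .
  svn commit -m "Block revision N of trunk from future merges into the branch"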



Re: Feasibility question

2013-02-08 Thread Nico Kadel-Garcia
On Fri, Feb 8, 2013 at 10:50 AM, Les Mikesell  wrote:
> On Fri, Feb 8, 2013 at 6:31 AM, Dermot  wrote:
>
>> In my $work, we manage thousands of binary files (tiffs). We may modify a
>> file once or twice before eventually entering the file as a record. Files
>> arrive in groups (a submission) and I would like to track changes and the
>> history of a file. Once the file is entered as a record, I could remove much
>> of the history.
>>
>> I've used subversion for software version control and I am wondering if I
>> would be stretching its features by versioning thousands of binary files
>> (currently 13,000 since the start of 2013) at about 60MB per file.
>>
>> Apart from the size of the diffs/deltas, I am struggling to envisage a way
>> to organise the repo. Making a new project for each submission would
>> make the whole repo unwieldy.
>>
>> Has anyone used subversion for this type of tracking? Does what I'm
>> proposing sound feasible?  Any thoughts would be appreciated.
>
> I don't believe there is a reasonable way to ever remove anything from
> a subversion repository such that it releases the space used by the
> thing you removed.  So I wouldn't consider this with subversion unless
> you can work out a way to make separate repositories for one or a few
> files, so that it would be feasible to just remove the whole repository
> when you no longer need it, or to use 'svnadmin dump/filter/load' to
> restructure them.

Separate repositories linked together by "svn:externals" settings can
do this, with a central "build" structure publishing tags or branches
that pin specific releases of components from other repos. But resource
tracking can get awkward: some old legacy repo that apparently only one
project was using can wind up culled, with managerial approval, and
later be discovered to be critical to another legacy tool or two that
no one has built for a few years because "if it's not broken, don't fix
it". So factoring the repositories well, and having good archival
backups, can be invaluable.
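
As a hedged illustration of such linking (the repository URLs and
component names are hypothetical), the central build area might carry
an svn:externals property like:

  $ svn propget svn:externals build
  https://svn.example.com/repos/libA/tags/1.2 libA
  https://svn.example.com/repos/libB/trunk@4711 libB

The first line tracks a published tag; the second pins a component to a
fixed revision of its trunk.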


Re: Could not read chunk size: connection was closed by server on Windows 7

2013-02-08 Thread Mark Phippard
On Fri, Feb 8, 2013 at 10:28 AM, Michael Zender
 wrote:

> I finally solved my problem and wanted to share my solution with you.

Thanks for letting us know.

> It turned out that Kaspersky Endpoint Security 8, and its Web-Anti-Virus
> feature in particular, was causing this problem to show up. We solved it
> by defining a rule that excludes our subversion servers from the
> Web-Anti-Virus service. The Windows XP box still had Kaspersky 6
> installed, which does not have the Web-Anti-Virus feature.

I was thinking about this overnight, and believe it or not, I was going
to propose that you look in this general direction.

> I still don't know what exactly the problem is, because in my opinion
> the anti-virus software should act in a completely transparent manner,
> but anyway, it's working now, so I'm not bothered any more!

If you look at the FAQ and remove some of the specifics, what it is
saying more generally is that while the client is doing some work, the
connection to the server is closed in a manner the client is not
expecting.  So the error manifests to the client as a chunk delimiter
error when the data it is reading disappears.  The FAQ describes one
scenario that causes this: the server ending the connection.

These Windows anti-virus solutions operate at a low level so they can
intercept and monitor your TCP/IP traffic.  I would guess that either
Subversion's pattern of HTTP requests looked unusual, or perhaps even
the content of one of your files did.  My guess is that when these
tools sense a problem they do not try to be graceful about it; they
probably just kill the connection.  After all, if it were a virus or
trojan horse on your computer, the tool would not want to make it
easier for the malicious code to recover.  So most likely when it
sensed a problem it closed the connection, and that manifested itself
the same way as a server timeout.

One thing that might be helpful is to look into what kind of logging
the tool provides.  It would be nice if it logged some forensic data
about what caused it to do this.  Maybe that information can go back to
Kaspersky so they can make the tool not do this.  Or maybe it is just a
bug in their tool where it cannot handle the volume and speed of the
requests.  I suspect an SVN client drives HTTP traffic a lot
differently than a typical web browser loading a page does.

> Thanks again to Mark for his reply, it definitely made me investigate in
> the right direction.

You are welcome and thanks for sharing the information back.  Do you
have any suggestions on how this FAQ could be improved to add this
information?

-- 
Thanks

Mark Phippard
http://markphip.blogspot.com/


Re: Feasibility question

2013-02-08 Thread Les Mikesell
On Fri, Feb 8, 2013 at 10:57 AM, Nico Kadel-Garcia  wrote:
>>
>>> In my $work, we manage thousands of binary files (tiffs). We may modify a
>>> file once or twice before eventually entering the file as a record. Files
>>> arrive in groups (a submission) and I would like to track changes and the
>>> history of a file. Once the file is entered as a record, I could remove much
>>> of the history.
>>>
>>> I've used subversion for software version control and I am wondering if I
>>> would be stretching its features by versioning thousands of binary files
>>> (currently 13,000 since the start of 2013) at about 60MB per file.
>>>
>>> Apart from the size of the diffs/deltas, I am struggling to envisage a way
>>> to organise the repo. Making a new project for each submission would
>>> make the whole repo unwieldy.
>>>
>>> Has anyone used subversion for this type of tracking? Does what I'm
>>> proposing sound feasible?  Any thoughts would be appreciated.
>>
>> I don't believe there is a reasonable way to ever remove anything from
>> a subversion repository such that it releases the space used by the
>> thing you removed.  So I wouldn't consider this with subversion unless
>> you can work out a way to make separate repositories for one or a few
>> files, so that it would be feasible to just remove the whole repository
>> when you no longer need it, or to use 'svnadmin dump/filter/load' to
>> restructure them.
>
> Separate repositories linked together by "svn:externals" settings can
> do this, with a central "build" structure publishing tags or branches
> that pin specific releases of components from other repos. But resource
> tracking can get awkward: some old legacy repo that apparently only one
> project was using can wind up culled, with managerial approval, and
> later be discovered to be critical to another legacy tool or two that
> no one has built for a few years because "if it's not broken, don't fix
> it". So factoring the repositories well, and having good archival
> backups, can be invaluable.

You can simply put a bunch of repos under the top level served by http
or svn, and it appears pretty seamless except when you have to create a
new one.  But since binary diffs aren't very useful anyway, and that
might have scaling issues, I think I'd just try to use a de-duping
filesystem like zfs and store as many copies as might still be useful.
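
A hedged sketch of that "bunch of repos under the top level" setup,
reusing the SVNParentPath approach from the VirtualHost earlier in this
digest (the paths here are hypothetical):

  # Each directory under /srv/svn is an independent repository,
  # served as http://server/repos/<name>
  <Location /repos>
      DAV svn
      SVNParentPath /srv/svn
      SVNListParentPath On
  </Location>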

-- 
  Les Mikesell
 lesmikes...@gmail.com


Re: Feasibility question

2013-02-08 Thread Dermot
On 8 Feb 2013 17:51, "Les Mikesell"  ...

> >
> > Separate repositories linked together by "svn:externals" settings can
> > do this, with a central "build" structure publishing tags or branches
> > that pin specific releases of components from other repos. But
> > resource tracking can get awkward: some old legacy repo that
> > apparently only one project was using can wind up culled, with
> > managerial approval, and later be discovered to be critical to
> > another legacy tool or two that no one has built for a few years
> > because "if it's not broken, don't fix it". So factoring the
> > repositories well, and having good archival backups, can be invaluable.
>
> You can simply put a bunch of repos under the top level served by http
> or svn, and it appears pretty seamless except when you have to create
> a new one.  But since binary diffs aren't very useful anyway, and that
> might have scaling issues, I think I'd just try to use a de-duping
> filesystem like zfs and store as many copies as ...

The requirement is more about trackability: knowing where in the workflow
the binary file is. I don't think anyone is expecting to use diffs, but the
history would be key.
Thanks,
Dermot.


Re: Feasibility question

2013-02-08 Thread Les Mikesell
On Fri, Feb 8, 2013 at 1:13 PM, Dermot  wrote:
>>
>> You can simply put a bunch of repos under the top level served by http
>> or svn, and it appears pretty seamless except when you have to create
>> a new one.  But since binary diffs aren't very useful anyway, and that
>> might have scaling issues, I think I'd just try to use a de-duping
>> filesystem like zfs and store as many copies as ...
>
> The requirement is more about trackability: knowing where in the workflow the
> binary file is. I don't think anyone is expecting to use diffs, but the
> history would be key.

In that respect it would work, as long as you commit each operation:
you get the log message tied atomically to the change, and it would be
a relatively cheap operation to copy things around to different paths
to represent their states.  But there are probably other
workflow-control tools that can manage files without making it
completely impossible to administratively delete them.
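
A hedged sketch of such a state-tracking copy (the repository URL,
paths, and filename are hypothetical):

  # Represent a workflow transition as a cheap server-side copy;
  # the log message records who moved the file, when, and why.
  svn copy -m "scan0001 entered as record" \
      https://svn.example.com/repo/incoming/scan0001.tif \
      https://svn.example.com/repo/records/scan0001.tif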

-- 
   Les Mikesell
  lesmikes...@gmail.com