Hi Daniel,
By using:
> make install; echo $?
It does indeed exit with a 0.
Thanks.
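For completeness, the exit-status check above can be made part of a script. A generic sketch, with `true` standing in for `make install` (which needs a real build tree):

```shell
# Sketch of scripting the check above. "true" stands in for "make install"
# so this runs anywhere; swap the real command back in for actual use.
true                      # e.g. make install
status=$?
echo "exit status: $status"
if [ "$status" -eq 0 ]; then
    echo "install step succeeded"
else
    echo "install step failed" >&2
fi
```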
I don't know how, but I completely missed the subversion directory that was
created!
I probably just assumed it was from trunk.
I can also confirm, from a dev point of view,
that trunk passes all tests via
> When I merge changes in SVN, the merges work well except for the
> conflicts. For some reason, the merges are frequently (but not
> always) done twice. As an example:
> <<<<<<< .working
> <<<<<<< .working
> const int SERIALIZE_FIELDS_DATA_LENGTH = 83;
> =======
> const int SERIALIZE_FIELDS_DATA
Yes, that was the problem. After fixing the permissions, I initiated another
mirror operation that was able to sync everything.
The issue was that A had restricted access and a branch 'B' was created from A
for public use. Since A was missing in the mirrored repository, the mirror
operation failed.
Hi all,
We have 5 developers, each of them has a workspace on our linux
workgroup server. The workspaces consist of working copies checked out
from our SVN repository. The workspaces have grown rather big in our
case; we have something like 20 GB per developer.
That's 100 GB worth of working copies.
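The arithmetic quoted above can be sanity-checked in the shell (the numbers are taken from the message):

```shell
# Quick check of the figures in the message: 5 developers, ~20 GB each.
developers=5
gb_per_workspace=20
echo "total: $((developers * gb_per_workspace)) GB of working copies"
```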
"It should work."
Specifically, if you run svnsync using an authz-bound user, then you
should just see adds-with-history converted into adds-without-history.
I believe we have regression tests for such scenarios too.
If you can reproduce this (have a minimal example), please file an
issue. Thanks.
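As a dry-run sketch of the scenario being described (the URLs, the mirror path, and the username are placeholders; `RUN=echo` just prints the commands, so this works without a Subversion install — drop it to execute for real):

```shell
# Dry-run sketch of mirroring through an authz-restricted source user.
# URLs and the username are placeholders; RUN=echo prints instead of running.
RUN=echo
$RUN svnsync init --source-username mirroruser \
    file:///var/svn/mirror http://example.com/svn/source
$RUN svnsync sync file:///var/svn/mirror
```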
Do you use the option 'nobrl' when mounting?
Like this (in fstab):
//winserver/repo /opt/csvn/data/repositories cifs nobrl,user=csvn,password=***,rw,uid=csvn,gid=csvn,dir_mode=0755,file_mode=0755 0 0
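The same options can be tried by hand before committing them to fstab; a sketch with placeholder credentials (`secret` is not from the original message):

```shell
# Manual mount with the same options, for testing before editing fstab.
# The password is a placeholder. nobrl suppresses byte-range lock requests,
# which Subversion's SQLite-backed files often need to work over CIFS.
mount -t cifs //winserver/repo /opt/csvn/data/repositories \
    -o nobrl,user=csvn,password=secret,rw,uid=csvn,gid=csvn,dir_mode=0755,file_mode=0755
```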
Good day Jan Dvorak,
on Thursday, 14 April 2011 at 16:16 you wrote:
> Which linux filesystem would you recommend?
The only one I have read about with deduplication is ZFS, and KQ Infotech
seems to provide a good implementation for Linux.
http://www.kqinfotech.com/content.php?id=2
Kind regards,
On 4/1/2011 7:52 AM, Stirnweiss, Siegmund SZ/HZA-ZIT3 wrote:
Sometimes you can adjust a tag if you've tagged the wrong
file, but that should be fairly rare. In Subversion, tags take less
than a second, while in CVS you have to tag each and every file.
Long files have to be rewritten after
On 4/1/2011 7:45 AM, KM wrote:
Thanks, I think. Not sure it helps with my original question... which I
am sure I'll figure out by trial and error:
whether to upgrade to Solaris 10 with a new server, install a newer svn and
its dependent packages, and then copy the repos from the old server - dump it
and load it.
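The dump-and-load path being weighed can be sketched as follows (the repository paths are placeholders; the commands are only printed here, so no `svnadmin` install is needed to run the sketch):

```shell
# Sketch of the dump-and-load migration; paths are placeholders.
# The plan is printed rather than executed.
plan=$(cat <<'EOF'
# on the old server:
svnadmin dump /srv/svn/repos > repos.dump
# copy repos.dump to the new Solaris 10 server, then:
svnadmin create /srv/svn/repos
svnadmin load /srv/svn/repos < repos.dump
EOF
)
printf '%s\n' "$plan"
```

A dump/load cycle like this also upgrades the repository's on-disk format, which is one reason to prefer it over copying the repository directory between OS versions.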
I reverted to the original source and verified it. It had no conflicts.
Then I did the merge again. The same thing happened and I ended up with
duplicate conflicts again.
--
From: "Bob Archer"
Sent: Thursday, April 14, 2011 9:33 AM
To: "Daniel
> --
> From: "Bob Archer"
> Sent: Thursday, April 14, 2011 9:33 AM
> To: "Daniel Walter" ;
>
> Subject: RE: duplicate merge conflict
>
> >> When I merge changes in SVN, the merges work well except for the
> >> conflicts. For some reason, the merge
It is possible that there are two ranges. It is not my intent to have these
two ranges; it would just be from the mechanics of svn copy. I have
recently switched from CVS where I did merges with tags. I am attempting to
use svn copy to do the same thing.
I make a new tag (svn copy) in my pr
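The svn copy tagging step described above might look like this (the repository URL and tag name are placeholders; `RUN=echo` prints the command instead of executing it):

```shell
# Dry-run sketch of CVS-style tagging via svn copy.
# URL and tag name are placeholders; RUN=echo prints instead of running.
RUN=echo
$RUN svn copy http://example.com/svn/trunk \
    http://example.com/svn/tags/release-1.0 -m "tag release 1.0"
```

Unlike a CVS tag, this is a cheap O(1) server-side copy, which is why it completes in under a second regardless of project size.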
Yeah, I also thought that the behavior should be the same as you mentioned,
but I got the error "svnsync: Error while replaying commit" during the sync
operation for that revision.
When I checked the mirrored repository, A was missing, which is correct, as at
that time A had restricted access. Branch oper
On Thu, 14 Apr 2011 11:17 -0700, "ankush chadha"
wrote:
> Yeah, even I thought that the behavior should be same as you mentioned
> but I got "svnsync: Error while replaying commit" error during sync
> operation for that revision.
>
>
> When I checked the mirrored repository, A was missing whic
2011/4/14 Thorsten Schöning
> Good day Jan Dvorak,
> on Thursday, 14 April 2011 at 16:16 you wrote:
>
> > Which linux filesystem would you recommend?
>
> The only one I read of with deduplication was ZFS and KQ Infotech
> seems to provide a good implementation for Linux.
>
> http://www.kq
On 14.4.2011 21:17, David Brodbeck wrote:
[...]
Be careful with ZFS deduplication. It still has some issues. Memory
usage for it is quite massive,
I was prepared for that, yes.
and there are cases of running a destroy
operation on a deduped zpool taking literally days.
I see, that's ve
We are using Subversion 1.6.16 on an Ubuntu Lucid box, and an Ubuntu Hardy
(8.04) box hosting a Subversion 1.6.16 repository via Apache.
We use this configuration to support Codestriker 1.9.10 for code
reviews, and have recently hit a problem with some of our reviews. After
carefully stepping through