Re: Subversion Exception!

2012-02-10 Thread Ulrich Eckhardt

On 10.02.2012 00:34, Ajay Mahagaokar wrote:

I am using TortoiseSVN for the first time and got immediately into
trouble. My setup:

1.   Win7 64-bit
2.   VirtualBox with Win7 as host
3.   The VirtualBox guest VM is Ubuntu 10.04 LTS (up to date).
4.   SVN Server is networked and a different m/c.


What is m/c?


5.   SVN working copy is on my VM - command-line SVN works fine,
builds fine.
6.   I wanted to use the Tortoise GUI frontend to SVN so
a.   I installed it
b.  Used the file manager (Windows Explorer) to move to the Samba share
where we have the working directory


Working copies are not necessarily portable between systems, in 
particular not between MS Windows and POSIX systems, because the 
translation of line endings produces different results on each, so this 
setup is prone to errors. Another thing that could possibly cause 
problems is that working copies contain an SQLite database, which might 
also differ between systems (I'm not sure on this point, but a Berkeley 
DB database does differ). Keep your working copies local.


Another thing is that the database inside the WC requires file locking 
to work, which network filesystems do not always guarantee 100%. 
Another reason to keep working copies local.




c.   Created a new directory (tortoisetest)
d.  Right clicked and did a checkout
e.  Got the below.


Question: You mention host and guest operating systems; is any of 
that relevant? If I understand you correctly, all you need is a 
Win7/64-bit system and a Samba share. Is this fault even reproducible, 
or did it occur just once?




'D:\Development\SVN\Releases\TortoiseSVN-1.7.4\ext\subversion\subversion
\libsvn_client\checkout.c'

line 94: assertion failed (svn_uri_is_canonical(url, pool))


Which URL did you check out? Also, I seem to remember some problems with 
certain paths (e.g. the root of a share); where exactly did you check out 
the working copy?


Thanks for the great description! Not everybody takes the instructions 
to report what they did and to search the web seriously...



Uli
**
Domino Laser GmbH, Fangdieckstraße 75a, 22547 Hamburg, Deutschland
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
**
Visit our website at http://www.dominolaser.com
**
This e-mail, including all attachments, is intended solely for the addressee 
and may contain confidential information. Please notify the sender immediately 
if you are not the intended recipient; in that case this e-mail must be 
deleted and may not be read, forwarded, published, or otherwise used.
E-mails can be read by third parties and may contain viruses as well as 
unauthorized modifications. Domino Laser GmbH is not responsible for such 
consequences.
**



RE: TortoiseSVN

2012-02-10 Thread Bob Archer
> TortoiseSVN newer than 1.6.16 causes errors in our software build process

Good to know. 

Seriously, did you have a specific question?

Bob



multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Bruce Lysik
Hi,

I'm considering deploying 3 front-ends, all mounting the same SAN volume for 
the repo.  (The SAN handles flock() and fcntl() correctly.)  These 3 FEs would 
be load balanced by a Citrix NetScaler.  (At least for http(s).)

Would there be any problems with this configuration?
 
--
Bruce Z. Lysik 

Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Nico Kadel-Garcia
On Fri, Feb 10, 2012 at 1:21 PM, Bruce Lysik  wrote:
> Hi,
>
> I'm considering deploying 3 front-ends, all mounting the same SAN volume for
> repo.  (The SAN handles flock() and fcntl() correctly.)  These 3 FEs would be
> load balanced by a Citrix Netscaler.  (At least for http(s).)
>
> Would there be any problems with this configuration?

Potentially. I wouldn't expect read operations to be a big problem,
but commit operations need to be atomic, and the software wasn't
*written* to behave well with network-mounted back-end filesystems
across multiple servers. So I wouldn't know, offhand, what phase
delays between two front ends writing revisions at the same time might
create; it could make for genuine adventures on the back end.

If you need 3 front ends, you might seriously consider WANdisco's
commercial package, which is designed for multiple-front-end use.


Re: Subversion reports error.

2012-02-10 Thread Masaru Kitajima
To all who helped me.

Thanks a lot for your kind help.

Finally, I found that it was an issue in my environment. There was "usr/bin/ssh"
in my "Users/account/.ssh/config" file. It couldn't be seen from Mac's Finder
because its name starts with ".".

I used Terminal.app to correct it to "/usr/bin/ssh" and succeeded in connecting
to Subversion via Terminal.app.
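For illustration, here is a hypothetical sketch of how a relative program path like that breaks in an OpenSSH config file. The poster's actual entry isn't shown in the thread; ProxyCommand is just one common place such a path appears, and the host names below are made up:

```
# ~/.ssh/config (hypothetical sketch, not the poster's actual file)
Host svnhost
    # Broken: "usr/bin/ssh" is resolved relative to the current
    # directory and will usually not be found:
    #ProxyCommand usr/bin/ssh -W %h:%p gateway
    # Fixed: use an absolute path:
    ProxyCommand /usr/bin/ssh -W %h:%p gateway
```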

But it still failed to connect from Dreamweaver. Even after I changed the
Subversion setup for the site in DW, it still tried to reach the old server.

I couldn't find the misconfiguration, so I created a new folder and copied the
visible sources from Finder in order not to include the files starting
with ".".

Then I re-created the site setting in DW, and it works correctly now.

Thanks again for all your help. This issue is closed.

Kindest regards,
Masaru

On 2012/02/10, at 8:23, Ryan Schmidt wrote:
> On Feb 9, 2012, at 16:09, Alagazam.net Subversion wrote:
>> On 2012-02-09 00:23, Ryan Schmidt wrote:
>>> On Feb 8, 2012, at 15:41, Alagazam.net Subversion wrote:
>>> 
 On Feb 7, 2012, at 15:44, Masaru Kitajima wrote:
 
> On 2012/02/08, at 6:36, Daniel Shahaf wrote:
> 
>> What is the output of
>> 
>> % ssh 
>> sectio...@section-9.sakura.ne.jp
>> svnserve -t
>> 
> It is as below:
> 
> ( success ( 2 2 ( ) ( edit-pipeline svndiff1 absent-entries 
> commit-revprops depth log-revprops partial-replay ) ) ) 
> 
> and then stops. A prompt is not shown.
> 
 This symptom of not getting any prompt back reminds me of a totally
 non-svn related "bug" but a network error I encountered some time ago.
 
>>> Not seeing a prompt in this case is not a bug; it's expected behavior. 
>>> svnserve is not an interactive program that has a prompt. It's a Subversion 
>>> server; the above test demonstrated that svnserve is running correctly and 
>>> is waiting for a Subversion client to connect to it.
>> 
>> Maybe I was reading the threads wrong [snip]
> 
> Yes; here are the two original messages in question:
> 
> http://mail-archives.apache.org/mod_mbox/subversion-users/201202.mbox/%3c3b642053-4ff0-4bd0-a0de-7ea55d053...@gmail.com%3e
> 
> http://mail-archives.apache.org/mod_mbox/subversion-users/201202.mbox/%3ce130c27c-fb32-453a-8f8a-830d079ea...@gmail.com%3e
> 
> 

<><><><><><><><><><><><><><><><><><><><><><><><><><>
Manager / Photographer / Lecturer / Writer
Masaru Kitajima

E-mail:tachi.sil...@gmail.com
blog:http://www.section-9.jp/blog/bluez/

<><><><><><><><><><><><><><><><><><><><><><><><><><>



Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Stefan Sperling
On Fri, Feb 10, 2012 at 04:00:02PM -0500, Nico Kadel-Garcia wrote:
> On Fri, Feb 10, 2012 at 1:21 PM, Bruce Lysik  wrote:
> > Hi,
> >
> > I'm considering deploying 3 front-ends, all mounting the same SAN volume for
> > repo.  (The SAN handles flock() and fcntl() correctly.)  These 3 FEs would be
> > load balanced by a Citrix Netscaler.  (At least for http(s).)
> >
> > Would there be any problems with this configuration?
> 
> Potentially. Read operations, I wouldn't expect to be a big problem,
> but commit operations need to be atomic, and the software wasn't
> *written* to behave well with network mounted back end filesystems
> across multiple servers. So I wouldn't know, off hand, what phase
> delays between two front ends writing revisions at the same time might
> create for genuine adventures on the back end.

Subversion was designed to allow multiple concurrent server processes
to access the same repository.

And, generally, yes, there is a lot less risk of corruption if no
network I/O is involved when data is written to the repository
by a server process. After all, you're adding yet another complex
layer where something can go wrong.

But assuming locking via fcntl() works correctly, there shouldn't be
a problem with FSFS repositories ("svnadmin create --fs-type=fsfs",
which is the default). FSFS was specifically designed for use with NFS.
This was stated in the release announcement of Subversion 1.1, which
was the first release to support FSFS
(see http://subversion.apache.org/docs/release-notes/1.1.html).
Technical details are available at
http://svn.apache.org/repos/asf/subversion/trunk/notes/fsfs

SAN storage usually appears as a local disk, so things should work fine.
I know of setups that run virtualised servers which access repositories
on SAN storage without issues.

It would be far from the truth to claim that problems have never been
seen on network filesystems, though. For example, I know people who,
after putting FSFS repositories on a CIFS share, ended up with a corrupt
rep-cache.db. This is an SQLite database added to FSFS in Subversion 1.6.
SQLite requires the same locking primitives that Subversion requires
(see http://sqlite.org/faq.html#q5). This problem happened even with
just a single server instance writing to the repository. However, the
most likely cause is flawed or misconfigured file-locking support in
the CIFS implementation. I could not examine the failure in detail,
but it involved a huge commit that took a long time, with possibly
concurrent commits. The rep-cache.db file is opened by every commit
operation, so a file-locking race is quite likely to hit here (not
discounting other possible races, such as two commits writing to the
same revision file at the same time; they're just less likely).
The rep cache could be disabled in fsfs.conf without harm to recover
from this problem. Disabling this cache only increases the odds that
future revisions store redundant content, but has no effect on correctness.
AFAIK the repositories were moved off the share and the problem has not
occurred since.
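As a concrete sketch of the workaround mentioned above: the representation cache is controlled by a setting in the repository's db/fsfs.conf (present since Subversion 1.6):

```
[rep-sharing]
# Disable the representation cache (rep-cache.db). Future revisions may
# store some redundant content, but correctness is unaffected.
enable-rep-sharing = false
```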


Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Ryan Schmidt

On Feb 10, 2012, at 15:00, Nico Kadel-Garcia wrote:

> On Fri, Feb 10, 2012 at 1:21 PM, Bruce Lysik wrote:
>> 
>> I'm considering deploying 3 front-ends, all mounting the same SAN volume for
>> repo.  (The SAN handles flock() and fcntl() correctly.)  These 3 FEs would be
>> load balanced by a Citrix Netscaler.  (At least for http(s).)
>> 
>> Would there be any problems with this configuration?
> 
> Potentially. Read operations, I wouldn't expect to be a big problem,
> but commit operations need to be atomic, and the software wasn't
> *written* to behave well with network mounted back end filesystems
> across multiple servers. So I wouldn't know, off hand, what phase
> delays between two front ends writing revisions at the same time might
> create for genuine adventures on the back end.
> 
> If you need 3 front ends, you might really consider Wandisco's
> commercial package, which is designed for multiple front end use.

My understanding is that WANdisco's strength is multi-site deployments, but 
the OP's mention of a load balancer implies to me that all servers will be 
located at a single site. Bruce, if that's so, are three servers really needed? 
Have you verified that a single server will not be fast enough? If so, you 
could consider having any number of read-only slave servers, each of which 
would proxy its write requests back to the single master server that 
Subversion supports. This way read operations would be accelerated, while 
write operations would be safely limited to just the single master. The slave 
servers could keep individual copies of the repository(ies) synchronized with 
the master using svnsync; alternatively, keeping a single copy of the data on 
a SAN that all the servers access might be OK, though as Nico said that's not 
the usual (and well-tested) configuration.




Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Stefan Sperling
On Fri, Feb 10, 2012 at 04:09:45PM -0600, Ryan Schmidt wrote:
> Have you verified that a single server will not be fast enough?

Good point. It might very well be fast enough.

> If so, you could consider having any
> number of read-only slave servers, which would each proxy their write
> requests back to the single master server that Subversion supports.
> This way read operations would be accelerated, while write operations
> would be securely limited to just the single master. The slave servers
> could keep individual copies of the repository(ies) synchronized with
> the master using svnsync

This is misleading. A write-through proxy setup does not eliminate
write operations on slave servers.

While replicating commits, svnsync performs the exact same kinds of
write operations against the slave servers that happen on the master
repository when a client makes a commit.

In fact, in the corrupted rep-cache.db case I mentioned earlier,
the write operation to the CIFS share was performed by svnsync
(luckily, only the slave was storing its repositories on CIFS :)


Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Ryan Schmidt

On Feb 10, 2012, at 16:16, Stefan Sperling wrote:
> On Fri, Feb 10, 2012 at 04:09:45PM -0600, Ryan Schmidt wrote:
>> you could consider having any
>> number of read-only slave servers, which would each proxy their write
>> requests back to the single master server that Subversion supports.
>> This way read operations would be accelerated, while write operations
>> would be securely limited to just the single master. The slave servers
>> could keep individual copies of the repository(ies) synchronized with
>> the master using svnsync
> 
> This is misleading. A write-through proxy setup does not eliminate
> write operations on slave servers.

Oh I see! That is not how I had envisioned it happening, so thank you for the 
clarification.

> While replicating commits, svnsync performs the exact same kinds of
> write operations against the slave servers that happen on the master
> repository when a client makes a commit.

So when using svnsync one should always use a separate and preferably local 
copy of the repository(ies) on each server, yes?



Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Stefan Sperling
On Fri, Feb 10, 2012 at 04:20:02PM -0600, Ryan Schmidt wrote:
> > While replicating commits, svnsync performs the exact same kinds of
> > write operations against the slave servers that happen on the master
> > repository when a client makes a commit.
> 
> So when using svnsync one should always use a separate and preferably
> local copy of the repository(ies) on each server, yes?

Yes. That's the entire idea. Revisions are replicated by performing
a commit against the slave. The only difference from a normal commit
is that the data originates from an existing revision of a different
repository, rather than from a working copy.

This design allowed svnsync to reuse functionality already implemented
for normal svn clients (e.g. network support). The slave server cannot 
tell the difference between an svn client and svnsync. Well, it could,
by looking at the revision properties of revision 0, which svnsync uses
to store metadata. But as far as the commit process is concerned, it cannot.

You probably assumed that svnsync transfers the underlying revision data
files across the network? That is not how svnsync operates.
But 'svnadmin hotcopy' works this way (minus the network support ;)
This is why 'svnadmin hotcopy' is faster when making a copy of a
complete repository: it performs a direct disk-to-disk copy
and does not replay commits like svnsync does.
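The replication flow described above can be sketched with throwaway local file:// repositories standing in for a real master and slave (all paths are temporary placeholders):

```shell
# Degrade gracefully where the Subversion tools are not installed.
command -v svnadmin >/dev/null || { echo "subversion not installed"; exit 0; }

master=$(mktemp -d)/master
slave=$(mktemp -d)/slave
svnadmin create "$master"
svnadmin create "$slave"

# svnsync stores its bookkeeping in revision properties on the slave,
# so the slave must allow revprop changes via this hook:
printf '#!/bin/sh\nexit 0\n' > "$slave/hooks/pre-revprop-change"
chmod +x "$slave/hooks/pre-revprop-change"

svnsync initialize "file://$slave" "file://$master"
svnsync synchronize "file://$slave"   # replays each master revision as a commit
```

Note that the synchronize step really does perform commits against the slave repository, which is Stefan's point: the slave sees ordinary write operations.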


Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Ryan Schmidt

On Feb 10, 2012, at 16:34, Stefan Sperling wrote:

> On Fri, Feb 10, 2012 at 04:20:02PM -0600, Ryan Schmidt wrote:
>>> While replicating commits, svnsync performs the exact same kinds of
>>> write operations against the slave servers that happen on the master
>>> repository when a client makes a commit.
>> 
>> So when using svnsync one should always use a separate and preferably
>> local copy of the repository(ies) on each server, yes?
> 
> Yes. That's the entire idea. Revisions are replicated by performing
> a commit against the slave. The only difference to a normal commit
> is that data originates from an existing revision of a different
> repository, rather than a working copy.

Here you're talking about a commit that happened on the master, and is being 
replicated to the slaves. I understand how that works.

I was talking about a commit that is done against the slave, and is proxied to 
the master. The book says:

http://svnbook.red-bean.com/en/1.7/svn.serverconfig.httpd.html#svn.serverconfig.httpd.extra.writethruproxy

"All read requests go to their local slave. Write requests get automatically 
routed to the single master server. When the commit completes, the master then 
automatically “pushes” the new revision to each slave server using the svnsync 
replication tool."
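The slave-side setup the book describes boils down to a mod_dav_svn block like the following sketch. Hostnames and paths are placeholders; SVNMasterURI is the mod_dav_svn directive that routes write requests through to the master:

```apache
<Location /svn>
  DAV svn
  SVNPath /var/svn/repos
  # Proxy write requests (commits) through to the master server:
  SVNMasterURI http://master.example.com/svn
</Location>
```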


So, thinking all this through, I agree svnsync does not make sense if you are 
hosting a repository on a SAN and trying to connect multiple svn servers to it. 
But it sounds like it would work fine if you simply don't use svnsync. 
Configure one server to be the master (let it accept write requests). Configure 
the other servers to be slaves (read-only, proxying any incoming write 
requests to the master). All servers point to the same repository data on the 
SAN, and it can't get corrupted because only one server is writing to it. 
Sound OK?




Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Stefan Sperling
On Fri, Feb 10, 2012 at 04:47:31PM -0600, Ryan Schmidt wrote:
> So thinking all this through, I agree svnsync does not make sense if
> you are hosting a repository on a SAN and trying to connect multiple
> svn servers to it. But it sounds like it would work fine, if you
> simply don't use svnsync. Configure one server to be the master (let
> it accept write requests). Configure the other servers to be slaves
> (read-only, and proxy any incoming write requests to the master). All
> servers point to the same repository data on the SAN and it can't get
> corrupted because only one server is writing to it. Sound ok?

Ah, I see what you mean.

Well, I suppose this would work, yes. You are essentially using
the write-through proxy feature to implement load balancing for
incoming TCP connections.

But it isn't necessary, because the SAN should support file locking,
so multiple concurrent servers writing to the same repository
synchronise their write operations anyway.
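One quick way to sanity-check that a mount supports the advisory locking involved is util-linux's flock(1). The sketch below uses a temporary file so it runs anywhere; substituting a path on the SAN mount is the interesting case:

```shell
# Degrade gracefully where flock(1) is not available (e.g. macOS).
command -v flock >/dev/null || { echo "flock not available"; exit 0; }

# Take an exclusive lock on a file and run a command while holding it.
# On a filesystem with broken locking this can hang or fail.
lockfile=$(mktemp)            # substitute a path on the SAN mount to test it
flock "$lockfile" -c 'echo "lock acquired"'
```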

E.g. consider offering both http:// and svn:// access to the repository.
This is rarely done, but it is a supported use case. In your scheme, 
only http:// works, because the write-through proxy feature is specific
to mod_dav_svn.


Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Bruce Lysik
We have a single server installation which is currently not fast enough.   

The LB pair + 3 svn front-ends + SAN storage is not strictly for performance, 
but also for reliability.  Scaling vertically would probably solve performance 
problems in the short term, but still wouldn't address single points of failure.

Thanks for all the responses to this thread, it's very educational.
 
--
Bruce Z. Lysik 




Re: multiple svn front-ends, single SAN repo volume

2012-02-10 Thread Les Mikesell
On Fri, Feb 10, 2012 at 11:29 PM, Bruce Lysik  wrote:
> We have a single server installation which is currently not fast enough.
>
> The LB pair + 3 svn front-ends + SAN storage is not strictly for
> performance, but also for reliability.  Scaling vertically would probably
> solve performance problems in the short term, but still wouldn't address
> single points of failure.
>
> Thanks for all the responses to this thread, it's very educational.

Is the current storage on the SAN? If not, putting it there with
fail-over svn servers fixes the reliability issue without introducing
new locking issues. And if the SAN is faster than the local disk, it
may help with speed as well. Does it all have to be in a single
repository? If not, moving different parts to different svn servers
spreads the load without sharing the same transaction lock.

-- 
   Les Mikesell
 lesmikes...@gmail.com