Re: LDAP authenticate problem

2018-05-22 Thread Eric Johnson
The question relates to either the Apache or the Active Directory
configuration, not Subversion, from the looks of it.

The mailing lists for httpd will probably be able to give better advice
more quickly.

Eric.


On Mon, May 21, 2018 at 2:41 PM, Paul Nguyen wrote:

> I’m running SVN 1.9.3 (r1718519) on Ubuntu 16.04 with Server version:
> Apache/2.4.18 (Ubuntu).
>
> The problem is that when a user fails 3 times with his password, the account
> doesn’t get locked; it just keeps prompting. It looks like it authenticates
> against every single file in the path of the repo the user wants to access.
>
> The apache.conf:
>
>
> <VirtualHost ...>
>
>   ServerName 
>   ErrorLog /var/log/svn/docs_LDAP_error.log
>   CustomLog /var/log/svn/docs_LDAP_access.log common
>
>   <Location ...>
>     DAV svn
>     SVNPath /var/svnrepo/docs
>
>     ## LDAP
>     AuthName "docs Repo - Active Directory Authentication"
>     AuthBasicProvider ldap
>     AuthType Basic
>     AuthLDAPGroupAttribute member
>     AuthLDAPGroupAttributeIsDN On
>     AuthLDAPURL "ldap://:389/cn=Users,dc=chp,dc=com?sAMAccountName?sub?(objectClass=*)"
>     AuthLDAPBindDN "app_subvers...@chp.com"
>     AuthLDAPBindPassword ""
>     require valid-user
>     ##
>
>     RequestHeader edit Destination ^https: http: early
>     AuthzSVNAccessFile /var/svnrepo/auth/docs-subdomain
>     SetInputFilter DEFLATE
>     SetOutputFilter DEFLATE
>     SVNIndexXSLT /.chp/svnindex.xsl
>   </Location>
>
> </VirtualHost>
>
> Is there a way to lock out a user account after 3 failed attempts, as it's
> supposed to happen?
>
> Thanks,
> Paul
>
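A quick way to see those repeated failures piling up is to count authentication
failures per user in the httpd error log; the lockout itself would have to come
from the Active Directory account-lockout policy rather than from httpd. A
minimal Python sketch, assuming mod_auth_basic-style AH01617 lines (the sample
lines and the exact log format are assumptions, not taken from the actual
server):

```python
import re
from collections import Counter

# Hypothetical lines resembling mod_auth_basic failures in
# /var/log/svn/docs_LDAP_error.log (the exact format is an assumption).
sample_log = """\
[auth_basic:error] [client 10.0.0.5] AH01617: user paul: authentication failure for "/svn/docs": Password Mismatch
[auth_basic:error] [client 10.0.0.5] AH01617: user paul: authentication failure for "/svn/docs": Password Mismatch
[auth_basic:error] [client 10.0.0.5] AH01617: user paul: authentication failure for "/svn/docs": Password Mismatch
"""

# Count failures per user; three or more is past the intended threshold.
failures = Counter(re.findall(r'user (\S+): authentication failure', sample_log))
print(failures)  # Counter({'paul': 3})
```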


Re: Reference to non-existent node

2018-05-22 Thread Davor Josipovic
>  Now that is interesting. 40k doesn't seem to be such a large amount of data 
> for modern computers. Very slow and fragmented hard drive? Or perhaps there's 
> something else going on that is manifesting this way?

The HDD is indeed on the slow side, and together with low memory...

But I think this also shows how I/O-intensive SVN is. On the client side, for 
each committed file, one copy is placed in the .svn folder, and another copy in 
a temporary folder (which is deleted after the file transfer in v1.9). So for 
each committed file, a double copy is made client-side. Is this temporary copy 
really necessary?

Server-side, I see similar disk bashing. For each committed file, at most 2 (?) 
copies are made in the transaction directory.

So is there any way to reduce the I/O?
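A back-of-envelope sketch of what that double copy means for a commit of the
size discussed in this thread (the average file size is a made-up number for
illustration, not a measurement):

```python
# Rough write-amplification estimate for a large commit, client-side only.
n_files = 40_000        # number of files, from the thread
avg_kb = 8              # hypothetical average file size
copies_per_file = 2     # pristine copy in .svn plus the temporary copy

total_mb = n_files * avg_kb * copies_per_file / 1024
print(f"~{total_mb:.0f} MB written client-side")  # ~625 MB written client-side
```

On a slow, fragmented HDD with little memory for the page cache, writes on that
order would plausibly dominate the commit time.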



Re: Reference to non-existent node

2018-05-22 Thread Nico Kadel-Garcia
On Tue, May 22, 2018 at 2:36 PM Davor Josipovic  wrote:

> > Now that is interesting. 40k doesn't seem to be such a large amount of
> > data for modern computers. Very slow and fragmented hard drive? Or
> > perhaps there's something else going on that is manifesting this way?

> The HDD is indeed on the slow side, and together with low memory...

> But I think this also shows how I/O-intensive SVN is. On the client side,
> for each committed file, one copy is placed in the .svn folder, and another
> copy in a temporary folder (which is deleted after the file transfer in
> v1.9). So for each committed file, a double copy is made client-side. Is
> this temporary copy really necessary?

I think it shows how I/O-intensive using 40,000 small files is. Especially
if they are in the same directory: many filesystems get increasingly
unhappy as they try to manage that many files in one directory.
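A common workaround for the many-files-in-one-directory problem is to shard by
a hash prefix; newer FSFS repository formats shard revision files into
subdirectories server-side for the same reason. A minimal sketch (the function
and layout are my own, not anything SVN provides):

```python
import hashlib
import os

def shard_path(root: str, name: str, depth: int = 2) -> str:
    """Place a file under nested subdirectories derived from a hash of its
    name, so no single directory ends up holding tens of thousands of
    entries."""
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
    # digest[:depth] unpacks into one single-character directory per level.
    return os.path.join(root, *digest[:depth], name)

print(shard_path("data", "file0001.txt"))
```

With depth=2 and hexadecimal digits, 40,000 files spread across 256
directories, roughly 160 entries each, which most filesystems handle easily.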