I solved that issue (unable to check out a large repo via HTTP) by editing 
git/config/unicorn.yml, raising the timeout from 30 to 120, and restarting 
the gitlab service.
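For what it's worth, on many GitLab installs of this vintage the Unicorn config is a Ruby file (often config/unicorn.rb rather than a .yml; the exact path may differ from mine). The relevant line is just the worker timeout, something like:

```ruby
# config/unicorn.rb -- file name/location may vary by install.
# Raise the worker timeout so large HTTP clone/pack transfers
# aren't killed mid-stream (the shipped default here was 30).
timeout 120
```

After changing it, restart GitLab so the Unicorn workers pick up the new value (on my box that was the usual service restart, e.g. `sudo service gitlab restart`).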

My unicorn is behind apache running as a reverse proxy with no special 
timeouts set there...
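If the Apache side ever does become the bottleneck, its proxy timeouts can be raised to match -- a minimal sketch using standard mod_proxy/core directives (values illustrative; I didn't need these myself):

```apache
# In the Apache vhost that proxies to Unicorn (illustrative values).
# ProxyTimeout is how long Apache waits on the backend before
# returning a gateway error; keep it >= the Unicorn worker timeout.
ProxyTimeout 120
Timeout 120
```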



On Wednesday, September 25, 2013 8:49:59 AM UTC-4, Alison Gravley wrote:
>
> People are constantly getting errors when cloning large repositories (only 
> about 100MB at biggest). Everyone has put the 500MB postBuffer settings on, 
> which sometimes fixes it, but it still fails most of the time regardless. I 
> have manually done repacks, which fixes it sometimes, but even then it will 
> still fail for people. I shouldn't have to go in there and fix repos all 
> day. What magical change to gitlab or whatever do I need to do to keep this 
> from happening???
>
> *gitlab.yml has these settings:*
>
>   ## Git settings
>   # CAUTION!
>   # Use the default values unless you really know what you are doing
>   git:
>     bin_path: /usr/bin/git
>     # Max size of a git object (e.g. a commit), in bytes
>     # This value can be increased if you have very large commits
>     max_size: 524288000 # 500.megabytes
>     # Git timeout to read a commit, in seconds
>     timeout: 120
>
> *The git error they are getting is this:*
> POST git-upload-pack (200 bytes)
> remote: Counting objects: 1305, done.
> remote: Compressing objects: 100% (291/291), done.
> fatal: early EOF
> fatal: The remote end hung up unexpectedly
> fatal: index-pack failed
> error: RPC failed; result=18, HTTP code = 200
>
>
