Stefan Beller writes:
> I can confirm this now.
>
> git fetch --recurse-submodules=yes -j 400
>
> in a submodule-ified Android tree takes a very long time to start putting out
> useful information, but if I hardcode the SPAWN_CAP to 4 it looks amazingly
> fast.
Nice to hear that parallel fetc
On Wed, Sep 23, 2015 at 12:34 PM, Junio C Hamano wrote:
> Junio C Hamano writes:
>
>> You are running "git fetch" processes that are a lot more heavy-weight.
>> Because once each of them has started fully they will be network bound,
>> it is likely that you would want to run more processes than you have
>>
Junio C Hamano writes:
> You are running "git fetch" processes that are a lot more heavy-weight.
> Because once each of them has started fully they will be network bound,
> it is likely that you would want to run more processes than you have
> cores.
I thought the conclusion would be obvious, but just in ca
Stefan Beller writes:
> I modified the test-run-command test function to start up to 400 processes.
> (Most people will use fewer than 400 processes in the next 5 years), and run
> just as in t0061:
>
> ./test-run-command run-command-parallel-400 sh -c "printf \"%s\n%s\n\" Hello World"
>
> T
Junio C Hamano writes:
> Just to make sure there is no misunderstanding, just like I prefer
> "start one" over "start as many as possible" in order to give
> scheduling decision to the calling loop, I would expect that...
To sum up, what I anticipate would happen over time on top of 06/14
is som
On Tue, Sep 22, 2015 at 11:29 PM, Junio C Hamano wrote:
>
> And this one, when get_next_task() says "nothing more to do", is
> clearly "we returned without starting anything", so according to the
> comment it should be returning 0, but the code returns 1, which
> looks incorrect.
>
>> + if (s
Junio C Hamano writes:
> Stefan Beller writes:
>
>> +static void pp_buffer_stderr(struct parallel_processes *pp)
>> +{
>> +	int i;
>> +
>> +	while ((i = poll(pp->pfd, pp->max_processes, 100)) < 0) {
>> +		if (errno == EINTR)
>> +			continue;
>> +		pp_cl
Stefan Beller writes:
> +static void pp_buffer_stderr(struct parallel_processes *pp)
> +{
> +	int i;
> +
> +	while ((i = poll(pp->pfd, pp->max_processes, 100)) < 0) {
> +		if (errno == EINTR)
> +			continue;
> +		pp_cleanup(pp);
> +		die_
Stefan Beller writes:
> run-command.c          | 264 +
> run-command.h          |  36 +++
> t/t0061-run-command.sh |  20
> test-run-command.c     |  24 +
> 4 files changed, 344 insertions(+)
I think we are almost there, but the
This allows running external commands in parallel with ordered output
on stderr.
If we run external commands in parallel we cannot pipe their output directly
to our stdout/stderr as it would get mixed up. So each process's output will
flow through a pipe, which we buffer. One subprocess can be directly
pi