Hi all,
With the help of Tomas, I was able to track the issue down: Prior to R v3.6.0
the parallel package passes an uninitialized variable as the file descriptor
argument to the close system call.
In my particular R session this uninitialized variable reproducibly
held the value 7,
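To illustrate the class of bug, here is a minimal, self-contained C sketch
(this is not the actual fork.c code; sloppy_cleanup and the descriptor names
are made up): an uninitialized int is passed to close(), and if the stack
garbage happens to equal the number of a live descriptor (such as 7 here),
an unrelated descriptor like the child-to-master pipe gets closed, so a later
write() on it fails with EBADF.

#include <stdio.h>
#include <unistd.h>

/* Not the actual R code: a cleanup routine that passes an
   uninitialized variable as the file descriptor to close(). */
static void sloppy_cleanup(void)
{
    int fd;      /* never assigned: holds indeterminate stack contents */
    close(fd);   /* may close an unrelated descriptor that is still in use */
}

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

    sloppy_cleanup();   /* if the garbage value equals pipefd[1],
                           the pipe's write end is silently gone */

    if (write(pipefd[1], "x", 1) == -1)
        perror("write to pipe");   /* then this reports EBADF: Bad file descriptor */
    return 0;
}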
Hi Tomas,
I rebuilt R (v3.5.2 for now, R-devel to follow) from the Debian package with
MC_DEBUG defined and hopefully also with "-Wall -O0 -gdwarf-2 -g3", though I
still have to verify this.
Below is the output. I think there were a total of two mclapply invocations in this
R session, the failing o
Hi Andreas,
thank you very much, good job finding it was EBADF. Now the question is
why the pipe has been closed prematurely; it could be accidentally by R
(a race condition in the cleanup code in fork.c) or possibly by some
other code running in the same process (maybe the R program itself or
Hi Tomas,
Thanks for your prompt reply and your offer to help. I might need to get back
to this since I am not too experienced in debugging these kinds of issues.
Anyway, I gave it a try and I think I have found the immediate cause:
I installed the debug symbols (r-base-core-dbg), placed
https
Hi Andreas,
the error is reported when some child process cannot send results to the
master process; it originates from an error returned by write(), i.e.
when write() returns -1 or 0. The logic around the writing has not
changed since R 3.5.2. It should not be related to the printing in the
c
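Roughly, the pattern is as follows (an illustrative C sketch only, not the
actual R sources; send_to_master and the other names are invented). A write()
return value of -1 (for instance with errno set to EBADF once the pipe has
been closed) or of 0 is treated as a failure to send the results:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Illustrative only: send a serialized result from a child to the
   master over a pipe, treating write() returning -1 or 0 as failure,
   the condition that surfaces as the sendMaster error. */
static int send_to_master(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n <= 0) {   /* -1: error (e.g. EBADF); 0: nothing written */
            fprintf(stderr, "unable to send results: %s\n",
                    n < 0 ? strerror(errno) : "write returned 0");
            return -1;
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

    close(pipefd[1]);   /* simulate the write end being closed prematurely */
    return send_to_master(pipefd[1], "result", 6) == 0 ? 0 : 1;
}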
Hi again,
One important correction to my first message: I misinterpreted the output.
Actually, in that R session two input files were processed one after the other in
a loop. The first (with 88 parts) went fine. The second (with 85 parts)
produced the sendMaster errors and failed. If (in a new ses
Hi,
I am facing a very weird problem with parallel::mclapply. I have a script which
does some data wrangling on an input dataset in parallel and then writes the
results to disk. I have been using this script daily for more than a year,
always on an EC2 instance launched from the same AMI (no u