I am looking for ways to reduce my process size for calls to mclapply. I
have determined that the size of my process creates too much overhead
for the forking to be faster than a serial run. I am looking for some
techniques to experiment with. I have been getting huge efficiency gains
using [...]
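One common approach (a sketch, not the poster's code — `big`, `x`, and the worker function are hypothetical stand-ins) is to drop large objects and run the garbage collector before forking, so the copy-on-write image each mclapply child inherits is as small as possible:

```r
library(parallel)

# Hypothetical large object standing in for the state that bloats the process.
big <- matrix(rnorm(1e6), ncol = 100)
x <- 1:8

# Drop anything the workers do not need, then collect garbage before the
# fork so the inherited image starts small.
rm(big)
invisible(gc())

# mclapply forks on Unix; on Windows there is no fork, so fall back to 1 core.
cores <- if (.Platform$OS.type == "unix") 2L else 1L
res <- mclapply(x, function(i) i^2, mc.cores = cores)
unlist(res)
```

The same idea applies to the worker function's closure: pass data in as arguments rather than capturing a large enclosing environment.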
> Research Engineer (Solar/Batteries/Software/Embedded Controllers)
> -------
> Sent from my phone. Please excuse my brevity.
>
> Jeffrey Flint wrote:
>> I'm running package parallel in R-3.0.2.
>>
>> Below are the execution times [...]
I'm running package parallel in R-3.0.2.

Below are the execution times from system.time when executing serially
versus in parallel (with 2 cores) using parRapply.

Serially:
   user  system elapsed
   4.67    0.03    4.71

Using package parallel:
   user  system elapsed
   3.82    0.12
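The comparison in the thread can be reproduced along these lines (the matrix size is illustrative, not the poster's data; `parRapply` applies a function over the rows of a matrix across the cluster workers):

```r
library(parallel)

m <- matrix(runif(2e5), nrow = 2e4)   # illustrative data, not the original

# Serial baseline: row sums via apply.
t_serial <- system.time(r1 <- apply(m, 1, sum))

# Same computation on a 2-worker socket cluster.
cl <- makeCluster(2)
t_par <- system.time(r2 <- parRapply(cl, m, sum))
stopCluster(cl)

all.equal(r1, r2)   # identical results; the speedup depends on per-row cost
```

Whether the parallel run wins depends on how expensive each row is relative to the cost of shipping the data to the workers, which is the trade-off the thread is discussing.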
> can you please explain why you want to start the workers manually? I'd
> be happy to look into the details if you can reproduce the problem with a
> recent version of R and the parallel package.
>
> Best,
> Uwe Ligges
>
> On 28.09.2013 03:20, Jeffrey Flint wrote:
This is regarding the SNOW library.

I'm using Windows. The problem is that makeSOCKcluster hangs both in R
and from the DOS command line. Below I've shown that the Rscript runs
until it reaches the line "slaveLoop(master)", at which point it hangs.

In R:
> I am not sure why you would want to do
> that).
>
>
> On 27/09/2013 02:26, Jeffrey Flint wrote:
>
>> The command which hangs:
>> I'm hoping there is a simple explanation, but I searched on-line and
>> nothing jumped out.
>>
>> cl <- makeCluster(type="SOCK",c("localhost"),manual=TRUE)
The command which hangs:
I'm hoping there is a simple explanation, but I searched on-line and
nothing jumped out.
> cl <- makeCluster(type="SOCK",c("localhost"),manual=TRUE)
Manually start worker on localhost with
C:/PROGRA~1/R/R-214~1.2/bin/Rscript.exe "C:/Program
Files/R/R-2.14.2/library/sn
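For comparison, here is a minimal sketch of the non-manual path. With the default `manual=FALSE`, makeCluster launches the worker Rscript processes itself and connects them back over a socket, so slaveLoop runs inside those spawned workers rather than being pasted at a prompt (this uses the parallel package's PSOCK type, the current counterpart of snow's SOCK):

```r
library(parallel)

# Default (non-manual) socket cluster: the two workers are spawned and
# connected automatically instead of being started by hand.
cl <- makeCluster(2, type = "PSOCK")
res <- parLapply(cl, 1:4, function(i) i * 2)
stopCluster(cl)
unlist(res)
```

If the manual route is required (e.g. for debugging worker startup), the worker command printed above has to be run in a separate console while makeCluster is still waiting for the connection.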