As parameters. For example, if you have 100 simulations, set up a list of 4
distinct sets of data (1:25, 26:50, etc.) and call the single-threaded
processing function from parLapply iterated four times. Then each instance of
the processing function won't return until it has completed 25 simulations.
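A minimal sketch of that block approach, assuming a hypothetical
single-threaded worker run_block() standing in for your real processing
function:

library(parallel)

sim_ids <- 1:100
blocks  <- split(sim_ids, rep(1:4, each = 25))  # 1:25, 26:50, 51:75, 76:100

run_block <- function(ids) {
  # placeholder for the real work; each worker loops over its 25 ids
  lapply(ids, function(i) sqrt(i))
}

cl <- makeCluster(4)                   # one worker per block
results <- parLapply(cl, blocks, run_block)
stopCluster(cl)

flat <- do.call(c, results)            # recombine the four blocks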
How can I copy distinct blocks of data to each process?
On Mon, Oct 14, 2013 at 10:21 PM, Jeff Newmiller wrote:
> The session info is helpful. To the best of my knowledge there is no easy way
> to share memory between R processes other than forking. You can use
> clusterExport to make "global" copies of large data structures in each process
> and pass index values to your function to reduce copy costs, at a price.
The session info is helpful. To the best of my knowledge there is no easy way
to share memory between R processes other than forking. You can use
clusterExport to make "global" copies of large data structures in each process
and pass index values to your function to reduce copy costs, at a price.
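To illustrate that pattern, a sketch in which big_data and process_rows are
hypothetical stand-ins: the large object is exported once to each worker's
global environment, and only small index vectors travel per task:

library(parallel)

big_data <- matrix(rnorm(1e6), ncol = 10)  # stand-in for a large object

cl <- makeCluster(2)
clusterExport(cl, "big_data")              # copied to each worker once

process_rows <- function(idx) {
  # workers find big_data in their own global environment;
  # only the small idx vector is shipped with each call
  colMeans(big_data[idx, , drop = FALSE])
}

idx_blocks <- split(seq_len(nrow(big_data)),
                    rep(1:2, length.out = nrow(big_data)))
results <- parLapply(cl, idx_blocks, process_rows)
stopCluster(cl)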
Jeff:
Thank you for your response. Please let me know how I can
"unhandicap" my question. I tried my best to be concise. Maybe this
will help:
> version
               _
platform       i386-w64-mingw32
arch           i386
os             mingw32
system         i386, mingw32
status
major
Your question misses several points in the Posting Guide, so any answers you
get will be handicapped by that.
There is an overhead in using parallel processing, and the value of two cores
is marginal at best. In general, parallel by forking is more efficient than
parallel by SNOW, but the former is not available on Windows.
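Both back-ends live in the parallel package; a sketch of the contrast
(mclapply forks, so workers share the master's memory on Unix-alikes, but it
requires mc.cores = 1 on Windows, where the socket/SNOW-style cluster is the
portable route):

library(parallel)

f <- function(i) sum(rnorm(1e5))    # toy per-task workload

## Fork-based: workers share the master's memory (not available on Windows)
res_fork <- mclapply(1:8, f, mc.cores = 2)

## Socket-based (SNOW-style): portable, but data must be copied to workers
cl <- makeCluster(2)
res_snow <- parLapply(cl, 1:8, f)
stopCluster(cl)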