I'm in a similar situation and am looking seriously at a pair of E5-2643 v3 CPUs (6 hyperthreaded cores each).

Clint Bowman                    INTERNET:       cl...@ecy.wa.gov
Air Quality Modeler             INTERNET:       cl...@math.utah.edu
Department of Ecology           VOICE:          (360) 407-6815
PO Box 47600                    FAX:            (360) 407-7534
Olympia, WA 98504-7600

        USPS:           PO Box 47600, Olympia, WA 98504-7600
        Parcels:        300 Desmond Drive, Lacey, WA 98503-1274

On Mon, 15 Sep 2014, Prof Brian Ripley wrote:

On 15/09/2014 11:21, Ben Bolker wrote:
 Leif Ruckman <Leif <at> Ruckman.se> writes:

> I am going to buy a new computer (Dell workstation T5810 - Windows 8)
> to work with simulations in R.
>
> Now I am asked what kind of processor I would like and I was given two
> choices:
>
> 1. Intel Xeon E5-1620 v3 - 4 cores, 3.7 GHz Turbo
> 2. Intel Xeon E5-2640 v3 - 8 cores, 2.6 GHz Turbo
>
> I don't know what is better in simulation studies in R: a few very fast
> cores or many cores at normal speed.


    It's **very** hard to answer such general questions reliably, but I'll
 take a guess and say that if you're doing simulation studies you're likely
 to be doing tasks that are easily distributable (e.g. many random
 realizations of the same simulation and/or realizations for many
 different sets of parameter values) and so the more-cores option
 will be a good idea.
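
 For concreteness, a minimal sketch (not from the original posts) of what such an easily distributable study can look like with the parallel package that ships with R; one_sim() and the replicate count are placeholders for the real simulation, and makeCluster() is used rather than mclapply() because the machine in question runs Windows, where forking is not available:

library(parallel)

## Placeholder for one realization of the simulation.
one_sim <- function(n = 1000) {
  x <- rnorm(n)
  mean(x)
}

n_rep <- 200
cl <- makeCluster(detectCores())      # one worker per (virtual) core
clusterExport(cl, "one_sim")          # PSOCK workers need the function shipped over
clusterSetRNGStream(cl, iseed = 42)   # independent, reproducible RNG streams
res <- parLapply(cl, seq_len(n_rep), function(i) one_sim())
stopCluster(cl)

summary(unlist(res))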

    But it's possible that what you mean by "simulation studies" is
 different.

    If you can do some benchmarking of your problems on an existing
 machine, that would probably be a good idea.
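
 A rough way to do that benchmarking, sketched here with sim_once() standing in for one replicate of the real problem: time a small batch serially and in parallel on the current machine and compare elapsed times before choosing a CPU.

library(parallel)

sim_once <- function() mean(rnorm(1e6))   # stand-in for one replicate

n <- 40
t_serial <- system.time(lapply(seq_len(n), function(i) sim_once()))

cl <- makeCluster(detectCores())
clusterExport(cl, "sim_once")
t_par <- system.time(parLapply(cl, seq_len(n), function(i) sim_once()))
stopCluster(cl)

c(serial   = unname(t_serial["elapsed"]),
  parallel = unname(t_par["elapsed"]))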

Unfortunately, unless it is of very similar architecture, that may not help much.

Three issues that make it hard to scale from such benchmarks are the 'Turbo' boost, the hyperthreading of modern Xeons and the cache sizes. Now, I happen to have machines with multiple E5-24x0 and E5-26x0 Xeons: both do hyperthreading well, so you would have 8 or 16 virtual CPUs, and they will give you, say, a 50% increase in throughput if all the virtual cores are used. But you cannot scale up from using just one process on one core.
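
To illustrate that point, one way to measure rather than extrapolate is to time the same batch of work at several worker counts and see where the speed-up flattens; work() below is an arbitrary CPU-bound stand-in (not anyone's actual workload), and on a hyperthreaded Xeon the curve typically bends once the physical cores are full:

library(parallel)

work <- function(i) sum(rnorm(5e5)^2)   # arbitrary CPU-bound task

workers <- unique(c(1, 2, 4, detectCores()))
elapsed <- sapply(workers, function(nw) {
  cl <- makeCluster(nw)
  t <- system.time(parLapply(cl, 1:64, work))[["elapsed"]]
  stopCluster(cl)
  t
})

data.frame(workers, elapsed, tasks_per_sec = 64 / elapsed)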

I find it hard to think of tasks where option 1 would have more throughput, but if most of the time you are not running things in parallel, then the higher speed on a single task is a consideration.


    Ben Bolker




--
Brian D. Ripley,                  rip...@stats.ox.ac.uk
Emeritus Professor of Applied Statistics, University of Oxford
1 South Parks Road, Oxford OX1 3TG, UK

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
