As a short and simple approach, I just compiled the current R release
on Ubuntu with ICC and MKL using just this:
$ tar -xzf R-3.2.2.tar.gz
$ cd R-3.2.2
$ CC=icc CXX=icpc AR=xiar LD=xild CFLAGS="-g -O3 -xHost" \
    CXXFLAGS="-g -O3 -xHost" \
    ./configure --with-blas="-lmkl_rt -lpthread" --with-lapack --en
I was experiencing a strange pattern of slowdowns when using
sample.int(), where sampling from one population would sometimes
take 1000x longer than taking the same number of samples from a
slightly larger population. For my application, this resulted in a
runtime of several hours rather than
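The message is cut off before the details, but a sketch along these lines can probe for the discontinuity described (the population sizes below are illustrative guesses, not the sizes that actually triggered the slowdown):

```r
# Time sample.int() across adjacent population sizes to look for a
# sudden jump in elapsed time. Sizes here are purely illustrative.
time_sample <- function(n, k = 1000L) {
  system.time(sample.int(n, k))[["elapsed"]]
}
sizes <- c(1e5 - 1, 1e5, 1e5 + 1, 1e6)
print(data.frame(n = sizes, elapsed = vapply(sizes, time_sample, numeric(1))))
```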
arnaud gaboury wrote:
> On Wed, Sep 9, 2015 at 11:26 PM, Nathan Kurz wrote:
When doing repeated regressions on large data sets, I'm finding that
the time spent on garbage collection often exceeds the time spent on
the regression itself. Consider this test program which I'm running
on an Intel Haswell i7-4770 processor under Linux 3.13 using R 3.1.2
compiled with ICPC 14
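The test program itself is truncated above, but a minimal sketch in the same spirit would be: fit the same regression repeatedly on a large data set and compare elapsed fit time against cumulative time spent in the collector. The sizes below are illustrative, not the original benchmark.

```r
# Repeated lm() fits on a large data frame, with GC time measured via
# gc.time(), which reports cumulative time spent in garbage collection.
set.seed(1)
n <- 2e4; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% rnorm(p)) + rnorm(n)
dat <- data.frame(y = y, X)

invisible(gc())          # settle the heap before measuring
gc0 <- gc.time()         # cumulative GC time so far
elapsed <- system.time(for (i in 1:10) fit <- lm(y ~ ., data = dat))
gc_spent <- gc.time() - gc0
cat("fit elapsed:", elapsed[["elapsed"]], "s;",
    "GC user time:", gc_spent[[1]], "s\n")
```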
On Thu, Jan 15, 2015 at 11:08 AM, Simon Urbanek wrote:
> In addition to the major points that others made: if you care about speed,
> don't use compression. With today's fast disks it's an order of magnitude
> slower to use compression:
>
>> d=lapply(1:10, function(x) as.integer(rnorm(1e7)))
>>
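The quoted benchmark is cut off, but a comparison in the same spirit can be reproduced like this (smaller data than the quoted `rnorm(1e7)` so it runs quickly; timings will vary with disk and CPU):

```r
# Serialize the same object with and without compression and compare
# elapsed times and file sizes. Files go to tempdir() and are removed.
d <- lapply(1:10, function(x) as.integer(rnorm(1e6)))
f1 <- tempfile(); f2 <- tempfile()
t_comp   <- system.time(saveRDS(d, f1, compress = TRUE))[["elapsed"]]
t_nocomp <- system.time(saveRDS(d, f2, compress = FALSE))[["elapsed"]]
cat(sprintf("compressed: %.2fs (%.1f MB)  uncompressed: %.2fs (%.1f MB)\n",
            t_comp, file.size(f1) / 2^20, t_nocomp, file.size(f2) / 2^20))
unlink(c(f1, f2))
```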
On Thu, Jan 15, 2015 at 3:55 PM, Michael Lawrence wrote:
> Just wanted to start a discussion on whether R could ship with more
> appropriate GC parameters.
I've been doing a number of similar measurements, and have come to the
same conclusion. R is currently very conservative about memory usage,
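For reference, the startup parameters under discussion can already be raised per session; a minimal invocation sketch (the values here are illustrative, not tuned recommendations) is:

```shell
# Start R with larger initial GC heap settings so the collector does
# less incremental heap growth early in a large job (see ?Memory).
R --min-nsize=20M --min-vsize=2G --vanilla

# The same knobs are available as environment variables:
R_NSIZE=20M R_VSIZE=2G R --vanilla
```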
On Mon, Jan 19, 2015 at 1:00 PM, Felipe Balbi wrote:
> I just thought that such a small patch which causes no visible change to
> SVN users and allow for git users to build R would be acceptable, but if
> it isn't, that's fine too.
Felipe ---
It would appear that you are unaware that you are wal
... performance improvement would be large.
Measurement details are here:
http://stackoverflow.com/questions/28532493/reusing-existing-memory-for-blas-operations-in-r
Are there downsides or difficulties to this approach that I'm missing?
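One rough way to see the allocation pattern in question from the R level (sizes are illustrative):

```r
# Every iteration of a BLAS matrix product allocates a brand-new result
# matrix, which the collector must later reclaim, rather than writing
# into an existing buffer.
A <- matrix(rnorm(1e4), 100, 100)
B <- matrix(rnorm(1e4), 100, 100)
invisible(gc(reset = TRUE))        # reset the "max used" counters
for (i in 1:50) C <- A %*% B       # 50 fresh 100x100 result allocations
gc()                               # temporaries show up in the GC stats
```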
Nathan Kurz
n...@verse.com
On Wed, Feb 18, 2015 at 7:19 AM, Radford Neal wrote:
>> ... with assignments inside of loops like this:
>>
>> reweight = function(iter, w, Q) {
>>   for (i in 1:iter) {
>>     wT = w * Q
>>   }
>> }
>> ... before the RHS is executed, the LHS allocation would be added
>> to a small fixed-length list
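The per-iteration allocation the proposal targets can be measured directly; a quick sketch (vector sizes are illustrative):

```r
# Each pass through the loop allocates a fresh result vector for wT,
# so the GC stats after the run reflect the churned temporaries.
reweight <- function(iter, w, Q) {
  for (i in 1:iter) {
    wT <- w * Q
  }
}
w <- runif(1e6); Q <- runif(1e6)
invisible(gc(reset = TRUE))
print(system.time(reweight(100, w, Q)))
print(gc())   # "max used" growth and collection counts from the loop
```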