Thanks for the tip. That API could make my work considerably easier.
Jason
On Tue, Feb 22, 2011 at 7:37 AM, Douglas Bates wrote:
> On Sun, Feb 20, 2011 at 4:39 PM, Jason Rudy wrote:
>> I've never used C++ before, so for this project I think I will stick
>> with just using the BLAS and LAPACK routines directly.
You're right about the code preceding .C. I stripped down the .C and
.Call code to be as similar as possible, and the timings were much
closer.
R code:
Call_matrix_multiply = function(A, B) {
    C <- matrix(0, nrow(A), ncol(B))
    .Call("R_CALL_matrix_multiply", A, B, C)
    return(C)
}
On Sun, Feb 20, 2011 at 4:39 PM, Jason Rudy wrote:
> I've never used C++ before, so for this project I think I will stick
> with just using the BLAS and LAPACK routines directly. Another issue
> is that I will need to do some sparse matrix computations, for which I
> am planning to use CSPARSE, at least to begin with.
On Feb 22, 2011, at 1:55 AM, Jason Rudy wrote:
> I just tried that myself and the .Call version is substantially
> faster. It seems like there is a lot more going on in the .Call C
> code than the .C. Why is .Call faster? Does it have to do with the
> way arguments are passed into the C function?
I just tried that myself and the .Call version is substantially
faster. It seems like there is a lot more going on in the .Call C
code than the .C. Why is .Call faster? Does it have to do with the
way arguments are passed into the C function? I tried with DUP=FALSE
and NAOK=TRUE, but .Call was still faster.
Simon,
FWIW, %*% does a bit more; in particular, it does not call dgemm with NAs
present, as BLASes are often 'optimized' to give the wrong answer in
that case (e.g. by assuming 0*x is always 0, even though x can be
Inf, or by not distinguishing NaNs, whereas R uses one of them for NA).
Brian
Jason,
FWIW the direct interface (.Call) is more efficient and makes passing things
from R simpler:
C_matrix_multiply = function(A,B) .Call("R_matrix_multiply", A, B)
The drawback is a bit more legwork on the C side, but it also gives you more
flexibility:
SEXP R_matrix_multiply(SEXP A, SEXP B) { ... }
I've never used C++ before, so for this project I think I will stick
with just using the BLAS and LAPACK routines directly. Another issue
is that I will need to do some sparse matrix computations, for which I
am planning to use CSPARSE, at least to begin with. I am also
interested in RcppArmadillo.
It was indeed a simple problem! I took a look at that array.c as you
suggested and that cleared it right up. So, the correct C code is:
#include <R.h>
#include <Rinternals.h>
#include <R_ext/BLAS.h>
#include <R_ext/Lapack.h>
void R_matrix_multiply(double *A, double *B, int *m, int *n, int *p,
                       double *C)
{
    double one = 1.0;
    double zero = 0.0;
    /* C := 1.0 * A %*% B + 0.0 * C, with A m x n, B n x p, C m x p */
    F77_CALL(dgemm)("N", "N", m, p, n, &one, A, m, B, n, &zero, C, m);
}
On 20 February 2011 at 09:50, Dirk Eddelbuettel wrote:
| There is of course merit in working through the barebones API but in case you
| would consider a higher-level alternative, consider these few lines based on
| RcppArmadillo (which end up calling dgemm() for you via R's linkage to the
| BLAS)
On 19 February 2011 at 16:50, Jason Rudy wrote:
| Dear R-devel,
|
| I've written a numerical solver for SOCPs (second order cone programs)
| in R, and now I want to move most of the solver code into C for speed.
| I've written combined R/C packages before, but in this case I need to
| do matrix computations.
Take a close look at matprod in src/main/array.c in the R sources.
Hint: it is the other dimensions you have wrong.
And as BLAS is Fortran, counts do start at 1.
On Sat, 19 Feb 2011, Jason Rudy wrote:
Dear R-devel,
I've written a numerical solver for SOCPs (second order cone programs)
in R, and now I want to move most of the solver code into C for speed.