Hi,
I found the following problem with recent R-devel
(2010-08-26 r52817) on Windows (32-bit and 64-bit):
'R CMD build' stalls during vignette
creation for packages that have a Makefile in inst/doc.
The problem seems to be that the commands used in the
Makefile for converting .tex t
On 10/09/2010 5:05 PM, pooja varshneya wrote:
Hi Folks,
I am trying to build R-2.11.1 from source code on Windows 2003. I am
able to build it, but when I run 'make check', it fails as follows:
Do the tests produce a log somewhere that I can use for
troubleshooting the problem?
--
You're right, Simon. 'open' is such a common name, and it's probably used in
some other part of R. I renamed it from "open" to "open1" and now it works
without segfaults.
Thanks,
daniel
On 10 September 2010 at 20:52, Simon Urbanek <simon.urba...@r-project.org> wrote:
> Daniel,
>
> I do
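Daniel's fix above works because a non-static global C function named `open` collides with the POSIX `open(2)` symbol that libc (and code loaded into the R process) resolves against, which can lead to exactly this kind of segfault. A minimal sketch of the safer pattern, using a hypothetical helper (not Daniel's actual code):

```c
#include <assert.h>

/* A file-scope (static) function, or one with a package-specific prefix,
   cannot clash with the global `open` symbol exported by libc.  Declaring
   the helper static keeps it out of the dynamic symbol table entirely. */
static double open1(double x) {  /* hypothetical helper, renamed from `open` */
    return 2.0 * x;
}
```

Registering native routines with `R_registerRoutines` and restricting exported symbols is the usual way packages avoid such collisions, but simply avoiding common libc names (or using `static`) is the minimal fix.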
On 10 September 2010 at 20:32, Brian G. Peterson wrote:
> Daniel,
>
> I haven't tried your example, but I wonder why you aren't using C accessor
> methods defined by xts itself or at least derived from the significant
> amounts of C code in xts.
>
> For example, your test code seems to
Daniel,
I haven't tried your example, but I wonder why you aren't using C
accessor methods defined by xts itself or at least derived from the
significant amounts of C code in xts.
For example, your test code seems to bear a close resemblance in
principle to coredata.c, but you don't appear t
Hi,
I work with C code at the SEXP level and with the xts and quantmod
packages. I am trying to understand how xts works internally.
So we have R session and:
> ls()
character(0)
> getSymbols('AAPL') # quantmod package
[1] "AAPL"
> ls()
[1] "AAPL"
> str(AAPL)
An ‘xts’ object from 2007-01-03 to 2010-09-09 containing:
Da
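The idea behind a coredata-style accessor, which the replies above point to, can be modelled in plain C. The sketch below is a toy model only, with hypothetical names; it is NOT the real xts layout (xts stores the time index as an attribute on the data object), but it shows the separation the accessors maintain between values and their index:

```c
#include <stddef.h>

/* Toy model (NOT real xts internals): an xts-like series pairs a numeric
   data vector with a time index of equal length. */
typedef struct {
    const double *data;   /* observation values */
    const double *index;  /* observation times, e.g. seconds since epoch */
    size_t n;             /* number of observations */
} toy_xts;

/* coredata-style accessor: hand back the values, dropping the index */
static const double *toy_coredata(const toy_xts *x) {
    return x->data;
}
```

The point of Brian's advice is that xts already provides tested C-level accessors along these lines, so test code should reuse them rather than re-deriving the layout by hand.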
On Fri, 10 Sep 2010, Duncan Murdoch wrote:
On 10/09/2010 7:07 AM, Renaud Gaujoux wrote:
Thank you Duncan for your reply.
Currently I am using 'double' for the computations.
What type should I use for extended real in my intermediate computations?
I think it depends on compiler details. On s
On Sep 10, 2010, at 17:21 , Marc Schwartz wrote:
> Hi all,
>
> After my reply on R-Help to the relevant thread, I noted what appear to be a
> couple of typos in the Details section of ?pairwise.t.test. Note text with
> '**'.
>
> Current text:
>
> The **pool.SD** switch calculates a common SD
Hi all,
After my reply on R-Help to the relevant thread, I noted what appear to be a
couple of typos in the Details section of ?pairwise.t.test. Note text with '**'.
Current text:
The **pool.SD** switch calculates a common SD for all groups and **used** that
for all comparisons (this can be us
You will also get differences if you change optimization settings; even
though the hardware and OS and development tools are the same. The issue
there involves rounding error, particularly on intermediate results, and
propagation of that error (depending on the nature of the calculations after
the
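One concrete source of such differences: IEEE double addition is not associative, so any change in evaluation order (from optimization flags, vectorization, or a different BLAS) can change the low-order bits of a sum. A minimal, self-contained illustration (assuming standard IEEE 754 doubles without -ffast-math):

```c
/* (a + b) + c and a + (b + c) can differ: when c is tiny relative to b,
   it is absorbed by the rounding of the inner sum. */
double sum_left(double a, double b, double c)  { return (a + b) + c; }
double sum_right(double a, double b, double c) { return a + (b + c); }
```

With a = 1e16, b = -1e16, c = 1.0, the left grouping yields 1.0 while the right grouping yields 0.0, because -1e16 + 1.0 rounds back to -1e16 in double precision. Compilers are free to regroup such sums under aggressive optimization, which is exactly how "the same" code produces different last bits.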
Thanks Paul for the hints.
After some tests, reducing portion of my code, I found that simply doing
a naive computation of 'crossprod' in C does NOT give exactly the same
results as calling the Fortran underlying routine (dsyrk) as used in the
R source code.
I will try the double 0.0 to see if
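For reference, a naive crossprod of the kind described above might look like the sketch below (a guess at the structure, not Renaud's actual code): it computes t(X) %*% X for a column-major n-by-p matrix with a straightforward inner loop. dsyrk computes the same quantity, but may block and accumulate in a different order (and fills only one triangle), which is enough to perturb the last bits of the result.

```c
#include <stddef.h>

/* Naive t(X) %*% X for column-major x (n rows, p cols); out is p x p,
   also column-major.  Each entry is a dot product of two columns of x. */
void crossprod_naive(const double *x, int n, int p, double *out) {
    for (int j = 0; j < p; j++) {
        for (int k = 0; k < p; k++) {
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += x[i + (size_t)j * n] * x[i + (size_t)k * n];
            out[j + (size_t)k * p] = s;
        }
    }
}
```

Exact agreement with dsyrk should not be expected here; agreement to within a small relative tolerance is the realistic goal.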
On Wed, Sep 8, 2010 at 6:39 PM, Duncan Murdoch wrote:
> On 08/09/2010 5:37 PM, Jeffrey Horner wrote:
>>
>> On Wed, Sep 8, 2010 at 1:01 PM, Jeffrey Horner
>> wrote:
>>>
>>> On Wed, Sep 8, 2010 at 12:37 PM, Duncan Murdoch
>>> wrote:
On 08/09/2010 1:21 PM, Jeffrey Horner wrote:
>
>>>
With fortran I have always managed to be able to get identical results
on the same computer with the same OS. You will have trouble if you
switch OS or hardware, or try the same hardware and OS with different
math libraries. All the real calculations need to be double, even
intermediate variables.
Ok.
I quickly tried using LDOUBLE wherever I could, but it did not change
the results. I might try harder...
I agree with you Barry, and I will double-check my code again.
Thank you both for your help.
Bests,
Renaud
On 10/09/2010 13:24, Barry Rowlingson wrote:
On Fri, Sep 10, 2010 at 11:46
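The LDOUBLE pattern mentioned above (R's sources typically define it as long double) means accumulating intermediates in extended precision while still returning a double. A minimal sketch of that pattern, with a hypothetical function name:

```c
/* Accumulate in long double, return double: rounding error on the
   intermediate partial sums is reduced, but the final result is still
   a 53-bit double. */
double sum_ld(const double *x, int n) {
    long double acc = 0.0L;
    for (int i = 0; i < n; i++)
        acc += x[i];
    return (double)acc;
}
```

Note one caveat consistent with Renaud's observation that it "did not change the results": on platforms or compilers where long double has the same precision as double (e.g. MSVC on Windows), this makes no difference at all.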
On Fri, Sep 10, 2010 at 11:46 AM, Renaud Gaujoux
wrote:
> Hi,
>
> suppose you have two versions of the same algorithm: one in pure R, the
> other one in C/C++ called via .Call().
> Assuming there is no bug in the implementations (i.e. they both do the same
> thing), is there any well known reason
On 10/09/2010 7:07 AM, Renaud Gaujoux wrote:
Thank you Duncan for your reply.
Currently I am using 'double' for the computations.
What type should I use for extended real in my intermediate computations?
I think it depends on compiler details. On some compilers "long double"
will get it, but
Thank you Duncan for your reply.
Currently I am using 'double' for the computations.
What type should I use for extended real in my intermediate computations?
The result will still be 'double' anyway right?
On 10/09/2010 13:00, Duncan Murdoch wrote:
On 10/09/2010 6:46 AM, Renaud Gaujoux wrote
On 10/09/2010 6:46 AM, Renaud Gaujoux wrote:
Hi,
suppose you have two versions of the same algorithm: one in pure R, the
other one in C/C++ called via .Call().
Assuming there is no bug in the implementations (i.e. they both do the
same thing), is there any well known reason why the C/C++ imple
Hi,
suppose you have two versions of the same algorithm: one in pure R, the
other one in C/C++ called via .Call().
Assuming there is no bug in the implementations (i.e. they both do the
same thing), is there any well known reason why the C/C++ implementation
could return numerical results non