If you want to do a fancy matrix operation, you may need to reinvent the
wheel yourself. Rmpfr only supports limited matrix operations. There may
exist some C++ library that can do this job; I would suggest finding a
matrix library whose elements are of a template type, then combining it
with a multi-precision number type.
Thanks! I have installed Rmpfr now.
But it does NOT support the eigen() function?
library(Rmpfr)
x <- array(1:16, dim = c(4, 4))
ev <- eigen(x)                               ## works: ordinary double matrix
mat <- mpfrArray(1:25, 64, dim = c(5, 5))
ev <- eigen(mat)                             ## fails on an mpfrArray
Error in `dimnames<-`(`*tmp*`, value = NULL) : non-list RHS
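As far as I can tell, eigen() has no method for mpfrMatrix objects, so the call falls through to the default method and fails. A minimal workaround sketch (it gives up the extended precision for the decomposition itself; the symmetric example is only an illustration) is to drop to doubles with Rmpfr's asNumeric() and lift the results back afterwards:

library(Rmpfr)
A <- mpfrArray(1:25, 64, dim = c(5, 5))
S <- A + t(A)                                ## symmetric example, so eigen() stays real
ev <- eigen(asNumeric(S), symmetric = TRUE)  ## the decomposition itself runs in double precision
val <- mpfr(ev$values, 64)                   ## lift the results back to 64-bit mpfr numbers
vec <- mpfrArray(as.vector(ev$vectors), 64, dim = dim(ev$vectors))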
***
Rmpfr products and inner products are not doubling the nominal precision.
Small-precision inner products are stored as 53-bit numerics.
library(Rmpfr)
## double with precBits=53
(5/17) * (5/17) ## 0.08650519 ## base 10
formatHex( (5/17) * (5/17) ) ## +0x1.6253443526171p-4
## various precisions
fs
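One way to check (a sketch, assuming the behaviour described above): getPrec() reports the precision a result carries, so you can compare the operands' precBits with the product's:

library(Rmpfr)
x <- mpfr(5, 64) / mpfr(17, 64)
getPrec(x * x)                   ## 64: the product keeps the operands' precision, not 128
A <- mpfrArray(1:6, 64, dim = c(2, 3))
unique(getPrec(crossprod(A)))    ## likewise 64 for the inner products inside crossprod()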
The issue is to avoid the storage and operational penalty: a 100 x 100 matrix in
100 decimals vs a 100 x 100 matrix in 50 decimals for many operations like copy,
scale, etc. But for accumulation of inner products you want to avoid digit loss,
e.g., A and B are long vectors -- say 10 long, with a few
Got it! Thanks!
On Sat, Mar 14, 2020 at 2:24 PM J C Nash wrote:
> As I understand things, OpenBLAS will improve the performance of
> computations on regular IEEE 754 formats.
>
> I use Rmpfr for much longer numbers, e.g., 50 decimal digits, which --
> depending on scale of numbers in a
> vect
As I understand things, OpenBLAS will improve the performance of computations
on regular IEEE 754 formats.
I use Rmpfr for much longer numbers, e.g., 50 decimal digits, which --
depending on the scale of the numbers in a vector -- can mean one needs
approximately 100 decimals to accumulate the dot product.
Not sure I understand the concern. IEEE 754 double precision floating point was
invented to avoid loss of precision when manipulating single precision floating
point numbers... but then C just ignores single precision, and you are expected
to know that the precision of your answers m
Are you using a PC, please? You may want to consider installing OpenBLAS.
It’s a bit tricky but worth the time/effort.
Thanks,
Erin
On Sat, Mar 14, 2020 at 2:10 PM J C Nash wrote:
> Rmpfr does "support" matrix algebra, but I have been trying for some
> time to determine if it computes "double
Rmpfr does "support" matrix algebra, but I have been trying for some
time to determine if it computes "double" precision (i.e., double the
set level of precision) inner products. I suspect that it does NOT,
which is unfortunate. However, I would be happy to be wrong about
this.
JN
On 2020-03-14 3
Read its documentation yourself and unless you have good reason not to,
always cc the list (which I have done here).
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
Here's a novel idea:
Do a google search on "multiprecision computing package R" for an answer.
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Sat, Mar 14, 2020
CRAN Rmpfr
On Sat, Mar 14, 2020 at 7:36 PM 林伟璐 <13917987...@163.com> wrote:
> Dear all
>
>
> I need a multiprecision computing package in R, if anyone in the list
> knows of one, please let me know...
>
>
> Many thanks
>
>
> Weilu Lin
>
>
Dear all
I need a multiprecision computing package in R; if anyone on the list knows of
one, please let me know...
Many thanks
Weilu Lin