Rmpfr product and inner product are not doubling the nominal precision.
Small precision inner products are stored as 53-bit numeric.
library(Rmpfr)
## double with precBits=53
(5/17) * (5/17) ## 0.08650519 (base 10)
formatHex( (5/17) * (5/17) ) ## +0x1.6253443526171p-4
## various precisions
fs
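The claim above can be checked directly: compare a 53-bit product against a high-precision reference, and inspect the precision actually stored with getPrec(). A minimal sketch, assuming Rmpfr is installed (the 200-bit reference precision is an arbitrary choice):

```r
library(Rmpfr)

x   <- mpfr(5, precBits = 53) / 17   # 53-bit operand, like a double
p53 <- x * x                         # product of two 53-bit numbers

xref <- mpfr(5, precBits = 200) / 17
pref <- xref * xref                  # high-precision reference

getPrec(p53)            # the product keeps the operands' precision (53)
as.numeric(pref - p53)  # rounding error on the order of 2^-53
```

If Rmpfr doubled the precision of products, getPrec(p53) would report more than 53 bits; it does not.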
You are starting to sound like Dr Nash [1]... "use optimr".
[1] https://stat.ethz.ch/pipermail/r-help/2018-July/458498.html
On March 14, 2020 2:27:48 PM PDT, Abby Spurdle wrote:
>##
>I ran before posting, and waited a while...
>(Re: The posting guide, which I'm going to start putting a lot more weight on).
Uh... yes?!
On March 14, 2020 2:22:14 PM PDT, William Dunlap via R-help
wrote:
>On Linux it says "Program received signal SIGFPE, Arithmetic
>exception". I think the only way to get a SIGFPE (floating point
>exception) any more (on machines with IEEE floating point arithmetic)
>is taking an integer modulo zero, which do_dtruncnorm does when
>length(x) is 0:
##
I ran before posting, and waited a while...
(Re: The posting guide, which I'm going to start putting a lot more weight on).
As a note, I was wondering whether the posting guide has a mistake, because
<4*runif(1)> doesn't do anything special...
(Hopef
On Linux it says "Program received signal SIGFPE, Arithmetic exception". I
think the only way to get a SIGFPE (floating point exception) any more (on
machines with IEEE floating point arithmetic) is taking an integer modulo
zero, which do_dtruncnorm does when length(x) is 0:
const double cx
The issue is to avoid the storage and operational penalty: a 100 x 100 matrix
in 100 decimals vs a 100 x 100 matrix in 50 decimals, for many operations like
copy, scale, etc. But for accumulation of inner products you want to avoid
digit loss, e.g., when A and B are long vectors -- say 10 long, with a few
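The tradeoff JN describes can be made concrete: store the vectors at the working precision, but round them up to doubled precision just for the accumulation. A sketch assuming Rmpfr, using roundMpfr() to change the precision of existing mpfr numbers (the vector length and values are arbitrary):

```r
library(Rmpfr)

prec <- ceiling(50 * log2(10))   # ~50 decimal digits, about 167 bits
set.seed(1)
a <- mpfr(runif(10), precBits = prec)
b <- mpfr(runif(10), precBits = prec)

## naive: product and sum accumulate at the storage precision
s1 <- sum(a * b)

## doubled-precision accumulation, then round back for storage
a2 <- roundMpfr(a, 2 * prec)
b2 <- roundMpfr(b, 2 * prec)
s2 <- roundMpfr(sum(a2 * b2), prec)
```

Only the temporaries a2, b2 and the running sum pay the doubled-precision cost; the stored matrix or vector stays at 50 decimals.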
Got it! Thanks!
On Sat, Mar 14, 2020 at 2:24 PM J C Nash wrote:
> As I understand things, OpenBLAS will improve the performance of
> computations on regular IEEE 754 formats.
>
> I use Rmpfr for much longer numbers, e.g., 50 decimal digits, which --
> depending on scale of numbers in a vector -- can mean one needs
> approximately 100 decimals to accumulate the dot product
As I understand things, OpenBLAS will improve the performance of computations
on regular IEEE 754 formats.
I use Rmpfr for much longer numbers, e.g., 50 decimal digits, which --
depending on scale of numbers in a vector -- can mean one needs approximately
100 decimals to accumulate the dot product
Not sure I understand the concern. IEEE 754 double precision floating point was
invented to allow for avoiding loss of precision when manipulating single
precision floating point numbers... but then C just ignores single precision
and you are expected to know that the precision of your answers m
Are you using a PC, please? You may want to consider installing OpenBLAS.
It’s a bit tricky but worth the time/effort.
Thanks,
Erin
On Sat, Mar 14, 2020 at 2:10 PM J C Nash wrote:
> Rmpfr does "support" matrix algebra, but I have been trying for some
> time to determine if it computes "double" precision (i.e., double the
> set level of precision) inner products.
Rmpfr does "support" matrix algebra, but I have been trying for some
time to determine if it computes "double" precision (i.e., double the
set level of precision) inner products. I suspect that it does NOT,
which is unfortunate. However, I would be happy to be wrong about
this.
JN
On 2020-03-14 3
Read its documentation yourself and unless you have good reason not to,
always cc the list (which I have done here).
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
I see it in three different Mac builds, including a quite recent local R-devel
build.
It boils down to this:
> dtruncnorm(numeric(0), mean=6.7, sd=1.38, a=-Inf, b=9)
Floating point exception: 8
which looks like a bug in the truncnorm package, where dtruncnorm() is
unprepared for a zero-length x.
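Until the package is fixed, the crash can be avoided at the R level by short-circuiting the empty case before the C code is reached. A hypothetical wrapper (safe_dtruncnorm is not part of truncnorm; assumes the truncnorm package is installed):

```r
library(truncnorm)

## hypothetical guard: return early on zero-length input instead of
## letting the C code take an integer modulo zero
safe_dtruncnorm <- function(x, ...) {
  if (length(x) == 0L) return(numeric(0))
  dtruncnorm(x, ...)
}

safe_dtruncnorm(numeric(0), mean = 6.7, sd = 1.38, a = -Inf, b = 9)
## numeric(0), no SIGFPE
```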
Here's a novel idea:
Do a google search on "multiprecision computing package R" for an answer.
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Sat, Mar 14, 2020
Inline.
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Sat, Mar 14, 2020 at 10:36 AM Juergen Hedderich
wrote:
> Dear R-help list members,
>
> the R Session aborted without any 'comment' for the following 'small
> example':
CRAN Rmpfr
On Sat, Mar 14, 2020 at 7:36 PM 林伟璐 <13917987...@163.com> wrote:
> Dear all
>
>
> I need a multiprecision computing package in R, if anyone in the list
> knows, please let me known...
>
>
> Many thanks
>
>
> Weilu Lin
>
>
Dear R-help list members,
the R Session aborted without any 'comment' for the following 'small
example':
library(fitdistrplus)
library(truncnorm)
filter <- c(4.98, 8.60, 6.37, 4.37, 8.03, 7.43, 6.83, 5.64, 5.43, 6.88,
4.57, 7.50, 5.69, 7.88, 8.98, 6.79, 8.61, 6.70, 5.14, 7.29)
f
Dear all
I need a multiprecision computing package in R; if anyone on the list knows
of one, please let me know...
Many thanks
Weilu Lin
[[alternative HTML version deleted]]
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
ht
It seems CG is having problems with the cube root. This converges while
still using CG:
S1 <- optim(1001,function(x) (production1(x)^3), method = "CG",
control = list(fnscale=-1))
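The thread does not show production1(); a stand-in (hypothetical) version with a signed cube root makes the example above runnable and shows why cubing helps: the cubed objective is a smooth quadratic, which CG handles well.

```r
## stand-in for the thread's production1(), not the original function:
## signed cube root of a concave quadratic maximized at x = 1000
production1 <- function(x) {
  y <- 2000 * x - x^2
  sign(y) * abs(y)^(1/3)   # defined for all x, unlike y^(1/3)
}

## cubing removes the cube root; fnscale = -1 makes optim() maximize
S1 <- optim(1001, function(x) production1(x)^3, method = "CG",
            control = list(fnscale = -1))
S1$par   # near 1000, the maximizer of 2000*x - x^2
```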
On Thu, Mar 12, 2020 at 9:34 AM Skyler Saleebyan
wrote:
>
> I am trying to familiarize myself with optim() with a
I got that last point wrong as well.
(Each iteration is using five evaluations).
Ignore all my comments on this subject.
On 3/14/20, Abby Spurdle wrote:
>> It is correctly signalling that it hasn't converged (look at
>> optim.sol$convergence, which "indicates that the iteration limit maxit
>> h