Hi John.

I don't think the 80-bit format was part of IEEE 754; I think it was an Intel invention for the 8087 chip (which I believe preceded that standard), and it didn't make it into the standard.

The standard does talk about 64-bit and 128-bit floating point formats, but not an 80-bit one.
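
For reference, here is a quick way to see what a given R build reports (a minimal sketch; the commented values assume a typical x86_64 build with x87/extended long doubles):

capabilities("long.double")   # TRUE where a real extended type is available
.Machine$double.eps           # 2^-52, about 2.22e-16 (binary64)
.Machine$longdouble.eps       # 2^-63, about 1.08e-19 on x86; NULL or equal to
                              # double.eps where long double is just double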

Duncan Murdoch

On 04/02/2024 4:47 p.m., J C Nash wrote:
Slightly tangential: I had some woes with some vignettes in my
optimx and nlsr packages (actually in examples comparing to OTHER
packages) because the M? processors don't have the 80-bit registers of
the old IEEE 754 arithmetic, so some existing "tolerances" are too
small when checking whether a quantity is small enough to "converge",
and one gets "did not converge" type errors. There are workarounds,
but the discussion is beyond this post. However, it is worth being
aware that the code may be mostly correct apart from the tests of
smallness, which need adjusting for these processors.
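
To illustrate the general issue (a minimal sketch, not the actual test used in optimx or nlsr): the smallest relative change that plain 64-bit arithmetic can resolve is .Machine$double.eps, about 2.2e-16, while 80-bit extended precision resolves about 1.1e-19, so a tolerance tuned to the latter can be unreachable on the former.

f_old <- 1
f_new <- f_old * (1 + .Machine$double.eps)  # smallest representable increase above f_old
abs(f_new - f_old) <= 1e-18 * abs(f_old)    # FALSE: tolerance tuned for 80-bit registers
abs(f_new - f_old) <= 1e-15 * abs(f_old)    # TRUE: tolerance 64-bit arithmetic can meet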

JN




On 2024-02-04 11:51, Dirk Eddelbuettel wrote:

On 4 February 2024 at 20:41, Holger Hoefling wrote:
| I wanted to ask if people have good advice on how to debug M1 Mac package
| check errors when you don't have a Mac? Is a cloud machine the best option
| or is there something else?

a) Use the 'mac builder' CRAN offers:
     https://mac.r-project.org/macbuilder/submit.html

b) Use the newly added M1 runners at GitHub Actions:
     https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/

Option a) is pretty good as the machine is set up for CRAN and builds
fast. Option b) gives you more control should you need it.
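
For what it's worth, option a) can also be driven from R via devtools (a sketch; assumes a recent devtools whose check_mac_release() uploads the source tarball to the macbuilder service above and mails back the results):

## run from the package source directory
devtools::check_mac_release()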

Dirk


______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
