Not a direct answer, but you may find lm.fit() worth experimenting with: it skips the formula parsing and model-frame construction that dominate the cost of lm() on very small regressions. Also try the High-Performance Computing task view on CRAN.
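For concreteness, a minimal sketch of the lm.fit() route (the helper name fit_one and the assumed data layout, a list of small data frames with columns y and x, are my illustration, not from the thread). Note that lm.fit() does no NA handling of its own, so missing observations must be dropped by hand:

library(parallel)

## fit_one: illustrative helper; d is one small data frame with y and x
fit_one <- function(d) {
  ok <- complete.cases(d$y, d$x)   # lm.fit() does not handle NAs itself
  X  <- cbind(1, d$x[ok])          # design matrix: intercept + slope
  lm.fit(X, d$y[ok])$coefficients
}

## e.g., for a list `datasets` of a few million such data frames:
## coefs <- mclapply(datasets, fit_one, mc.cores = detectCores())

This stays on the CPU cores via mclapply, but dropping the lm() overhead alone is typically a large speedup at this problem size.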
Cheers,
Andrew

--
Andrew Robinson
Chief Executive Officer, CEBRA and Professor of Biosecurity,
School/s of BioSciences and Mathematics & Statistics
University of Melbourne, VIC 3010 Australia
Tel: (+61) 0403 138 955
Email: a...@unimelb.edu.au
Website: https://researchers.ms.unimelb.edu.au/~apro@unimelb/

I acknowledge the Traditional Owners of the land I inhabit, and pay my respects to their Elders.

On 14 Nov 2024 at 1:13 PM +0100, Ivo Welch <ivo.we...@gmail.com> wrote:

> I have found more general questions, but mine is specific. I have a few
> million independent short regressions that I would like to run (each
> regression has about 60 observations, though they can have missing
> observations [yikes]). So I would like to run as many `lm` and
> `coef(lm)` calls in parallel as possible.
>
> My hardware is a Mac, with nice GPUs and integrated memory, and so far
> they have been completely useless to me. `mclapply` is obviously very
> useful, but I want more, more, more cores. Is there a recommended
> plug-in library to speed up just `lm` by also using the GPU cores?

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
https://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.