This reminds me of a similar issue I had. What approaches do you take for large dense matrix multiplication in MPI, when the matrices are too large to fit into cluster memory? If I hack up something to cache intermediate results to disk, the IO seems to drag everything to a halt and I'm looking for a better solution. I'd like to use some libraries like PETSc, but how would you work around memory limitations like this (short of building a bigger cluster)?
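One standard answer to the I/O bottleneck is a blocked (tiled) out-of-core multiply: keep only three tiles in RAM at a time and stream the rest from disk, so each tile of A and B is read O(n/b) times instead of element-by-element. Here is a minimal single-node sketch of that idea in plain Python; the file layout and function names are made up for illustration and are not the API of PETSc or any library from this thread (real codes would use MPI-IO or ScaLAPACK-style distribution on top of the same blocking).

```python
# Hypothetical sketch of an out-of-core blocked matrix multiply.
# Each b x b tile of A and B lives in its own file on disk; only one
# tile of A, one tile of B, and one C accumulator tile are in memory
# at any moment. Filenames and layout are illustrative assumptions.
import array
import os

def save_tile(dirname, name, i, j, tile):
    """Write one b x b tile (list of rows) to its own binary file."""
    flat = array.array('d', [x for row in tile for x in row])
    with open(os.path.join(dirname, f"{name}_{i}_{j}.bin"), "wb") as f:
        flat.tofile(f)

def load_tile(dirname, name, i, j, b):
    """Read one b x b tile back as a list of rows."""
    with open(os.path.join(dirname, f"{name}_{i}_{j}.bin"), "rb") as f:
        flat = array.array('d')
        flat.fromfile(f, b * b)
    return [list(flat[r * b:(r + 1) * b]) for r in range(b)]

def matmul_tile(acc, ta, tb, b):
    """acc += ta @ tb for b x b tiles (naive triple loop)."""
    for r in range(b):
        for k in range(b):
            a_rk = ta[r][k]
            for c in range(b):
                acc[r][c] += a_rk * tb[k][c]

def out_of_core_matmul(dirname, n, b):
    """C = A @ B with A, B tiled on disk; n must be divisible by b."""
    t = n // b
    C = [[0.0] * n for _ in range(n)]
    for i in range(t):
        for j in range(t):
            acc = [[0.0] * b for _ in range(b)]
            for k in range(t):          # C_ij = sum_k A_ik * B_kj
                ta = load_tile(dirname, "A", i, k, b)
                tb = load_tile(dirname, "B", k, j, b)
                matmul_tile(acc, ta, tb, b)
            for r in range(b):
                C[i * b + r][j * b:(j + 1) * b] = acc[r]
    return C
```

The point of the blocking is that disk traffic scales with the number of tile reads, (n/b)^3 per input matrix, so making the tiles as large as memory allows directly cuts the I/O that is dragging things to a halt.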
>>> I don't speak fortran natively, but isn't that array
>>> approximately 3.6 TB in size?
>>
>> Oops, forgot to put the decimal in the right place.
>>
>> 9915^3 integers * 1 byte/integer / 1024^3 bytes/GB ≈ 907 GB.
>>
>> It could be done with a 64-bit kernel. Too big for PAE.
>
> Yeah, if you had a box with several hundred memory slots....
>
> Which I say only semi-sarcastically. They sound like they're coming,
> they're coming. Who knows, maybe they're here and I'm just out of
> touch.
>
> If it is a sparse matrix, then just maybe one can do something on this
> scale, but otherwise, well, it's like telling Mathematica to go and
> compute umpty-something factorial -- it will go out, make a heroic
> effort, use all the free memory in the universe, and die valiantly
> (perhaps taking down your computer with it if the kernel happens to
> need some memory at a critical time when there isn't any). Large-scale
> computation as a DOS attack...

-- 
Peter N. Skomoroch
[EMAIL PROTECTED]
http://www.datawrangling.com
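The arithmetic in the quote is easy to check, assuming 9915^3 elements: at 1 byte per element it comes to roughly 907 GB, and at 8 bytes (64-bit doubles, the usual case for dense linear algebra) about eight times that.

```python
# Check the quoted size estimate for a 9915^3-element array.
elements = 9915 ** 3
gib_1byte = elements / 1024 ** 3        # 8 bits = 1 byte per element
gib_8byte = elements * 8 / 1024 ** 3    # 64-bit doubles instead
```

So the 907 GB figure holds for 1-byte integers, but a double-precision version of the same array is back in the multi-terabyte range, which is why out-of-core or distributed storage comes up at all.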
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf