Joe Landman on 17 July 2006 22:05 wrote:
> Toon Moene wrote:
>
> > Fascinating.
> >
> > How would you envision this done ?
> >
> > Most of our "large memory usage" is of the form:
> >
> > main program:
> >
> >     READ * L, M, N
> >     CALL MAIN(L, M, N)
> >     ...
> >     END
> >     ....
> >
> > separate file:
> >
> >     SUBROUTINE MAIN(L, M, N)
> >     REAL U(L, M, N), V(L, M, N), T(L, M, N), Q(L, M, N)
> >     ....
> >     etc.
> >
> > I.e., the memory used (automatic arrays) is based on the stack (at
> > least, that's how most Fortran compilers would implement it).
>
> Yes, that is how it was implemented in IRIX as I remember.  You simply
> ran the code with a helper environment/application that let you use
> large pages.
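To make that skeleton concrete, a minimal, self-contained free-form version of
the same pattern might look like the sketch below; the arithmetic inside main
is invented purely so the arrays get touched, and exactly where the automatic
arrays land (stack vs. heap) is up to the compiler:

    program driver
      ! Read the problem size at run time and pass it down; the arrays in
      ! subroutine main are automatic, so their storage is carved out of
      ! the stack (on most compilers) at the call, not fixed at build time.
      implicit none
      integer :: l, m, n
      read *, l, m, n
      call main(l, m, n)
    end program driver

    subroutine main(l, m, n)
      implicit none
      integer, intent(in) :: l, m, n
      ! Automatic arrays: extents come from the dummy arguments, and the
      ! storage disappears again on return.
      real :: u(l, m, n), v(l, m, n), t(l, m, n), q(l, m, n)
      u = 1.0
      v = 2.0
      t = u + v
      q = t * v
      print *, 'q(1,1,1) =', q(1,1,1)
    end subroutine main

Feeding it "200 200 200" on stdin gives four arrays of roughly 30 MB each
(assuming 4-byte reals), which is exactly the sort of stack allocation you
would want backed by large pages, and large enough that the stack limit
(ulimit -s) usually has to be raised first.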
Tru64 (and therefore the old AlphaServer SC clusters) worked in a similar way
to IRIX: flip a kernel config entry, reboot, and /any/ allocation above a
tunable size would get as large a page as possible.  Stack, mmap, malloc, it
didn't matter.

Fragmentation can be a problem; it is trivial to synthesise a workload that
breaks up all the large pages.  In practice, though, I've found that even
after months of uptime there is generally still a high percentage of large
pages available.

--
Duncan Thomas
Quadrics, One Bridewell Street, Bristol, BS1 2AA, UK
Telephone: +44 117 9155525   Fax: +44 117 9075395
Email: [EMAIL PROTECTED]
http://www.quadrics.com
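On Linux clusters the closest analogue of such a pool is hugetlbfs, which is
a different, pre-reserved mechanism rather than Tru64-style transparent large
pages, but a quick way to see how the pool looks after a long uptime is to
read the HugePages_* counters from /proc/meminfo.  A small sketch (Linux-
specific; these counters do not exist on Tru64 or IRIX, and they stay at zero
unless a pool has been configured, e.g. via the vm.nr_hugepages sysctl):

    program hugefree
      ! Report how many of the configured Linux huge pages are still free.
      ! Assumes a Linux /proc/meminfo with HugePages_Total:/HugePages_Free:
      ! lines; on a machine without a huge page pool both stay at zero.
      implicit none
      character(len=256) :: line
      integer :: ios, u, total, free
      total = 0
      free = 0
      open(newunit=u, file='/proc/meminfo', status='old', action='read', &
           iostat=ios)
      if (ios /= 0) stop 'cannot open /proc/meminfo'
      do
         read(u, '(a)', iostat=ios) line
         if (ios /= 0) exit
         ! The HugePages_* counters are page counts, with no kB suffix.
         if (index(line, 'HugePages_Total:') == 1) read(line(17:), *) total
         if (index(line, 'HugePages_Free:') == 1) read(line(16:), *) free
      end do
      close(u)
      if (total > 0) then
         print '(a,i0,a,i0,a,f6.1,a)', 'huge pages free: ', free, ' of ', &
              total, ' (', 100.0 * real(free) / real(total), '%)'
      else
         print *, 'no huge pages configured in the pool'
      end if
    end program hugefree

The fragmentation question shows up on Linux more when growing the pool on a
long-running machine: raising vm.nr_hugepages after months of uptime often
yields fewer pages than requested because the kernel can no longer assemble
enough contiguous physical memory.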