Hi Robert:

Robert G. Brown wrote:
Dear List,

What are the current limits on the size of arrays that can be allocated
in g77?  I know that this is almost a FAQ, but I cannot look back at the
archives because it is so time dependent an answer.  In particular, can
an x64 box with a modern linux kernel and a modern g77 allocate 4-8 GB
arrays (presuming, of course, that one uses a long int for an index)?  I
have seen references to "using an offset" in a few google hits (that
were not very informative) -- does that basically mean doing pointer
arithmetic in fortran?

Hmmmm.... If you use a statically allocated array, you need to fiddle with ulimit/limit in the shell to make sure enough stack space is available (which is probably not a good idea).

If you instead use gfortran/g95, you can use Fortran's ALLOCATE machinery (roughly analogous to calloc/malloc). Worst case, you can string together an interface to calloc/malloc itself, though I would advise against that unless everyone is on the same platform.
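Something like this minimal sketch (program name and array size are made up for illustration; assumes free-form .f90 source) should do it with gfortran/g95 -- on a 32-bit box the allocation simply fails and the stat= check catches it:

    program bigalloc
      implicit none
      integer, parameter :: ik = selected_int_kind(18)   ! 8-byte integer kind
      integer(kind=ik), parameter :: n = 500000000_ik    ! ~4 GB of real(8)
      real(kind=8), allocatable :: a(:)
      integer :: ierr

      allocate(a(n), stat=ierr)            ! goes on the heap, not the stack
      if (ierr /= 0) stop 'allocation failed'
      a = 0.0d0                            ! touch it so the memory is real
      print *, 'allocated ', n, ' elements'
      deallocate(a)
    end program bigalloc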

I ask because:

  a) I'm not a fortran expert -- in fact the last time I >>willingly<<
coded in fortran was twenty or so years ago.

It's not *so* bad ... the language is only 51+ years old ... :)

Actually, modern fortran is pretty nice. Show me a CS student who doesn't turn green with revulsion over it, and I will be happy, but it is a fairly reasonable language.


  b) Alas, I'm probably going to have to become one (again).

Heh... people complained that I wrote my Perl, C, and even Assembler in Fortran for a while. Now they complain that I write it all in C. It's not so hard to switch back and forth. The hard part is the I/O. Format statements are annoying.

  c) Working on some problems with potentially very large memory
allocations.

Shouldn't be too hard using g95/gfortran. Can you look at out-of-core type solutions (blocked access)?
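Here is a rough sketch of what I mean by blocked access (file name, block size, and record count are made up; note that recl units for unformatted direct access vary by compiler -- bytes for gfortran):

    program blocked
      implicit none
      integer, parameter :: blk = 1000000          ! elements per block
      real(kind=8) :: buf(blk), total
      integer :: iu, irec, nrec, ios

      iu   = 10
      nrec = 100                                   ! number of blocks on disk
      open(unit=iu, file='bigdata.dat', form='unformatted', &
           access='direct', recl=8*blk, status='old', iostat=ios)
      if (ios /= 0) stop 'cannot open bigdata.dat'

      total = 0.0d0
      do irec = 1, nrec
         read(iu, rec=irec) buf                    ! pull one block into core
         total = total + sum(buf)                  ! "process" it, then move on
      end do
      close(iu)
      print *, 'sum over all blocks = ', total
    end program blocked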

  d) Where commercial compilers aren't a viable option (although I
suggested them) -- the software has to build and be usable by e.g.
researchers in countries where there simply is no money to spend on
compilers.

Actually, the code, if written to spec, should be trivially portable between compilers, modulo the limits and vagaries of each one.


The last suggests that it would be ideal if large arrays were at least
approximately "transparent" -- so that the software would build on 32
bit systems and be runnable there with smaller arrays but would also
build and run on x64 big-memory systems without the need for extensive
instrumentation of the code.

If you want to do this, you might look at out-of-core methods. That might not be realistic; it depends on your analysis.
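One way to keep the arrays roughly "transparent" across 32-bit and 64-bit boxes is to size them at run time and back off when the allocation fails -- a sketch, with made-up sizes:

    program transparent
      implicit none
      integer, parameter :: ik = selected_int_kind(18)   ! 8-byte index kind
      integer(kind=ik) :: n
      real(kind=8), allocatable :: a(:)
      integer :: ierr

      ! Try the full problem size first, then halve it until the
      ! allocation succeeds; the same source runs on small machines.
      n = 1000000000_ik                    ! ~8 GB of real(8)
      do
         allocate(a(n), stat=ierr)
         if (ierr == 0) exit
         n = n / 2
         if (n < 1000000_ik) stop 'not enough memory for even a small array'
      end do
      print *, 'working with ', n, ' elements'
      deallocate(a)
    end program transparent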


   Thanks,

       rgb

(I know Toone works on this and am hoping he's paying attention so I can
get a really authoritative and informative answer...:-)


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: [EMAIL PROTECTED]
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
