Jon Forrest wrote:
>> that compilers will never try to unroll code at that level, even when
>> enormous memory systems are commonplace?
>
> Again, the enormous memory systems you mention consist mostly
> of enormous amounts of data, not text.


... so I see you have never used an interprocedural analysis (-ipa) switch :)

Allows you to do things like, I dunno, inline one whole routine inside another ...

Usually leads to much larger program text sizes.

This said, I have seen very large programs from the RISC days hitting well over 1 GB of text. I haven't played with any recently though.

[...]

>> and "the program" being run on a multitasking
>> operating system is the union of all "sub" programs being run on the
>> system, with or without shared libraries (sharing is expensive in
>> performance, remember -- we do it to save memory because it is a scarce
>> resource).
>
> Why is sharing expensive in performance? It might take a little
> overhead to set up and manage, but why is having multiple virtual
> addresses map to the same physical memory expensive?

Contention. Memory hot spots. Been there, done that. We are about to do this all over again (collectively).



--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: land...@scalableinformatics.com
web  : http://www.scalableinformatics.com
       http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf