But even then, it was a pretty slow evolution – the Fortran compilers I
was running in the 80s on microcomputers under MS-DOS weren't materially
different from the Fortran I was running in 1978 on a Z80, which wasn't
significantly different from the Fortran I ran on mainframes (IBM 360,
CDC 6xxx, etc.) and minis (IBM 1130, PDP-11) in the 60s and 70s. What
would change was things like the libraries available to do "non-standard"
stuff (like random disk access).
Non-standard stuff, and the lack of implementation of entire standards, is
still present today:
https://coarrays.sourceforge.io/doc.html
it seems to be a necessary part of how programming languages change.
Co-arrays seem very helpful, making a nice PGAS language out of old
Fortran; such transformations seem to be more difficult for other
programming languages (C, C++, Ruby, Rust, Java, etc.). Many PGAS
languages lack fault tolerance, which most big data frameworks have. On
clusters of workstations in production environments, fault tolerance is
useful, and code portability concerns may prevent big data workflows
from being transitioned to PGAS-style languages.
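To illustrate the PGAS flavour co-arrays give Fortran, here is a minimal
sketch (assuming a Fortran 2008 compiler with coarray support, e.g.
gfortran with -fcoarray=single/lib or ifort -coarray). The program name
and variable are made up for the example:

```fortran
program coarray_sketch
  implicit none
  integer :: x[*]            ! a coarray: one copy of x on every image (PGAS address space)

  x = this_image()           ! each image stores its own index locally
  sync all                   ! barrier: all images' writes become visible

  if (this_image() == 1 .and. num_images() >= 2) then
     ! one-sided remote read of image 2's copy -- no explicit message passing
     print *, 'x on image 2 =', x[2]
  end if
end program coarray_sketch
```

The point is that the square-bracket syntax is the only addition: ordinary
serial Fortran plus [*] declarations and sync statements yields a parallel
program, which is the "nice PGAS language out of old Fortran" transformation.
Note there is nothing here to recover if an image dies mid-run, which is the
fault-tolerance gap mentioned above.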
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf