> Please, be faithful on your citations,
> and don't distort what others post on the list,
Sorry, it was unintentional. I will try to be more careful in the future.
(Though you have granted me an un-earned doctorate too :-))

-- Jim
James Cownie <jcow...@gmail.com>
Mob: +44 780 637 7146
http://skiingjim.blogspot.com/

On 9 Apr 2015, at 20:01, Gus Correa <g...@ldeo.columbia.edu> wrote:
>
> On 04/09/2015 02:20 PM, James Cownie wrote:
>>> MPI continues to hang on, outlived its creators,
>>
>> As someone who was a subcommittee chair on MPI-1, it needs to go on a
>> while yet to outlive me (I hope!)
>>
>> -- Jim
>> James Cownie <jcow...@gmail.com>
>> Mob: +44 780 637 7146
>> http://skiingjim.blogspot.com/
>
> Dr. Cownie
>
> You edited, and distorted what I wrote.
> It said *Fortran* (NOT MPI !) outlived its creators.
> I certainly didn't mean to kill prematurely
> the MPI designers and developers either.
>
> Here is what I wrote:
>
> > After all, reports of the death of Fortran, like Mark Twain's death,
> > have been greatly exaggerated.
> > It continues to hang on, outlived his creators,
> > and shows no sign of dying anytime soon.
>
> Please, be faithful on your citations,
> and don't distort what others post on the list,
> even if it is just to make fun.
>
> Thank you,
> Gus Correa
>
>> On 8 Apr 2015, at 22:12, Gus Correa <g...@ldeo.columbia.edu> wrote:
>>
>>> For those old enough to have heard somebody as great as
>>> John Lennon sing that "the dream is over",
>>> and rather mediocre "philosophers" claim the "end of history",
>>> prophecies that were never confirmed,
>>> reading bombastic claims that MPI is dead is not so unsettling.
>>> After all, reports of the death of Fortran, like Mark Twain's death,
>>> have been greatly exaggerated.
>>> It continues to hang on, outlived his creators,
>>> and shows no sign of dying anytime soon.
>>>
>>> Is Exascale really needed?
>>> If so, would the hardware paradigm have to change also perhaps?
>>> (Good scaling properties and reliability are problems not only for
>>> software, are they?)
>>> Why can't MPI adapt to the new scenario?
>>>
>>> Gus Correa
>>>
>>> On 04/08/2015 03:57 PM, Scott Atchley wrote:
>>>> There is concern by some and outright declaration by others (including
>>>> hardware vendors) that MPI will not scale to exascale due to issues like
>>>> rank state growing too large for 10-100 million endpoints, lack of
>>>> reliability, etc. Those that make this claim then offer up their
>>>> favorite solution (a PGAS variant, Chapel, Legion, Open Community
>>>> Runtime). Several assert that the event-driven/task-driven runtimes will
>>>> take care of data partitioning, data movement, etc. and that the user
>>>> only has to define relationships and dependencies while exposing as much
>>>> parallelism as possible.
>>>>
>>>> The domain scientists shudder at the thought of rewriting existing
>>>> codes, some of which have existed for decades. If they do get funding to
>>>> rewrite, which new programming model should they pick? At this point,
>>>> there is no clear favorite.
>>>>
>>>> On Wed, Apr 8, 2015 at 2:31 PM, Prentice Bisbal
>>>> <prentice.bis...@rutgers.edu> wrote:
>>>>
>>>> I got annoyed by this article and had to stop reading it. I'll go
>>>> back later and try to give it a proper critique, but obviously
>>>> disagree with most of what I've read so far. Right off the bat, the
>>>> author implies that Big Data = HPC, and I disagree with that.
>>>>
>>>> More ranting to come....
>>>>
>>>> Prentice
>>>>
>>>> On 04/08/2015 01:16 PM, H. Vidal, Jr. wrote:
>>>>
>>>> Curious as to what the body of thought is here on this article:
>>>>
>>>> http://www.dursi.ca/hpc-is-dying-and-mpi-is-killing-it/
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf