On Tue, Oct 13, 2020 at 3:54 PM Douglas Eadline <deadl...@eadline.org>
wrote:

>
> It really depends on what you need to do with Hadoop or Spark.
> IMO many organizations don't have enough data to justify
> standing up a 16-24 node cluster system with a PB of HDFS.
>

Excellent. If I understand what you are saying, there is simply no demand
to mix technologies, especially in the academic world. OK. In your opinion,
and independent of the Spark/HDFS discussion, why are we still only on MPI
(Open MPI and its kin) in the world of writing distributed code on HPC
clusters? Why is nothing else gaining any significant traction? There seems
to be no innovation in exposing higher-level abstractions that hide the
details and make it easier to write correct code: code that is easier to
reason about and does not burden the writer with too much low-level detail.
Is it just the amount of investment in the existing knowledge base? Is it
that nothing out there is compelling enough to be worth the time to learn?
Or is there simply nothing there? Or maybe there is and I am just
blissfully unaware? :)
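
To make the "too much low-level detail" point concrete, here is roughly
what even a trivial distributed sum looks like in MPI C. A minimal sketch
using only the standard collectives, nothing exotic:

    /* Minimal MPI example: sum one value per rank onto rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = rank + 1;   /* each rank contributes its own value */
        int total = 0;

        /* Explicit collective: the programmer, not the runtime, decides
         * who reduces what, with which op, and where the result lands. */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

Build and launch it the usual way (mpicc sum.c -o sum; mpirun -np 4 ./sum)
and every rank, buffer, datatype, and communicator is already the
programmer's problem. In a higher-level model this would be a one-line
parallel reduction.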

Thanks!
