On Tue, Oct 13, 2020 at 9:55 AM Douglas Eadline <[email protected]> wrote:
>> Spark is a completely separate code base that has its own Map Reduce
>> engine. It can work stand-alone, with the YARN scheduler, or with
>> other schedulers. It can also take advantage of HDFS.

Doug, this is correct. I think for all practical purposes Hadoop and Spark get lumped into the same bag because the underlying ideas come from the same place. A lot of people saw Spark (especially at the beginning) as a much faster, in-memory Hadoop.
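For anyone following along who hasn't used either system: the MapReduce model Doug mentions is the same in both, and the "in-memory" difference is about where the intermediate data lives. A minimal, hypothetical pure-Python sketch (not Spark's actual API) of the map/shuffle/reduce phases, with everything held in memory:

```python
# Illustrative sketch of the MapReduce model (word count) that both
# Hadoop and Spark implement. The intermediate pairs and groups are
# kept entirely in memory here; classic Hadoop MapReduce would spill
# them to disk between phases, which is where Spark's speed advantage
# largely came from.
from collections import defaultdict

def map_phase(lines):
    # map: emit a (word, 1) pair for every word in every line
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # shuffle: group values by key (held in memory in this sketch)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: sum the counts for each word
    return {key: sum(values) for key, values in groups.items()}

lines = ["spark runs in memory", "hadoop spills to disk", "spark is fast"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["spark"])  # 2
```

In real Spark the same word count is a couple of chained RDD/DataFrame operations, and the scheduler (standalone, YARN, etc.) only decides where the tasks run, not how the model works.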
_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
