IMHO you will want 4 cores and 4 to 8 GB of RAM for each VM to run both
Cassandra and Hadoop on the nodes.
For comparison, people often use an EC2 m1.xlarge, which has 4 cores and 15 GB.
Also, I recommend anyone starting experiments with Cassandra and Hadoop use
DataStax Enterprise, so you can focus on the experiments themselves rather
than on integrating the two yourself.
Even if we have one machine, will splitting it up into 2 nodes via VMs
make a difference?
Can it simulate 2 nodes of half the computing power? Also, yes, this will
just be a test playground
and not production.
Thank you,
Martin
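(For what it's worth, a full VM per node isn't strictly required for a playground: two Cassandra instances can share one machine by binding to separate loopback addresses, each with its own data directories. Below is an illustrative sketch of one instance's cassandra.yaml; the addresses, cluster name, and paths are assumptions, not recommendations.)

```yaml
# Instance 1's cassandra.yaml; instance 2 would use listen_address/rpc_address
# 127.0.0.2 and its own data/commitlog/saved_caches directories.
cluster_name: 'TestCluster'
listen_address: 127.0.0.1
rpc_address: 127.0.0.1
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # Both instances list the same seed so they join one ring.
          - seeds: "127.0.0.1"
data_file_directories:
    - /var/lib/cassandra/node1/data
commitlog_directory: /var/lib/cassandra/node1/commitlog
saved_caches_directory: /var/lib/cassandra/node1/saved_caches
```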
On Mon, Jul 15, 2013 at 1:56 PM, Nate McCall wrote:
Good point. Just to be clear: my suggestions all assume this is a
testing/playground/get-a-feel setup. This would be a bad idea for
performance testing (not to mention anywhere near production).
On Mon, Jul 15, 2013 at 3:02 PM, Tim Wintle wrote:
I might be missing something, but if it is all on one machine, then why use
Cassandra or Hadoop at all?
Sent from my phone
On 13 Jul 2013 01:16, "Martin Arrowsmith" wrote:
This is really dependent on the workload. Cassandra does well with 8 GB
of RAM for the JVM, but you can do 4 GB for moderate loads.
JVM requirements for Hadoop jobs and available slots are wholly
dependent on what you are doing (and, again, on whether you are just
integration testing).
You can get away with smaller heaps if you are only integration testing.
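(As a concrete, illustrative example: the heap limits discussed above are normally set in Cassandra's conf/cassandra-env.sh. The values below are assumptions for a small test VM, not tuning advice.)

```shell
# conf/cassandra-env.sh -- example heap settings for a small test VM.
# 4 GB matches the "moderate loads" figure above; use 8 GB if the VM has the RAM.
MAX_HEAP_SIZE="4G"
# Young generation is commonly sized at roughly 100 MB per CPU core (4 cores here).
HEAP_NEWSIZE="400M"
```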
Dear Cassandra experts,
I have an HP ProLiant ML350 G8 server, and I want to put virtual
servers on it. I would like to put the maximum number of nodes
for a Cassandra + Hadoop cluster on it. I was wondering: what is the
minimum RAM and memory per node that I need to run Cassandra + Hadoop?