Dear Users,

I'm in the process of making an app cluster-aware and getting ready for
deployment.

I've been looking at running a Hadoop file system (HDFS). If I get fault
tolerance at the file-system level, it seems I would be creating a ton of
extra drive I/O just to do replication. Am I correct in this assumption?

My data is not really critical, so one of the first things I will be doing is
seeing whether I can do a full reindex in a timely manner. My data only needs
to change weekly. Is there a case where replication is not required? It seems
there might be.
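To make the question concrete: on the Solr side, I believe I could skip
replicas entirely by creating the collection with replicationFactor=1 via the
Collections API, something like the sketch below (the host, port, shard count,
and collection name are just placeholders):

    import requests  # assuming the requests library is available

    # Sketch: create a SolrCloud collection with no follower replicas,
    # relying on a weekly full reindex instead of replication.
    # "localhost:8983" and "weekly_index" are placeholders.
    params = {
        "action": "CREATE",
        "name": "weekly_index",
        "numShards": 2,
        "replicationFactor": 1,  # leader only, no extra replicas
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/admin/collections", params=params)
    print(resp.json())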

So my thoughts bounce back to Solr on plain old SSDs on a journaled file
system.

Thanks,

GW
