For data of this size you may want to look at something like Apache
Cassandra, which is designed specifically to handle data at this kind
of scale, spread across many machines.
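To give a rough idea, here's a minimal sketch using the DataStax Python
driver (cassandra-driver). The keyspace, table, column and host names are
just placeholders, and the right partition key depends entirely on how you
plan to query the data:

    # Minimal sketch, DataStax Python driver (pip install cassandra-driver).
    # Names below are placeholders; pick the partition key to match your
    # query patterns, since that is what spreads rows across the nodes.
    from datetime import datetime
    from cassandra.cluster import Cluster

    cluster = Cluster(['cassandra-node-1'])   # one or more contact points
    session = cluster.connect()

    # SimpleStrategy is fine for a test cluster; production setups usually
    # use NetworkTopologyStrategy with per-datacenter replication factors.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS metrics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)

    session.execute("""
        CREATE TABLE IF NOT EXISTS metrics.readings (
            sensor_id text,          -- partition key: distributes rows across nodes
            reading_time timestamp,  -- clustering key: orders rows within a partition
            value double,
            PRIMARY KEY (sensor_id, reading_time)
        )
    """)

    session.execute(
        "INSERT INTO metrics.readings (sensor_id, reading_time, value) "
        "VALUES (%s, %s, %s)",
        ('sensor-42', datetime.utcnow(), 3.14)
    )

    cluster.shutdown()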

You can still use Hadoop to analyse and transform the data in a
performant manner; however, it's probably best to do some research on
the relevant technical forums for those technologies.
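For example, a Hadoop Streaming job lets you run plain Python scripts as
the map and reduce steps. The sketch below just sums a numeric value per
key and assumes tab-separated input, so treat it as an illustration rather
than anything tuned to your data (the paths and jar location in the comment
are also only examples):

    #!/usr/bin/env python
    # Sketch of a Hadoop Streaming job: one script acts as mapper or reducer.
    # Assumes tab-separated input lines of the form "<key>\t<numeric value>";
    # adapt the parsing to whatever your records actually look like.
    #
    # Example invocation (paths and jar location are illustrative):
    #   hadoop jar hadoop-streaming.jar \
    #       -input /data/in -output /data/out \
    #       -mapper "job.py map" -reducer "job.py reduce" -file job.py
    import sys

    def mapper():
        # Emit key/value pairs; Hadoop sorts them by key between map and reduce.
        for line in sys.stdin:
            parts = line.rstrip('\n').split('\t')
            if len(parts) >= 2:
                print('%s\t%s' % (parts[0], parts[1]))

    def reducer():
        # Input arrives grouped by key, so we can sum as we stream through it.
        current_key, total = None, 0.0
        for line in sys.stdin:
            key, value = line.rstrip('\n').split('\t', 1)
            if key != current_key:
                if current_key is not None:
                    print('%s\t%s' % (current_key, total))
                current_key, total = key, 0.0
            total += float(value)
        if current_key is not None:
            print('%s\t%s' % (current_key, total))

    if __name__ == '__main__':
        if len(sys.argv) > 1 and sys.argv[1] == 'reduce':
            reducer()
        else:
            mapper()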

Nick
