I'm guessing you already asked if they could give you three 100MB files
instead, so you could parallelize the operation? Or maybe your task
doesn't lend itself well to that.
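If splitting at the source isn't possible, another option is to split the file yourself and farm the chunks out to a thread pool. A minimal sketch in Java, assuming the lines can be processed independently; `processLine`, the file name, and the batch size are placeholders, not anything from your setup:

import java.io.*;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ParallelCsvSplit {
    // Hypothetical per-line processing; replace with your real logic.
    static void processLine(String line) { /* ... */ }

    public static void main(String[] args) throws Exception {
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // Read the large file once, handing lines to worker threads in batches.
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.csv"))) {
            List<String> batch = new ArrayList<>();
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == 10_000) {   // batch size is arbitrary
                    final List<String> chunk = batch;
                    pool.submit(() -> chunk.forEach(ParallelCsvSplit::processLine));
                    batch = new ArrayList<>();
                }
            }
            // Submit whatever is left over after the last full batch.
            final List<String> tail = batch;
            pool.submit(() -> tail.forEach(ParallelCsvSplit::processLine));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}

Batching keeps task-submission overhead low compared to submitting one task per line; if your processing has cross-line dependencies, this approach won't apply as-is.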
Dean
On Tue, Jul 24, 2012 at 10:01 AM, Pushpalanka Jayawardhana <pushpalankaj...@gmail.com> wrote:
Hi all,
I am dealing with a scenario where I receive a .csv file at 10-minute
intervals, averaging around 300MB each. I need to update a Cassandra cluster
according to the data received from the .csv file, after some processing
functions.
The current approach is to keep a HashMap in memory, updating it f
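A minimal sketch of the pattern described above: read the CSV, aggregate into an in-memory HashMap, then push the result to Cassandra in one pass. This uses the DataStax Java driver; the keyspace, table, CSV column layout, and the aggregation (summing a numeric value per key) are all assumptions for illustration, not details from the thread:

import com.datastax.driver.core.*;
import java.io.BufferedReader;
import java.nio.file.*;
import java.util.*;

public class CsvToCassandra {
    public static void main(String[] args) throws Exception {
        // Accumulate one aggregated value per key from the CSV in memory.
        Map<String, Long> totals = new HashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("batch.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(",");
                // Hypothetical layout: key in column 0, numeric value in column 1.
                totals.merge(cols[0], Long.parseLong(cols[1]), Long::sum);
            }
        }

        // Write the aggregated map to Cassandra with a prepared statement.
        // UPDATE is an upsert in Cassandra, so missing rows are created.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {
            PreparedStatement update =
                session.prepare("UPDATE my_table SET value = ? WHERE key = ?");
            for (Map.Entry<String, Long> e : totals.entrySet()) {
                session.execute(update.bind(e.getValue(), e.getKey()));
            }
        }
    }
}

Aggregating first means each key is written once per 10-minute batch instead of once per CSV row, which keeps the write load on the cluster proportional to the number of distinct keys rather than the file size.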