Thanks, Dan, for your quick response.
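
Just to confirm I've understood the key-based approach you described: a
resolver along these lines (a rough sketch on my side; the
"customerId|recordId" composite-key layout is my own assumption, not
anything Geode mandates) would group each customer's entries into one
bucket:

import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionResolver;

// Sketch of the key-based approach: pull the customer id out of a
// composite key like "CustomerX|order-17" and return it as the routing
// object, so Geode hashes "CustomerX" to a bucket and all of that
// customer's entries land together.
public class CustomerPartitionResolver implements PartitionResolver<String, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
        String key = opDetails.getKey();           // e.g. "CustomerX|order-17"
        return key.substring(0, key.indexOf('|')); // routing object: "CustomerX"
    }

    @Override
    public String getName() {
        return "CustomerPartitionResolver";
    }

    @Override
    public void close() {} // no resources to release
}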

Though this may not be a recommended pattern, here I am targeting a
bucket-specific putAll and want to bypass hashing, since it turns out to be
an overhead in my scenario.
Is this achievable? How should I define a PartitionResolver that works
generically and returns the respective bucket for a specific file? I've put
a rough sketch of what I have in mind below.
What will be impacted if I go this route (fixed partitioning per file)?
Horizontal scalability comes to mind, since the buckets are made fixed.
Thoughts?
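
Concretely, something like this is what I have in mind (again just a rough
sketch; the "File-<n>|" key prefix is a convention I'd impose myself, and
I'm leaning on your note that returning the integer 0 routes to bucket 0):

import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionResolver;

// Rough sketch of fixed file-to-bucket routing. Keys are assumed to carry
// a "File-<n>|" prefix (my own convention). Integer.hashCode() is the int
// value itself, so returning the Integer n should land in bucket n as long
// as 0 <= n < totalNumBuckets.
public class FileBucketResolver implements PartitionResolver<String, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
        String key = opDetails.getKey();                      // e.g. "File-3|row-42"
        String filePart = key.substring(0, key.indexOf('|')); // "File-3"
        return Integer.valueOf(filePart.substring("File-".length())); // 3 -> bucket 3
    }

    @Override
    public String getName() {
        return "FileBucketResolver";
    }

    @Override
    public void close() {} // no resources to release
}

I assume the region would be wired up with
PartitionAttributesFactory.setPartitionResolver(...) and
setTotalNumBuckets(...), and that the resolver class has to be deployed on
the servers as well.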


-Steve M.


On Sat, Apr 11, 2020, 1:54 AM Dan Smith <dsm...@pivotal.io> wrote:

> Hi Steve,
>
> The bucket that data goes into is generally determined by the key. So, for
> example, if your data in File-0 is all for Customer X, you can include
> Customer X in your region key and implement a PartitionResolver that
> extracts the customer from your region key and returns it. Geode will then
> group all of the data for Customer X into a single bucket.
>
> You generally shouldn't have to target a specific bucket number (e.g.
> bucket 0), but technically you can, just by returning an integer from your
> PartitionResolver. If you return the integer 0, your data will go into
> bucket 0. Usually it's just better to return your partition key (e.g.
> "Customer X") and let Geode hash that to some bucket number.
>
> -Dan
>
> On Fri, Apr 10, 2020 at 11:04 AM steve mathew <steve.mathe...@gmail.com>
> wrote:
>
> > Hello Geode devs and users,
> >
> > I have a set of files populated with data, fairly distributed. I want to
> > put each file's data into a specific bucket,
> > like: PutAll File-0 data into Geode bucket B0,
> >       PutAll File-1 data into Geode bucket B1,
> >       and so on...
> >
> > How can I achieve this using the Geode client?
> >
> > Can I achieve this using a PartitionResolver or some other means?
> >
> > Thanks in advance
> >
> > -Steve M.
> >
>
