Are you passing the keys as a filter or an argument? What does
optimizeForWrite in your function return?
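[For readers of the archive: passing keys as the execution *filter* (rather than as a serialized argument) lets Geode route the function only to the members hosting those keys, and returning true from `optimizeForWrite()` routes execution to the primary buckets. A minimal sketch of such a function, assuming the Geode function-execution API; the class name is hypothetical:]

```java
import java.util.Set;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

// Hypothetical sketch: a function that removes the keys it receives
// via the execution filter, operating only on local primary data.
public class RemoveKeysFunction implements Function<Void> {

  @Override
  public void execute(FunctionContext<Void> context) {
    RegionFunctionContext rfc = (RegionFunctionContext) context;
    // Keys arrive as the filter, not as an argument, so Geode can
    // target only the members/buckets that host them.
    Set<?> keys = rfc.getFilter();
    // Restrict the operation to data this member hosts for the
    // filtered buckets.
    Region<Object, Object> local =
        PartitionRegionHelper.getLocalDataForContext(rfc);
    local.removeAll(keys);
    context.getResultSender().lastResult(null);
  }

  @Override
  public boolean optimizeForWrite() {
    return true; // execute on primary buckets
  }

  @Override
  public String getId() {
    return RemoveKeysFunction.class.getSimpleName();
  }
}
```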

Thanks,
Barry Oglesby


On Mon, May 1, 2017 at 4:41 PM, Goutam Tadi <gt...@pivotal.io> wrote:

> Hi Dan,
>
> Thanks for the reply.
> No, we are neither executing the function nor passing keys from a client.
> We are trying to remove a significant portion of the keys from a region
> (most, but not all) at once.
>
> Thanks.
>
> On Mon, May 1, 2017 at 2:40 PM Dan Smith <dsm...@pivotal.io> wrote:
>
> > That seems like it should do things fairly quickly. Are you executing the
> > function from a client? Did you find that using a function was actually
> > faster than just calling removeAll from the client? I think removeAll from
> > the client should send your keys in a single message, similar to your
> > function approach.
> >
> > How many keys are you trying to remove? If you have a really large number
> > of keys, it might be better to batch up the keys. You could do multiple
> > removeAlls from the client, perhaps even in parallel.
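[A sketch of the batching idea, for the archive: the partitioning helper below is plain Java, and the batch size and the per-batch `region.removeAll(batch)` call are assumptions, not part of the original thread:]

```java
import java.util.ArrayList;
import java.util.List;

public class KeyBatcher {

  // Split a list of keys into fixed-size batches so that each
  // removeAll call carries a bounded payload.
  public static <K> List<List<K>> partition(List<K> keys, int batchSize) {
    List<List<K>> batches = new ArrayList<>();
    for (int i = 0; i < keys.size(); i += batchSize) {
      batches.add(new ArrayList<>(
          keys.subList(i, Math.min(i + batchSize, keys.size()))));
    }
    return batches;
  }

  public static void main(String[] args) {
    List<Integer> keys = new ArrayList<>();
    for (int i = 0; i < 10; i++) keys.add(i);

    List<List<Integer>> batches = partition(keys, 3);
    // 10 keys in batches of 3 -> 4 batches, the last of size 1.
    System.out.println(batches.size());        // 4
    System.out.println(batches.get(3).size()); // 1

    // In a real Geode client you would then issue, per batch
    // (possibly from multiple threads):
    //   region.removeAll(batch);  // hypothetical call on your Region
  }
}
```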
> >
> > -Dan
> >
> > On Mon, May 1, 2017 at 12:19 PM, Goutam Tadi <gt...@pivotal.io> wrote:
> >
> > > Hi Team,
> > >
> > > With +Bradford D Boyle <bbo...@pivotal.io>
> > >
> > > We are trying to remove a set of keys from a `PartitionedRegion`.
> > > Currently, we execute a function with `onRegion()`. Inside the function,
> > > we call `PartitionRegionHelper.getLocalPrimaryData()` and use the
> > > returned region to execute `region.removeAll(keys)`.
> > >
> > > The problem we are facing is that this is slow. Is there a faster way
> > > to remove a set of keys from a partitioned region?
> > >
> > > We are considering using `getDataStore().getAllLocalBucketRegions()` to
> > > get the set of `BucketRegion`s and then using a thread pool to remove
> > > the keys in parallel. Are there alternative approaches?
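[For the archive, a runnable sketch of the thread-pool idea: in Geode the buckets would come from `getDataStore().getAllLocalBucketRegions()`, but since this can't touch a live cluster, plain `ConcurrentHashMap`s stand in for the bucket regions here:]

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelBucketRemoval {

  // Remove the given keys from each "bucket" in parallel, one task
  // per bucket, bounded by the number of available processors.
  public static void removeInParallel(
      List<Map<String, String>> buckets, Set<String> keys)
      throws InterruptedException {
    int threads = Math.max(1,
        Math.min(buckets.size(), Runtime.getRuntime().availableProcessors()));
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (Map<String, String> bucket : buckets) {
      // With real BucketRegions this task body would instead call
      // bucket.removeAll(keys) or per-key bucket.remove(key).
      pool.submit(() -> bucket.keySet().removeAll(keys));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }
}
```

One caveat with this design: going straight at the bucket regions bypasses the usual client-facing API, so whether it is safe depends on how the rest of the application accesses the region.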
> > >
> > > Thanks,
> > > Goutam Tadi.
> > > --
> > > Regards,
> > > *Goutam Tadi.*
> > >
> >
> --
> Regards,
> *Goutam Tadi.*
>
