Michael, I tried your idea of implementing my own cursor in Solr 4.6.1 itself,
but somehow that test case was taking a huge amount of time.
Then I tried the Cursor approach by upgrading Solr to 4.10.3, and got
better results with that. For Setup 2 the time has now been reduced from
114 minutes to 18 minutes, but it is still a little far from S
Toke, I won't be able to use TermsComponent, as I have complex filter criteria
on other fields.
Michael, I understood your idea of paging without using start=;
I will prototype it, as it is possible in my use case also, and post here
the results I get with this approach.
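For reference, the cursorMark mechanism discussed in this thread (available from Solr 4.7 onward) follows a fixed request pattern. Below is a minimal sketch of that loop in Python; the `execute_request` callable, the field names, and the batch size are illustrative stand-ins, while the parameter names (`cursorMark`, `nextCursorMark`, a `sort` that includes the uniqueKey field, and never using `start=`) are Solr's own:

```python
# Sketch of Solr cursorMark deep paging (Solr >= 4.7).
# Field names and batch size are hypothetical; the parameter
# names are the ones Solr's cursor API defines.

def cursor_params(query, fields, batch_size, cursor_mark="*"):
    """Build the request parameters for one cursor page.
    The sort must include the uniqueKey field (assumed here to
    be 'id') as a tiebreaker, and start= must not be used."""
    return {
        "q": query,
        "fl": fields,
        "rows": batch_size,
        "sort": "id asc",           # uniqueKey tiebreaker is mandatory
        "cursorMark": cursor_mark,  # "*" for the first page
    }

def fetch_all(execute_request, query, fields, batch_size=50000):
    """Drain a result set page by page. `execute_request` is any
    callable that sends the params to Solr and returns the parsed
    JSON response; it stands in for whatever HTTP client you use."""
    cursor_mark = "*"
    while True:
        resp = execute_request(
            cursor_params(query, fields, batch_size, cursor_mark))
        for doc in resp["response"]["docs"]:
            yield doc
        next_mark = resp["nextCursorMark"]
        if next_mark == cursor_mark:  # same mark twice => finished
            break
        cursor_mark = next_mark
```

Unlike start=-based paging, each page here is fetched from where the previous one left off, so cost per batch stays flat instead of growing with the offset.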
On Sun, Jan 18, 2015 at 10:05 PM, Mich
You can also implement your own cursor easily enough if you have a
unique sortkey (not relevance score). Say you can sort by id, then you
select batch 1 (50k docs, say) and record the last (maximum) id in the
batch. For the next batch, limit it to id > last_id and get the first
50k docs (don't
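The hand-rolled cursor described above can be sketched as follows, simulated over an in-memory list of ids rather than a live Solr instance; against Solr the batch fetch would be a filter query on the id range (e.g. a range filter excluding last_id) with sort=id asc, but everything in this snippet is illustrative:

```python
# Sketch of the "record the max id, then fetch id > last_id" batching
# idea, simulated in memory instead of against a Solr server.

def fetch_batch(all_ids, last_id, batch_size):
    """Stand-in for one Solr query: the first `batch_size` ids
    strictly greater than `last_id`, in ascending order."""
    return sorted(i for i in all_ids if i > last_id)[:batch_size]

def read_everything(all_ids, batch_size):
    """Drain the whole result set batch by batch, never using an
    offset: each batch resumes after the previous batch's max id."""
    seen = []
    last_id = float("-inf")
    while True:
        batch = fetch_batch(all_ids, last_id, batch_size)
        if not batch:
            break
        seen.extend(batch)
        last_id = batch[-1]  # the max id of this batch
    return seen
```

The key property is the same one cursorMark exploits: the sort key is unique, so "greater than the last value seen" resumes exactly where the previous batch ended, with no deep-offset cost.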
Naresh Yadav [nyadav@gmail.com] wrote:
> Thanks for sharing Solr internals for my problem. I will definitely try
> the Cursor also, but the only problem is that my current
> Solr version is 4.6.1, in which I guess cursor support is not there.
I think it was added in 4.7, so you are right that it will proba
Hi Toke,
Thanks for sharing Solr internals for my problem. I will definitely try
the Cursor also, but the only problem is that my current
Solr version is 4.6.1, in which I guess cursor support is not there. Is there
any other option for this problem?
Also, as per your suggestion, I will try to avoid regional uni
Naresh Yadav [nyadav@gmail.com] wrote:
> In both setups, we are reading in batches of 50k, and each batch takes:
> Setup 1: approx 7 seconds; completing all batches of the total 10 lakh
> results takes 1 to 2 minutes.
> Setup 2: approx 2-3 minutes; completing all batches of the total 10 l
In both setups, we are reading in batches of 50k, and each batch takes:
Setup 1: approx 7 seconds; completing all batches of the total 10 lakh (1 million)
results takes 1 to 2 minutes.
Setup 2: approx 2-3 minutes; completing all batches of the total 10 lakh
results takes 114 minutes.
We tried other bat
&shard.info=true
Sent from my iPhone
> On 17 Jan 2015, at 04:23, Naresh Yadav wrote:
>
> Hi all,
>
> We have a single Solr index with 3 fixed fields (one of the fields is
> tokenized with space) and the rest dynamic fields (string fields, in the range of 10-20).
>
> The current size of the index is 2 GB, with around 12
Hmmm, you say "reading around 10 lakh docs". Are you returning 1,000,000
documents? That is, have you set &rows=100? Returning that many rows will
never perform all that well. What kind of performance are you getting if
you are only reading a few rows?
Or have I misunderstood completely?
Hi all,
We have a single Solr index with 3 fixed fields (one of the fields is
tokenized with space) and the rest dynamic fields (string fields, in the range of 10-20).
The current size of the index is 2 GB, with around 12 lakh docs, and the Solr
nodes are 4-core, 16 GB RAM Linux machines.
Write performance is good, then we t