Hello!
You'll need a data set that you can index. This is quite simple - if
you don't know what data you want to index, or you don't have test
data, just take a Wikipedia dump and index it using the Data Import
Handler (DIH). You can find the example of using a Wikipedia dump with
DIH at http://wiki.apache.org/s
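In case it helps before you find the wiki page: the DIH setup boils down
to a data-config.xml along these lines (a minimal sketch; the dump path
and field names are placeholders, not the exact wiki example):

  <dataConfig>
    <dataSource type="FileDataSource" encoding="UTF-8" />
    <document>
      <entity name="page"
              processor="XPathEntityProcessor"
              stream="true"
              forEach="/mediawiki/page/"
              url="/path/to/enwiki-pages-articles.xml">
        <field column="id"    xpath="/mediawiki/page/id" />
        <field column="title" xpath="/mediawiki/page/title" />
        <field column="text"  xpath="/mediawiki/page/revision/text" />
      </entity>
    </document>
  </dataConfig>

You then register a /dataimport requestHandler pointing at this file in
solrconfig.xml and trigger it with /dataimport?command=full-import.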
This warning has been fixed in the svn branches (not yet released):
https://issues.apache.org/jira/browse/SOLR-6499
You can simply ignore this warning, or comment out the corresponding
requestHandler definition in solrconfig.xml (the implicit handler will work).
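For example (a sketch; which handler to comment out depends on the exact
warning in your log):

  <!-- explicit definition in solrconfig.xml; the implicitly
       registered handler takes over once this is commented out -->
  <!--
  <requestHandler name="/update" class="solr.UpdateRequestHandler" />
  -->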
Regards,
Tomoko
2015-01-17 5:27 GMT+09:
Hmmm, you say "reading around 10 lakh docs". Are you returning 1,000,000
documents? That is, have you set &rows=1000000? Returning that many rows
will never perform all that well. What kind of performance do you get if
you only read a few rows?
Or have I misunderstood completely?
You might also add &shards.info=true to the query to see per-shard timings.
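A quick sanity check for both points (a sketch; the host and core name
are placeholders):

  # fetch only a handful of rows, with per-shard timing info
  curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=10&shards.info=true'

If that comes back fast, the time is going into fetching and serializing
the full 10 lakh rows rather than into the search itself.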
Sent from my iPhone
> On 17 Jan 2015, at 04:23, Naresh Yadav wrote:
>
> Hi all,
>
> We have a single solr index with 3 fixed fields (one of the fields is tokenized
> with space) and the rest dynamic fields (string fields, in the range of 10-20).
>
> Current size of index is 2 GB with around 12
In both setups, we are reading in batches of 50k.
Setup1: each batch takes approx 7 seconds, and completing all batches of
the total 10 lakh results takes 1 to 2 minutes.
Setup2: each batch takes approx 2-3 minutes, and completing all batches of
the total 10 lakh results takes 114 minutes.
We tried other bat
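If those batches are being fetched with start/rows, each successive page
gets more expensive as the offset grows. cursorMark (available since Solr
4.7) keeps the per-batch cost flat. A minimal sketch, assuming the
uniqueKey field is named id:

  # first batch: open the cursor
  curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=50000&sort=id+asc&cursorMark=*'
  # next batches: pass the nextCursorMark value returned by the previous response
  curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=50000&sort=id+asc&cursorMark=<nextCursorMark>'

Note that cursorMark requires the sort to include the uniqueKey field.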