Thank you for your feedback. I really appreciate you taking the time to write
it up for me (and hopefully others who might be considering the same). My
first thought for dealing with deleted docs was to delete the contents and
rebuild the index from scratch, but my primary customer for the dele
Hello all,
I found a way of doing this and thought I'd share it with you: you can
dynamically change the field which provides the suggestions. It uses the
Solr spellchecker (not the suggester). You can basically configure an
indexed field as the default *spellcheck.dictionary* in the conf
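For illustration, a minimal SolrJ sketch of the idea: the dictionary is picked
per request via spellcheck.dictionary, so swapping it swaps the field that
backs the suggestions. The collection URL, the /spell handler, and the
dictionary name are hypothetical and assume each dictionary is configured on
its own indexed field in solrconfig.xml.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SwitchSuggestionField {
    public static void main(String[] args) throws Exception {
        // Collection URL is hypothetical.
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery query = new SolrQuery("ipod");
            // Assumes a /spell handler wired to the spellcheck component.
            query.setRequestHandler("/spell");
            query.set("spellcheck", "true");
            // Swap the dictionary per request to change which indexed
            // field backs the suggestions; "title_dict" is hypothetical.
            query.set("spellcheck.dictionary", "title_dict");
            QueryResponse rsp = client.query(query);
            System.out.println(rsp.getSpellCheckResponse().getSuggestions());
        }
    }
}
```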
Clayton,
you could also try running an optimize on the Solr index as a
weekly/bi-weekly maintenance task to keep the segment count in check and
the maxDoc and numDocs counts as close as possible (in DB terms,
de-fragmenting the Solr indexes).
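For reference, a minimal sketch of what such a maintenance task could call
via SolrJ; the base URL and collection name are hypothetical, and since an
optimize is expensive on a large index it belongs in an off-peak window.

```java
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class WeeklyOptimize {
    public static void main(String[] args) throws Exception {
        // Base URL and collection name are hypothetical.
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr").build()) {
            // Merges segments and expunges deleted documents, which
            // brings maxDoc back in line with numDocs.
            client.optimize("mycollection");
        }
    }
}
```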
Best Regards,
Abhishek
On Sun, May 15, 2016 at 7:1
Hi Ryan,
The rows=100000 on the /select handler is likely going to cause problems
with 8 workers. This is calling the /select handler with 8 concurrent
workers, each retrieving 100,000 rows. The /select handler bogs down as the
number of rows increases. So using the rows parameter with the /select
Ah, you also used 4 shards. That means with 8 workers there were 32
concurrent queries against the /select handler each requesting 100,000
rows. That's a really heavy load!
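To make the load concrete, here is a rough SolrJ sketch of the pattern
described above (the zkHost, collection name, and query are hypothetical):
each of the 8 workers requests 100,000 rows from /select, and on a 4-shard
collection each request fans out to every shard.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class HeavySelectLoad {
    public static void main(String[] args) throws Exception {
        // zkHost and collection name are hypothetical.
        CloudSolrClient client = new CloudSolrClient.Builder()
                .withZkHost("localhost:9983").build();
        client.setDefaultCollection("mycollection");

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                SolrQuery q = new SolrQuery("*:*");
                // Large page sizes are what bog /select down; with 4
                // shards each request fans out to every shard, so
                // 8 workers -> 32 concurrent shard-level queries.
                q.setRows(100_000);
                return client.query(q);
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        client.close();
    }
}
```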
You can still try out the approach from my last email on the 4-shard
setup; as you add workers gradually, you'll gradually ra
Erick,
I tried the new configuration and hit the same issue that Satvinder is
having: the log updater cannot be instantiated...
class="solr.CdcrUpdateLog"
For some reason that class is causing a problem!
Anyway, does anyone have a config that works?
Regards,
--Abdel
On Fri, May 13, 2016 at 11:57 AM, Erick
One other thing to keep in mind is how the partitioning is done when you
add the partitionKeys.
Partitioning is done using the HashQParserPlugin, which builds a filter for
each worker. Under the covers this is using the normal filter query
mechanism. So after the filters are built and cached they are e
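Roughly, each worker ends up issuing a query like the following sketch; the
collection URL, the "id" partition field, and workers=4 are illustrative,
not taken from the thread.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class HashPartitionFilters {
    public static void main(String[] args) throws Exception {
        // Collection URL, the "id" field, and workers=4 are illustrative.
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            for (int worker = 0; worker < 4; worker++) {
                SolrQuery q = new SolrQuery("*:*");
                // The {!hash} filter built by HashQParserPlugin; it is
                // cached through the normal filter-query mechanism.
                q.addFilterQuery("{!hash workers=4 worker=" + worker + "}");
                q.set("partitionKeys", "id");
                System.out.println("worker " + worker + ": "
                        + client.query(q).getResults().getNumFound());
            }
        }
    }
}
```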
Mikhail,
It was caused by an endless loop in the page's code that is triggered
only under certain conditions.
On 5/11/2016 4:07 PM, Mikhail Khludnev wrote:
On Wed, May 11, 2016 at 10:16 AM, Derek Poh wrote:
Hi Erick,
Yes, we have identified and fixed the page's slow loading.
Derek,
Can you e