Mon, Mar 27, 2017 at 11:56 AM, Rohit Kanchan
wrote:
> Thanks Erick for replying back. I have deployed the changes to production;
> we will find out soon whether it is still causing OOM or not. For commits
> we are doing auto commits after 10K docs or 30 secs. If I get time I will
> test.

You've created a custom URP component because you "didn't want to run the
queries from the client". That's perfectly reasonable; it's just that
deleting from the client would eliminate your custom code and tell us
where you should be looking.
... from a committer. What do you guys think?
Thanks
Rohit
On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan
wrote:
> For commits we are relying on auto commits. We have defined the following
> in the configs:
>
> 1
>
> 300
For commits we are relying on auto commits. We have defined the following in
the configs:
1
3
false
15000
One thing which I would like to mention is that we are not calling
deleteById directly from the client. We ...
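The archive has stripped the XML tags out of the pasted config, leaving only bare values. As a hedged reconstruction (the element names and the mapping of values to elements are assumptions, based on the "10K docs or 30 secs" description elsewhere in the thread), a typical solrconfig.xml commit setup of this kind looks like:

```xml
<!-- Hypothetical reconstruction; the original tags are not recoverable
     from the archive. Hard commit every 10K docs or 30 s without opening
     a searcher, soft commit every 15 s for visibility. -->
<autoCommit>
  <maxDocs>10000</maxDocs>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>
</autoSoftCommit>
```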
Hi Chris,
Thanks for replying. We are using Solr version 6.1. I also saw that it is
bounded by a 1K count, but after looking at the heap dump I was amazed that
it can keep more than 1K entries. Yes, I see around 7M entries in the heap
dump, and around 17 GB of memory occupied by BytesRef there. It ...
Hi All,
I am looking for some help to solve an out-of-memory issue which we are
facing. We are storing messages in Solr as documents, and we run a pruning
job every night to delete old message documents. We delete the old documents
by issuing multiple delete-by-id queries to Solr. Document cou...
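The thread never shows the pruning job itself, so the following is only an illustrative sketch (the class and method names are mine). The idea is to split the ids into fixed-size batches; in a real job each batch would be passed to SolrJ's SolrClient.deleteById(List<String>) and followed by a single commit, rather than issuing one request per document:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DeleteBatcher {
    // Split a list of document ids into fixed-size batches so that each
    // delete-by-id request stays small (the batch size is an assumption).
    public static List<List<String>> batches(List<String> ids, int size) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size) {
            out.add(new ArrayList<>(ids.subList(i, Math.min(i + size, ids.size()))));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("a", "b", "c", "d", "e");
        // In the pruning job, each batch would go to
        // solrClient.deleteById(batch), with one commit at the end.
        System.out.println(batches(ids, 2)); // [[a, b], [c, d], [e]]
    }
}
```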
Hi Prateek,
I think you are talking about two different animals. Solr (actually the
embedded Lucene) is a search engine where you can use features like faceting,
highlighting, etc., but it is also a document store: for each text field it
creates an inverted index and maps the terms back to the documents ...
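A toy illustration of the inverted-index idea described above (all names here are mine, and Lucene's real index structures are far more elaborate): each term maps to the set of document ids that contain it.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

public class InvertedIndexSketch {
    // Toy inverted index: term -> sorted set of doc ids containing it.
    public static Map<String, TreeSet<Integer>> build(List<String> docs) {
        Map<String, TreeSet<Integer>> index = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            // Naive tokenization on whitespace; Lucene uses analyzers here.
            for (String term : docs.get(id).toLowerCase().split("\\s+")) {
                index.computeIfAbsent(term, t -> new TreeSet<>()).add(id);
            }
        }
        return index;
    }

    public static void main(String[] args) {
        Map<String, TreeSet<Integer>> idx =
            build(Arrays.asList("solr stores documents", "lucene indexes documents"));
        System.out.println(idx.get("documents")); // [0, 1]
    }
}
```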
With Java 8, you also need to upgrade to a Tomcat that can run on Java 8. I
think Tomcat 8.x is compiled with Java 8. You can probably switch your
existing Tomcat to Java 8 as well, but it may break somewhere for the same
reason.
Thanks
Rohit Kanchan
On Sat, Sep 10, 2016 at 2:38 AM, Brendan ...
I think it is better to use the ZooKeeper data. SolrCloud updates ZooKeeper
with node status. If you are using SolrCloud, you can query the ZooKeeper
cluster state and get the status of each node from there; the cluster state
API can give you information about your SolrCloud. I hope this helps.
Thanks
Rohit
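For reference, the Collections API exposes this cluster state over HTTP via the CLUSTERSTATUS action. A minimal sketch that builds such a request URL (the host, port, and collection name are assumptions for illustration; an HTTP GET on the result returns the cluster state):

```java
public class ClusterStatusUrl {
    // Build a Collections API CLUSTERSTATUS request URL; passing a null
    // collection asks for the status of the whole cluster.
    public static String clusterStatusUrl(String baseUrl, String collection) {
        return baseUrl + "/admin/collections?action=CLUSTERSTATUS"
             + (collection == null ? "" : "&collection=" + collection);
    }

    public static void main(String[] args) {
        // "messages" is a hypothetical collection name.
        System.out.println(clusterStatusUrl("http://localhost:8983/solr", "messages"));
        // -> http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=messages
    }
}
```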
... helps in solving your problem.
Thanks
Rohit Kanchan
On Tue, Aug 30, 2016 at 5:11 PM, Erik Hatcher
wrote:
> Personally, I don’t think a QParser(Plugin) is the right place to modify
> other parameters, only to create a Query object. A QParser could be
> invoked from an fq, not just a q, ...
We also faced the same issue when we were running the embedded Solr 6.1
server. Actually I faced the same in our integration environment after
deploying the project. Solr 6.1 uses HttpClient 4.4.1, which I think the
embedded Solr server is looking for. I think when the Solr core is getting
loaded, the old http cl...
11 matches