Hi Ali,
I don't think it is a deadlock; as Mikhail said, Solr is saturated and you
should try to reduce the load. Make it work first, then increase the load
to see its limits. It would also help to monitor your Solr and find the
bottlenecks. One such solution is Sematext's SPM: http://sematext.c
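If you want a quick look before setting up a monitoring product, one option
is to poll Solr's MBean stats handler yourself and watch the update handler
and cache numbers over time. A minimal sketch, with placeholder host, port,
and core name:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SolrStatsProbe {
    public static void main(String[] args) throws Exception {
        // Adjust host/port/core to your own setup.
        URL url = new URL(
            "http://localhost:8983/solr/mycore/admin/mbeans?stats=true&wt=json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // raw JSON with per-handler stats
            }
        } finally {
            conn.disconnect();
        }
    }
}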
Ali,
20 sounds like way too much. Why don't you start from a single thread,
check that everything is correct, then steadily increase the number of
threads, checking that cluster utilization also increases with every step,
until you find a number that saturates Solr well but is no greater than
necessary.
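For example, with SolrJ the ramp-up could look roughly like this (just a
sketch; the URL, core name, fields, and queue size are placeholders, and
the client class you use may differ):

import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexerSketch {
    public static void main(String[] args) throws Exception {
        // Start with threadCount = 1, then raise it step by step while
        // watching CPU, IO, and GC on the Solr nodes.
        int threadCount = 1;
        int queueSize = 100;
        try (ConcurrentUpdateSolrClient client =
                 new ConcurrentUpdateSolrClient("http://localhost:8983/solr/mycore",
                                                queueSize, threadCount)) {
            for (int i = 0; i < 1000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "doc-" + i);
                doc.addField("text", "sample body " + i);
                client.add(doc);             // queued and sent by background threads
            }
            client.blockUntilFinished();     // wait until the queue is drained
            client.commit();                 // one commit at the end of the run
        }
    }
}

The only knob that changes between runs is threadCount; everything else
stays the same, so differences in throughput can be attributed to it.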
Dear Emir,
Hi,
Actually, Solr is in a deadlock state: it will not accept any new
documents (some of them get stored in the tlog and some do not). However,
it will respond to new query requests, although very slowly. Unfortunately,
right now I do not have access to a full thread dump. But, as I mentioned, it
Hi Ali,
Is Solr busy at that time and does it eventually recover, or is it
deadlocked?
Can you provide a full thread dump from when it happened?
Do you run only indexing at that time? Is "unavailable" only from the
indexing perspective, or can you not do anything with Solr at all?
Is there any indexing scenario that does n
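By the way, if jstack output is hard to get hold of, a quick way to tell a
real JVM deadlock from plain saturation is the deadlock detector in the
standard management API. A minimal sketch (it inspects the JVM it runs in,
so for a remote Solr you would attach via JMX or simply take a jstack dump
of the Solr process instead):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads();   // null when no deadlock exists
        if (ids == null) {
            System.out.println("No deadlocked threads - likely just saturation.");
            return;
        }
        for (ThreadInfo info : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
            System.out.println(info);              // blocked thread plus its stack
        }
    }
}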
I would really appreciate it if somebody could help me solve this problem.
Regards.
On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian wrote:
> I did that already. The situation was worse. The autocommit part makes
> Solr unavailable.
> On Dec 8, 2015 7:13 PM, "Emir Arnautovic" wrote:
>
>> Hi Ali,
>> Can y
I did that already. The situation was worse. The autocommit part makes
Solr unavailable.
On Dec 8, 2015 7:13 PM, "Emir Arnautovic" wrote:
> Hi Ali,
> Can you try without explicit commits and see if the threads are still
> blocked?
>
> Thanks,
> Emir
>
> On 08.12.2015 16:19, Ali Nazemian wrote:
>
Hi Ali,
Can you try without explicit commits and see if the threads are still
blocked?
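On the client side that would roughly mean relying on commitWithin (or the
autoCommit settings in solrconfig.xml) instead of calling commit() yourself.
A minimal sketch, with a placeholder URL and fields:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NoExplicitCommit {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient("http://localhost:8983/solr/mycore")) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("text", "indexed without an explicit commit");
            // commitWithin of 30s: Solr schedules the commit itself instead of
            // the client forcing one on every batch.
            client.add(doc, 30000);
        }
    }
}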
Thanks,
Emir
On 08.12.2015 16:19, Ali Nazemian wrote:
> The indexing load is as follows:
> - Around 1,000 documents every 5 minutes.
> - The indexing speed is slow because of the complicated analyzer that is
> applied
The indexing load is as follows:
- Around 1,000 documents every 5 minutes.
- The indexing speed is slow because of the complicated analyzer that is
applied to each document. It takes around 60 seconds to index 1,000
documents with this analyzer applied (it is really slow; however, based on
the analyzi
Dear Emir,
Hi,
There are some cases where I use soft commit in my application. However,
the bulk update part does only a hard commit per bulk of 2,500 documents
(roughly as sketched below). Here is some information about the whole
indexing/updating scenario:
- The indexing part uses soft commit.
- In single-update cases, soft co
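A simplified sketch of that bulk update loop (the client type, URL, and
field names here are placeholders, not the real application code):

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BulkUpdateSketch {
    private static final int BATCH_SIZE = 2500;

    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient("http://localhost:8983/solr/mycore")) {
            List<SolrInputDocument> batch = new ArrayList<>();
            for (int i = 0; i < 10000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "doc-" + i);
                doc.addField("text", "updated body " + i);
                batch.add(doc);
                if (batch.size() == BATCH_SIZE) {
                    client.add(batch);
                    client.commit();   // hard commit once per 2,500-document bulk
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {    // flush the last partial batch
                client.add(batch);
                client.commit();
            }
        }
    }
}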
Hi Ali,
This thread is blocked because it cannot obtain the update lock, in this
particular case while doing a soft commit. I am guessing that the others
are blocked for the same reason. Can you tell us a bit more about your
setup, indexing load, and procedure? Do you do explicit commits?
Regards,
E
Hi,
It has been a while since I started having a problem with Solr 5.2.1, and
I have not been able to fix it yet. The only thing that is clear to me is
that when I send a bulk update to Solr, the commit thread gets blocked!
Here is the thread dump output:
"qtp595445781-8207" prio=10 tid=0x7f0bf68f5800 nid=0x5785 waiting f