Hi Erick,
@ichattopadhyaya beat me to it yesterday, so we are good.
-cheers
Vijay
On Wed, Jan 28, 2015 at 1:30 PM, Erick Erickson wrote:
Vijay:
Thanks for reporting this back! Could I ask you to post a new patch with
your correction? Please use the same patch name
(SOLR-5850.patch), and include a note about what you found (I've already
added a comment).
Thanks!
Erick
On Wed, Jan 28, 2015 at 9:18 AM, Vijay Sekhri wrote:
Hi Shawn,
Thank you so much for the assistance. Building is not a problem; back in
the day I worked with linking, compiling, and building C and C++
software, so Java is a piece of cake.
We have built the new war from the source of version 4.10.3, and our
preliminary tests have shown that our issue (
On 1/27/2015 2:52 PM, Vijay Sekhri wrote:
Hi Shawn,
Here is an update. We found the main issue.
We have configured our cluster to run under jetty, and when we tried full
indexing we did not see the original Invalid Chunk error. However, the
replicas still went into recovery.
All this time we have been trying to look into the replicas' logs to diagnose
On 1/26/2015 9:34 PM, Vijay Sekhri wrote:
Hi Shawn, Erick
From another replica, right after the same error, it seems the leader
initiates the recovery of the replicas. This one has slightly different log
information than the other one that went into recovery. I am not sure if
this helps in diagnosing:
Caused by: java.io.IOException: JBWEB0020
Hi Shawn, Erick
So it turned out that once we increased our indexing rate back to the
original full indexing rate, the replicas went back into recovery no matter
what the zk timeout setting was. Initially we thought that increasing the
timeout was helping, but apparently not. We just decreased the indexing rate
On 1/26/2015 2:26 PM, Vijay Sekhri wrote:
Personally, I never really set maxDocs for autocommit, I just leave things
time-based. That said, your settings are so high that this shouldn't matter
in the least.
There's nothing in the log fragments you posted that's the proverbial
"smoking gun"; there's nothing here that tells me _why_ the node
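The time-based autocommit Erick describes lives in solrconfig.xml. A minimal sketch of that layout (the specific maxTime values and the openSearcher setting here are illustrative assumptions, not values from this thread):

```xml
<!-- solrconfig.xml (illustrative fragment) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit on a timer only; no maxDocs threshold -->
  <autoCommit>
    <maxTime>15000</maxTime>           <!-- ms between hard commits -->
    <openSearcher>false</openSearcher> <!-- flush to disk without opening a new searcher -->
  </autoCommit>
  <!-- Soft commit controls when new documents become visible to searches -->
  <autoSoftCommit>
    <maxTime>60000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

Leaving maxDocs out entirely, as above, makes commit frequency purely time-based regardless of indexing rate.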
Hi Erick,
In the solr.xml file I had the zk timeout set to
<int name="zkClientTimeout">${zkClientTimeout:45}</int>
One thing that made it a bit better now is the zk tickTime and syncLimit
settings. I set them to higher values, as below. This may not be advisable
though.
tickTime=3
initLimit=30
syncLimit=20
Now we observed that
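For context on why raising tickTime can matter: ZooKeeper clamps any client-requested session timeout (such as Solr's zkClientTimeout) to the range [2 x tickTime, 20 x tickTime], so tickTime also sets the ceiling on the timeout a client can actually get. A minimal zoo.cfg sketch with illustrative values (tickTime is in milliseconds; the numbers here are assumptions for illustration, not a recommendation):

```
# zoo.cfg (illustrative values only)
tickTime=3000    # base time unit, in ms
initLimit=30     # followers get 30 ticks (90 s) to connect and sync with the leader
syncLimit=20     # followers may lag the leader by up to 20 ticks (60 s)
# Client session timeouts are negotiated into
# [2*tickTime, 20*tickTime] = [6 s, 60 s] here, unless
# minSessionTimeout / maxSessionTimeout are set explicitly.
dataDir=/var/lib/zookeeper
clientPort=2181
```

With tickTime=3000 a 45-second zkClientTimeout fits under the 60-second ceiling, whereas with the ZooKeeper default tickTime=2000 anything above 40 seconds would be silently clamped down.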
Ah, OK. Whew! Because I was wondering how you were running at _all_ if all
the memory was allocated to the JVM ;)
What is your ZooKeeper timeout? The original default was 15 seconds, and
this has caused problems like this. Here's the scenario:
You send a bunch of docs at the server, and eventually
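The timeout Erick is asking about is the session timeout Solr registers with ZooKeeper, configured in solr.xml. A sketch of the stock 4.x layout (the 30000 ms default shown is an assumption for illustration):

```xml
<!-- solr.xml (illustrative fragment) -->
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <!-- If a stop-the-world GC pause outlasts this timeout, ZooKeeper
         expires the session, the node is marked down, and it must go
         through recovery when it reconnects. -->
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>
```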
Thank you for the reply, Erick.
I am sorry, I had the wrong information posted; I posted our DEV env
configuration by mistake.
After double-checking our stress and Prod Beta envs, where we found the
original issue, I found all the searchers have around 50 GB of RAM
available and two instances of the JVM running
Shawn directed you over here to the user list, but I see this note on
SOLR-7030:
"All our searchers have 12 GB of RAM available and have quad core Intel(R)
Xeon(R) CPU X5570 @ 2.93GHz. There is only one java process running, i.e.
jboss with solr in it. All 12 GB is available as heap for the java
process