On Tue, 2015-08-18 at 14:36 +0530, Modassar Ather wrote:
> So Toke/Daniel, is the node showing *gone* on the Solr cloud dashboard
> because of the GC pause, i.e. it is not actually gone but ZK is not able to
> get its correct state?
That would be my guess.
> The issue is caused by a huge query with many wildcards and phrases in it.
bq: The issue is caused by a huge query with many wildcards and phrases in it.
Well, the very first thing I'd do is look at whether this is necessary.
For instance: leading and trailing wildcards are an anti-pattern; you should
investigate using ngrams instead (see the sketch below). Trailing wildcards
usually resolve
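For reference, a minimal sketch of what an ngram-based field might look like
in schema.xml; the type/field names and gram sizes here are made up and would
need tuning for the actual data, and NGramFilterFactory is only one way to
replace wildcard matching:

  <!-- index-time n-grams so queries can match substrings without wildcards -->
  <fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- inner n-grams cover leading *and* trailing wildcard-style matches -->
      <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <field name="title_ngram" type="text_ngram" indexed="true" stored="false"/>

The trade-off is a larger index in exchange for plain term lookups at query time.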
So Toke/Daniel, is the node showing *gone* on the Solr cloud dashboard
because of the GC pause, i.e. it is not actually gone but ZK is not able to
get its correct state?
The issue is caused by a huge query with many wildcards and phrases in it.
If you see, I have mentioned (*The request took too long to iterate over terms.*)
Ah ok, it's a ZK timeout then
(org.apache.zookeeper.KeeperException$SessionExpiredException),
which is because of your GC pause.
The page Shawn mentioned earlier has several links on how to investigate GC
issues and some common GC settings; it sounds like you need to tweak those.
Generally speaking, I b
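For what it's worth, both knobs usually live in solr.in.sh when Solr is started
with the bundled scripts; the variable names below assume the 5.x scripts, and
the values are only illustrative:

  # Give ZooKeeper more patience with long pauses; the effective maximum is
  # still capped by ZooKeeper's own maxSessionTimeout.
  ZK_CLIENT_TIMEOUT="30000"

  # Override the default GC settings (example flags, not a recommendation).
  GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:+ParallelRefProcEnabled"

The real fix is still making the pauses shorter; raising the timeout only buys
headroom.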
On Tue, 2015-08-18 at 10:38 +0530, Modassar Ather wrote:
> Kindly help me understand: even if there is a GC pause, why will the Solr node
> go down?
If a stop-the-world GC is in progress, it is not possible for an
external service to tell whether a GC is in progress or the
node is actually dead.
I tried to profile the memory of each Solr node. I can see the GC activity
going as high as 98%, and there are many instances where it has gone
above 10%. On one of the Solr nodes I can see it going to 45%.
Memory is fully used and has reached the maximum heap size, which is
set to 24g. D
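If you want to see exactly how long the stop-the-world pauses are, GC logging
is the easiest way. A sketch using Java 7 style flags, again assuming the 5.x
solr.in.sh (the log path is made up):

  GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/solr/logs/solr_gc.log"

PrintGCApplicationStoppedTime in particular reports the total time application
threads were stopped, which is what ZooKeeper sees as the node being
unresponsive.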
Shawn! The container I am using is Jetty only, and the JVM settings I am
using are the defaults that come with the Solr startup scripts. Yes, I have
changed the JVM memory setting as mentioned.
Kindly help me understand: even if there is a GC pause, why will the Solr node
go down? At least for other
When you say "the Solr node goes down", what do you mean by that? From your
comment on the logs, you obviously lose the Solr core at best (you do
realize that having only a single replica is inherently susceptible to failure,
right?)
But do you mean the Solr core drops out of the collection (ZK timeout)
On 8/17/2015 5:45 AM, Modassar Ather wrote:
> The servers have 32g memory each. Solr JVM memory is set to -Xms20g
> -Xmx24g. There are no OOMs in the logs.
Are you starting Solr 5.2.1 with the included start script, or have you
installed it into another container?
Assuming you're using the download's
Thanks Upayavira for your inputs. The Java version is 1.7.0_79.
On Mon, Aug 17, 2015 at 5:57 PM, Upayavira wrote:
> Hoping that others will chime in here with other ideas. Have you,
> though, tried reducing the JVM memory, leaving more available for the OS
> disk cache? Having said that, I'd expect that to improve performance, not
> to cause JVM crashes.
Hoping that others will chime in here with other ideas. Have you,
though, tried reducing the JVM memory, leaving more available for the OS
disk cache? Having said that, I'd expect that to improve performance,
not to cause JVM crashes.
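A sketch of what reducing the heap might look like with the bundled scripts,
assuming the 5.x solr.in.sh (the 8g figure is only an example; with 32g of RAM
and a 200GB index per node, the point is to leave as much as possible for the
OS page cache):

  SOLR_JAVA_MEM="-Xms8g -Xmx8g"

  # or, when starting by hand:
  #   bin/solr start -m 8g

The page cache, not the JVM heap, is what keeps reads against a 200GB index fast.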
It might also help to know what version of Java you are running.
The servers have 32g memory each. Solr JVM memory is set to -Xms20g
-Xmx24g. There are no OOMs in the logs.
Regards,
Modassar
On Mon, Aug 17, 2015 at 5:06 PM, Upayavira wrote:
> How much memory does each server have? How much of that memory is
> assigned to the JVM? Is anything reported in the logs (e.g.
> OutOfMemoryError)?
How much memory does each server have? How much of that memory is
assigned to the JVM? Is anything reported in the logs (e.g.
OutOfMemoryError)?
On Mon, Aug 17, 2015, at 12:29 PM, Modassar Ather wrote:
> Hi,
>
> I have a 6-node Solr cluster which hosts around 200 GB of index on each
> node. Solr version is 5.2.1.
Hi,
I have a 6-node Solr cluster which hosts around 200 GB of index on each node.
Solr version is 5.2.1.
When a huge query is fired, it times out *(The request took too long to
iterate over terms.)*, which I can see in the log, but at the same time one
of the Solr nodes goes down and the lo
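That error message is, as far as I know, what Solr's ExitableDirectoryReader
raises when a request exceeds the timeAllowed limit, so the timeout itself is
expected for a query like this. A sketch of the kind of request involved
(collection, fields and the limit are made up):

  curl 'http://localhost:8983/solr/mycollection/select?q=title:*foo*+AND+body:%22some+phrase%22&timeAllowed=300000&rows=10'

The open question in the thread is why a timed-out query also takes the node
out of the cluster, which is where the GC pause / ZK session expiry discussed
above comes in.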