Hi Shawn,
Curious: do you have haproxy in front of Solr in a master/slave
configuration? Thanks.
On 11/6/16 9:33 AM, Shawn Heisey wrote:
On 11/6/2016 4:08 AM, Mugeesh Husain wrote:
Please suggest a load balancer name?
I use haproxy. It is a software load balancer with pretty impressive
performance.
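For readers following along, a minimal haproxy frontend/backend for a set of Solr nodes might look like the sketch below (hostnames, ports, and the ping check path are illustrative, not taken from this thread):

```
frontend solr_front
    bind *:8983
    default_backend solr_nodes

backend solr_nodes
    balance roundrobin
    # take a node out of rotation if the ping handler stops answering
    option httpchk GET /solr/admin/ping
    server solr1 solr1.example.com:8983 check
    server solr2 solr2.example.com:8983 check
```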
Thanks Evan for quick response.
On 10/20/16 10:19 AM, Tom Evans wrote:
On Thu, Oct 20, 2016 at 5:38 PM, Rallavagu wrote:
Solr 5.4.1 cloud with embedded jetty
Looking for some ideas around offline indexing where an independent node
will be indexed offline (not in the cloud) and added to the cloud to
become leader so other cloud nodes will get replicated. Wonder if this
is possible without interrupting the live search service.
Looking for clues/recommendations to help warm up during startup. Not
necessarily Solr caches but mmap as well. I have used queries like
"q=field:[* TO *]" for various fields and it seems to help with
mmap population around 40-50%. Is there anything else that could help
achieve 90% or more? Thanks.
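Warming queries of the kind described above are usually wired into solrconfig.xml as event listeners; a sketch, with placeholder field names:

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- touch the index and a couple of representative fields -->
    <lst><str name="q">*:*</str></lst>
    <lst><str name="q">somefield:[* TO *]</str></lst>
  </arr>
</listener>
```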
Solr Cloud 5.4.1 with embedded jetty, jdk8
At the time of startup it appears that "QuerySenderListener" is run twice
and this is causing "firstSearcher" and "newSearcher" to run twice as well.
Any clues as to why QuerySenderListener is triggered twice? Thanks.
It is a single core.
On 10/5/16 6:58 PM, Erick Erickson wrote:
How many cores? Is it possible you're seeing these from two different cores?
Erick
On Wed, Oct 5, 2016 at 11:44 AM, Rallavagu wrote:
Solr Cloud 5.4.1 with embedded Jetty - jdk 8
Is there a way to disable incoming updates (from the leader) during
startup until the "firstSearcher" queries have finished? I am noticing
that firstSearcher queries keep running at startup while the node shows
up as "Recovering".
Thanks
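One knob related to waiting on firstSearcher (it does not block incoming updates, which is what was asked, but it does control whether queries wait for warming) is useColdSearcher in solrconfig.xml; a sketch:

```xml
<!-- if false, search requests block until the first searcher has warmed -->
<useColdSearcher>false</useColdSearcher>
<!-- limit on overlapping warming searchers -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```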
QTimes increase as the number of words in a phrase increases.
Well, there's more work to do as the # of words increases, and if you
have large slops there's more work yet.
Best,
Erick
On Wed, Sep 28, 2016 at 5:54 PM, Rallavagu wrote:
Thanks Erick.
I have added queries for "firstSearcher" and "newSearcher".
Best,
Erick
On Sat, Sep 24, 2016 at 11:35 AM, Rallavagu wrote:
On 9/22/16 5:59 AM, Shawn Heisey wrote:
On 9/22/2016 5:46 AM, Muhammad Zahid Iqbal wrote:
Did you find any solution to slow searches? As far as I know the Jetty
container's default configuration is a bit slow for a large production
environment.
This might be true for the default configuration that comes with Jetty
itself.
On Sep 16, 2016 at 10:23 AM, Rallavagu wrote:
Comments in line...
On 9/16/16 10:15 AM, Erick Erickson wrote:
Well, the next thing I'd look at is CPU activity. If you're flooding the
system
with updates there'll be CPU contention.
Monitoring does not suggest any high CPU but, as you can see from the
data, the only time QTimes go high is right after softCommit, which is
expected.
Wondering what causes update threads wait and if it has any impact on
search at all. I had couple of more CPUs added but I still see similar
behavior.
Thanks.
Best,
Erick
On Fri, Sep 16, 2016 at 9:19 AM, Rallavagu wrote:
Erick,
Something I'd look at is whether you're _also_ seeing stop-the-world GC
pauses. In that case there are a number of JVM options that can be tuned.
Best,
Erick
On Fri, Sep 16, 2016 at 8:40 AM, Rallavagu wrote:
Solr 5.4.1 with embedded jetty single shard - NRT
Looking in the logs, I noticed that there are high QTimes for queries
and, around the same time, high response times for updates. These are
not during "commit" or "softCommit" but while the client application is
sending updates. Wondering how updates could impact query performance.
I have modified modules/http.mod as follows (for Solr 5.4.1, Jetty 9).
As you can see I have referred jetty-jmx.xml.
#
# Jetty HTTP Connector
#
[depend]
server
[xml]
etc/jetty-http.xml
etc/jetty-jmx.xml
On 5/21/16 3:59 AM, Georg Sorst wrote:
Hi list,
how do I correctly enable JMX in Solr?
Any takers?
On 9/9/16 9:03 AM, Rallavagu wrote:
All,
Running Solr 5.4.1 with embedded Jetty with frequent updates coming in
and softCommit is set to 10 min. What I am noticing is occasional "slow"
updates (takes 8 sec to 15 sec sometimes) and about the same time slow
QTimes. Upon investigating, it appears that
"ConcurrentUpdateSolrClient:b
args.put("name", "fieldValueCache");
args.put("size", "10000");
args.put("initialSize", "10");
args.put("showItems", "-1");
conf = new CacheConfig(FastLRUCache.class, args, null);
}
fieldValueCacheConfig = conf;
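For comparison, declaring the same cache explicitly in solrconfig.xml would look roughly like this (the values are the commonly documented defaults, shown for illustration):

```xml
<fieldValueCache class="solr.FastLRUCache"
                 size="10000"
                 initialSize="10"
                 showItems="-1"/>
```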
Cheers
On Thu, Sep 1, 2016, Edwin Yeo wrote:
If I didn't get your question wrong, what you have listed is already the
default configuration that comes with your version of Solr.
Regards,
Edwin
On 30 August 2016 at 07:49, Rallavagu wrote:
Solr 5.4.1
Wondering what is the default configuration for "fieldValueCache".
The -F option worked.
On 8/29/16 11:31 AM, Rallavagu wrote:
I have run into a strange issue where "jstack -l <pid>" does not work. I
have tried this as the user that Solr (5.4.1) is running as. I get the
following error.
$ jstack -l 24064
24064: Unable to open socket file: target process not responding or
HotSpot VM not loaded
The -F option can be used when the process is unresponsive.
enough.
If only a small ratio of documents is updated and the bottleneck is the
filterCache, you can experiment with segmented filters, which suit NRT
better.
http://blog-archive.griddynamics.com/2014/01/segmented-filter-cache-in-solr.html
On Fri, Aug 26, 2016 at 2:56 AM, Rallavagu wrote:
Follow up
If they haven't noticed a 4 minute
latency yet, tell them they don't know what they're talking
about when they insist on the NRT interval being a few
seconds ;).
Best,
Erick
On Tue, Jul 26, 2016 at 7:20 AM, Rallavagu wrote:
On 7/26/16 5:46 AM, Shawn Heisey wrote:
On 7/22/2016 10:15 AM, Rallavagu wrote:
As Erick indicated, these settings are incompatible with Near Real Time
updates.
With those settings, every time you commit and create a new searcher,
Solr will execute up to 1000 queries
queries are slow. Grouping? Pivot faceting? 'Cause
from everything you've said so far it's surprising that you're
seeing queries take this long; something doesn't feel right,
but what it is I don't have a clue.
Thanks
Best,
Erick
On Fri, Jul 22, 2016 at 9:15 AM, Rallavagu wrote:
Also, here is the link to screenshot.
https://dl.dropboxusercontent.com/u/39813705/Screen%20Shot%202016-07-22%20at%2010.40.21%20AM.png
Thanks
On 7/21/16 11:22 PM, Shawn Heisey wrote:
On 7/21/2016 11:25 PM, Rallavagu wrote:
There is no other software running on the system and it is completely
dedicated to Solr. It is running on Linux. Here is the full version.
Linux version 3.8.13-55.1.6.e
We have run load tests using JMeter with directory pointing to Solr and
also tests that are pointing to the application that queries Solr. In
both cases, we have noticed the results being slower.
Thanks
Best,
Erick
On Thu, Jul 21, 2016 at 11:22 PM, Shawn Heisey wro
On 7/21/16 9:16 PM, Shawn Heisey wrote:
On 7/21/2016 9:37 AM, Rallavagu wrote:
I suspect swapping as well. But, for my understanding - are the index
files from disk memory mapped automatically at the startup time?
They are *mapped* at startup time, but they are not *read* at startup.
The data is only read from disk when something actually accesses it.
Queries were showing slowness due to faceting (debug=true). Since we
adjusted indexing, facet times have improved, but basic query QTime is
still high, so wondering where I can look? Is there a way to debug
(instrument) a query on a Solr node?
FWIW,
Erick
On Thu, Jul 21, 2016 at 8
Solr 5.4.1 with embedded jetty with cloud enabled
We have a Solr deployment (approximately 3 million documents) with both
write and search operations happening. We have a requirement to have
updates available immediately (NRT). Configured with default
"solr.NRTCachingDirectoryFactory" as the directory factory.
openSearcher is set to "false". What I am seeing is
(from heap dump due to OutOfMemory error) that the LRUCache pertaining
"Document Cache" occupies around 85% of available heap and that is
causing OOM errors. So, trying to understand the behavior to address the
OOM issues.
All,
Solr 5.4 with embedded Jetty (4G heap)
Trying to understand the behavior of the "optimize" operation if it is
not run explicitly. At what frequency is this operation run, what are
the storage requirements, and how do we schedule it? Any
comments/pointers would greatly help.
Thanks in advance.
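To answer part of this: optimize never runs on its own; it only happens when a client explicitly requests it, for example by POSTing an update command (the collection name in the URL is a placeholder):

```xml
<!-- POST to /solr/<collection>/update to force a merge, e.g. down to one segment -->
<optimize waitSearcher="true" maxSegments="1"/>
```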
threads is low so it shouldn't
have much impact on query perf.
It's theoretically possible that the background merge
will merge down to one segment, so you still need at
least as much free space on your disk as your index
occupies.
Best,
Erick
On Wed, Mar 16, 2016 at 10:07 AM, Rallavagu wrote:
size of 1000
entries should take around 20MB (assuming single shard)
Thanks,
Emir
On 18.03.2016 17:02, Rallavagu wrote:
On 3/18/16 8:56 AM, Emir Arnautovic wrote:
Problem starts with autowarmCount="5000" - that executes 5000 queries
when new searcher is created and as queries ar
documents with all fields. I could not
reproduce OOM. I understand that I need to reduce cache sizes but
wondering what conditions could have caused OOM so I can keep a watch.
Thanks
Thanks,
Emir
On 18.03.2016 15:43, Rallavagu wrote:
Thanks for the recommendations Shawn. Those are the
Solr 5.4 embedded Jetty
Is it the right assumption that a document returned in response to a
query is cached in the "Document Cache"?
Essentially, if I request an entry like /select?q=id:
will it be cached in the "Document Cache"? If yes, what is the TTL?
Thanks in advance
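The documentCache is configured in solrconfig.xml, and it has no TTL as such: entries live until the searcher that owns the cache is closed, and it is never autowarmed. A sketch with illustrative sizes:

```xml
<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>
<!-- load only the stored fields a request actually asks for -->
<enableLazyFieldLoading>true</enableLazyFieldLoading>
```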
searcher due to long running query
that might be potentially causing OOM? Was trying to reproduce but could
not so far.
Here is the filter cache config
autowarmCount="1000"/>
Query Results cache
autowarmCount="5000"/>
On 3/18/16 7:31 AM, Shawn Heisey wrote:
On 3/
false
Thanks.
The third video down here:
http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html
is Mike's visualization of the automatic merging process.
Best,
Erick
On Wed, Mar 16, 2016 at 9:40 AM, Rallavagu wrote:
All,
Solr 5.4 with embedded Jetty (4G
It seems
that it is either configured to be huge, or you have big documents and
are retrieving all fields, or don't have lazy field loading set to true.
Can you please share your document cache config and heap settings?
Thanks,
Emir
On 17.03.2016 22:24, Rallavagu wrote:
comments in line...
On 3/17/16 2
Another item to look into is increasing the zookeeper timeout in Solr's
solr.xml. This would help with timeouts caused by long GC pauses.
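The zookeeper timeout referred to here is zkClientTimeout in solr.xml; a sketch (the value is illustrative):

```xml
<solr>
  <solrcloud>
    <!-- milliseconds; raise this if long GC pauses are expiring ZK sessions -->
    <int name="zkClientTimeout">30000</int>
  </solrcloud>
</solr>
```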
On 11/3/15 9:12 AM, Björn Häuser wrote:
Hi,
thank you for your answer.
1> No OOM hit, the log does not contain any hint of that. Also Solr
wasn't restarted.
Also, to give you more idea, as per the
following document I am testing "Index heavy, Query heavy" situation.
https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Thanks
Best,
Erick
On Fri, Oct 30, 2015 at 8:28 AM, Rallavagu wrote:
4.10.4 solr cloud, 3 zk quorum, jdk 8
autocommit: 15 sec, softcommit: 2 min
Under heavy indexing load with above settings, i have seen tlog growing
(into GB). After the updates stopped coming in, it settles down and
takes a while to recover before cloud becomes "green".
With 15 second autocommit
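The intervals above correspond to the updateHandler section of solrconfig.xml; a sketch of the configuration being described:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog/>
  <autoCommit>
    <maxTime>15000</maxTime>            <!-- 15 sec hard commit; truncates the tlog -->
    <openSearcher>false</openSearcher>  <!-- skip searcher warming on hard commit -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>120000</maxTime>           <!-- 2 min visibility -->
  </autoSoftCommit>
</updateHandler>
```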
On Oct 29, 2015, at 1:47 PM, Rallavagu <rallav...@gmail.com> wrote:
In general, is there a built-in data handler to index pictures (essentially,
EXIF and other data embedded in an image)? If not, what is the best practice to
do so? Thanks.
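There is no image-specific handler built in, but the ExtractingRequestHandler (Solr Cell, backed by Tika) can extract EXIF and other embedded metadata from images. A sketch of enabling it in solrconfig.xml (the uprefix/fmap values are illustrative):

```xml
<requestHandler name="/update/extract" startup="lazy"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <!-- keep unknown Tika/EXIF fields by prefixing them instead of failing -->
    <str name="uprefix">attr_</str>
    <str name="fmap.content">text</str>
  </lst>
</requestHandler>
```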
On 10/28/15 5:41 PM, Shawn Heisey wrote:
On 10/28/2015 5:11 PM, Rallavagu wrote:
Seeing very high CPU during this time and very high warmup times. During
this time, there were plenty of these errors logged. So, trying to find
out possible causes for this to occur. Could it be disk I/O issues
Also, was it this thread that went OOM, and what could have caused it?
The heap was doing fine and the server was live and running.
On 10/28/15 3:57 PM, Shawn Heisey wrote:
On 10/28/2015 2:06 PM, Rallavagu wrote:
(writing to disk).
On 10/28/15 3:57 PM, Shawn Heisey wrote:
Solr 4.6.1, cloud
Seeing following commit errors.
[commitScheduler-19-thread-1] ERROR org.apache.solr.update.CommitTracker
– auto commit error...:java.lang.IllegalStateException: this writer hit
an OutOfMemoryError; cannot commit at
org.apache.lucene.index.IndexWriter.prepareCommitInternal(In
Is it related to this config?
Salonee Rege
USC Viterbi School of Engineering
University of Southern California
Master of Computer Science - Student
Computer Science - B.E
salon...@usc.edu | 619-709-6756
On Tue, Oct 27, 2015 at 10:47 AM, Rallavagu <rallav...@gmail.com> wrote:
Could you please share your query? You could use "wt=json" query
parameter to receive JSON formatted results if that is what you are
looking for.
On 10/27/15 10:44 AM, Salonee Rege wrote:
Hello,
We are trying to query the books.json that we have posted to Solr,
but when we try to specifically
Segments that have been created (closed, actually) after the last
commit are _not_ read at all until the next searcher is opened via
another commit. Nothing is done with these new segments before the new
searcher is opened, which you control with your commit strategy.
I see. Thanks for the insight.
Best,
Erick
There are flushes at commit, but these have nothing to do with
MMapDirectory. So the question is really moot ;)
Best,
Erick
On Mon, Oct 26, 2015 at 5:47 PM, Rallavagu wrote:
All,
Are memory mapped files (mmap) flushed to disk during "hard commit"? If
yes, should we disable OS level (Linux for example) memory mapped flush?
I am referring to following for mmap files for Lucene/Solr
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
That's all there is for 4.6.
Best,
Erick
On Thu, Oct 22, 2015 at 10:48 AM, Rallavagu Kon wrote:
Erick,
Indexing happening via Solr cloud server. This thread was from the leader. Some
followers show symptoms of high CPU during this time. You think this is
from locking? What is the thread that is holding the lock?
sing
> them.
>
> Best,
> Erick
>
Solr 4.6.1 cloud
Looking into a thread dump, 4-5 threads are causing CPU to go very high
and causing issues. These are Tomcat's http threads and they are
locking. Can anybody help me understand what is going on here? I see
that incoming connections are coming in for updates and they are being
passed on to St
Solr 4.6.1, 4 node cloud with 3 zk
I see the following thread as blocked. Could somebody please help me
understand what is going on here and how will it impact solr cloud? All
four of these threads blocked. Thanks.
"coreZkRegister-1-thread-1" id=74 idx=0x108 tid=32162 prio=5 alive,
parked, n
Are you unable to reduce the indexing rate?
Best,
Erick
On Tue, Oct 13, 2015 at 9:08 AM, Rallavagu wrote:
Also, we have increased number of connections per host from default (20) to
100 for http thread pool to communicate with other nodes. Could this have
caused the issues as it can now spi
Great. Thanks Erick.
On 10/13/15 5:39 PM, Erick Erickson wrote:
More than expected, guaranteed. As long as at least one replica in a
shard is active, all queries should succeed. Maybe more slowly, but
they should succeed.
Best,
Erick
On Tue, Oct 13, 2015 at 4:25 PM, Rallavagu wrote:
It
It appears that when a node that is in "recovery" mode is queried, it
defers the query to the leader instead of serving it locally. Is this
the expected behavior? Thanks.
I have to ask why indexing at
such a furious rate is
required that you're hitting this. Are you unable to reduce the indexing rate?
Best,
Erick
indexing load? There were some
inefficiencies that caused followers to work a lot harder than the
leader, and the leader had to spin off a bunch of threads to send
updates to followers. That's fixed in the 5.2 release.
Best,
Erick
On Tue, Oct 13, 2015 at 8:40 AM, Rallavagu wrote:
Please he
Please help me understand what is going on with this thread.
Solr 4.6.1, single shard, 4 node cluster, 3 node zk. Running on tomcat
with 500 threads.
There are 47 threads overall and the designated leader becomes
unresponsive though it shows "green" from the cloud perspective. This is
causing issues.
see this is happening.
Best,
Erick
On Thu, Oct 8, 2015 at 12:23 PM, Rallavagu wrote:
As a follow up.
Eventually the tlog file is disappeared (could not track the time it took to
clear out completely). However, following messages were noticed in
follower's log.
Tlogs can grow to very large sizes if there are very long hard
commit intervals, but I don't
see how that interval would be different on the leader and follower.
So color me puzzled.
Best,
Erick
On Wed, Oct 7, 2015 at 8:09 PM, Rallavagu wrote:
Thanks Erick.
Eventually, followers caught up but the 14G
On Wed, Oct 7, 2015 at 7:39 PM, Rallavagu wrote:
Solr 4.6.1, single shard, 4 node cloud, 3 node zk
Like to understand the behavior better when a large number of updates
happen on the leader and it generates a huge tlog (14G sometimes in my
case) on other nodes. At the same time the leader's tlog is a few KB.
So, what is the rate at which the changes from
It is a Java thread though. Does that require increasing OS level
thread limits?
On 10/6/15 6:21 PM, Mark Miller wrote:
If it's a thread and you have plenty of RAM and the heap is fine, have you
checked raising OS thread limits?
- Mark
On Tue, Oct 6, 2015 at 4:54 PM Rallavagu wrote:
GC logging
be what is happening with
the heap.
- Mark
On Tue, Oct 6, 2015 at 4:04 PM Rallavagu wrote:
Mark - currently 5.3 is being evaluated for upgrade purposes and
hopefully get there sooner. Meanwhile, following exception is noted from
logs during updates
ERROR org.apache.solr.update.CommitTracker – auto commit error...
3.2. Given the pace of
SolrCloud, you are dealing with something fairly ancient and so it will be
harder to find help with older issues most likely.
- Mark
On Mon, Oct 5, 2015 at 12:46 PM Rallavagu wrote:
Any takers on this? Any kinda clue would help. Thanks.
On 10/4/15 10:14 AM, Rallavagu wrote:
As there were no responses so far, I assume that this is not a very
common issue that folks come across. So, I went into source (4.6.1) to
see if I can figure out what could be the cause.
The thread dump suggests that "recoveryStrat.join()" is where things are
holding up.
I wonder why/how cancelRecovery would take so long that around 870
threads would be waiting on it. Is it possible that ZK is not
responding, or could something else like operating system resources
cause this? Thanks.
On 10/2/15 4:17 PM,
/vm/RNI.c2java(J)V(Native Method)
On 10/2/15 4:12 PM, Rallavagu wrote:
Solr 4.6.1 on Tomcat 7, single shard 4 node cloud with 3 node zookeeper.
During updates, some nodes go to very high CPU and become unavailable.
The thread dump shows the following thread is blocking 870 threads,
which explains the high CPU. Any clues on where to look?
"Thread-56848" id=79207 id
Thanks for the insight into this, Erick.
On 10/2/15 8:58 AM, Erick Erickson wrote:
Rallavagu:
Absent nodes going up and down or otherwise changing state, Zookeeper
isn't involved in the normal operations of Solr (adding docs,
querying, all that). That said, things that change the
what are those work entries?
Thanks
On 10/1/15 10:58 PM, Shawn Heisey wrote:
On 10/1/2015 1:26 PM, Rallavagu wrote:
Awesome. This is what I was looking for. Will try these. Thanks.
On 10/1/15 1:31 PM, Shawn Heisey wrote:
On 10/1/2015 12:39 PM, Rallavagu wrote:
Solr 4.6.1 single shard with 4 nodes. Zookeeper 3.4.5 ensemble of 3.
See following errors in ZK and Solr and they are connected.
When I see the following error in Zookeeper,
unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Packet len11823809 is out of range!
Thanks Shawn. This is good data.
On 10/1/15 11:43 AM, Shawn Heisey wrote:
On 10/1/2015 11:50 AM, Rallavagu wrote:
Thanks for the response Andrea.
Assuming that Solr has its own thread pool, it appears that
"PoolingClientConnectionManager" has a maximum of 20 threads per host by
default. Is there a way to increase this to handle heavy update traffic?
Thanks.
On 10/1/15 11:05 AM, Andrea Gazzarini
Solr 4.6.1, single Shard, cloud with 4 nodes
Solr is running on Tomcat configured with 200 threads for thread pool.
As Solr uses "org.apache.http.impl.conn.PoolingClientConnectionManager"
for replication, my question is: do Solr threads use connections from
the Tomcat thread pool or do they create their own?
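On the related point about update connections between nodes: solr.xml exposes per-host limits for the update shard handler; a sketch, assuming the Solr 4.x/5.x solr.xml format (values illustrative):

```xml
<solr>
  <updateshardhandler>
    <!-- connections used when distributing updates to other nodes -->
    <int name="maxUpdateConnectionsPerHost">100</int>
    <int name="maxUpdateConnections">10000</int>
  </updateshardhandler>
</solr>
```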
Best,
Erick
On Mon, Aug 24, 2015 at 2:47 AM, Rallavagu wrote:
As a follow up, the default is set to "NRTCachingDirectoryFactory" for
the DirectoryFactory, not MMapDirectory. It is mentioned that
NRTCachingDirectoryFactory "caches small files in memory for better NRT
performance".
that consist of only the docs that belong on
that shard. You can get nearly linear throughput with increasing
numbers of shards this way.
Best,
Erick
On Tue, Aug 18, 2015 at 9:03 AM, Rallavagu wrote:
Thanks Shawn.
All participating cloud nodes are running Tomcat and as you suggested will
review
One other item to check is non heap memory usage. This can be monitored
from admin page.
On 8/23/15 11:48 PM, Pavel Hladik wrote:
Hi,
we have a Solr 5.2.1 with 9 cores and one of them has 140M docs. Can you
please recommend tuning of those GC parameters? Performance is not an
issue, but sometimes
I suppose?
Thanks
On 8/18/15 8:28 AM, Shawn Heisey wrote:
On 8/18/2015 8:18 AM, Rallavagu wrote:
Thanks for the response. Does this cache behavior influence the delay
in catching up with the cloud? How can we explain Solr cloud
replication, and what are the options to monitor and take proactive
action (such
On 8/17/2015 10:53 PM, Rallavagu wrote:
Also, I have noticed that the memory consumption goes very high. For
instance, each node is configured with 48G memory while java heap is
configured with 12G. The available physical memory is consumed almost
46G and the heap size is well within the limits (at
By the time the last email was sent, other node also caught up. Makes me
wonder what happened and how does this work.
Thanks
On 8/17/15 9:53 PM, Rallavagu wrote:
response inline..
On 8/17/15 8:40 PM, Erick Erickson wrote:
Is this 4 shards? Two shards each with a leader and follower? Details
https://wiki.apache.org/solr/UsingMailingLists
Sorry for not being very clear to start with. Hope the provided
information would help.
Thanks
Best,
Erick
On Mon, Aug 17, 2015 at 6:19 PM, Rallavagu wrote:
Hello,
I have 4 nodes participating in Solr cloud. After indexing about 2 mil
documents, only two nodes are "Active" (green) while the other two are
shown as "down". How can I "initialize" the replication from the leader
so the other two nodes receive updates?
Thanks
All,
What is the best practice or guideline towards considering multiple
collections particularly in the solr cloud env?
Thanks
Srikanth
Sorry. I should have mentioned earlier. I have removed the original host
name on purpose. Thanks.
On 4/9/14, 1:42 PM, Siegfried Goeschl wrote:
Hi folks,
the URL looks wrong (misconfigured)
http://:8080/solr/collection1
Cheers,
Siegfried Goeschl
On 09 Apr 2014, at 14:28, Rallavagu wrote
No route to host error.
Thanks,
Greg
On Apr 9, 2014, at 3:28 PM, Rallavagu wrote:
All,
I see the following error in the log file. The host that it is trying
to find is itself. Wondering if anybody has experienced this before, or
if any other info would be helpful. Thanks.
709703139 [http-bio-8080-exec-43] ERROR
org.apache.solr.update.SolrCmdDistributor –
org.apache.solr.client.sol
indexing as well.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Mar 6, 2014 at 9:57 AM, Rallavagu wrote:
Yeah, I have thought about spitting out JSON and running it against Solr
using parallel HTTP threads separately. Thanks.
On 3/5/14, 6:46 PM, Susheel Kumar wrote: