OK, thanks for wrapping this up!
On Mon, Aug 31, 2015 at 10:08 AM, Rallavagu wrote:
Erick,

Apologies for missing out on the status of the indexing (replication)
issues, as I originally started this thread. After implementing
CloudSolrServer instead of ConcurrentUpdateSolrServer, things were much
better. I simply wanted to follow up on understanding the memory
behavior better, though.
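For anyone following along, a minimal SolrJ sketch of the CloudSolrServer
setup (the ZooKeeper hosts, collection name, and field names below are
placeholders, not the actual configuration):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudIndexerSketch {
    public static void main(String[] args) throws Exception {
        // CloudSolrServer reads cluster state from ZooKeeper and routes each
        // update to the correct shard leader, unlike ConcurrentUpdateSolrServer,
        // which streams everything at one fixed URL.
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        server.setDefaultCollection("collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "example-1");
        doc.addField("title_t", "example document");
        server.add(doc);

        server.commit();   // hard commit; normally left to autoCommit in solrconfig.xml
        server.shutdown();
    }
}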
bq: As a follow up, the default is set to "NRTCachingDirectoryFactory"
for DirectoryFactory but not MMapDirectory. It is mentioned that
NRTCachingDirectoryFactory "caches small files in memory for better
NRT performance".
NRTCachingDirectoryFactory uses MMapDirectory under the covers as well.
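Roughly what that looks like at the Lucene level (a sketch against the
Lucene 5.x classes; the path and the 4 MB / 48 MB thresholds are
illustrative, not necessarily what your config uses):

import java.nio.file.Paths;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MMapDirectory;
import org.apache.lucene.store.NRTCachingDirectory;

public class NrtCachingSketch {
    public static void main(String[] args) throws Exception {
        // The delegate does the real I/O. On 64-bit JVMs this is MMapDirectory,
        // which maps index files into virtual address space served from the OS
        // page cache -- that is what shows up as "used" physical memory, not heap.
        Directory mmap = new MMapDirectory(Paths.get("/var/solr/data/collection1/index"));

        // NRTCachingDirectory only holds small, freshly flushed segments in heap
        // RAM to make near-real-time reopens cheap; everything else falls
        // through to the wrapped MMapDirectory.
        Directory dir = new NRTCachingDirectory(mmap, 4.0 /* maxMergeSizeMB */, 48.0 /* maxCachedMB */);

        System.out.println(dir);
        dir.close();
    }
}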
As a follow up, the default DirectoryFactory is set to
"NRTCachingDirectoryFactory", not MMapDirectory. It is mentioned that
NRTCachingDirectoryFactory "caches small files in memory for better NRT
performance".

Wondering if this would also consume physical memory to the amount of M…
Thanks for the response. Will take a look at using CloudSolrServer for
updates and review the tlog mechanism.
On 8/18/15 9:29 AM, Erick Erickson wrote:
Couple of things:

1> Here's an excellent backgrounder for MMapDirectory, which is what makes
it appear that Solr is consuming all the physical memory:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

2> It's possible that your transaction log was huge. Perhaps not likely…
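On point 2>, the transaction log is rolled over at each hard commit, so if
hard commits are rare during bulk indexing the active tlog grows and startup
replay gets slow. A hedged SolrJ sketch of issuing an explicit hard commit
(the ZooKeeper hosts and collection are placeholders; normally the autoCommit
settings in solrconfig.xml take care of this):

import org.apache.solr.client.solrj.impl.CloudSolrServer;

public class HardCommitSketch {
    public static void main(String[] args) throws Exception {
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        server.setDefaultCollection("collection1");

        // waitFlush=true, waitSearcher=false, softCommit=false: a hard commit
        // fsyncs segments and lets Solr roll over the current transaction log,
        // bounding how much has to be replayed after a restart.
        server.commit(true, false, false);
        server.shutdown();
    }
}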
Thanks Shawn. All participating cloud nodes are running Tomcat and, as you
suggested, I will review the number of threads and increase them as needed.
Essentially, what I noticed was that two of the four nodes caught up with
"bulk" updates instantly while the other two nodes took almost 3 hours to
catch up.
On 8/18/2015 8:18 AM, Rallavagu wrote:
Thanks for the response. Does this cache behavior influence the delay in
catching up with the cloud? How can we explain SolrCloud replication, and
what are the options to monitor and take proactive action (such as
initializing, pausing, etc.) if needed?
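One hypothetical way to watch this from SolrJ is to read the cluster state
the cloud client already tracks and check each replica's state (a sketch
against the SolrJ 4.x/5.x ZkStateReader API; the ZooKeeper hosts and
collection name are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.ZkStateReader;

public class ReplicaStateSketch {
    public static void main(String[] args) throws Exception {
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        server.connect();   // initialize the ZooKeeper state reader

        ClusterState state = server.getZkStateReader().getClusterState();
        for (Slice slice : state.getSlices("collection1")) {
            for (Replica replica : slice.getReplicas()) {
                // state is typically one of: active, recovering, down, recovery_failed
                System.out.println(slice.getName() + "/" + replica.getName()
                        + " -> " + replica.getStr(ZkStateReader.STATE_PROP));
            }
        }
        server.shutdown();
    }
}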
On 8/18/15 5:57 AM, Shawn Heisey wrote:
On 8/17/2015 10:53 PM, Rallavagu wrote:
> Also, I have noticed that the memory consumption goes very high. For
> instance, each node is configured with 48G memory while the Java heap is
> configured with 12G. Almost 46G of the available physical memory is
> consumed, and the heap size is well within the limit.
By the time the last email was sent, the other node had also caught up.
Makes me wonder what happened and how this works.

Thanks
On 8/17/15 9:53 PM, Rallavagu wrote:
Response inline..

On 8/17/15 8:40 PM, Erick Erickson wrote:
> Is this 4 shards? Two shards each with a leader and follower? Details
> matter a lot.

It is a single collection, single shard.

> What, if anything, is in the log file for the down nodes? I'm assuming
> that when you start, all the nodes are active.
Is this 4 shards? Two shards each with a leader and follower? Details
matter a lot.

What, if anything, is in the log file for the down nodes? I'm assuming
that when you start, all the nodes are active.
You might review:
http://wiki.apache.org/solr/UsingMailingLists
Best,
Erick