> &f.cs_rep.separator=%5E" --data-binary @- -H 'Content-type:text/plain; charset=utf-8'
> EnD)

From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, April 08, 2014 2:21 PM
To: solr-user@lucene.apache.org
Subject: Re: solr4 performance question
What do you have for your _softcommit_ settings in solrconfig.xml? I'm
guessing you're using SolrJ or similar, but the solrconfig settings
will trip a commit as well.

For that matter, what are all your commit settings in solrconfig.xml,
both hard and soft?

Best,
Erick
On Tue, Apr 8, 2014 at 10:28
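For reference, the settings Erick is asking about live in solrconfig.xml as the
<autoCommit> and <autoSoftCommit> blocks. A minimal sketch with illustrative
values (the element and property names below follow the stock Solr 4 example
config, not the poster's actual file):

  <!-- Hard commit: flushes pending documents to stable storage.
       With openSearcher=false it does not make them visible to queries. -->
  <autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

  <!-- Soft commit: makes new documents visible by opening a new searcher,
       which also throws away the per-searcher caches. -->
  <autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
  </autoSoftCommit>

A maxTime of -1 disables the automatic soft commit. This is what Erick is
probing for: if something (SolrJ, the update URL, or these blocks) triggers
commits too often, every new searcher invalidates the caches and queries slow
down.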
Hi Joshi;

Click the Plugins/Stats section under your collection in the Solr Admin UI.
You will see the cache statistics for the different types of caches. hitratio
and evictions are good statistics to look at first. On the other hand, you
should read here: https://wiki.apache.org/solr/SolrPerformanceFac
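The same statistics can also be pulled over HTTP rather than through the UI; a
sketch, assuming a local node and a collection named collection1 (substitute
your own host and collection):

  curl 'http://localhost:8983/solr/collection1/admin/mbeans?stats=true&cat=CACHE&wt=json'

The response lists each cache (filterCache, queryResultCache, documentCache, ...)
with its lookups, hits, hitratio, evictions and size, which is enough to tell
whether the caches are sized sensibly or being thrown away too often.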
Hi,

We have a 10-node SolrCloud cluster (5 shards, 2 replicas) with a 30 GB JVM heap
on 60 GB machines and 40 GB of index.

We're constantly noticing that Solr queries take longer while an update (with
the commit=false setting) is in progress. A query which usually takes 0.5 seconds
can take up to 2 minutes while an update is in progress.
Sent: Thursday, February 27, 2014 3:45 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance
You would get more room for disk cache by reducing your large heap.
Otherwise, you'd have to add more RAM to your systems or shard your index
across more nodes to gain more RAM that way.
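To put rough numbers on that for the setup described in this thread (60 GB
machines, 30 GB heap, roughly 40 GB of index per node): 60 GB minus a 30 GB heap
leaves at most ~30 GB for the OS page cache, less whatever other processes use,
so a ~40 GB index can never be fully cached and some queries will always hit
disk. Halving the heap (if GC behavior allows it) would leave roughly 45 GB for
the page cache, enough to keep the whole index hot.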
On 2/27/2014 1:09 PM, Joshi, Shital wrote:
> If page cache is the issue, what is the solution?

What operating system are you using, and what tool are you looking at to
see your memory usage? Can you share a screenshot with us? Use a file
sharing website for that - the list generally doesn't like attachments.
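For example, on Linux (an assumption; the operating system hasn't been stated
yet) the split between JVM heap and page cache is visible with standard tools:

  free -g                      # the buffers/cached figure is the page cache holding index files
  ps aux --sort=-%mem | head   # per-process resident memory; the Solr JVM should top the list

On a healthy node most of the RAM not claimed by the heap shows up as cached
rather than free.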
Hi Michael,
If page cache is the issue, what is the solution?
Thanks!
-Original Message-
From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com]
Sent: Monday, February 24, 2014 9:54 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance
I'm not sure how y
appen?
-Original Message-
From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com]
Sent: Friday, February 21, 2014 5:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance
It could be that your query is churning the page cache on that node
sometimes, so Solr pauses so t
> ally quick. The only thing that stands out is that the
> shard on which the query takes a long time has a couple million more documents than
> the other shards.
>
> -Original Message-
> From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com]
> Sent: Thursday, February 20, 2014 5:26 PM
>
-Original Message-
From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com]
Sent: Thursday, February 20, 2014 5:26 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr4 performance
Hi,

As for your first question, setting openSearcher to true means you will see
the new documents after each hard commit.
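For context, openSearcher is a child of the <autoCommit> block in
solrconfig.xml; a small sketch (the maxTime value is illustrative):

  <autoCommit>
    <maxTime>600000</maxTime>
    <!-- true:  each hard commit also opens a new searcher, so newly
                committed documents become visible and caches are rebuilt.
         false: the commit only flushes to disk; visibility is left to
                soft commits or explicit commits. -->
    <openSearcher>true</openSearcher>
  </autoCommit>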
Hi!

I have a few other questions regarding the Solr4 performance issue we're facing.

We're committing data to Solr4 every ~30 seconds (up to 20K rows). We use
commit=false in the update URL. We have only the hard commit setting in the
Solr4 config:

${solr.autoCommit.maxTime:60}
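A sketch of the kind of update request being described (commit=false, data
streamed in the body), modeled on the curl fragment quoted at the top of the
thread; the host, collection name, endpoint path and input file are
placeholders, not the poster's actual values:

  curl 'http://localhost:8983/solr/collection1/update/csv?commit=false&f.cs_rep.separator=%5E' \
       --data-binary @rows.csv \
       -H 'Content-type:text/plain; charset=utf-8'

With commit=false on every request, visibility of the new documents is left to
the configured autoCommit (subject to its openSearcher setting) or to an
explicit commit.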
On 2/18/2014 2:14 PM, Joshi, Shital wrote:
> Thanks much for all suggestions. We're looking into reducing the allocated heap
> size of the Solr4 JVM.
> We're using NRTCachingDirectoryFactory. Does it use MMapDirectory internally?
> Can someone please confirm?

In Solr, NRTCachingDirectory does indeed use MMapDirectory internally.
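The relevant line, as it appears in the stock Solr 4 example solrconfig.xml
(assuming the poster hasn't overridden it):

  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

NRTCachingDirectoryFactory wraps the standard factory, which on a 64-bit JVM
resolves to MMapDirectory; small freshly flushed segments are held in a small
RAM buffer for near-real-time reads while everything else is memory-mapped, so
the OS page cache still does the heavy lifting.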
hours for 700 mil documents)
Thanks!
-Original Message-
From: Roman Chyla [mailto:roman.ch...@gmail.com]
Sent: Wednesday, February 12, 2014 3:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance
And perhaps one other, but very pertinent, recommendation is: allocate only
as little heap as is necessary. By allocating more, you are working against
the OS caching. To know how much is enough is a bit tricky, though.
Best,
roman
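A sketch of what that looks like when starting Solr 4 from the example
directory; the heap numbers below are purely illustrative and would have to be
validated against real GC logs and cache hit rates, they are not a
recommendation for this cluster:

  # Heap bounded by -Xms/-Xmx; everything the JVM does not take stays
  # available to the OS page cache, which is what keeps the index files hot.
  java -Xms8g -Xmx8g -jar start.jar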
On Wed, Feb 12, 2014 at 2:56 PM, Shawn Heisey wrote:
On 2/12/2014 12:07 PM, Greg Walters wrote:
> Take a look at
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html as it's
> a pretty decent explanation of memory mapped files. I don't believe that the
> default configuration for solr is to use MMapDirectory but even if it does my
Does Solr4 load the entire index in a memory-mapped file? What is the eviction
policy of this memory-mapped file? Can we control it?

_
From: Joshi, Shital [Tech]
Sent: Wednesday, February 05, 2014 12:00 PM
To: 'solr-user@lucene.apache.org'
Subject: Solr4 performance
Hi,

We have a SolrCloud cluster (5 shards and 2 replicas) on 10 dynamic compute boxes
(cloud). We're using local disk (/local/data) to store the Solr index files. All
hosts have 60 GB RAM and the Solr4 JVMs are running with a max 30 GB heap size.
So far we have 470 million documents. We are using custom shard