New gen should be big enough to handle all allocations that have the lifetime of
a single request, keeping in mind that you'll have multiple concurrent requests. If
new gen routinely overflows, short-lived objects get promoted into the old gen.
Yes, you need to go to CMS.
I have usually seen the hit
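For concreteness, the new gen sizing and CMS advice above translates into JVM
options roughly like these (the sizes and the Jetty-style start.jar launch are
placeholders and assumptions, not recommendations for any particular install):

  # Sketch only: heap and new gen sizes must be tuned against your own request load.
  # -Xmn fixes the new (young) generation size; the two GC flags switch the
  # old generation over to the CMS collector mentioned above.
  java -Xms8g -Xmx8g -Xmn2g \
       -XX:+UseParNewGC \
       -XX:+UseConcMarkSweepGC \
       -jar start.jar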
Actually, I haven't ever seen a PermGen with 2.8 GB.
So you must have a very special use case with SOLR.
For my little index with 60 million docs and 170GB index size I gave
PermGen 82 MB and it is only using 50.6 MB for a single VM.
Permanent Generation (PermGen) is completely separate from the
Thanks, Walter
Hit rate on the document caches is close to 70-80%, and the filter caches get
close to a 100% hit rate (since most of our queries filter on the same fields but
have a different q parameter). The query result cache is not of great
importance to me, since the hit rate there is almost negligible.
Does it
As I suggested, you have a couple of fields that do not ignore stop words, so
the stop word must be present in at least one of those fields:
(number:of^3.0 | all_code:of^2.0)
The solution would be to remove the "number" and "all_code" fields from qf.
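If it helps, and assuming an (e)dismax handler, that would look something like
this in the handler defaults (the remaining fields and boosts are placeholders,
not taken from your config):

  <!-- Sketch: "number" and "all_code" dropped from qf so a stopword can no
       longer match there; the other fields and boosts are placeholders. -->
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="defType">edismax</str>
      <str name="qf">title^3.0 description^2.0 text</str>
    </lst>
  </requestHandler>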
-- Jack Krupansky
-----Original Message-----
Jack,
Thanks for the reply.
Yes, your observation is right. I see stopwords are not being ignored at
query time.
Say I'm searching for 'bank of america'. I expect 'of' not to be
part of the search.
But here I see 'of' is being sent. The same happens with the 'OR' and
'AND' operators.
An LRU cache will always fill up the old generation. The objects it ejects are old,
and old objects usually live in the old generation.
Increasing the heap size will not eliminate this. It will make major, stop-the-world
collections longer.
Increase the new generation size until the rate of old gen increase
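One way to watch that rate while you adjust the new gen size is to turn on GC
logging (the flags are the Java 6/7-era ones these Solr versions run on; the
sizes and the log path are placeholders):

  # Sketch: log every collection so old gen growth between major GCs is visible.
  java -Xmx30g -Xmn4g \
       -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
       -Xloggc:/var/log/solr/gc.log \
       -jar start.jar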
Hi
I have a very large index for a few collections, and when they are being
queried, I see the old gen space close to 100% usage all the time. The
system becomes extremely slow due to GC activity right after that, and it
gets into this cycle very often.
I have given Solr close to 30G of heap in a 65 G
The heartbeat that keeps the node alive is the connection it maintains with
ZooKeeper.
We don’t currently have anything built in that will actively make sure each
node can serve queries and remove it from clusterstate.json if it cannot. If a
replica is maintaining its connection with ZooKeepe
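One workaround, purely a sketch and not something SolrCloud does for you, is a
load-balancer health check against the ping handler in solrconfig.xml, so a
node that keeps its ZooKeeper session but can no longer answer queries gets
pulled out of the query path (the healthcheckFile name is just an example):

  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">*:*</str>
    </lst>
    <!-- optional: ops can take a node out of rotation by removing this file -->
    <str name="healthcheckFile">server-enabled.txt</str>
  </requestHandler>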
Hi,
I'm using Solr 4.0 Final (yes, I know I need to upgrade)
I'm getting this error:
SEVERE: org.apache.solr.common.SolrException: no field name specified in
query and no default specified via 'df' param
And I applied this fix: https://issues.apache.org/jira/browse/SOLR-3646
And unfortunately, t
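For reference, the 'df' param the error refers to is normally supplied in the
handler defaults in solrconfig.xml; the field name below is only an example,
not anything taken from the schema in question:

  <!-- Example only, inside the relevant requestHandler in solrconfig.xml;
       "text" stands in for whatever the default search field should be. -->
  <lst name="defaults">
    <str name="df">text</str>
  </lst>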
On 3/1/2014 6:53 PM, Jack Krupansky wrote:
NoSQL? To me it's just a marketing term, like Big Data.
Data store? That does imply support for persistence, as opposed to
mere caching, but mere persistence doesn't assure that the store is
suitable for use as a System of Record which is a requiremen
We had a brief SolrCloud outage this weekend when a node's SSD began to
fail but the node still appeared to be up to the rest of the SolrCloud
cluster (i.e. still green in clusterstate.json). Distributed queries that
reached this node would fail but whatever heartbeat keeps the node in the
clustrst
Thanks again for the info. Hopefully we find some more clues if it
continues to occur. The ops team are looking at alternative deployment
methods as well, so we might end up avoiding the issue altogether.
Ta,
Greg
On 28 February 2014 02:42, Shalin Shekhar Mangar wrote:
> I think it is just a si
If you are trying to serve results as users are typing, then you can use
EdgeNGramFilter (see
https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.EdgeNGramFilterFactory
).
Let's say you configure your field like this, as shown in the Solr wiki:
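Something roughly like the following; the type name and gram sizes are
illustrative rather than the wiki's exact values:

  <!-- Illustrative fieldType: edge n-grams at index time, plain analysis at query time. -->
  <fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- emit prefixes of 1 to 25 characters for each token -->
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

The query side deliberately skips the n-gram filter, so whatever the user has
typed so far is matched against the indexed prefixes.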
Erick,
Thanks a lot for the detailed explanation. That made things much clearer
for me.
On Sun, Mar 2, 2014 at 10:04 AM, Erick Erickson wrote:
> Well, in M/S setups the master shouldn't be searching at all,
> but that's a nit.
>
> That aside, whether the master has opened a new
> searcher or n
Hmmm, you _ought_ to be able to specify a relative path
in solrconfig_slave.xml:solrconfig.xml,x.xml,y.xml
But there's certainly the chance that this is hard-coded in
the query elevation component so I can't say that this'll work
with assurance.
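The relevant bit would look something like this; the component and file names
are just the stock ones, and whether a relative value resolves the way you want
is exactly the part I can't vouch for:

  <!-- Sketch only, stock elevation component config. -->
  <searchComponent name="elevator" class="solr.QueryElevationComponent">
    <str name="queryFieldType">string</str>
    <!-- a bare/relative name is looked up under the core's conf/ (or data) directory -->
    <str name="config-file">elevate.xml</str>
  </searchComponent>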
Best,
Erick
On Sun, Mar 2, 2014 at 6:14 AM, David
Well, in M/S setups the master shouldn't be searching at all,
but that's a nit.
That aside, whether the master has opened a new
searcher or not is irrelevant to what the slave replicates.
What _is_ relevant is whether any of the files on disk that
comprise the index (i.e. the segment files) hav
Perhaps you just need StatsComponent?
https://cwiki.apache.org/confluence/display/solr/The+Stats+Component
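Something like this (the core and field names are placeholders) returns
min/max/sum/mean/stddev for a numeric field without fetching any documents:

  http://localhost:8983/solr/collection1/select?q=*:*&rows=0&stats=true&stats.field=price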
On Sun, Mar 2, 2014 at 6:32 AM, Soumitra Kumar wrote:
> In general, yes.
>
> I don't know how SolrCloud serves a distributed query. What all does it do on the
> shards, and what on the server servi
Hi, sorry for the cross-post, but I got no response in the dev group, so I assumed I
posted in the wrong place.
I am using Solr 3.6 and am trying to automate the deployment of cores with a
custom elevate file. It is proving to be difficult, as most of the files (schema,
stop words, etc.) support absol