_Have_ they crashed due to OOMs? It’s quite normal for Java to create
a sawtooth pattern of memory consumption. If you attach, say, jconsole
to the running Solr and hit the GC button, does the memory drop back?
To answer your question, though, no there’s no reason memory should creep.
That said, t
Hi,
I use Solr for distributed indexing in cloud mode. I run Solr in Kubernetes
on a 72-core, 256 GB server. In the work I'm doing, I benchmark index times,
so we are constantly indexing and then deleting the collection, etc., for
accurate benchmarking at certain sizes in GB. In theory, this should not
...y big machines... Is this calculation even correct in new Solr
versions?
And we do have a bit of a restricted problem: our data are time-based logs and we
generally have a restricted search for the last 3 months, which will match, let's
say, 10M documents. How will this affect Solr's memory requirements? Will we
still need to have the whole inverted indexes in
On Aug 28, 2017, at 8:48 AM, Markus Jelsma wrote:

It is, unfortunately, not committed for 6.7.

-Original message-
From: Markus Jelsma
Sent: Monday 28th August 2017 17:46
To: solr-user@lucene.apache.org
Subject: RE: Solr memory leak

See https://issues.apache.org/jira/browse/SOLR-10506
Fixed for 7.0

Markus

-Original message-
From: Hendrik Haddorp
Sent: Monday 28th August 2017 17:42
To: solr-user@lucene.apache.org
Subject: Solr memory leak

Hi,
we noticed that triggering collection reloads on many collections has a
good chance to result in an OOM-Error. To investigate that further I did
a simple test:
- Start solr with a 2GB heap and 1GB Metaspace
- create a trivial collection with a few documents (I used only 2
fields a
Hi Steve,
Fluctuation is OK. 100% utilization for more than a moment is not :)
Not sure what tool(s) you use for monitoring your Solr servers, but look
under "JVM Pool Utilization" in SPM if you're using SPM.
Or this live demo of a Solr system:
* click on https://apps.sematext.com/demo to get in
Thanks Erick!! Your summary and the blog by Uwe (thank you too Uwe) are
very helpful.
A follow up question. I also noticed the "JVM-Memory" report off Solr's
home page is fluctuating. I expect some fluctuation, but it kinda worries
me when it fluctuates up / down in a range of 4 GB and maybe mo
You're doing nothing wrong, that particular bit of advice has
always needed a bit of explanation.
Solr (well, actually Lucene) uses MMapDirectory for much of
the index structure which uses the OS memory rather than
the JVM heap. See Uwe's excellent:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
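A back-of-the-envelope sketch of why this looks alarming but isn't (the numbers below are illustrative assumptions, not figures from this thread): tools like top attribute memory-mapped index pages to the Solr process, so the apparent process size is roughly the heap plus the mapped index, even though only the heap is actually JVM memory.

```shell
# Sketch: apparent process size vs. JVM heap (illustrative numbers).
heap_gb=10     # -Xmx: the only part that is actually JVM heap
mmap_gb=20     # index files mapped via MMapDirectory (OS page cache)
# top/ps can attribute both to the process:
echo "apparent size: $(( heap_gb + mmap_gb )) GB"   # no leak implied
```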
Hi folks,
My index size on disk (optimized) is 20 GB (single core, single index). I
have a system with 64 GB of RAM. I start Solr with 24 GB of RAM.
I have run load tests (up to 100 concurrent users) for hours, with each
user issuing unique searches (the same search is never executed again for
Meaning this was working fine until Solr 5.0.0? I'm quite new to Solr and I
only started to use it when Solr 5.0.0 was released.
Regards,
Edwin
On Fri, Apr 24, 2015 at 8:31 AM, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> So has anyone knows what is the issue with the "Heap Memory Usage" reading
> showing the value -1. Should I open an issue in Jira?
I have solr 4.8.1 and solr 5.0.0 servers, on the solr 4.8.1 servers
the core statistics have val
Hi,
So does anyone know what the issue is with the "Heap Memory Usage" reading
showing the value -1? Should I open an issue in Jira?
Regards,
Edwin
I see. I'm running on SolrCloud with 2 replicas, so I guess mine will
probably use much more when my system reaches millions of documents.
Regards,
Edwin
On 22 April 2015 at 20:47, Shawn Heisey wrote:
> On 4/22/2015 12:11 AM, Zheng Lin Edwin Yeo wrote:
> > Roughly how many collections and how
On 4/22/2015 12:11 AM, Zheng Lin Edwin Yeo wrote:
> Roughly how many collections and how much records do you have in your Solr?
>
> I have 8 collections with a total of roughly 227000 records, most of which
> are CSV records. One of my collections have 142000 records.
The core that shows 82MB for
Roughly how many collections and how many records do you have in your Solr?
I have 8 collections with a total of roughly 227,000 records, most of which
are CSV records. One of my collections has 142,000 records.
Regards,
Edwin
On 22 April 2015 at 13:49, Shawn Heisey wrote:
> On 4/21/2015 11:33
On 4/21/2015 11:33 PM, Zheng Lin Edwin Yeo wrote:
> I've got the amount of disk space used, but for the "Heap Memory Usage"
> reading, it is showing the value -1.
> Do we need to change any settings for it? When I check from the Windows
> Task Manager, it is showing about 300MB for shard1 and 150MB
Thanks Shawn.
I've got the amount of disk space used, but for the "Heap Memory Usage"
reading, it is showing the value -1.
Do we need to change any settings for it? When I check from the Windows
Task Manager, it is showing about 300MB for shard1 and 150MB for shard2.
But I suppose that is the usag
On 4/21/2015 7:48 PM, Zheng Lin Edwin Yeo wrote:
> Does anyone knows the way to check the accurate memory and disk usage for
> each individual collections that's running in Solr?
>
>
> I'm using Solr-5.0.0 with 3 instance of external zookeeper-3.4.6, running
> on 2 shards/
Solr's admin UI will t
Hi everyone,
Does anyone know of a way to check the accurate memory and disk usage for
each individual collection that's running in Solr?
I'm using Solr 5.0.0 with 3 instances of external zookeeper-3.4.6, running
on 2 shards/
Regards,
Edwin
On 3/27/2015 8:10 AM, phi...@free.fr wrote:
>> You must send indexing requests to Solr,
>
> Are you referring to posting queries to SOLR, or to something
> else?
>
>> If you can set up multiple threads or processes...
>
> How do you do that?
Yes, I am referring to posting requests to the
Can you update the stopwords.txt file, and then re-index the documents?
How?
Many thanks.
Philippe
- Original message -
From: "Shawn Heisey"
To: solr-user@lucene.apache.org
Sent: Friday, 27 March 2015 14:38:20
Subject: Re: Tweaking SOLR memory and cull facet words
On 3/2
On 3/27/2015 4:14 AM, phi...@free.fr wrote:
> Hi,
>
> my SOLR 5 solrconfig.xml file contains the following lines:
>
>
>on
> text
>100
>
>
> where the 'text' field contains thousands of words.
>
> When I start SOLR, the search engin
Hi,
my SOLR 5 solrconfig.xml file contains the following lines:
on
text
100
where the 'text' field contains thousands of words.
When I start SOLR, the search engine takes several minutes to index the words
in the 'text' field (although
And keep in mind that starving the OS of memory to
give it to the JVM is an anti-pattern, see Uwe's
excellent blog on MMapDirectory here:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
Best,
Erick
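The rule of thumb Erick describes can be written as a quick sanity check. The figures below (64 GB RAM, 24 GB heap, 20 GB index) are purely an example, echoing numbers mentioned elsewhere in this thread:

```shell
# Sanity check: does RAM left after the JVM heap still hold the index?
total_ram_gb=64; heap_gb=24; index_gb=20   # example figures
os_free_gb=$(( total_ram_gb - heap_gb ))
if [ "$os_free_gb" -ge "$index_gb" ]; then
  echo "ok: ${os_free_gb} GB left for the OS page cache"
else
  echo "warning: heap is starving the page cache"
fi
```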
On Wed, Jan 7, 2015 at 5:55 AM, Shawn Heisey wrote:
> On 1/6/2015 1:10 PM
On 1/6/2015 1:10 PM, Abhishek Sharma wrote:
> *Q* - I am forced to set Java Xmx as high as 3.5g for my solr app.. If i
> keep this low, my CPU hits 100% and response time for indexing increases a
> lot.. And i have hit OOM Error as well when this value is low..
>
> Is this too high? If so, how can
Abhishek Sharma [abhishe...@unbxd.com] wrote:
> *Q* - I am forced to set Java Xmx as high as 3.5g for my solr app.. If i
> keep this low, my CPU hits 100% and response time for indexing increases a
> lot.. And i have hit OOM Error as well when this value is low..
[...]
> 2. Index Size - 2 g
>
*Q* - I am forced to set Java Xmx as high as 3.5g for my Solr app. If I
keep this low, my CPU hits 100% and response time for indexing increases a
lot, and I have hit OOM errors as well when this value is low.
Is this too high? If so, how can I reduce it?
*Machine Details* 4 G RAM, SSD
*Solr
On Wed, 2014-10-29 at 23:37 +0100, Will Martin wrote:
> This command only touches OS level caches that hold pages destined for (or
> not) the swap cache. Its use means that disk will be hit on future requests,
> but in many instances the pages were headed for ejection anyway.
>
> It does not have
On 10/29/2014 1:05 PM, Toke Eskildsen wrote:
> We did have some problems on a 256GB machine churning terabytes of data
> through 40 concurrent Tika processes and into Solr. After some days,
> performance got really bad. When we did a top, we noticed that most of the
> time was used in the kernel
... from people who don't even research the matter.
-Original Message-
From: Toke Eskildsen [mailto:t...@statsbiblioteket.dk]
Sent: Wednesday, October 29, 2014 3:06 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr Memory Usage
Vijay Kokatnur [kokatnur.vi...@gmail.com] wrote:
Vijay Kokatnur [kokatnur.vi...@gmail.com] wrote:
> For the Solr Cloud setup, we are running a cron job with following command
> to clear out the inactive memory. It is working as expected. Even though
> the index size of Cloud is 146GB, the used memory is always below 55GB.
> Our response times
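The cron job's command is not shown in the excerpt; a job of the kind being described usually invokes the kernel's drop_caches interface. The following is a hypothetical sketch (the exact value and invocation are assumptions), and as discussed in this thread it mainly just forces future disk reads:

```shell
# Hypothetical sketch of such a cron job (requires root; not shown in
# the original mail). 1 = drop page cache; 3 = also dentries/inodes.
sync
echo 1 > /proc/sys/vm/drop_caches
```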
On 10/29/2014 11:43 AM, Vijay Kokatnur wrote:
> I am observing some weird behavior with how Solr is using memory. We are
> running both Solr and zookeeper on the same node. We tested memory
> settings on Solr Cloud Setup of 1 shard with 146GB index size, and 2 Shard
> Solr setup with 44GB index s
I am observing some weird behavior with how Solr is using memory. We are
running both Solr and zookeeper on the same node. We tested memory
settings on Solr Cloud Setup of 1 shard with 146GB index size, and 2 Shard
Solr setup with 44GB index size. Both are running on similar beefy
machines.
Af
On 8/1/2014 3:17 PM, Ethan wrote:
> Our SolrCloud setup : 3 Nodes with Zookeeper, 2 running SolrCloud.
>
> Current dataset size is 97GB, JVM is 10GB, but 6GB is used(for less garbage
> collection time). RAM is 96GB,
>
> Our softcommit is set to 2secs and hardcommit is set to 1 hour.
>
> We are sud
4.5.0.
We are trying to free memory by deleting data from 2010. But that hasn't
helped so far.
On Fri, Aug 1, 2014 at 3:13 PM, Otis Gospodnetic wrote:
> Which version of Solr?
>
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
> Solr & Elasticsearch Support * http://sematext.com/
Which version of Solr?
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Aug 1, 2014 at 11:17 PM, Ethan wrote:
> Our SolrCloud setup : 3 Nodes with Zookeeper, 2 running SolrCloud.
>
> Current dataset size is 97GB, JVM
Our SolrCloud setup : 3 Nodes with Zookeeper, 2 running SolrCloud.
Current dataset size is 97GB, JVM is 10GB, but 6GB is used (for less garbage
collection time). RAM is 96GB.
Our softcommit is set to 2secs and hardcommit is set to 1 hour.
We are suddenly seeing high disk and network IOs. During
thanks!
On Tue, Mar 18, 2014 at 4:37 PM, Erick Erickson wrote:
> Avishai:
>
> It sounds like you already understand mmap. Even so you might be
> interested in this excellent writeup of MMapDirectory and Lucene by
> Uwe:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>
On 3/18/2014 8:37 AM, Erick Erickson wrote:
> It sounds like you already understand mmap. Even so you might be
> interested in this excellent writeup of MMapDirectory and Lucene by
> Uwe: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
There is some actual bad memory report
Avishai:
It sounds like you already understand mmap. Even so you might be
interested in this excellent writeup of MMapDirectory and Lucene by
Uwe: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
Best,
Erick
On Tue, Mar 18, 2014 at 7:23 AM, Avishai Ish-Shalom
wrote:
> aha
aha! mmap explains it. thank you.
On Tue, Mar 18, 2014 at 3:11 PM, Shawn Heisey wrote:
> On 3/18/2014 5:30 AM, Avishai Ish-Shalom wrote:
> > My solr instances are configured with 10GB heap (Xmx) but linux shows
> > resident size of 16-20GB. even with thread stack and permgen taken into
> > acco
On 3/18/2014 5:30 AM, Avishai Ish-Shalom wrote:
> My solr instances are configured with 10GB heap (Xmx) but linux shows
> resident size of 16-20GB. even with thread stack and permgen taken into
> account i'm still far off from these numbers. Could it be that jvm IO
> buffers take so much space? doe
How large is your index on disk? Solr memory maps the index into
memory. Thus the virtual memory used will often be quite large. Your
numbers don't sound inconceivable.
A good reference point is Grant Ingersoll's blog post on searchhub:
http://searchhub.org/2011/09/14/estimating-
Hi,
My Solr instances are configured with a 10GB heap (Xmx), but Linux shows a
resident size of 16-20GB. Even with thread stacks and permgen taken into
account, I'm still far off from these numbers. Could it be that JVM IO
buffers take so much space? Does Lucene use JNI/JNA memory allocations?
More on this, I think I found something...
Slave admin console --> stats.jsp#cache, FieldCache
...
entries count: 22
entry#0 :
'MMapIndexInput(path="/home/agazzarini/solr-indexes/slave-data-dir/cbt/main/data/index/_mp.frq")' => 'title_sort', class
...
entry#9 :
'MMapIndexInput(path=
Hi,
I'm getting some Out of Memory (heap space) errors from my Solr instance, and
after investigating a little bit, I found several threads about sorting
behaviour in SOLR.
First, some information about the environment:
- I'm using SOLR 3.6.1 and a master/slave architecture with 1 master and
2 slaves
I just skimmed your post, but have you seen:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
MMapDirectory may be giving you a false sense of how much physical
memory is actually being used.
Best
Erick
On Mon, Oct 29, 2012 at 1:59 PM, Nicolai Scheer
wrote:
> Hi again!
>
Hi again!
Hi!
We're currently facing a strange memory issue we can't explain, so I'd
like to kindly ask if anyone is able to shed light on the behaviour
we encounter.
We use a Solr 3.5 instance on a Windows Server 2008 machine equipped
with 16GB of RAM.
The index uses 8 cores, 10 million documents, disk s
any input on this?
thanks
Jie
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-memory-leak-prevent-tomcat-shutdown-tp4014788p4015265.html
Sent from the Solr - User mailing list archive at Nabble.com.
light on if this means I need to upgrade to lucene 3.5?
thanks
jie
by the way, I am running tomcat 6, solr 3.5 on redhat 2.6.18-274.el5 #1 SMP
Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
=0x19f9 runnable [0x]
java.lang.Thread.State: RUNNABLE
... ...
Yeah, I sent a note to the web folks there about the images.
I'll leave the rest to people who really _understand_ all that stuff
On Thu, Sep 20, 2012 at 8:31 AM, Bernd Fehling
wrote:
> Hi Erik,
>
> thanks for the link.
> Now if we could see the images in that article that would be great
Hi Erik,
thanks for the link.
Now if we could see the images in that article that would be great :-)
By the way, one cause for the memory jumps was located as "killer search" from
a user.
The interesting part is that the verbose gc.log showed a "hiccup" in the GC.
Which means that during a GC r
Here's a wonderful writeup about GC and memory in Solr/Lucene:
http://searchhub.org/dev/2011/03/27/garbage-collection-bootcamp-1-0/
Best
Erick
On Thu, Sep 20, 2012 at 5:49 AM, Robert Muir wrote:
> On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling
> wrote:
>
>> By the way while looking for upgradi
On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling
wrote:
> By the way while looking for upgrading to JDK7, the release notes say under
> section
> "known issues" about the "PorterStemmer" bug:
> "...The recommended workaround is to specify -XX:-UseLoopPredicate on the
> command line."
> Is this st
That is the problem with a JVM: it is a virtual machine.
Ask 10 experts about good JVM settings and you get 15 answers. Maybe that is a
tradeoff of the flexibility of JVMs. There is always a right setting for any
application running on a JVM, but you just have to find it.
How about a Solr Wiki page abo
I have used this setting to reduce GC pauses with CMS - Java 6u23:
-XX:+ParallelRefProcEnabled
With this setting, the JVM does GC of weak references with multiple threads and
pauses are low.
Please use this option only when you have multiple cores.
For me, CMS gives better results
Sent from my iPhone
On
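As a sketch, the flag above goes on the JVM command line alongside CMS. The heap sizes and the Jetty-style start.jar launcher here are illustrative placeholders, not settings from this thread:

```shell
# Sketch: CMS with multi-threaded weak-reference processing.
# Heap sizes and launcher are illustrative placeholders.
java -Xms4g -Xmx4g \
  -XX:+UseConcMarkSweepGC \
  -XX:+ParallelRefProcEnabled \
  -jar start.jar
```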
Ooh, that is a nasty one. Is this JDK 7 only or also in 6?
It looks like the "-XX:ConcGCThreads=1" option is a workaround, is that right?
We've had some 1.6 JVMs behave in the same way that bug describes, but I
haven't verified it is because of finalizer problems.
wunder
On Sep 19, 2012, at 5:
Two in one morning
The JVM bug I'm familiar with is here:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7112034
FWIW,
Erick
On Wed, Sep 19, 2012 at 8:20 AM, Shawn Heisey wrote:
> On 9/18/2012 9:29 PM, Lance Norskog wrote:
>>
>> There is a known JVM garbage collection bug that causes th
On 9/18/2012 9:29 PM, Lance Norskog wrote:
There is a known JVM garbage collection bug that causes this. It has to do with
reclaiming Weak references, I think in WeakHashMap. Concurrent garbage
collection collides with this bug and the result is that old field cache data
is retained after clos
he problem.)
- Original Message -
| From: "Bernd Fehling"
| To: solr-user@lucene.apache.org
| Sent: Tuesday, September 18, 2012 11:29:56 PM
| Subject: Re: SOLR memory usage jump in JVM
|
| Hi Lance,
|
| thanks for this hint. Something I also see, a sawtooth. This is
| coming from Eden
Hi Otis,
because I see this on my slave without replication there is no index file
change.
I have also tons of logged data to dig in :-)
I took dumps from different stages, fresh installed, after 5GB jump, after the
system was hanging right after replication,...
The last one was interesting when
- Original Message -
> | From: "Yonik Seeley"
> | To: solr-user@lucene.apache.org
> | Sent: Tuesday, September 18, 2012 7:38:41 AM
> | Subject: Re: SOLR memory usage jump in JVM
> |
> | On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
> | wrote:
> | > I use
Hi Bernd,
On Tue, Sep 18, 2012 at 3:09 AM, Bernd Fehling
wrote:
> Hi Otis,
>
> not really a problem because I have plenty of memory ;-)
> -Xmx25g -Xms25g -Xmn6g
Good.
> I'm just interested into this.
> Can you report similar jumps within JVM with your monitoring at sematext?
Yes. More importan
| Sent: Tuesday, September 18, 2012 7:38:41 AM
| Subject: Re: SOLR memory usage jump in JVM
|
| On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
| wrote:
| > I used GC in different situations and tried back and forth.
| > Yes, it reduces the used heap memory, but not by 5GB.
| > Even so t
On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
wrote:
> I used GC in different situations and tried back and forth.
> Yes, it reduces the used heap memory, but not by 5GB.
> Even so that GC from jconsole (or jvisualvm) is "Full GC".
Whatever "Full GC" means ;-)
In the past at least, I've found th
I used GC in different situations and tried back and forth.
Yes, it reduces the used heap memory, but not by 5GB.
Even so that GC from jconsole (or jvisualvm) is "Full GC".
But while you bring GC into this, there is another interesting thing.
- I have one slave running for a week which ends up aro
What happens if you attach jconsole (should ship with your SDK) and force a GC?
Does the extra 5G go away?
I'm wondering if you get a couple of warming searchers going simultaneously
and happened to measure after that.
Uwe has an interesting blog about memory, he recommends using as
little as pos
Hi Otis,
not really a problem because I have plenty of memory ;-)
-Xmx25g -Xms25g -Xmn6g
I'm just interested into this.
Can you report similar jumps within JVM with your monitoring at sematext?
Actually I would assume to see jumps of 0.5GB or even 1GB, but 5GB?
And what is the cause, a cache?
A
Hi Bernd,
But is this really (causing) a problem? What -Xmx are you using?
Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html
On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
wrote:
> Hi list,
>
> while monitoring
Hi list,
while monitoring my systems I see a jump in JVM memory consumption of about
5GB after 2 to 5 days of running.
After starting the system (search node only, no replication during search)
SOLR uses between 6.5GB and 10.3GB of JVM heap when idle.
If the search node is online and serves requests
Check your cores' "status" page and see if you're running the
MMapDirectory (you probably are.)
In that case, you probably want to devote even less RAM to Tomcat's
heap because the index files are being read out of memory-mapped pages
that don't reside on the heap, so you'd be devoting more memory
Dear users,
I'm trying to find out whether my addition to the setenv.sh file (which I
needed to create because it didn't exist) has been picked up, but when I
click on the Java Properties link on the Solr Admin web page
I can't see the variable CATALINA_OPTS.
In fact, I would like to know if my line added in the file setenv
No, that's 255 bytes/record. Also, any time you store a field, the
raw data is preserved in the *.fdt and *.fdx files. If you're thinking
about RAM requirements, you must subtract the amount of data
in those files from the total, as a start. This might help:
http://lucene.apache.org/core/old_versi
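The 255 bytes/record figure is simply the index directory size divided by the document count (using decimal megabytes):

```shell
# 51 MB index directory over 200,000 records:
echo "$(( 51000000 / 200000 )) bytes/record"   # prints "255 bytes/record"
```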
thanks for the help
hey
I tried an exercise:
I'm storing a schema (uuid, key, userlocation).
uuid and key are unique, and userlocation has a cardinality of 150.
uuid and key are stored and indexed, while userlocation is indexed, not
stored.
Still, the index directory size is 51 MB just for 200,000 records do
This is really difficult to answer because there are so many variables;
the number of unique terms, whether you store fields or not (which is
really unrelated to memory consumption during searching), etc, etc,
etc. So even trying the index and just looking at the index directory
won't tell you much
Faceting and sorting are the two biggest places people get into trouble.
You've been asking questions about Solr Cloud, so I assume you're
working on a trunk release. Note that most everything people know
about memory consumption painfully gained over the years is...wrong
on trunk.
Or at least may
I am not currently running into memory issues, but I was wondering if
anyone could explain Solr's memory usage to me. What does Solr
actually store in memory? What are some of the largest memory
consumers (i.e. faceting, sorting, etc.)? Is the best way to start
addressing questions like this to ju
Mike
Actually I'm not able to tell you what each value stands for, but what I can
tell you is where the information is coming from.
The interface requests /admin/system which is using
https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/admin/SystemInf
I'm looking at the solr admin interface site. On the dashboard right
panel, I see three sections with size numbers like 227MB(light),
124MB(darker), and 14MB(darkest).
I'm on a windows server.
Couple questions about what I see in the solr app admin interface:
- In the top right section of the d
On selection, issue another query to get your additional data (if I
follow what you want).
On 22 January 2012 18:53, Dave wrote:
> I take it from the overwhelming silence on the list that what I've asked is
> not possible? It seems like the suggester component is not well supported
> or understood,
I take it from the overwhelming silence on the list that what I've asked is
not possible? It seems like the suggester component is not well supported
or understood, and limited in functionality.
Does anyone have any ideas for how I would implement the functionality I'm
looking for. I'm trying to i
That was how I originally tried to implement it, but I could not figure out
how to get the suggester to return anything but the suggestion. How do you
do that?
On Thu, Jan 19, 2012 at 1:13 PM, Robert Muir wrote:
> I really don't think you should put a huge json document as a search term.
>
> Jus
I really don't think you should put a huge json document as a search term.
Just make "Brooklyn, New York, United States" or whatever you intend
the user to actually search on/type in as your search term.
put the rest in different fields (e.g. stored-only, not even indexed
if you don't need that) an
In my original post I included one of my terms:
Brooklyn, New York, United States?{ "id": "2620829",
"timezone": "America/New_York", "type": "3", "country": { "id": "229" },
"region": { "id": "3608" }, "city": { "id": "2616971", "plainname":
"Brooklyn", "name": "Brooklyn, New York, United States"
I don't think the problem is FST, since it sorts offline in your case.
More importantly, what are you trying to put into the FST?
it appears you are indexing terms from your term dictionary, but your
term dictionary is over 1GB, why is that?
what do your terms look like? 1GB for 2,784,937 docume