I just skimmed your post, but have you seen:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
MMapDirectory may be giving you a false sense of how much physical
memory is actually being used.
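If you want to be sure which directory implementation is in play, you can pin
it explicitly in solrconfig.xml. A minimal sketch, assuming Solr 3.x where
solr.MMapDirectoryFactory is available:

  <!-- memory-map the index files; the mapped index shows up in the process's
       virtual size (VIRT in top / task manager), but those pages live in the
       OS page cache, not in the Java heap -->
  <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>

With mmap, the scary-looking virtual size is mostly mapped index files that the
OS can reclaim whenever it needs the memory.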
Best
Erick
On Mon, Oct 29, 2012 at 1:59 PM, Nicolai Scheer wrote:
> Hi again!
>
Hi again!
On 29 October 2012 18:39, Nicolai Scheer wrote:
> Hi!
>
> We're currently facing a strange memory issue we can't explain, so I'd
> like to kindly ask if anyone is able to shed some light on the behaviour
> we encounter.
>
> We use a Solr 3.5 instance on a Windows Server 2008 machine equipped
Hi!
We're currently facing a strange memory issue we can't explain, so I'd
like to kindly ask if anyone is able to shed some light on the behaviour
we encounter.
We use a Solr 3.5 instance on a Windows Server 2008 machine equipped
with 16 GB of RAM.
The index uses 8 cores, 10 million documents, disk s
No, that's 255 bytes/record (51 MB / 200,000 records). Also, any time you store a field, the
raw data is preserved in the *.fdt and *.fdx files. If you're thinking
about RAM requirements, you must subtract the amount of data
in those files from the total, as a start. This might help:
http://lucene.apache.org/core/old_versi
Hey, thanks for the help.
I tried an exercise:
I'm storing a schema of (uuid, key, userlocation).
uuid and key are unique, and userlocation has a cardinality of 150.
uuid and key are stored and indexed, while userlocation is indexed but not
stored.
Still, the index directory size is 51 MB just for 200,000 records, do
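A rough schema.xml sketch of that setup (the field types here are assumed for
illustration; they may not match the real schema):

  <!-- unique per document, stored and indexed; type "string" is an assumption -->
  <field name="uuid"         type="string" indexed="true" stored="true"/>
  <field name="key"          type="string" indexed="true" stored="true"/>
  <!-- ~150 distinct values, indexed only -->
  <field name="userlocation" type="string" indexed="true" stored="false"/>

Since uuid and key are stored, part of those 51 MB is stored-field data
(*.fdt/*.fdx) rather than the structures that have to be held in RAM for
searching.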
This is really difficult to answer because there are so many variables:
the number of unique terms, whether you store fields or not (which is
really unrelated to memory consumption during searching), etc, etc,
etc. So even building a test index and just looking at the index
directory size won't tell you much
> Commits are divided into 2 groups:
> - often but small (last changed info)
1) Make sure commits aren't too frequent and that you don't hit the
overlapping-searchers problem (see the solrconfig.xml sketch below):
http://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F
2) You may
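For the frequent-but-small commits, a solrconfig.xml sketch along these lines
may help (the numbers are illustrative assumptions, not tuned recommendations):

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- let Solr batch the small "last changed" updates instead of
         issuing an explicit commit for every one -->
    <autoCommit>
      <maxDocs>10000</maxDocs>   <!-- assumed value -->
      <maxTime>60000</maxTime>   <!-- milliseconds, assumed value -->
    </autoCommit>
  </updateHandler>

  <!-- keep this low; needing more warming searchers usually means commits
       arrive faster than searcher warming can finish -->
  <maxWarmingSearchers>2</maxWarmingSearchers>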
> Hey Denis,
> * How big is your index in terms of number of documents and index size?
5 cores, average 250.000 documents, one with about 1 million (but
without text, just int/float fields), one with about 10 million
id/name documents, but with n-gram.
Size: 4 databases about 1G (sum),
I ran out of memory on some big indexes when using Solr 1.4. Found out
that increasing termInfosIndexDivisor in solrconfig.xml could help a lot.
It may slow down searching your index.
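The example solrconfig.xml from that era shows the setting roughly like this
(quoted from memory, so treat the exact element names as an assumption and
check the config shipped with your version):

  <!-- only every 12th entry of the term index is loaded into the heap;
       a larger divisor means less RAM for the term index, at the cost of
       slightly slower term lookups -->
  <indexReaderFactory name="IndexReaderFactory"
                      class="org.apache.solr.core.StandardIndexReaderFactory">
    <int name="setTermIndexDivisor">12</int>
  </indexReaderFactory>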
cheers,
:-Dennis
On 02/06/2011, at 01.16, Alexey Serba wrote:
> Hey Denis,
>
> * How big is your in
Hey Denis,
* How big is your index in terms of number of documents and index size?
* Is it production system where you have many search requests?
* Is there any pattern for OOM errors? I.e. right after you start your
Solr app, after some search activity or specific Solr queries, etc?
* What are 1)
There were no parameters at all, and Java hit "out of memory"
almost every day; then I tried to add parameters but nothing changed.
Xms/Xmx didn't solve the problem either. Now I'm trying MaxPermSize,
because it's the last thing I haven't tried yet :(
Wednesday, June 1, 2011, 9:00:56 PM,
Could be related to your crazy high MaxPermSize like Marcus said.
I'm no JVM tuning expert either. Few people are, it's confusing. So if
you don't understand it either, why are you trying to throw in very
non-standard parameters you don't understand? Just start with whatever
the Solr example
PermSize and MaxPermSize don't need to be higher than 64M. You should read up on
JVM tuning. The permanent generation only holds class metadata, i.e. the code
that's being loaded and executed.
> So what should I do to avoid that error?
> I can use 10G on the server; now I try to run with these flags:
> java -Xms6G -Xmx6G -XX:MaxPer
Overall memory on the server is 24G, plus 24G of swap; most of the time
the swap is free and not used at all, that's why "no free swap" sounds
strange to me...
> There is no simple answer.
> All I can say is you don't usually want to use an Xmx that's more than
> you actually have available RAM, a
There is no simple answer.
All I can say is you don't usually want to use an Xmx that's more than
you actually have available RAM, and _can't_ use more than you have
available ram+swap, and the Java error seems to be suggesting you are
using more than is available in ram+swap. That may not be
So what should I do to avoid that error?
I can use 10G on the server; now I try to run with these flags:
java -Xms6G -Xmx6G -XX:MaxPermSize=1G -XX:PermSize=512M -D64
Or should I set Xmx to a lower number, and what about the other params?
Sorry, I don't know much about Java/the JVM =(
Wednesday, June 1, 2011, 7:29:
Are you in fact out of swap space, as the java error suggested?
The way JVMs always work: if you tell it -Xmx6g, it WILL use all 6g
eventually. The JVM doesn't garbage collect until it's about to run out
of heap space, i.e. until it gets to your Xmx. It will keep using RAM until
it reaches your
Here is output after about 24 hours running solr. Maybe there is some
way to limit memory consumption? :(
test@d6 ~/solr/example $ java -Xms3g -Xmx6g -D64
-Dsolr.solr.home=/home/test/solr/example/multicore/ -jar start.jar
2011-05-31 17:05:14.265:INFO::Logging to STDERR via
... it counts all memory, not
> sure... if you don't have big values for 99.9%wa (which means WAIT I/O -
> disk swap usage) everything is fine...
> -Original Message-
> From: Denis Kuzmenok
> Sent: May-31-11 4:18 PM
> To: solr-user@lucene.apache.org
> Subject: Solr
-Original Message-
From: Denis Kuzmenok
Sent: May-31-11 4:18 PM
To: solr-user@lucene.apache.org
Subject: Solr memory consumption
I run multiple-core Solr with flags: -Xms3g -Xmx6g -D64, but I see this
in top after 6-8 hours and it's still rising:
17485 test214 10.0g 7.4g 9760 S 308.2 31.3 448:00
I run multiple-core Solr with flags: -Xms3g -Xmx6g -D64, but I see this
in top after 6-8 hours and it's still rising:
17485 test214 10.0g 7.4g 9760 S 308.2 31.3 448:00.75 java
-Xms3g -Xmx6g -D64 -Dsolr.solr.home=/home/test/solr/example/multicore/ -jar
start.jar
Are there any ways t