Steve Rowe [sar...@gmail.com] wrote:
> 1 Lakh (aka Lac) = 10^5 is written as 1,00,000
>
> It’s used in Bangladesh, India, Myanmar, Nepal, Pakistan, and Sri Lanka,
> roughly 1/4 of the world’s population.
Yet it still causes confusion and distracts from the issue. Let's just stick to
metric, okay?
On Jul 25, 2014, at 9:13 AM, Shawn Heisey wrote:
> On 7/24/2014 7:53 AM, Ameya Aware wrote:
> The odd location of the commas at the start of this thread makes it hard
> to understand exactly what numbers you were trying to say.
On Jul 24, 2014, at 9:32 AM, Ameya Aware wrote:
> I am in process o
On 7/24/2014 7:53 AM, Ameya Aware wrote:
> I did not make any other change than this; the rest of the settings are
> default.
>
> Do I need to set a garbage collection strategy?
The collector chosen and its tuning params can have a massive impact
on performance, but it will make no difference at a
How about simply increasing the heap size if RAM is available? You should also
check the update handler config, e.g. autoCommit: if docs aren't being written
to disk, they will be hanging around in memory. And check the "openSearcher"
setting too, as opening new searchers consumes memory, especially if e
A default garbage collector will be chosen for you by the VM. It might help to
get the stack trace so we can look at it.
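For reference, the knobs mentioned above live in the <updateHandler> section of
solrconfig.xml; a minimal sketch (the values below are purely illustrative, not
a recommendation) would look something like:

  <autoCommit>
    <maxDocs>25000</maxDocs>            <!-- hard commit after this many docs -->
    <maxTime>60000</maxTime>            <!-- or after this many milliseconds -->
    <openSearcher>false</openSearcher>  <!-- flush to disk without warming a new searcher -->
  </autoCommit>

With openSearcher=false the hard commit only writes segments to disk, so pending
documents stop piling up on the heap without paying the cost of opening and
warming a new searcher on every commit.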
François
On Jul 24, 2014, at 10:06 AM, Ameya Aware wrote:
> ooh ok.
>
> So you are saying that since I am using a large heap but didn't set my
> garbage collection, that's why I am gettin
Ooh, ok.
So you are saying that since I am using a large heap but didn't set my
garbage collection, that's why I am getting the Java heap space error?
On Thu, Jul 24, 2014 at 9:58 AM, Marcello Lorenzi wrote:
> I think that on large heap is suggested to monitor the garbage collection
> behavior an
I think that with a large heap it is suggested to monitor the garbage
collection behavior and try to adopt a strategy adapted to your
performance requirements. On my production environment with a heap of 6 GB I
set these parameters (server with 8 cores):
-server -Xms6144m -Xmx6144m -XX:MaxPermSize=512m
-Dcom.sun.ma
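(Not part of Marcello's settings above, just an illustrative sketch: on the
Java 6/7 JVMs of that era, an explicitly chosen GC strategy for a Solr heap of
a few GB often meant the CMS collector plus GC logging, e.g.

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
-verbose:gc -Xloggc:gc.log

The exact flags and thresholds here are assumptions for illustration; the right
choice depends on the workload and JVM version.)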
I did not make any other change than this; the rest of the settings are
default.
Do I need to set a garbage collection strategy?
On Thu, Jul 24, 2014 at 9:49 AM, Marcello Lorenzi wrote:
> Hi,
> Did you set a garbage collection strategy on your JVM?
>
> Marcello
>
>
> On 07/24/2014 03:32 PM, Ameya
Hi,
Did you set a garbage collection strategy on your JVM?
Marcello
On 07/24/2014 03:32 PM, Ameya Aware wrote:
Hi,
I am in the process of indexing around 2,00,000 documents.
I have increased the Java heap space to 4 GB using the command below:
java -Xmx4096M -Xms4096M -jar start.jar
Still, after indexin
You may want to change your solr startup script such that it creates a
heap dump on OOM. Add -XX:+HeapDumpOnOutOfMemoryError as an option.
The heap dump can be nicely analyzed with http://www.eclipse.org/mat/.
Just increasing -Xmx is a workaround that may help you get by for a
while. With m
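For example, combining the heap settings from earlier in the thread with the
heap dump options (the dump path here is just a placeholder) might look like:

java -Xmx4096M -Xms4096M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -jar start.jar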
Hello!
Yes, just edit your Jetty configuration file and add -Xmx and -Xms
parameters. For example, the file you may be looking at is
/etc/default/jetty.
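As a rough sketch (variable names differ between Jetty packagings, so treat
this as an assumption about a Debian-style /etc/default/jetty, with sizes
chosen only for illustration), the relevant line would look something like:

JAVA_OPTIONS="-Xms2048m -Xmx2048m $JAVA_OPTIONS"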
--
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
> So can
So can I get over this exception by increasing the heap size somewhere?
Thanks,
Ameya
On Tue, Jul 22, 2014 at 2:00 PM, Shawn Heisey wrote:
> On 7/22/2014 11:37 AM, Ameya Aware wrote:
> > i am running into java heap space issue. Please see below log.
>
> All we have here is an out of memory except
On 7/22/2014 11:37 AM, Ameya Aware wrote:
> i am running into java heap space issue. Please see below log.
All we have here is an out of memory exception. It is impossible to
know *why* you are out of memory from the exception. With enough
investigation, we could determine the area of code where
400M docs is quite a large number of documents for a single piece of
hardware, and
if you're faceting over a large number of unique values, this will
chew up memory.
So it's not surprising that you're seeing OOMs; I suspect you just have too many
documents on a single machine.
Best
Erick
On Mo
I am sorry about a typo: 8,000,000,000 -> 800,000,000
2013/5/27 Jam Luo
> I have the same problem. At 4.1, a Solr instance could take 8,000,000,000
> docs, but at 4.2.1 an instance only takes 400,000,000 docs; it will OOM at a
> facet query. The facet field was tokenized by space.
>
> May 27,
I have the same problem. At 4.1, a Solr instance could take 8,000,000,000
docs, but at 4.2.1 an instance only takes 400,000,000 docs; it will OOM at a
facet query. The facet field was tokenized by space.
May 27, 2013 11:12:55 AM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.RuntimeExcep
Aah… I was doing a facet on a double field which had 6 decimal places…
No surprise that the Lucene cache got full…
./zahoor
On 17-May-2013, at 11:56 PM, J Mohamed Zahoor wrote:
> Memory increase a lot with queries which have facets…
>
>
> ./Zahoor
>
>
> On 17-May-2013, at 10:00 PM, S
Memory increases a lot with queries which have facets…
./Zahoor
On 17-May-2013, at 10:00 PM, Shawn Heisey wrote:
> On 5/17/2013 1:17 AM, J Mohamed Zahoor wrote:
>> I moved to 4.2.1 from 4.1 recently.. everything was working fine until i
>> added few more stats query..
>> Now i am getting thi
On 5/17/2013 1:17 AM, J Mohamed Zahoor wrote:
> I moved to 4.2.1 from 4.1 recently. Everything was working fine until I
> added a few more stats queries.
> Now I am getting this error so frequently that Solr does not run even for 2
> minutes continuously.
> All 5 GB is getting used instantaneously in f
Hprof introspection shows that huge double arrays are using up 75% of the heap
space... which belong to Lucene's FieldCache.
./zahoor
On 17-May-2013, at 12:47 PM, J Mohamed Zahoor wrote:
> Hi
>
> I moved to 4.2.1 from 4.1 recently.. everything was working fine until i
> added few more stats qu
I'm embarrassed (but hugely relieved) to say that the script I had for
starting Jetty had a bug in the way it set Java options! So my heap
start/max was always set at the default. I did end up using jconsole and
learned quite a bit from that too.
Thanks for your help Yonik :)
Matt
On Sat, Jan
On Sat, Jan 16, 2010 at 11:04 AM, Matt Mitchell wrote:
> These are single valued fields. Strings and integers. Is there more specific
> info I could post to help diagnose what might be happening?
Faceting on either should currently take ~24MB (6M docs @ 4 bytes per
doc + size_of_unique_values)
Wi
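Spelled out, the estimate above is roughly: 6,000,000 docs x 4 bytes per doc
(presumably one int ordinal per doc) = 24,000,000 bytes, i.e. about 24 MB, plus
the memory for the unique values themselves, which is negligible for ~20 values.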
These are single valued fields. Strings and integers. Is there more specific
info I could post to help diagnose what might be happening?
Thanks!
Matt
On Sat, Jan 16, 2010 at 10:42 AM, Yonik Seeley wrote:
> On Sat, Jan 16, 2010 at 10:01 AM, Matt Mitchell wrote:
> > I have an index with more tha
On Sat, Jan 16, 2010 at 10:01 AM, Matt Mitchell wrote:
> I have an index with more than 6 million docs. All is well until I turn on
> faceting and specify a facet.field. There are only about 20 unique values for
> this particular facet throughout the entire index.
Hmmm, that doesn't sound right..
: I am new to Solr and trying to use Jetty and the example with 13 million records.
: While running it, I get the error -
: java.lang.OutOfMemoryError: Java heap space
: Any recommendation? We have a million transactions, so would it be better to
: use Tomcat?
millions of records take up memory. wh
Hello,
You need to start your Jetty or Tomcat server with higher VM memory
settings.
On this page you can find some explanation:
http://www.caucho.com/resin-3.0/performance/jvm-tuning.xtp
The two parameters that are important are -Xms and -Xmx.
So instead of starting Jetty with
java -jar start.jar
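for illustration, with explicit heap settings that same start line becomes
something like (the sizes here are made up for the example):

java -Xms512m -Xmx1024m -jar start.jar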
On 5/15/06, Marcus Stratmann <[EMAIL PROTECTED]> wrote:
The only situation in which I get OutOfMemory
errors is after an optimize, when the server performs an auto-warming
of the caches:
A single filter that is big enough to be represented as a bitset
(>3000 in general) will take up 1.3MB
Some ways to
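That 1.3MB figure is consistent with one bit per document in the index: for
roughly 10 million documents (as mentioned elsewhere in this thread),
10,000,000 bits / 8 = 1,250,000 bytes, on the order of 1.3 MB per cached
filter; the exact number depends on maxDoc.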
On 5/4/06, I wrote:
> From my point of view it looks like this: Revision 393957 works while
> the latest revision causes problems. I don't know what part of the
> distribution causes the problems but I will try to find out. I think a
> good start would be to find out which was the first revision no
Sorry, hit the wrong key before...
FYI, I have just committed all the changes related to the Jetty downgrade
into SVN.
Let me know if you notice any problems.
Bill
On 5/9/06, Bill Au <[EMAIL PROTECTED]> wrote:
FYI, I have just committed the a
On 5/8/06, Bill Au <[EMAIL PROTECTED]> wrote:
FYI, I have just committed the a
On 5/8/06, Bill Au <[EMAIL PROTECTED]> wrote:
I was able to produce an OutOfMemoryError using Yonik's python script with
Jetty 6.
I was not able to do so with Jetty 5.1.11RC0, the latest stable version.
So that's the
version of Jetty with which I will downgrade
I was able to produce an OutOfMemoryError using Yonik's python script with
Jetty 6.
I was not able to do so with Jetty 5.1.11RC0, the latest stable version. So
that's the version of Jetty that I will downgrade the Solr example app to.
Bill
On 5/5/06, Erik Hatcher <[EMAIL PROTECTED]> wrote
Along these lines, locally I've been using the latest stable version
of Jetty and it has worked fine, but I did see an "out of memory"
exception the other day. I have not seen it since, so I'm not sure
what caused it.
Moving to Tomcat, as long as we can configure it to be as lightweight
a
There seem to be a fair number of folks using Jetty with the example app
as opposed to using Solr with their own appserver. So I think it is best to
use a stable version of Jetty instead of the beta. If no one objects, I can
go ahead and take care of this.
Bill
On 5/4/06, Yonik Seeley <[EM
I verified that Tomcat 5.5.17 doesn't experience this problem.
-Yonik
On 5/4/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:
On 5/3/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> I just tried sending in 100,000 deletes and it didn't cause a problem:
> the memory grew from 22M to 30M.
>
> Random thou
On 5/3/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:
I just tried sending in 100,000 deletes and it didn't cause a problem:
the memory grew from 22M to 30M.
Random thought: perhaps it has something to do with how you are
sending your requests?
Yep, I was able to reproduce a memory problem w/ Jet
Chris Hostetter wrote:
This is because building a full Solr distribution from scratch requires
that you have JUnit. But it is not required to run Solr.
Ah, I see. That was a very valuable hint for me.
I was now able to compile an older revision (393957). Testing this
revision I was able to dele
I just tried sending in 100,000 deletes and it didn't cause a problem:
the memory grew from 22M to 30M.
Random thought: perhaps it has something to do with how you are
sending your requests?
If the client creates a new connection for each request, but doesn't
send the Connection:close header or c
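As a purely illustrative sketch of the client-side workaround being described
(the URL, payload, and document id below assume the stock example setup and are
not taken from this thread), a client can ask the server to tear the connection
down after each request by sending the header explicitly, e.g.:

curl -H "Connection: close" -H "Content-Type: text/xml" --data-binary '<delete><id>12345</id></delete>' http://localhost:8983/solr/update

That keeps idle keep-alive connections from piling up on the server side when a
client opens a fresh connection for every request.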
: next thing I tried was to get the code via svn. Unfortunately the code
: does not compile ("package junit.framework does not exist"). I found out
This is because building a full Solr distribution from scratch requires
that you have JUnit. But it is not required to run Solr.
: > Is your problem
Yonik Seeley wrote:
Is your problem reproducible with a test case you can share?
Well, you can get the configuration files. If you ask for the data, this
could be a problem, since this is "real" data from our production
database. The amount of data needed could be another problem.
You could al
Hi Marcus,
Is your problem reproducible with a test case you can share?
You could also try a different app-server like Tomcat to see if that
makes a difference.
What type is your id field defined to be?
-Yonik
On 5/3/06, Marcus Stratmann <[EMAIL PROTECTED]> wrote:
Hello,
deleting or updating
Hello,
deleting or updating documents is still not possible for me, so now I
tried to build a completely new index. Unfortunately this didn't work
either. Now I'm getting OOM after inserting slightly more than 20,000
documents into the new index.
To me this looks as if a bug has been introduced
Chris Hostetter wrote:
> this is off the subject of the heap space issue ... but if the id changes,
> then maybe it shouldn't be the uniqueId of your index? .. your code must
> have some way of recognizing that article B with id 222 is a changed
> version of article A with id 111 (otherwise how woul
: > If you are first deleting so you can re-add a newer version of the
: > document, you don't need to... overwriting older documents based on
: > the uniqueKeyField is something Solr does for you!
:
: Yes, I know. But the articles in our (sql-)database get new IDs when
: they are changed so they
Yonik Seeley wrote:
Yes, on a delete operation. I'm not doing any commits until the end of
all delete operations.
I assume this is a delete-by-id and not a delete-by-query? They work
very differently.
Yes, all queries are delete-by-id.
If you are first deleting so you can re-add a newer ve
On 4/29/06, Marcus Stratmann <[EMAIL PROTECTED]> wrote:
Yes, on a delete operation. I'm not doing any commits until the end of
all delete operations.
I assume this is a delete-by-id and not a delete-by-query? They work
very differently.
There is some state stored for each pending delete-by-id
Chris Hostetter wrote:
> interesting .. are you getting the OutOfMemory on an actual delete
> operation or when doing a commit after executing some deletes?
Yes, on a delete operation. I'm not doing any commits until the end of
all delete operations.
After reading this I was curious if using commi
: > How big is your physical index directory on disk?
: It's about 2.9G now.
: Is there a direct connection between size of index and usage of ram?
Generally yes. Lucene loads a lot of your index into memory ... not
necessarily the "stored" fields, but quite a bit of the index structure
needed f
Chris Hostetter wrote:
How big is your physical index directory on disk?
It's about 2.9G now.
Is there a direct connection between size of index and usage of ram?
Your best bet is to allocate as much RAM to the server as you can.
Depending on how full your caches are, and what hit ratios you ar
: I'm currently testing a large index with more than 10 million
: documents and 24 fields, using the example installation with
: Jetty.
: When deleting or updating documents from the index or doing
: search queries I get "Java heap space" error messages like
: this (in this case while trying to de