Oh wow, I didn't know that was the case. I am completely baffled now. Back
to square one, I guess. :)
> Date: Tue, 5 Aug 2008 14:31:28 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Out of memory on Solr sorting
Sundar, it is very strange that increasing the size/initialSize of LRUCache
helps with OutOfMemoryError...
2048 is the number of entries in the cache, and _not_ 2Gb of memory...
Making size==initialSize of the HashMap-based LRUCache would help with
performance anyway; maybe with OOMs too (probably no need to resize...
I will try reindexing, replacing text_ws with string, and leaving all 3 caches
at the default size of 512, and see if the problem goes away.
-Sundar
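For context, the caches being discussed are configured in solrconfig.xml, and their size/initialSize attributes count entries, not bytes. A sketch at the 512-entry defaults mentioned above (the autowarmCount values here are illustrative guesses, not from the thread):

```xml
<!-- solrconfig.xml: size/initialSize are entry counts, not bytes -->
<filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
<documentCache    class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```

With size equal to initialSize, the backing HashMap never needs to rehash as the cache fills, which is the performance point made above.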
> Date: Tue, 5 Aug 2008 14:05:05 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Out of memory on Solr sorting
I know, and this is strange... I was guessing filterCache is used
implicitly to get a DocSet for each token; as Sundar wrote, increasing the
LRUCache helped him (he is sorting on a 'text_ws' field)
-Fuad
On Tue, Aug 5, 2008 at 1:59 PM, Fuad Efendi <[EMAIL PROTECTED]> wrote:
> If increasing LRU cache helps you:
> - you are probably using 'tokenized' field for sorting (could you confirm
> please?)...
Sorting does not utilize any Solr caches.
-Yonik
Best choice for a sorting field:
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
- case-insensitive etc...
I might be partially wrong about the SOLR LRU Cache, but it is used somehow
in your specific case... 'filterCache' is probably used for
'tokenized' sorting: it stores (token, DocList)...
My understanding of Lucene sorting is that it sorts by 'tokens'
and not by 'full fields'... so for sorting you need a 'full-string'
(non-tokenized) field, and for searching you need another, tokenized one.
For instance, use 'string' for sorting and 'text_ws' for search, and
use 'copyField' to populate one from the other.
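That arrangement can be sketched in schema.xml like this (the field names are illustrative, not taken from the thread):

```xml
<!-- schema.xml sketch: search the tokenized field, sort on its
     non-tokenized copy -->
<field name="name"      type="text_ws" indexed="true" stored="true"/>
<field name="name_sort" type="string"  indexed="true" stored="false"/>
<copyField source="name" dest="name_sort"/>
```

Queries then search on name but pass sort=name_sort asc, so the sort field contributes exactly one term per document.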
The field is of type "text_ws". Is this not recommended? Should I use "text"
instead?
> Date: Tue, 5 Aug 2008 10:58:35 -0700
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: RE: Out of memory on Solr sorting
Hi Sundar,
If increasing LRU cache helps you:
- you are probably using 'tokenized' field for sorting (could you
confirm please?)...
...you should use a 'non-tokenized single-valued non-boolean' field for
better sorting performance...
Fuad Efendi
==
http://www.tokenizer.org
Sundar
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Out of memory on Solr sorting
> Date: Tue, 29 Jul 2008 10:43:05 -0700
>
> A sneaky source of OutOfMemory errors is the permanent generation. If you
> add this:
> -XX:PermSize
that is not reclaimed, and so each undeploy/redeploy cycle
eats up the permanent generation pool.
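The exact values in the quoted mail were cut off; as a hedged illustration only (the sizes below are guesses, not the original poster's settings), the HotSpot permanent-generation flags are typically raised like this for an app server:

```shell
# Illustrative only: raise the PermGen limits so repeated
# undeploy/redeploy cycles exhaust the pool less quickly.
JAVA_OPTS="$JAVA_OPTS -XX:PermSize=64m -XX:MaxPermSize=256m"
```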
-Original Message-
From: david w [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2008 7:20 AM
To: solr-user@lucene.apache.org
Subject: Re: Out of memory on Solr sorting
Hi, Daniel
I got the...
I have 3.5 million documents (approx. 10Gb) running on this
2Gb heap VM.
Cheers,
Daniel
-Original Message-
From: sundar shankar [mailto:[EMAIL PROTECTED]
Sent: 23 July 2008 23:45
To: solr-user@lucene.apache.org
Subject: RE: Out of memory on Solr sorting
Hi Daniel,
I am a
-Xmx2048m -XX:MinHeapFreeRatio=50
-XX:NewSize=1024m -XX:NewRatio=2 -Dsun.rmi.dgc.client.gcInterval=360
-Dsun.rmi.dgc.server.gcInterval=360
JBoss 4.0.5
> Subject: RE: Out of memory on Solr sorting
> Date: Wed, 23 Jul 2008 10:49:06 +0100
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
Sent: 22 July 2008 23:23
To: solr-user@lucene.apache.org
Subject: RE: Out of memory on Solr sorting
Yes, it is a cache: it stores an array of document IDs ordered by the sort
field, together with the sort-field values; query results can be intersected
with it and reordered accordingly.
But the memory requirements...
On Tue, 22 Jul 2008 20:19:49 +
sundar shankar <[EMAIL PROTECTED]> wrote:
> Thanks for the explanation, Mark. The reason I had it at 512 max was because
> earlier the data file was just about 30 megs, and it grew to this much
> because of the usage of EdgeNGramFilterFactory for 2 fields. That's gre...
...at least in case of field-level sorting? I could be wrong too, and the
implementation might probably be better. But I don't know why all of the
fields have had to be loaded.
> Date: Tue, 22 Jul 2008 14:26:26 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Out of memory on Solr sorting
>
> Ok, after some analysis of FieldCacheImpl:
> - it is sup...
...Queries with bigger results seem to come out fine too. But why does just a
sort fail, and that too of just 10 rows??
-Sundar
> Date: Tue, 22 Jul 2008 12:24:35 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Out of memory on Solr sorting
>
> org.apache.lucene.search.FieldCacheImpl$10.createValue(FieldCacheImpl.java:403)
Thanks for your help, Mark. Lemme explore a little more and see if someone
else can help me out too. :)
> Date: Tue, 22 Jul 2008 16:53:47 -0400
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Out of memory on Solr sorting
>
> Someone else...
...The dev box is a Linux machine with over 2 Gigs of memory and 1024
allocated to the heap now. :S
-Sundar
> Date: Tue, 22 Jul 2008 13:17:40 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Out of memory on Solr sorting
>
> Mark,
> Question: how much memory...
Mark,
Question: how much memory do I need for 25,000,000 docs if I do a sort by a
field of 256 bytes? 6.4Gb?
Quoting Mark Miller <[EMAIL PROTECTED]>:
> Because to sort efficiently, Solr loads the term to sort on for each
> doc in the index into an array. For ints, longs, etc it's just an array
> the size...
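The 6.4Gb figure above is plain decimal arithmetic: one sort value per document, at an assumed 256 bytes each. A minimal check (the 256-byte value size is Fuad's assumption, not a measured one):

```java
// Rough memory estimate for a FieldCache-style sort array:
// one sort value per document, held in RAM for the life of the reader.
public class SortMemoryEstimate {

    // bytesPerValue is an assumed average size of one sort value.
    static long estimateBytes(long numDocs, long bytesPerValue) {
        return numDocs * bytesPerValue;
    }

    public static void main(String[] args) {
        long bytes = estimateBytes(25_000_000L, 256);
        // 25M docs * 256 bytes = 6,400,000,000 bytes = 6.4 GB (decimal)
        System.out.printf("%.1f GB%n", bytes / 1_000_000_000.0);
    }
}
```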
...posted on the turnarounds. Thanks,
-Sundar
> Date: Tue, 22 Jul 2008 15:46:04 -0400
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Out of memory on Solr sorting
>
> Because to sort efficiently, Solr loads the term to sort on for each doc
> in the index into an array. For ints, longs, etc it's just an array the
> size of the number of docs in your index (I believe deleted or not). For
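A conceptual sketch of that per-document array (this is not Lucene's actual FieldCacheImpl code, just an illustration of why the memory cost scales with the number of docs in the index rather than with the 10 rows requested):

```java
import java.util.Arrays;
import java.util.Map;

// Why sorting by a field costs O(maxDoc) memory: the sort value for
// *every* document in the index is materialized into one array,
// deleted docs included, regardless of how many rows the query asks for.
public class FieldCacheSketch {

    // docToTerm simulates the index lookup for the sort field's term.
    static String[] loadSortValues(int maxDoc, Map<Integer, String> docToTerm) {
        String[] values = new String[maxDoc];   // one slot per doc
        for (int doc = 0; doc < maxDoc; doc++) {
            values[doc] = docToTerm.get(doc);   // null if doc has no value
        }
        return values;
    }

    public static void main(String[] args) {
        Map<Integer, String> idx = Map.of(0, "apple", 1, "pear", 2, "fig");
        String[] cache = loadSortValues(5, idx); // maxDoc=5, 2 docs valueless
        System.out.println(Arrays.toString(cache));
        // prints [apple, pear, fig, null, null]
    }
}
```

This is also why a tokenized field is a poor sort field: it can yield several terms per document, while the array has exactly one slot per doc.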
org.apache.lucene.search.FieldCacheImpl$10.createValue(FieldCacheImpl.java:403)
- this piece of code does not request an Array[100M] (as I have seen with
Lucene); it asks for only a few bytes/Kb per field...
Probably 128 - 512 is not enough; it is also advisable to use equal sizes:
-Xms1024M -Xmx1024M
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Out of memory on Solr sorting
> Date: Tue, 22 Jul 2008 19:11:02 +
>
>
> Hi,
> Sorry again, fellows. I am not sure what's happening. The day with Solr is
> bad for me, I guess. EZMLM didn't let me send any mails this morning