Colin:
FYI, you might consider just setting up autocommit (or commitWithin if
you're using SolrJ) with some reasonable interval (I often use 10 minutes or so).
Even though you've figured out it is a Tomcat issue, each
commit causes searcher re-opens, perhaps replication in a master/slave
setup, in...
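For reference, the autocommit Erick suggests lives in solrconfig.xml; a minimal sketch, assuming a 10-minute interval (the SolrJ equivalent is setting commitWithin on the update request):

<!-- solrconfig.xml, inside <updateHandler>: let Solr commit on its own at most
     every 10 minutes (600000 ms) instead of committing from the client per batch -->
<autoCommit>
  <maxTime>600000</maxTime>
</autoCommit>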
Ah, good to know! Thank you!
I already had Jetty under suspicion, but we had this failure quite often
in October and November, when the bug was not yet reported.
-Kuli
On 14.03.2012 12:08, Colin Howe wrote:
After some more digging around I discovered that there was a bug reported
in jetty 6: https://jira.codehaus.org/browse/JETTY-1458
This prompted me to upgrade to Jetty 7 and things look a bit more stable
now :)
On Wed, Mar 14, 2012 at 10:26 AM, Michael Kuhlmann wrote:
I had the same problem, without auto-commit.
I never really found out what exactly the reason was, but I think it was
because commits were triggered before a previous commit had the chance
to finish.
We now commit after every minute or 1000 (quite large) documents,
whichever comes first. And...
Currently using 3.4.0. We have autocommit enabled but we manually do
commits every 100 documents anyway... I can turn it off if you think that
might help.
Cheers,
Colin
On Wed, Mar 14, 2012 at 10:24 AM, Markus Jelsma
wrote:
Are you running trunk and have auto-commit enabled? Then disable
auto-commit. Even if you increase ulimits it will continue to swallow
all available file descriptors.
On Wed, 14 Mar 2012 10:13:55 +, Colin Howe
wrote:
Hello,
We keep hitting the too many open files exception. Looking at l
Off the top of my head, I don't know the answers to some of your
questions, but as to the core cause of the exception...
: 3. server.query(solrQuery) throws SolrServerException. How can concurrent
: Solr queries trigger a Too many open files exception?
...bear in mind that (as I understand it) ...
Just bumping the topic, looking for answers.
Another question:
why doesn't SolrJ close the StringWriter and OutputStreamWriter?
Thanks
2010/6/28 Anderson vasconcelos
Thanks for the responses.
I instantiate one instance per request (per delete query, in my case).
I have a lot of concurrent processes. If I reuse the same instance (to send,
delete and remove data) in Solr, will I have trouble?
My concern is that if I do this, Solr will commit documents with data from oth...
Hi Anderson,
If you are using SolrJ, it's recommended to reuse the same instance per solr
server.
http://wiki.apache.org/solr/Solrj#CommonsHttpSolrServer
But there are other scenarios which may cause this situation:
1. Other application running in the same Solr JVM which doesn't close
properly
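A minimal sketch of that recommendation, with a hypothetical Solr URL: CommonsHttpSolrServer is thread-safe, so one instance per Solr endpoint can be shared by all request threads instead of being created per delete query.

import java.net.MalformedURLException;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

// One shared client per Solr endpoint; creating one per request leaks sockets and descriptors.
public class SolrClientHolder {
    private static final SolrServer SERVER;
    static {
        try {
            SERVER = new CommonsHttpSolrServer("http://localhost:8983/solr");
        } catch (MalformedURLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
    public static SolrServer get() {
        return SERVER;
    }
}

The URL and the holder class are illustrative; the point is only that the instance is created once and reused.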
This probably means you're opening new readers without closing
old ones. But that's just a guess. I'm guessing that this really
has nothing to do with the delete itself, but the delete is what's
finally pushing you over the limit.
I know this has been discussed before, try searching the mail
archi
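If you are opening Lucene readers yourself (Solr normally manages this for you), the pattern Erick is hinting at is simply deterministic close; a sketch, with an assumed index path:

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SearchOnce {
    public static void search(File indexDir) throws Exception {
        Directory dir = FSDirectory.open(indexDir);
        IndexReader reader = IndexReader.open(dir); // every open reader pins a set of segment files
        try {
            IndexSearcher searcher = new IndexSearcher(reader);
            // ... run queries with the searcher ...
        } finally {
            reader.close(); // without this, re-opening per request steadily eats file descriptors
            dir.close();
        }
    }
}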
> If you had gone over 2GB of actual buffer *usage*, it would have
> broke... Guaranteed.
> We've now added a check in Lucene 2.9.1 that will throw an exception
> if you try to go over 2048MB.
> And as the javadoc says, to be on the safe side, you probably
> shouldn't go too near 2048 - perhaps 2
> > when you store raw (non
> > tokenized, non indexed) "text" value with a document (which almost everyone
> > does). Try to store 1,000,000 documents with 1000 bytes non-tokenized field:
> > you will need 1Gb just for this array.
>
> Nope. You shouldn't even need 1GB of buffer space for that.
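In Lucene terms the knob being discussed is the IndexWriter RAM buffer; a sketch against the 2.9/3.x API (the index path is a placeholder, and the comments restate the limit described above):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class WriterSetup {
    public static IndexWriter open(File indexDir) throws Exception {
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(indexDir),
                new StandardAnalyzer(Version.LUCENE_29),
                IndexWriter.MaxFieldLength.UNLIMITED);
        // RAM (in MB) used to buffer added documents before flushing a segment.
        writer.setRAMBufferSizeMB(256);     // a few hundred MB is usually plenty
        // writer.setRAMBufferSizeMB(4096); // rejected: per the discussion above, 2.9.1+ throws past ~2048 MB
        return writer;
    }
}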
On Sat, Oct 24, 2009 at 12:25 PM, Fuad Efendi wrote:
> This JavaDoc is incorrect especially for SOLR,
It looks correct to me... if you think it can be clarified, please
propose how you would change it.
> when you store raw (non
> tokenized, non indexed) "text" value with a document (which almost
On Sat, Oct 24, 2009 at 12:18 PM, Fuad Efendi wrote:
>
> Mark, I don't understand this; of course it is use case specific, I haven't
> seen any terrible behaviour with 8Gb
If you had gone over 2GB of actual buffer *usage*, it would have
broke... Guaranteed.
We've now added a check in Lucene 2.9.1 that will throw an exception if you try to go over 2048MB.
Mark, I don't understand this; of course it is use case specific, I haven't
seen any terrible behaviour with 8Gb... 32Mb is extremely small for
Nutch-SOLR-like applications, but it is acceptable for Liferay-SOLR...
Please note also, I have some documents with the same IDs updated many thousands
of times
Thanks for pointing to it, but it is so obvious:
1. "Buffer" is used as a RAM storage for index updates
2. "int" has 2 x Gb different values (2^^32)
3. We can have _up_to_ 2Gb of _Documents_ (stored as key->value pairs,
inverted index)
In case of 5 fields which I have, I need 5 arrays (up to 2Gb
and 1 minute update) with default SOLR settings (32Mb buffer). I
increased buffer to 8Gb on Master, and it triggered significant indexing
performance boost...
-Fuad
http://www.linkedin.com/in/liferay
> -Original Message-
> From: Mark Miller [mailto:markrmil...@gmail.com]
> Sent: October-23-09 3:03 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Too many open files
I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.
Fuad Efendi wrote:
I was partially wrong; this is what Mike McCandless (Lucene-in-Action, 2nd
edition) explained at Manning forum:
mergeFactor of 1000 means you will have up to 1000 segments at each level.
A level 0 segment means it was flushed directly by IndexWriter.
After you have 1000 such segments, they are merged...
> 1024
Ok, it will lower the frequency of buffer flushes to disk (a buffer flush happens
when it reaches capacity, due to a commit, etc.); it will improve performance. It
is an internal buffer used by Lucene. It is not the total memory of Tomcat...
> 100
It will deal with 100 segments, and each segment wi
Make it 10:
10
-Fuad
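The values quoted above are solrconfig.xml index settings whose tags were stripped by the archive; going by the discussion they are the Lucene RAM buffer and the merge factor, which would look roughly like this (element names are standard Solr config, values illustrative):

<!-- solrconfig.xml, in <indexDefaults>/<mainIndex> on Solr 1.x -->
<ramBufferSizeMB>32</ramBufferSizeMB>  <!-- Lucene's in-memory buffer; flushed on commit or when full -->
<mergeFactor>10</mergeFactor>          <!-- segments per level; 100 or 1000 means far more open files -->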
> -Original Message-
> From: Ranganathan, Sharmila [mailto:sranganat...@library.rochester.edu]
> Sent: October-23-09 1:08 PM
> To: solr-user@lucene.apache.org
> Subject: Too many open files
>
> Hi,
>
> I am getting too many open files error.
>
> Usually I test on
You may try to set useCompoundFile to true; this way indexing
should use far fewer file descriptors, but it will slow down indexing, see
http://issues.apache.org/jira/browse/LUCENE-888.
Try to see if the lack of descriptors is related only to Solr. How
are you using indexing, by
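The compound-file setting lives in solrconfig.xml; a minimal sketch of the change being suggested:

<!-- solrconfig.xml: pack each segment into a single compound file, so far fewer
     descriptors are needed, at some cost in indexing speed (see LUCENE-888) -->
<useCompoundFile>true</useCompoundFile>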
try ulimit -n5 or something
On Mon, Apr 6, 2009 at 6:28 PM, Jarek Zgoda wrote:
> I'm indexing a set of 50 small documents. I'm adding documents in
> batches of 1000. At the beginning I had a setup that optimized the index
> each 1 documents, but quickly I had to optimize after adding
On Monday, 14.07.2008, at 09:50 -0400, Yonik Seeley wrote:
> Solr uses reference counting on IndexReaders to close them ASAP (since
> relying on gc can lead to running out of file descriptors).
>
How do you force them to close ASAP? I use File and FileOutputStream
objects, I close the output st
On Mon, Jul 14, 2008 at 9:52 AM, Fuad Efendi <[EMAIL PROTECTED]> wrote:
Even Oracle requires 65536; MySQL+MyISAM depends on the number of tables,
indexes, and client threads.
From my experience with Lucene, 8192 is not enough; leave space for the OS too.
A multithreaded application (in most cases) multiplies the number of files
by the number of threads (each thread needs its own ha
Solr uses reference counting on IndexReaders to close them ASAP (since
relying on gc can lead to running out of file descriptors).
-Yonik
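Solr does this inside its own searcher management; the same idea in bare Lucene terms, as a sketch (the reader argument stands in for whatever the application treats as its current reader):

import org.apache.lucene.index.IndexReader;

public class PinnedSearch {
    public static void searchWith(IndexReader reader) throws Exception {
        reader.incRef();     // pin the reader for the duration of this request
        try {
            // ... run the query against this reader ...
        } finally {
            reader.decRef(); // when the count reaches zero, the reader (and its files) is closed
        }
    }
}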
On Mon, Jul 14, 2008 at 9:15 AM, Brian Carmalt <[EMAIL PROTECTED]> wrote:
Hello,
I have a similar problem, not with Solr, but in Java. From what I have
found, it is a usage and OS problem: it comes from using too many files, and
the time it takes the OS to reclaim the fds. I found the recommendation
that System.gc() should be called periodically. It works for me. May not
be
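Closing streams explicitly (ideally in finally blocks) is the real fix, but the periodic System.gc() workaround Brian describes can be scripted along these lines (a sketch; the interval is arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.gc(); // nudge the JVM to finalize unreachable streams and release their descriptors
            }
        }, 5, 5, TimeUnit.MINUTES);
    }
}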
On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
> now we have set the limit to ~1 files
> but this is not the solution - the amount of open files increases
> permanently.
> Sooner or later, this limit will be exhausted.
How can you tell? Are you seeing descriptor use
now we have set the limit to ~1 files
but this is not the solution - the amount of open files increases
permanently.
Sooner or later, this limit will be exhausted.
Fuad Efendi wrote:
Have you tried [ulimit -n 65536]? I don't think it relates to files
marked for deletion...
==
http://www.linkedin.com/in/liferay
Sooner or later, the system crashes with the message "Too many open files"
I'm not sure what "large db" you are referring to (indexing a RDBMS to Solr?),
but the first thing to do is run ulimit -a (or some flavour of it, depending on
the OS) and increase the open file descriptors limit if the one you see there
is just very low (e.g. 1024). If that limit is not low, ma
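Checking and raising the limit, assuming Linux and a dedicated user running Solr (the user name is an assumption):

# show all limits; "open files" is the one that matters here
ulimit -a
ulimit -n        # just the descriptor limit, often only 1024 by default
# raise it persistently for the user running Solr, e.g. in /etc/security/limits.conf:
#   solr  soft  nofile  65536
#   solr  hard  nofile  65536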
thanks Yonik
- Original Message -
From: "Yonik Seeley" <[EMAIL PROTECTED]>
To:
Sent: Monday, March 03, 2008 9:20 AM
Subject: Re: too many open files , is it a leak of handler there?
...should have it.
-Yonik
Yonik, can I ask where I can get the fixed code, or the patch? I cannot find
it in JIRA. I am new here. Thank you ^_^
- Original Message -
From: "Yonik Seeley" <[EMAIL PROTECTED]>
To:
Sent: Saturday, March 01, 2008 10:53 AM
Subject: Re: too many open files , is it a
I just committed a fix for this.
Thanks for tracking this down!
-Yonik
2008/2/29 陈亮亮 <[EMAIL PROTECTED]>:
I think I have just fixed the problem with the close method in DirectSolrConnection,
and now the number of handles stays stable. Hope it will help other Solr
users ^_^
- Original Message -
From: "陈亮亮" <[EMAIL PROTECTED]>
To:
Sent: Saturday, March 01, 2008 9:33 AM
Subject: R
OK, I have compared DirectSolrConnection.java and SolrDispatchFilter.java,
and found that DirectSolrConnection really does not call req.close() as
SolrDispatchFilter does, which is said to free the resources. I guess it is the
leak of handles; I will try and see. ^_^
- Original Message
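The leak described above comes down to SolrQueryRequest.close() never being called. For anyone driving a core directly, the safe pattern is roughly the one SolrDispatchFilter uses; a sketch against the Solr 1.3-era embedded API (core, handler and params are assumed to be supplied by the caller):

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.LocalSolrQueryRequest;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrQueryResponse;
import org.apache.solr.request.SolrRequestHandler;

public class DirectQuery {
    public static SolrQueryResponse query(SolrCore core, SolrRequestHandler handler, SolrParams params) {
        SolrQueryRequest req = new LocalSolrQueryRequest(core, params);
        SolrQueryResponse rsp = new SolrQueryResponse();
        try {
            core.execute(handler, req, rsp);
            return rsp;
        } finally {
            req.close(); // releases the searcher reference; skipping this leaks index file handles
        }
    }
}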
...this section:
http://www.onjava.com/pub/a/onjava/2003/03/05/lucene.html#indexing_speed
Thanks,
Stu
-Original Message-
From: Ard Schrijvers <[EMAIL PROTECTED]>
Sent: Thu, August 9, 2007 10:52 am
To: solr-user@lucene.apache.org
Subject: RE: Too many open files
On 9-Aug-07, at 7:52 AM, Ard Schrijvers wrote:
ulimit -n 8192
Unless you have an old, creaky box, I highly recommend simply upping
your filedesc cap.
-Mike
Hello,
useCompoundFile set to true, should avoid the problem. You could also try to
set maximum open files higher, something like (I assume linux)
ulimit -n 8192
Ard
You're a gentleman and a scholar. I will donate the M&Ms to myself :).
Can you tell me from this snippet of my solrconfig.xml what I might
tweak to make this more betterer?
-KH
false
10
1000
2147483647
1
1000
1
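The element names in the snippet above were stripped by the archive; a typical <indexDefaults>/<mainIndex> block from that era looks like the following (values illustrative, not necessarily the poster's):

<useCompoundFile>false</useCompoundFile>
<mergeFactor>10</mergeFactor>
<maxBufferedDocs>1000</maxBufferedDocs>
<maxMergeDocs>2147483647</maxMergeDocs>
<maxFieldLength>10000</maxFieldLength>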
You could try committing updates more frequently, or maybe optimising the
index beforehand (and even during!). I imagine you could also change the
Solr config, if you have access to it, to tweak indexing (or index creation)
parameters - http://wiki.apache.org/solr/SolrConfigXml should be of use to you.