…reached. It has been increased to 65536 from 4096. This
> number can be checked with ulimit -Hn and ulimit -Sn
>
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelIm
On 5/8/2017 11:40 AM, Satya Marivada wrote:
> Started getting below errors/exceptions. I have listed the resolution
> inline. Could you please see if I am headed right?
>
> java.lang.OutOfMemoryError: unable to create new native thread
> java.io.IOException: Too many open files
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
The error below basically says that no more files can be opened because the
limit has been reached. The limit has been raised from 4096 to 65536. The
current values can be checked with ulimit -Hn and ulimit -Sn.
java.io.IOException: Too many open files
at
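A way to double-check from inside the JVM that the raised limit is actually in
effect (a limit raised in a login shell does not always apply to a service
started by init) is to read the descriptor counts the process itself sees via
the platform MXBean; this is the same kind of value Solr's system info handler
reports as openFileDescriptorCount. A minimal standalone sketch, not taken from
Satya's setup (the class name FdCheck is just for illustration):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // Descriptors currently held by this JVM process
            System.out.println("open file descriptors: " + unix.getOpenFileDescriptorCount());
            // The per-process limit this JVM sees (reflects ulimit -n at startup)
            System.out.println("max file descriptors:  " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("Descriptor counts are only exposed on Unix-like platforms.");
        }
    }
}

Running the same check inside the servlet container (or querying Solr's system
info handler) tells you more than ulimit in a shell, because limits.conf
entries only apply to the account and session the container actually runs
under.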
Mads Tomasgård Bjørgan wrote:
> That's true, but I was hoping there would be another way to solve this issue
> as it's not considered preferable in our situation.
What you are looking for might be
https://cwiki.apache.org/confluence/display/solr/IndexConfig+in+SolrConfig#IndexConfiginSolrConfig
solr-user@lucene.apache.org
> Subject: RE: Solr node crashes while indexing - Too many open files
>
> That's true, but I was hoping there would be another way to solve this issue
> as it's not considered preferable in our situation.
>
> Is it normal behavior for Solr to open
…solrconfig.xml for forcing Solr to close the files?
Any help is appreciated :-)
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: torsdag 30. juni 2016 11.41
To: solr-user@lucene.apache.org
Subject: RE: Solr node crashes while indexing - Too many open files
node crashes while indexing - Too many open files
>
> Hello,
> We're indexing a large set of files using Solr 6.1.0, running a SolrCloud by
> utilizing ZooKeeper 3.4.8.
>
> We have two ensembles - and both clusters are running on three of their own
> respective VMs (Cen
1_replica1]
o.a.s.u.StreamingSolrClients error
java.net.SocketException: Too many open files
(...)
2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1
x:DIPS_shard1_replica1] o.a.s.h.RequestHandlerBase
org.apache.solr.update.processor.DistributedUpdat
Hello everyone
I use ManifoldCF (a file crawler) to crawl and index file contents into
Solr 3.6.
ManifoldCF uses ExtractingRequestHandler to extract content from the files.
At some point an IOFileUploadException occurs, reporting too many open
files.
Does Solr open temporary files under /var/tmp/ a
at 10:47 PM, Gopal Patwa wrote:
>> I am using Solr 4.0 nightly build with NRT and I often get this
>> error during auto commit "Too many open files". I have search this forum
>> and what I found it is related to OS ulimit setting, please see below my
>> ulimit
at 7:47 PM, Gopal Patwa wrote:
> I am using Solr 4.0 nightly build with NRT and I often get this
> error during auto commit "Too many open files". I have search this forum
> and what I found it is related to OS ulimit setting, please see below my
> ulimit settings. I am not
more details
about what you're doing and how you're using Solr
Best
Erick
On Sun, Apr 1, 2012 at 10:47 PM, Gopal Patwa wrote:
> I am using Solr 4.0 nightly build with NRT and I often get this
> error during auto commit "Too many open files". I have search this forum
> an
I am using a Solr 4.0 nightly build with NRT and I often get this
error during auto commit: "Too many open files". I have searched this forum,
and what I found is that it is related to the OS ulimit setting; please see my
ulimit settings below. I am not sure what ulimit setting I should have for open
files
Colin:
FYI, you might consider just setting up the autocommit (or commitWithin if
you're using SolrJ) for some reasonable interval (I often use 10 minutes or so).
Even though you've figured it is a Tomcat issue, each
commit causes searcher re-opens, perhaps replication in a master/slave
setup, in
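If the indexer goes through SolrJ, the interval Erick mentions can also be
attached to each update request via commitWithin instead of issuing explicit
commits per batch. A hedged sketch (the URL, document, and ten-minute interval
are placeholders, and the client class name varies across SolrJ versions):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
    public static void main(String[] args) throws Exception {
        // One client instance, reused for every request the application makes
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "example-1");

        UpdateRequest req = new UpdateRequest();
        req.add(doc);
        // Ask Solr to make the update visible within ten minutes instead of
        // committing explicitly after every batch
        req.setCommitWithin(10 * 60 * 1000);
        req.process(server);
    }
}

With commitWithin, Solr schedules the commit itself, so searchers are not
re-opened (and index files are not churned) after every single add.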
Ah, good to know! Thank you!
I already had Jetty under suspicion, but we had this failure quite often
in October and November, when the bug was not yet reported.
-Kuli
Am 14.03.2012 12:08, schrieb Colin Howe:
After some more digging around I discovered that there was a bug reported
in jetty
o-commit. Even if you increase ulimits it will continue to swallow all
>>> available file descriptors.
>>>
>>>
>>> On Wed, 14 Mar 2012 10:13:55 +, Colin Howe
>>> wrote:
>>>
>>> Hello,
>>>>
>>>> We keep hitt
keep hitting the too many open files exception. Looking at lsof we have
a lot (several thousand) of entries like this:
java    19339  root  1619u  sock  0,7  0t0  682291383 can't identify protocol
However, netstat -a doesn't show any of these.
Can anyon
ble
> auto-commit. Even if you increase ulimits it will continue to swallow all
> available file descriptors.
>
>
> On Wed, 14 Mar 2012 10:13:55 +, Colin Howe
> wrote:
>
>> Hello,
>>
>> We keep hitting the too many open files exception. Looking at lsof we h
Are you running trunk and have auto-commit enabled? Then disable
auto-commit. Even if you increase ulimits it will continue to swallow
all available file descriptors.
On Wed, 14 Mar 2012 10:13:55 +, Colin Howe
wrote:
Hello,
We keep hitting the too many open files exception. Looking at
Hello,
We keep hitting the too many open files exception. Looking at lsof we have
a lot (several thousand) of entries like this:
java    19339  root  1619u  sock  0,7  0t0  682291383 can't identify protocol
However, netstat -a doesn't show any of t
I had this problem some time ago.
It happened on our staging (homologation) machine.
There were 3 Solr instances running: 1 master and 2 slaves.
My solution was: I stopped the slaves, deleted both data folders, ran an
optimize, and then started them again.
I tried to raise the OS open file limit first, but I think i
Thanks. They are set properly. But I misspelled the tomcat6 username in
limits.conf :(
On Wednesday 29 February 2012 18:08:55 Yonik Seeley wrote:
> On Wed, Feb 29, 2012 at 10:32 AM, Markus Jelsma
>
> wrote:
> > The Linux machines have proper settings for ulimit and friends, 32k open
> > files a
On Wednesday 29 February 2012 17:52:55 Sami Siren wrote:
> On Wed, Feb 29, 2012 at 5:53 PM, Markus Jelsma
>
> wrote:
> > Sami,
> >
> > As superuser:
> > $ lsof | wc -l
> >
> > But, just now, i also checked the system handler and it told me:
> > (error executing: ulimit -n)
>
> That's odd, you
On Wed, Feb 29, 2012 at 10:32 AM, Markus Jelsma
wrote:
> The Linux machines have proper settings for ulimit and friends, 32k open files
> allowed
Maybe you can expand on this point.
cat /proc/sys/fs/file-max
cat /proc/sys/fs/nr_open
Those take precedence over ulimit. Not sure if there are othe
On Wed, Feb 29, 2012 at 5:53 PM, Markus Jelsma
wrote:
> Sami,
>
> As superuser:
> $ lsof | wc -l
>
> But, just now, i also checked the system handler and it told me:
> (error executing: ulimit -n)
That's odd, you should see something like this there:
"openFileDescriptorCount":131,
"maxFi
Sami,
As superuser:
$ lsof | wc -l
But, just now, i also checked the system handler and it told me:
(error executing: ulimit -n)
This is rather strange, it seems. lsof | wc -l is not higher than 6k right now
and ulimit -n is 32k. Is lsof not to be trusted in this case or... something
else?
T
Hi Markus,
> The Linux machines have proper settings for ulimit and friends, 32k open files
> allowed so i suspect there's another limit which i am unaware of. I also
> listed the number of open files while the errors were coming in but it did not
> exceed 11k at any given time.
How did you check
exing to a single high-end node but with SolrCloud
things go down pretty soon.
First we get a Too Many Open Files error on all nodes almost at the same time.
When shutting down the indexer the nodes won't respond anymore except for an
Internal Server Error.
First the too many open files sta
Hi all,
We're running several instances of SOLR (3.5) on Apache Tomcat (6.0) on
Ubuntu 10.xx. After adding another instance (maybe the 14th or 15th for
the developers' sandboxes), Tomcat raises the exception
"java.net.SocketException: Too many open files".
After reading som
d maxTotalConnections
>>> to 100. There should be room enough.
>>>
>>> Sorry that I can't help you; we still have not solved the problem on
>>> our own.
>>>
>>> Greetings,
>>> Kuli
>>>
>>> Am 25.10.2011 22:03, schrieb Jo
that after a few minutes it starts throwing the exception
java.net.SocketException: Too many open files.
It seems to be related to the HttpClient instances. How can I restrict the
instances to a certain number, like a connection pool in dbcp etc.?
I am not experienced in Java, so please help me resolve this problem.
solr version: 3.4
regards
Jonty
> > SolrServer server = new CommonsHttpSolrServer("
> > http://localhost:8080/solr/core0");
> >
> > I noticed that after few minutes it start throwing exception
> > java.net.SocketException: Too many open files.
> > It seems that it related to instance of the H
w CommonsHttpSolrServer("
> http://localhost:8080/solr/core0");
>
> I noticed that after few minutes it start throwing exception
> java.net.SocketException: Too many open files.
> It seems that it related to instance of the HttpClient. How to resolved the
> instances to a ce
you reusing the server object for all of your requests?
> By default, Solr and SolrJ use persistent connections, meaning that
> sockets are reused and new ones are not opened for every request.
>
> -Yonik
> http://www.lucidimagination.com
>
>
> > I noticed that after
Hi,
I had the same problem, "Too many open files", but it was logged by the Tomcat
server. Please check your index directory; if there are too many index
files, please run the Solr optimize command. This exception is raised by
the OS of the server; you can google it for more background.
On 10/26/20
lhost:8080/solr/core0");
>>
>> I noticed that after few minutes it start throwing exception
>> java.net.SocketException: Too many open files.
>> It seems that it related to instance of the HttpClient. How to resolved the
>> instances to a certain no. Lik
> I noticed that after few minutes it start throwing exception
> java.net.SocketException: Too many open files.
> It seems that it related to instance of the HttpClient. How to resolved the
> instances to a certain no. Like connection pool in dbcp etc..
>
> I am not experienced on jav
e server object for all of your requests?
By default, Solr and SolrJ use persistent connections, meaning that
sockets are reused and new ones are not opened for every request.
-Yonik
http://www.lucidimagination.com
> I noticed that after few minutes it start throwing exception
> java.net.Socke
Hi,
I am using solrj, and for the connection to the server I am using an instance
of the solr server:
SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr/core0");
I noticed that after a few minutes it starts throwing the exception
java.net.SocketException: Too many open files
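Following Yonik's question above about reusing the server object: the usual fix
is to create the client once and share that single instance across all
requests, since it keeps its own pool of persistent connections. A hedged
sketch of that pattern (the URL is a placeholder and the holder class name is
made up for illustration):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class SolrClientHolder {
    // One shared, thread-safe instance for the whole JVM. Creating a new
    // CommonsHttpSolrServer per request leaks sockets, which is how leaked
    // connections typically show up in lsof.
    private static final SolrServer SERVER;

    static {
        try {
            SERVER = new CommonsHttpSolrServer("http://localhost:8080/solr/core0");
        } catch (java.net.MalformedURLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static SolrServer get() {
        return SERVER;
    }
}

Every piece of code that talks to Solr then calls SolrClientHolder.get()
instead of constructing its own client, and the number of open sockets stays
bounded.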
(11/06/20 16:16), Jason, Kim wrote:
Hi, Mark
I think the FileNotFoundException can be worked around by raising the ulimit.
I just want to know why more segments are created than the mergeFactor.
During the googling, I found contents concerning mergeFactor:
http://web.archiveorange.com/archive/v/bH0vUQzfY
12 shards on the same machine?
> Hi, All
>
> I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
> But solr raise java.io.FileNotFoundException (Too many open files).
> mergeFactor is just 5. How can this happen?
> Below is segments of some shard. That is too many segmen
and this.
Can someone explain it to me in more detail?
Thanks
Hi,
did you have checked the max opened files of your OS?
see: http://lj4newbies.blogspot.com/2007/04/too-many-open-files.html
2011/6/20 Jason, Kim
> Hi, All
>
> I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
> But solr raise java.io.FileNotFoundException (Too man
Hi, all,
I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
But Solr raises java.io.FileNotFoundException (Too many open files).
mergeFactor is just 5. How can this happen?
Below are the segments of one shard. That is far more segments than the mergeFactor.
What's wrong and how should I se
nd it) the limit on open files is
actually a limit on open file *descriptors* which includes network
sockets.
a google search for "java.net.SocketException: Too many open files" will
give you loads of results -- it's not specific to solr.
-Hoss
Just bumping the topic and looking for answers.
Hi,
I get this solrj error in the development environment.
org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
Too many open files
At the time there was no reindexing or any writes to the index. There were
only different queries generated using solrj to hit the solr server
8list%29>
>> <
>> http://wiki.apache.org/solr/UsingMailingLists?highlight=%28most%29%7C%28users%29%7C%28list%29
>> >
>> >
>> > Best
>> > Erick
>> >
>> > On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
>
.apache.org/solr/UsingMailingLists?highlight=%28most%29%7C%28users%29%7C%28list%29
> >
> >
> > Best
> > Erick
> >
> > On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
> > anderson.v...@gmail.com> wrote:
> >
> > > Hi all
> &
hlight=%28most%29%7C%28users%29%7C%28list%29>
>
> Best
> Erick
>
> On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
> anderson.v...@gmail.com> wrote:
>
> > Hi all
> > When i send a delete query to SOLR, using the SOLRJ i received this
> > exc
n.v...@gmail.com> wrote:
> Hi all
> When i send a delete query to SOLR, using the SOLRJ i received this
> exception:
>
> org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
> Too many open files
> 11:53:06,964 INFO [HttpMethodDirector] I/O exc
Hi all,
When I send a delete query to Solr using SolrJ, I receive this
exception:
org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
Too many open files
11:53:06,964 INFO [HttpMethodDirector] I/O exception
(java.net.SocketException) caught when processing request
r not.
I didn't even realize that the response being sent back from the admin/ping
request was an error until I checked it out using curl... everything looked
fine in Firefox.
Thanks
8,17 0 817622
> /var/solr/home/items/conf/healthcheck.txt
> ... and it keeps going
>
> and I've see it as high as 3000. I've had to update my ulimit to 1 to
> overcome this problem however I feel this is really just a bandaid to a
> deeper problem.
>
&
On 04/11/2010 10:12 PM, Blargy wrote:
Mark,
Cool. I didn't think that was the expected behavior. Will you guys at Lucid
be rolling this patch into your 1.4 distribution?
I don't know the release plans, but I'm sure this patch would be
included in the next release.
As per your 1.5 comme
ut do you happen to know an approximate release date of 1.5 (summer,
winter, 2011)?
Thanks for the patch!
On 04/11/2010 12:26 AM, Blargy wrote:
...
option httpchk GET /solr/items/admin/file?file=healthcheck.txt
...
so basically I am requesting that file to determine if that particular slave
is up or not. Is this the preferred way of doing this? I kind of like the
"Enable/Disable" feature of this hea
how many times I request
the file: http://localhost:8983/solr/items/admin/threads
18
19
15
Same goes for number of sockets:
# netstat -an | wc -l
120
xt
> ... and it keeps going
>
> and I've see it as high as 3000. I've had to update my ulimit to 1 to
> overcome this problem however I feel this is really just a bandaid to a
> deeper problem.
>
> Am I doing something wrong (Solr or HAProxy) or is this
to
overcome this problem; however, I feel this is really just a band-aid for a
deeper problem.
Am I doing something wrong (Solr or HAProxy) or is this a possible resource
leak?
Thanks for any input!
0, 2010 12:29:20 PM org.apache.solr.core.SolrCore execute
>> INFO: [items] webapp=/solr path=/admin/file params={file=healthcheck.txt}
>> status=0 QTime=0
>> Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
>> SEVERE: java.io.FileNotFoundException:
>>
ore.SolrCore execute
> INFO: [items] webapp=/solr path=/admin/file params={file=healthcheck.txt}
> status=0 QTime=0
> Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
> SEVERE: java.io.FileNotFoundException:
> /var/solr/home/items/conf/healthcheck.txt (Too man
ore.SolrCore execute
> INFO: [items] webapp=/solr path=/admin/file params={file=healthcheck.txt}
> status=0 QTime=0
> Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
> SEVERE: java.io.FileNotFoundException:
> /var/solr/home/items/conf/healthcheck.txt (Too man
] webapp=/solr path=/admin/file params={file=healthcheck.txt}
status=0 QTime=0
Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
SEVERE: java.io.FileNotFoundException:
/var/solr/home/items/conf/healthcheck.txt (Too many open files)
at java.io.FileInputStream.open(Native Method
> If you had gone over 2GB of actual buffer *usage*, it would have
> broke... Guaranteed.
> We've now added a check in Lucene 2.9.1 that will throw an exception
> if you try to go over 2048MB.
> And as the javadoc says, to be on the safe side, you probably
> shouldn't go too near 2048 - perhaps 2
> > when you store raw (non
> > tokenized, non indexed) "text" value with a document (which almost
everyone
> > does). Try to store 1,000,000 documents with 1000 bytes non-tokenized
field:
> > you will need 1Gb just for this array.
>
> Nope. You shouldn't even need 1GB of buffer space for that.
to:ysee...@gmail.com] On Behalf Of Yonik
Seeley
> Sent: October-24-09 12:27 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Too many open files
>
> On Sat, Oct 24, 2009 at 12:18 PM, Fuad Efendi wrote:
> >
> > Mark, I don't understand this; of course it is u
On Sat, Oct 24, 2009 at 12:25 PM, Fuad Efendi wrote:
> This JavaDoc is incorrect especially for SOLR,
It looks correct to me... if you think it can be clarified, please
propose how you would change it.
> when you store raw (non
> tokenized, non indexed) "text" value with a document (which almost
On Sat, Oct 24, 2009 at 12:18 PM, Fuad Efendi wrote:
>
> Mark, I don't understand this; of course it is use case specific, I haven't
> seen any terrible behaviour with 8Gb
If you had gone over 2GB of actual buffer *usage*, it would have
broken... Guaranteed.
We've now added a check in Lucene 2.9.
ginal Message-
> From: Fuad Efendi [mailto:f...@efendi.ca]
> Sent: October-24-09 12:10 PM
> To: solr-user@lucene.apache.org
> Subject: RE: Too many open files
>
> Thanks for pointing to it, but it is so obvious:
>
> 1. "Buffer" is used as a RAM storage for index update
Mark, I don't understand this; of course it is use case specific, I haven't
seen any terrible behaviour with 8Gb... 32Mb is extremely small for
Nutch-SOLR -like applications, but it is acceptable for Liferay-SOLR...
Please note also, I have some documents with same IDs updated many thousands
time
Thanks for pointing to it, but it is so obvious:
1. "Buffer" is used as a RAM storage for index updates
2. "int" has 2 x Gb different values (2^^32)
3. We can have _up_to_ 2Gb of _Documents_ (stored as key->value pairs,
inverted index)
In case of 5 fields which I have, I need 5 arrays (up to 2Gb
and 1 minute update) with default SOLR settings (32Mb buffer). I
> > increased buffer to 8Gb on Master, and it triggered significant indexing
> > performance boost...
> >
> > -Fuad
> > http://www.linkedin.com/in/liferay
> >
> >
> >
> >> -
r to 8Gb on Master, and it triggered significant indexing
>>> performance boost...
>>>
>>> -Fuad
>>> http://www.linkedin.com/in/liferay
>>>
>>>
>>>
>>>
>>>
>>>> -Original Message-
>>>
uad
>> http://www.linkedin.com/in/liferay
>>
>>
>>
>>
>>> -Original Message-
>>> From: Mark Miller [mailto:markrmil...@gmail.com]
>>> Sent: October-23-09 3:03 PM
>>> To: solr-user@lucene.apache.org
>>> Subject:
performance boost...
>
> -Fuad
> http://www.linkedin.com/in/liferay
>
>
>
>> -Original Message-
>> From: Mark Miller [mailto:markrmil...@gmail.com]
>> Sent: October-23-09 3:03 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Too many ope
significant indexing
performance boost...
-Fuad
http://www.linkedin.com/in/liferay
> -Original Message-
> From: Mark Miller [mailto:markrmil...@gmail.com]
> Sent: October-23-09 3:03 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Too many open files
>
> I wouldn't
(depending on schema)...
>
>
> mergeFactor=10 is default setting... ramBufferSizeMB=1024 means that you
> need at least double Java heap, but you have -Xmx1024m...
>
>
> -Fuad
>
>
>
>> I am getting too many open files error.
>>
>> Usually I test on a se
33784&tstart=0
So, in case of mergeFactor=100 you may have (theoretically) 1000 segments,
10-20 files each (depending on schema)...
mergeFactor=10 is the default setting... ramBufferSizeMB=1024 means that you
need at least double that in Java heap, but you have -Xmx1024m...
-Fuad
>
> I am gettin
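As a rough sanity check of these numbers against the ulimit -n of 256 reported
in the original post, the arithmetic can be sketched as follows (the segment
and per-segment file counts are assumptions lifted from the estimate above,
not measurements, and the class name is made up for illustration):

public class OpenFileEstimate {
    public static void main(String[] args) {
        int segments = 1000;       // theoretical worst case quoted for mergeFactor=100
        int filesPerSegment = 15;  // roughly 10-20 files per segment, schema dependent
        int ulimit = 256;          // ulimit -n reported for the server in this thread

        int needed = segments * filesPerSegment;
        // Roughly 15000 descriptors for the index alone against a limit of 256,
        // which is why the error shows up long before merging shrinks the
        // segment count.
        System.out.println("estimated descriptors: " + needed + ", allowed: " + ulimit);
    }
}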
org
> Subject: Too many open files
>
> Hi,
>
> I am getting too many open files error.
>
> Usually I test on a server that has 4GB RAM and assigned 1GB for
> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
> server and has following setting for Sol
Make it 10:
<mergeFactor>10</mergeFactor>
-Fuad
> -Original Message-
> From: Ranganathan, Sharmila [mailto:sranganat...@library.rochester.edu]
> Sent: October-23-09 1:08 PM
> To: solr-user@lucene.apache.org
> Subject: Too many open files
>
> Hi,
>
> I am getting too many open files er
Hi,
I am getting a "too many open files" error.
Usually I test on a server that has 4GB RAM with 1GB assigned to
Tomcat (set JAVA_OPTS=-Xms256m -Xmx1024m); ulimit -n is 256 for this
server, and it has the following settings in solrconfig.xml:
true
1024
100
2147483647
1
In
Thanks
On Tue, Jun 9, 2009 at 5:14 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Tue, Jun 9, 2009 at 4:32 PM, revas wrote:
>
> > Thanks ShalinWhen we use the external file dictionary (if there is
> > one),then it should work fine ,right for spell check,also is there any
>
On Tue, Jun 9, 2009 at 4:32 PM, revas wrote:
> Thanks ShalinWhen we use the external file dictionary (if there is
> one),then it should work fine ,right for spell check,also is there any
> format for this file
>
The external file should have one token per line. See
http://wiki.apache.org/s
Thanks Shalin. When we use the external file dictionary (if there is
one), it should work fine for spell check, right? Also, is there any
format for this file?
Regards
Sujatha
On Tue, Jun 9, 2009 at 3:03 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Tue, Jun 9, 2009 at 2:5
On Tue, Jun 9, 2009 at 2:56 PM, revas wrote:
> But the spell check componenet uses the n-gram analyzer and henc should
> work
> for any language ,is this correct ,also we can refer an extern dictionary
> for suggestions ,could this be in any language?
>
Yes it does use n-grams but there's an ana
>
>
> > 2) I have a scnenario where i have abt 20 webapps in a single
> container.We
> > get too many open files at index time /while restarting tomcat.
>
>
> Is that because of SpellCheckComponent?
>
>
> > The mergefactor is at default.
> >
>
ario where i have abt 20 webapps in a single container.We
> get too many open files at index time /while restarting tomcat.
Is that because of SpellCheckComponent?
> The mergefactor is at default.
>
> If i reduce the merge factor to 2 and optimize the index ,will the open
> files
Hi,
1) Does the spell check component support all languages?
2) I have a scenario where I have about 20 webapps in a single container. We
get too many open files at index time / while restarting Tomcat.
The mergeFactor is at the default.
If I reduce the merge factor to 2 and optimize the index
imized the index
> each 1 documents, but quickly I had to optimize after adding each batch
> of documents. Unfortunately, I'm still getting the "Too many open files" IO
> error on optimize. I went from mergeFactor of 25 down to 10, but I'm still
> unable to optimize
timize after adding each batch
> of documents. Unfortunately, I'm still getting the "Too many open files" IO
> error on optimize. I went from mergeFactor of 25 down to 10, but I'm still
> unable to optimize the index.
>
> I have configuration:
> false
>
I'm indexing a set of 50 small documents. I'm adding documents in
batches of 1000. At the beginning I had a setup that optimized the
index each 1 documents, but quickly I had to optimize after adding
each batch of documents. Unfortunately, I'm still getting the "T
he solution - the amount of open files increases
> >> permanantly.
> >> Earlier or later, this limit will be exhausted.
> >>
> >>
> >> Fuad Efendi schrieb:
> >> > Have you tried [ulimit -n 65536]? I don't think it relates to files
> >&
Yonik Seeley schrieb:
On Mon, Jul 14, 2008 at 10:17 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
Yonik Seeley schrieb:
On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]>
now we have set the limt to ~1 files
but this is not the solution - the amount of open fi
On Mon, Jul 14, 2008 at 10:17 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
> Yonik Seeley schrieb:
>>
>> On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]>
>>> now we have set the limt to ~1 files
>>> but this is not the solution - the amount of open files increases
>>> permanan
Yonik Seeley schrieb:
On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
now we have set the limit to ~1 files
but this is not the solution - the amount of open files increases
permanently.
Sooner or later, this limit will be exhausted.
How can you tell? Are
On Mon, Jul 14, 2008 at 9:52 AM, Fuad Efendi <[EMAIL PROTECTED]> wrote:
> Even Oracle requires 65536; MySQL+MyISAM depends on number of tables,
> indexes, and Client Threads.
>
> From my experience with Lucene, 8192 is not enough; leave space for OS too.
>
> Multithreaded application (in most cases
is limit will be exhausted.
Fuad Efendi schrieb:
> Have you tried [ulimit -n 65536]? I don't think it relates to files
> marked for deletion...
> ==
> http://www.linkedin.com/in/liferay
>
>
>> Earlier or later, the system crashes with message "Too many open files"
>
>
>> Fuad Efendi schrieb:
>> > Have you tried [ulimit -n 65536]? I don't think it relates to files
>> > marked for deletion...
>> > ======
>> > http://www.linkedin.com/in/liferay
>> >
>> >
>> >> Earlier or later, the system crashes with message "Too many open files"
>> >
>> >
>>
>>
>>
>
>
>
>
> Fuad Efendi schrieb:
> > Have you tried [ulimit -n 65536]? I don't think it relates to files
> > marked for deletion...
> > ==
> > http://www.linkedin.com/in/liferay
> >
> >
> >> Earlier or later, the system crashes with message "Too many open files"
> >
> >
>
>
>