From: Erick Erickson
Sent: 09 March 2020 21:13
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory error solr 8.4.1
I’m 99% certain that something in your custom jar is the culprit, otherwise
we’d have seen a _lot_ of these. TIMED_WAITING is usually just a listener
thread, but they shouldn’t b
p(Native Method)
> org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
> java.lang.Thread.run(Thread.java:748)
>
> Thanks and Regards,
> Srinivas Kashyap
>
> -Original Message-
> From: Erick Erickson
> Sent: 06 March 2020
> To: solr-user@lucene.apache.org
> Subject: Re: OutOfMemory error solr 8.4.1
I assume you recompiled the jar file? re-using the same one compiled against 5x
is unsupported, nobody will be able to help until you recompile.
Once you’ve done that, if you still have the problem you need to take a thread
dump to see if your custom code is leaking threads, that’s my number one
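As an illustration only (not something posted in the thread): besides running `jstack <pid>` against the Solr process, a quick way to see whether custom code is leaking threads is to count live threads from inside the same JVM and group them by name, so that pools like "pool-17-thread-3" collapse into one bucket and a leak stands out. The class below is a minimal sketch; where you hook it in (e.g. a debug request handler) is up to you.

    import java.util.Map;
    import java.util.stream.Collectors;

    public class ThreadCount {
        public static void main(String[] args) {
            // Snapshot of all live threads in this JVM.
            Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
            System.out.println("Live threads: " + all.size());
            // Group by thread name with digits replaced, so numbered pool threads
            // fall into the same bucket; a steadily growing bucket is a leak candidate.
            Map<String, Long> byName = all.keySet().stream()
                .collect(Collectors.groupingBy(
                    t -> t.getName().replaceAll("\\d+", "#"),
                    Collectors.counting()));
            byName.forEach((name, count) -> System.out.println(count + "  " + name));
        }
    }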
Hi Erick,
We have custom code (schedulers that run delta imports on our cores), and
I have added that custom code as a jar placed in
server/solr-webapp/WEB-INF/lib. Basically we are fetching the JNDI datasource
configured in jetty.xml (Oracle) and creating connection objects
This one can be a bit tricky. You’re not running out of overall memory, but you
are running out of memory to allocate stacks. Which implies that, for some
reason, you are creating a zillion threads. Do you have any custom code?
You can take a thread dump and see what your threads are doing, and
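A minimal sketch of the usual fix for that kind of leak, assuming a custom scheduler like the one described above (the `runDeltaImport` method and the core list are made up for illustration): create one scheduler and one SolrClient at startup, reuse them for every run, and shut them down when the webapp stops, instead of creating new executors, threads, or clients per import.

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class DeltaImportScheduler implements AutoCloseable {
        private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        private final SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr").build();

        public void start(List<String> cores) {
            for (String core : cores) {
                // One repeating task per core; no new threads or clients are created per run.
                scheduler.scheduleAtFixedRate(() -> runDeltaImport(core), 0, 15, TimeUnit.MINUTES);
            }
        }

        private void runDeltaImport(String core) {
            // Placeholder: trigger the delta import for this core via `solr`.
        }

        @Override
        public void close() throws Exception {
            scheduler.shutdown();  // without this, every redeploy leaks the pool's threads
            solr.close();
        }
    }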
Hi All,
I have recently upgraded Solr to 8.4.1 and have installed Solr as a service on a
Linux machine. Once I start the service, it stays up for 15-18 hours and then
suddenly stops without us shutting it down. In solr.log I found the error below. Can
somebody guide me on which values I should be increasing in Linux
Hi Atita,
It would be good to consider upgrading to take advantage of improvements such
as lower memory consumption and better authentication.
On a side note, it is also good to upgrade to Solr 7 now, as Solr indexes
can only be upgraded from the previous major release version (Solr 6) to
the current major release version (Solr 7).
Hi Andrzej,
We've been weighing a lot of other reasons to upgrade our Solr for a
very long time, like better authentication handling, backups using CDCR, and the
new replication mode, and this has probably just given us another reason to
upgrade.
Thank you so much for the suggestion, I think it's good to
I know it’s not much help if you’re stuck with Solr 6.1 … but Solr 7.5 comes
with an alternative strategy for SPLITSHARD that doesn’t consume as much memory
and consumes almost no additional disk space on the leader. This strategy
can be turned on with the “splitMethod=link” parameter.
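For reference, a hedged SolrJ sketch of issuing that call (collection and shard names are placeholders; the request goes through the Collections API via a generic request rather than any version-specific helper, and "async" is used because shard splits are long-running):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.util.NamedList;

    public class SplitShardLink {
        public static void main(String[] args) throws Exception {
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("action", "SPLITSHARD");
            params.set("collection", "collection1");  // placeholder
            params.set("shard", "shard1");            // placeholder
            params.set("splitMethod", "link");        // the strategy described above (Solr 7.5+)
            params.set("async", "split-1");           // track the long-running split asynchronously

            try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
                NamedList<Object> rsp = client.request(
                    new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params));
                System.out.println(rsp);
            }
        }
    }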
> On 4 Oct
Hi Edwin,
Thanks for following up on this.
So here are the configs :
Memory - 30G (20G to Solr)
Disk - 1TB
Index - ~500G
and I think the reason this could be happening is that during the shard split,
the unsplit index plus the split index persist on the instance and may b
Hi Atita,
What is the amount of memory that you have in your system?
And what is your index size?
Regards,
Edwin
On Tue, 25 Sep 2018 at 22:39, Atita Arora wrote:
> Hi,
>
> I am working on a test setup with Solr 6.1.0 cloud with 1 collection
> sharded across 2 shards with no replication. When t
Hi,
I am working on a test setup with Solr 6.1.0 cloud with 1 collection
sharded across 2 shards with no replication. When a SPLITSHARD command is
triggered it throws "java.lang.OutOfMemoryError: Java heap space" every time.
I tried this with multiple heap settings of 8, 12 & 20G but every time it
doe
On 4/1/2017 4:17 PM, marotosg wrote:
> I am trying to load a big table into Solr using DataImportHandler and Mysql.
> I am getting OutOfMemory error because Solr is trying to load the full
> table. I have been reading different posts and tried batchSize="-1".
> https://wiki.apache.org/solr/DataImportHandlerFaq
Hi,
I am trying to load a big table into Solr using DataImportHandler and Mysql.
I am getting OutOfMemory error because Solr is trying to load the full
table. I have been reading different posts and tried batchSize="-1".
https://wiki.apache.org/solr/DataImportHandlerFaq
Do you have any idea what could be the issue?
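For context, batchSize="-1" in the DIH JDBC configuration is documented to map to the MySQL driver's row-streaming mode. A standalone JDBC sketch of the same setting (URL, credentials and query are placeholders), in case it helps to verify the behaviour outside of Solr:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StreamBigTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "password")) {
                // MySQL Connector/J only streams rows (instead of buffering the whole
                // result set in memory) with a forward-only, read-only statement and
                // fetchSize set to Integer.MIN_VALUE, which is what batchSize="-1" does.
                Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                stmt.setFetchSize(Integer.MIN_VALUE);
                try (ResultSet rs = stmt.executeQuery("SELECT id, title FROM big_table")) {
                    while (rs.next()) {
                        // hand one row at a time to the indexer instead of loading the table
                    }
                }
            }
        }
    }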
> … instead of creating a
> new ArrayList
Will do that, although I am not hunting for nanos, at least not at the moment
;)
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, 22 February 2016 15:57
To: solr-user@lucene.apache.org
Subject:
On 2/22/2016 1:55 AM, Clemens Wyss DEV wrote:
> SolrClient solrClient = getSolrClient( coreName, true );
> Collection batch = new ArrayList();
> while ( elements.hasNext() )
> {
> IIndexableElement elem = elements.next();
> SolrInputDocument doc = createSolrDocForElement( elem, provider, locale
> solrClient.add( documents ); // [2]
[2] is of course:
solrClient.add( batch ); // [2]
-Original Message-
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Monday, 22 February 2016 09:55
To: solr-user@lucene.apache.org
Subject: RE: RE: OutOfMemory when batchupdating from SolrJ
executorService.submit( () -> {
} );
Thanks for any advice. If needed, I can also provide the OOM heap dump ...
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Friday, 19 February 2016 18:59
To: solr-user@lucene.apache.org
Subject: Re: RE: OutOfMemory w
On 2/19/2016 3:08 AM, Clemens Wyss DEV wrote:
> The logic is somewhat this:
>
> SolrClient solrClient = new HttpSolrClient( coreUrl );
> while ( got more elements to index )
> {
> batch = create 100 SolrInputDocuments
> solrClient.add( batch )
> }
How much data is going into each of those SolrInputDocuments?
> From: Susheel Kumar [mailto:susheel2...@gmail.com]
> Sent: Friday, 19 February 2016 17:23
> To: solr-user@lucene.apache.org
> Subject: Re: OutOfMemory when batchupdating from SolrJ
>
> Clemens,
>
> First, allocating a higher or the right amount of heap memory is not a workaround
> but becomes a requirement dependin
Thanks Susheel,
but I am having problems in, and am talking about, SolrJ, i.e. the "client side
of Solr" ...
-Original Message-
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Friday, 19 February 2016 17:23
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory when batchupdating from SolrJ
Sent: Friday, 19 February 2016 14:42
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory when batchupdating from SolrJ
When you run your SolrJ client indexing program, can you increase the heap size
as below? I guess it may be on your client side that you are running into
OOM... or please share the exact error if the below doesn't wo
Sent: Friday, 19 February 2016 11:09
To: solr-user@lucene.apache.org
Subject: RE: OutOfMemory when batchupdating from SolrJ
The char[] which occupies 180MB has the following "path to root"
char[87690841] @ 0x7940ba658 shopproducts#...
|- java.lang.Thread @ 0x7321d9b80 SolrUtil execu
-Original Message-
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Friday, 19 February 2016 09:07
To: solr-user@lucene.apache.org
Subject: OutOfMemory when batchupdating from SolrJ
Environment: Solr 5.4.1
I am facing OOMs when batch updating from SolrJ. I am seeing approx 30'000(!)
SolrInputDocument instances, although my batch size is 100. I.e. I call
solrClient.add( documents ) for every 100 documents only. So I'd expect to see
at most 100 SolrInputDocuments in memory at any given time
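A minimal sketch of the batching pattern under discussion (the element iteration and the document-building function stand in for the IIndexableElement / createSolrDocForElement names quoted above): clearing the batch after every add is what should keep only ~100 documents reachable at a time, so if 30'000 instances show up in a heap dump, something else is still holding references to them (for example a queue, a retry buffer, or the elements themselves).

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.function.Function;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchIndexer {
        static <T> void indexAll(SolrClient solrClient, Iterator<T> elements,
                                 Function<T, SolrInputDocument> toDoc) throws Exception {
            List<SolrInputDocument> batch = new ArrayList<>();
            while (elements.hasNext()) {
                batch.add(toDoc.apply(elements.next()));
                if (batch.size() == 100) {
                    solrClient.add(batch);  // send the batch
                    batch.clear();          // drop references so the docs can be garbage collected
                }
            }
            if (!batch.isEmpty()) {
                solrClient.add(batch);
            }
            solrClient.commit();
        }
    }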
Sorry for the delay -> https://issues.apache.org/jira/browse/SOLR-7646
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Wednesday, 3 June 2015 17:39
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered
-Original Message-
> From: Mark Miller [mailto:markrmil...@gmail.com]
> Sent: Wednesday, 3 June 2015 14:23
> To: solr-user@lucene.apache.org
> Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not
> triggered
>
> We will have to find a way to deal with thi
On 6/3/2015 1:41 AM, Clemens Wyss DEV wrote:
>> The oom script just kills Solr with the KILL signal (-9) and logs the kill.
> I know. But my feeling is that not even this "happens", i.e. the script is
> not being executed. At least I see no solr_oom_killer-$SOLR_PORT-$NOW.log
> file ...
>
> B
Hi Mark,
what exactly should I file? What needs to be added/appended to the issue?
Regards
Clemens
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Wednesday, 3 June 2015 14:23
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered
We will have to find a way to deal with this long term. Browsing the code
I can see a variety of places where problematic exception handling has been
introduced since this all was fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller wrote:
> File a JIRA issue please. That OOM Exception is get
File a JIRA issue please. That OOM Exception is getting wrapped in a
RuntimeException it looks. Bug.
- Mark
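For illustration of the wrapping pattern being described (the `doWork` name is made up, and this is not Solr's actual code): code that catches Throwable and re-throws hides java.lang.OutOfMemoryError, an Error, behind a RuntimeException, whereas catching only Exception lets the OOME propagate unchanged.

    public class WrapDemo {
        static void doWork() { /* placeholder for indexing work */ }

        static void wrapsEverything() {
            try {
                doWork();
            } catch (Throwable t) {
                // An OutOfMemoryError thrown inside doWork() resurfaces here as a
                // RuntimeException, so callers never see the original OOME.
                throw new RuntimeException(t);
            }
        }

        static void wrapsOnlyExceptions() {
            try {
                doWork();
            } catch (Exception e) {
                // Errors such as OutOfMemoryError are not Exceptions, so they
                // propagate unchanged past this handler.
                throw new RuntimeException(e);
            }
        }
    }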
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV
wrote:
> Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
> for Solr.
>
> I am seeing the following OOMs:
> ERROR -
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, 3 June 2015 09:16
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not
triggered
On 6/3/2015 12:20 AM, Clemens Wyss DEV wrote:
> Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available for
> Solr.
>
> I am seeing the following OOMs:
> ERROR - 2015-06-03 05:17:13.317; [ customer-1-de_CH_1]
> org.apache.solr.common.SolrException; null:java.lang.RuntimeExcepti
Context: Lucene 5.1, Java 8 on Debian. 24G of RAM, of which 16G is available for
Solr.
I am seeing the following OOMs:
ERROR - 2015-06-03 05:17:13.317; [ customer-1-de_CH_1]
org.apache.solr.common.SolrException; null:java.lang.RuntimeException:
java.lang.OutOfMemoryError: Java heap space
> …only increase the time the app operates prior to crash.
>
> This is purely from the JVM side of things. You may want to read up more on
> PermGen to know various problem scenarios.
>
> pozdrawiam,
> LAFK
>
> 2015-05-18 4:07 GMT+02:00 Zheng Lin Edwin Yeo :
Hi,
I've recently upgraded my system to 16GB RAM. While there's no more
OutOfMemory due to the physical memory being full, I now get
"java.lang.OutOfMemoryError: PermGen space". This didn't happen previously,
as I think the physical memory ran out first.
This occ
Hi,
I'm using SortedMapBackedCache for my child entities. When I use this I'm
getting an OutOfMemory exception and the records are not getting indexed. I've
increased my heap size to 3GB, but still the same result. Is there a way I
can configure it to index 1L (100,000) records and clea
Sent: Tuesday, November 18, 2014 2:45:46 PM GMT -08:00 US/Canada Pacific
Subject: Re: OutOfMemory on 28 docs with facet.method=fc/fcs
On 11/18/2014 3:06 PM, Mohsin Beg Beg wrote:
> Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for
> the fields. So assuming eac
Mohsin Beg Beg [mohsin@oracle.com] wrote:
> Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for
> the fields.
To get the seed for the concrete faceting resolving, yes. That still leaves the
mapping and the counting structures.
> So assuming each field has a unique
On 11/18/2014 3:06 PM, Mohsin Beg Beg wrote:
> Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for
> the fields. So assuming each field has a unique term across the 28 rows, a
> max of 28 * 15 unique small strings (<100bytes), should be in the order of
> 1MB. For 100 co
t...@statsbiblioteket.dk
To: solr-user@lucene.apache.org
Sent: Tuesday, November 18, 2014 12:34:08 PM GMT -08:00 US/Canada Pacific
Subject: RE: OutOfMemory on 28 docs with facet.method=fc/fcs
Mohsin Beg Beg [mohsin@oracle.com] wrote:
> I am getting OOM when faceting on numFound=28.
Mohsin Beg Beg [mohsin@oracle.com] wrote:
> I am getting OOM when faceting on numFound=28. The receiving
> solr node throws the OutOfMemoryError even though there is 7gb
> available heap before the faceting request was submitted.
fc and fcs faceting memory overhead is (nearly) independent of t
Hi,
I am getting OOM when faceting on numFound=28. The receiving solr node throws
the OutOfMemoryError even though there is 7gb available heap before the
faceting request was submitted. If a different solr node is selected that one
fails too. Any suggestions?
1) Test setup is:
100 collecti
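Consistent with the explanation above, the fc/fcs accounting structures scale with the index (documents and unique terms in the faceted field), not with the 28 hits. As an illustration only (field name, query and client setup are placeholders, using current SolrJ classes), facet.method=enum, which walks terms and uses the filterCache instead of building those per-field structures, can be tried per request like this:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class EnumFacetExample {
        public static void main(String[] args) throws Exception {
            try (SolrClient client = new HttpSolrClient.Builder(
                     "http://localhost:8983/solr/collection1").build()) {
                SolrQuery q = new SolrQuery("id:(1 OR 2 OR 3)");  // placeholder query with few hits
                q.setRows(0);
                q.setFacet(true);
                q.addFacetField("category");                      // placeholder field
                q.set("facet.method", "enum");                    // avoid the fc/fcs structures;
                                                                  // note: enum can fill the filterCache
                                                                  // on high-cardinality fields
                QueryResponse rsp = client.query(q);
                System.out.println(rsp.getFacetField("category").getValues());
            }
        }
    }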
Greg and I are talking about the same type of parallelism.
We do the same thing - if I know there are 10,000 results, we can chunk
that up across multiple worker threads up front without having to page
through the results. We know there are 10 chunks of 1,000, so we can have
one thread process 0-100
Sorry, I meant one thread requesting records 1 - 1000, whilst the next
thread requests 1001 - 2000 from the same ordered result set. We've
observed several of our customers trying to harvest our data with
multi-threaded scripts that work like this. I thought it would not work
using cursor marks...
On Mon, Mar 17, 2014 at 7:14 PM, Greg Pendlebury
wrote:
> My suspicion is that it won't work in parallel
Deep paging with cursorMark does work with distributed search
(assuming that's what you meant by "parallel"... querying sub-shards
in parallel?).
-Yonik
http://heliosearch.org - solve Solr GC
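For anyone hitting the OOM described below with start/rows paging, a hedged sketch of the cursorMark loop in modern SolrJ (the collection URL and the "id" uniqueKey are assumptions): the sort must include the uniqueKey, and the loop stops when the cursor stops changing.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    public class CursorMarkWalk {
        public static void main(String[] args) throws Exception {
            try (SolrClient client = new HttpSolrClient.Builder(
                     "http://localhost:8983/solr/collection1").build()) {
                SolrQuery q = new SolrQuery("*:*");
                q.setRows(1000);
                q.setSort(SolrQuery.SortClause.asc("id"));  // cursors require a sort on the uniqueKey
                String cursor = CursorMarkParams.CURSOR_MARK_START;
                while (true) {
                    q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
                    QueryResponse rsp = client.query(q);
                    // process rsp.getResults() here
                    String next = rsp.getNextCursorMark();
                    if (cursor.equals(next)) {
                        break;  // cursor unchanged: no more results
                    }
                    cursor = next;
                }
            }
        }
    }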
> …instance to 3 node Solr 4.7 cluster).
>
> Part of our application does an automated traversal of all documents that
> match a specific query. It does this by iterating through results by
> setting the start and rows parameters, starting with start=0 and rows=1000,
> then start=1000, rows=1000, start=2000, rows=1000, etc etc.
>
> We do this in parallel fashion with multiple workers on multiple nodes.
> It's easy to chunk up the work to be done by figuring out how many total
> results there are and then creating 'chunks' (0-1000, 1000-2000, 2000-3000)
> and sending each chunk to a worker in a pool of multi-threaded workers.
>
> This worked well for us with a single server. However upon upgrading to
> solr cloud, we've found that this quickly (within the first 4 or 5
> requests) causes an OutOfMemory error on the coordinating node that
> receives the query. I don't fully understand what's going on here, but it
> looks like the coordinating node receives the query and sends it to the
> shards
Hi,
Try running jstat to see if the heap is full. 4GB is not much and could
easily be eaten by the structures used for sorting, faceting, and caching.
Plug: SPM has a new feature that lets you send graphs with various metrics
to the Solr mailing list. I'd personally look at the GC graphs to see if GC
ti
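Not something the thread itself suggests, but as a minimal in-process complement to `jstat -gcutil <pid> 1000`, heap fill can also be read from inside (or, via JMX, remotely from) the Solr JVM:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapCheck {
        public static void main(String[] args) {
            // Heap usage of the current JVM, as reported by the Memory MXBean.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }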
Hi everyone,
My SolrCloud cluster (4.3.0) went into production a few days ago.
Docs are being indexed into Solr using the "/update" request handler, as a
POST request with a text/xml content type.
The collection is sharded into 36 pieces, each shard has two replicas.
There are 36 nodes (each
evancy calculations.
>
> ~ David
>
>
>
> -
> Author:
> http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
> >> …wraps nearly the entire globe. Is that what you intended?
> >>
> >> If this is what you intended, then you got bitten by this unfixed bug:
> >> https://issues.apache.org/jira/browse/LUCENE-4550
> >> As a work-around, you could split that horizontal line into two equal
> >> pieces
> >> and index them as separate values for the document.
> >>
> >> ~ David
> >>
>
that horizontal line into two equal
>> pieces
>> and index them as separate values for the document.
>>
>> ~ David
>>
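A sketch of that workaround at indexing time, using current SolrJ classes (the field name `geo_rpt`, the id and the coordinates are placeholders, and this assumes the WKT-style ENVELOPE(minX, maxX, maxY, minY) rectangle syntax): instead of one rectangle spanning nearly the whole globe, index two halves as separate values of a multiValued spatial field.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class SplitEnvelope {
        public static void main(String[] args) throws Exception {
            try (SolrClient client = new HttpSolrClient.Builder(
                     "http://localhost:8983/solr/collection1").build()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "line-1");
                // A single value like ENVELOPE(-179.9, 179.9, 0.3, 0.2) is the kind of
                // near-global horizontal line that triggers LUCENE-4550.
                // Workaround: split at longitude 0 and index the halves separately.
                doc.addField("geo_rpt", "ENVELOPE(-179.9, 0, 0.3, 0.2)");
                doc.addField("geo_rpt", "ENVELOPE(0, 179.9, 0.3, 0.2)");
                client.add(doc);
                client.commit();
            }
        }
    }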
0)
> > at
> > org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70)
> > at
> >
> org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:233)
> >
> > 21/01/2013 12:05:11 PM org.apache.solr.update.DirectUpdateHandler2
> > rollback
> > INFO: start rollback{flags=0,_version_=0}
> > 21/01/2013 12:05:11 PM org.apache.solr.update.DefaultSolrCoreState
> > newIndexWriter
> > INFO: Creating new IndexWriter...
> > 21/01/2013 12:05:11 PM org.apache.solr.update.DefaultSolrCoreState
> > newIndexWriter
> > INFO: Waiting until IndexWriter is unused... core=coreDap
> > 21/01/2013 12:05:11 PM org.apache.solr.update.DefaultSolrCoreState
> > newIndexWriter
> > INFO: Rollback old IndexWriter... core=coreDap
> > 21/01/2013 12:05:11 PM org.apache.solr.core.CachingDirectoryFactory get
> > INFO: return new directory for /DMS_dev/r01/solr/coreDap/data/index
> > forceNew:true
> > 21/01/2013 12:05:11 PM org.apache.solr.core.SolrDeletionPolicy onInit
> > INFO: SolrDeletionPolicy.onInit: commits:num=1
> >
> > commit{dir=NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@
> /DMS_dev/r01/solr/coreDap/data/index
> > lockFactory=org.apache.lucene.store.NativeFSLockFactory@7ab9fcb0;
> > maxCacheMB=48.0
> >
> maxMergeSizeMB=4.0),segFN=segments_mr,generation=819,filenames=[segments_mr]
> > 21/01/2013 12:05:11 PM org.apache.solr.core.SolrDeletionPolicy
> > updateCommits
> > INFO: newest commit = 819
> > 21/01/2013 12:05:11 PM org.apache.solr.update.DefaultSolrCoreState
> > newIndexWriter
> > INFO: New IndexWriter is ready to be used.
> > 21/01/2013 12:05:11 PM org.apache.solr.update.DirectUpdateHandler2
> > rollback
> > INFO: end_rollback
>
--
Javier Molina MACS
IT Consultant/Java Developer
(M) 0449 640 386
(e) javier.mol...@acsmail.net.au
To: solr-user@lucene.apache.org
Subject: Re: java.io.IOException: Map failed :: OutOfMemory
today the same exception:
INFO: [] webapp=/solr path=/update
params={waitSearcher=true&commit=true&wt=javabin&waitFlush=true&version=2}
status=0 QTime=10
Java(TM) SE Runtime Environment (build 1.6.0_33-b03) Java HotSpot(TM) 64-Bit
Server VM (build 20.8-b03, mixed mode)
…doesn't matter if the doc exists or not.
We use the SolrJ functionality to delete a list of ids.
The error always occurs during this deletion.
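If the delete request itself is what blows up, a hedged sketch of chunking the id list into smaller deleteById calls (chunk size and commit policy are arbitrary; the 2012-era SolrServer class offered the same deleteById(List) call as today's SolrClient):

    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;

    public class ChunkedDelete {
        static void deleteAll(SolrClient client, List<String> ids) throws Exception {
            final int chunk = 1000;
            for (int i = 0; i < ids.size(); i += chunk) {
                // Delete at most `chunk` ids per request instead of one huge request.
                List<String> slice = ids.subList(i, Math.min(i + chunk, ids.size()));
                client.deleteById(slice);
            }
            client.commit();
        }
    }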
Nov 12, 2012 5:16:41 PM org.apache.solr.update.SolrIndexWriter finalize
SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug
-- POSSIBLE RESOURCE LEAK!!!
is fixed your problem, but just a datapoint that
>>> might suggest that doing what you did is not such a bad thing.
>>>
>>> Michael Della Bitta
>>>
>>>
>>> Appinions, Inc. -- Where Influence Isn’t a Game.
>>> http://www.appinions.com
>>
>> On Wed, Jul 11, 2012 at 4:05 AM, Bruno Mannina wrote:
>>> Hi, some news this morning...
>>>
>>> I added -Xms1024m option and now it works?! no outofmemory ?!
>>>
>>> java -jar -Xms1024m -Xmx2048m start.jar
>>>
>>> Le 11/
Hi, some news this morning...
I added the -Xms1024m option and now it works?! No OutOfMemory?!
java -jar -Xms1024m -Xmx2048m start.jar
On 11/07/2012 09:55, Bruno Mannina wrote:
Hi Yury,
Thanks for your answer.
OK to increase memory, but I have a problem with that:
I have 8GB on my
Hi Yury,
Thanks for your answer.
OK to increase memory, but I have a problem with that:
I have 8GB on my computer but the JVM accepts only 2GB max with the
-Xmx option.
Is it normal?
Thanks,
Bruno
On 11/07/2012 03:42, Yury Kats wrote:
Sorting is a memory-intensive operation indeed.
Not
Sorting is a memory-intensive operation indeed.
Not sure what you are asking, but it may very well be that your
only option is to give JVM more memory.
On 7/10/2012 8:25 AM, Bruno Mannina wrote:
> Dear Solr Users,
>
> Each time I try to do a request with &sort=pubdate+desc
>
> I get:
> GRAVE
To complete my question:
after having this error, some fields (not all) aren't reachable; I get the
same error.
On 10/07/2012 14:25, Bruno Mannina wrote:
Dear Solr Users,
Each time I try to do a request with &sort=pubdate+desc
I get:
GRAVE: java.lang.OutOfMemoryError: Java heap space
Dear Solr Users,
Each time I try to do a request with &sort=pubdate+desc
I get:
GRAVE: java.lang.OutOfMemoryError: Java heap space
I use Solr 3.6, I have around 80M docs and my request gets around 160
results.
Actually for my test, I use Jetty:
java -jar -Xmx2g start.jar
PS: If I write 3
Unfortunately I really don't know ;) Every time I set forth to figure
things like this out I seem to learn some new way...
Maybe someone else knows?
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2011 at 2:15 PM, Shawn Heisey wrote:
> Michael,
>
> What is the best central plac
Michael,
What is the best central place on an rpm-based distro (CentOS 6 in my
case) to raise the vmem limit for specific user(s), assuming it's not
already correct? I'm using /etc/security/limits.conf to raise the open
file limit for the user that runs Solr:
ncindex    hard    nofile
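Not from the thread, but as an untested sketch only: the persistent per-user place for this on most Linux distros is /etc/security/limits.conf (or a file under /etc/security/limits.d/), and the item corresponding to `ulimit -v` is the address-space limit; the user name below is simply taken from the excerpt above.

    # address-space (virtual memory) limit, the persistent counterpart of `ulimit -v`
    ncindex    hard    as    unlimited
    ncindex    soft    as    unlimited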
OK, excellent. Thanks for bringing closure,
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2011 at 9:00 AM, Ralf Matulat wrote:
> Dear Mike,
> thanks for your reply.
> Just a couple of minutes ago we found a solution or - to be honest - where we
> went wrong.
> Our failure was
Dear Mike,
thanks for your reply.
Just a couple of minutes ago we found a solution or - to be honest - found where
we went wrong.
Our failure was the use of ulimit. We missed that ulimit sets the vmem limit
for each shell separately. So we set 'ulimit -v unlimited' in one shell,
thinking that we've done the
Are you sure you are using a 64 bit JVM?
Are you sure you really changed your vmem limit to unlimited? That
should have resolved the OOME from mmap.
Or: can you run "cat /proc/sys/vm/max_map_count"? This is a limit on
the total number of maps in a single process, that Linux imposes. But
the de
Good morning!
Recently we slipped into an OOME by optimizing our index. It looks like
it's related to the nio class and the memory handling.
I'll try to describe the environment, the error and what we did to solve
the problem. Nevertheless, none of our approaches was successful.
The environm
Thanks Shawn, that helps explain things.
So the issue there, with using maxWarmingSearchers to try and prevent out-of-control
RAM/CPU usage from overlapping on-deck searchers, combined with
replication... is if you're still pulling down replications very
frequently but using maxWarmingSearchers to prevent ov
On 12/14/2010 9:02 AM, Jonathan Rochkind wrote:
1. Will the existing index searcher have problems because the files
have been changed out from under it?
2. Will a future replication -- at which NO new files are available on
master -- still trigger a future commit on slave?
I'm not really sur
And the second replication will trigger a commit even if there are in
fact no new files to be transferred over to the slave, because there have been
no changes since the prior sync with the failed commit?
From: Upayavira [...@odoko.co.uk]
Sent: Tuesday, December 14, 20
From: Upayavira [...@odoko.co.uk]
> Sent: Tuesday, December 14, 2010 2:23 AM
> To: solr-user@lucene.apache.org
> Subject: RE: OutOfMemory GC: GC overhead limit exceeded - Why isn't
> WeakHashMap getting collected?
>
> The second commit will bring in all changes, from both syncs.
>
&g
To: solr-user@lucene.apache.org
Subject: RE: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap
getting collected?
The second commit will bring in all changes, from both syncs.
Think of the sync part as a glorified rsync of files on disk. So the
files will have been copied to disk, but the in memory index on the
@gmail.com] On Behalf Of Yonik Seeley
[yo...@lucidimagination.com]
Sent: Monday, December 13, 2010 10:41 PM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap
getting collected?
On Mon, Dec 13, 2010 at 9:27 PM, Jonathan