I set the heap to 8g but this doesn't have any effect and the problem is still
the same.
~# ps -eaf | grep solr
solr      3176     1  0 08:50 ?  00:00:00 /lib/systemd/systemd --user
solr      3177  3176  0 08:50 ?  00:00:00 (sd-pam)
solr      3238     1  0 08:50 ?
Hi All - any ideas on this? Anything I can try?
Thank you!
-Joe
On 2/26/2020 9:01 AM, Joe Obernberger wrote:
> Hi All - I have several solr collections all with the same schema. If
> I add a field to the schema and index it into the collection on which
> I added the field, it works fine. However
Hi All,
I have recently upgraded Solr to 8.4.1 and installed it as a service on a Linux
machine. Once I start the service, it stays up for 15-18 hours and then suddenly
stops without us shutting it down. In solr.log I found the error below. Can
somebody guide me on what values I should be increasing in Lin
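[Editor's note: for anyone hitting this later, a minimal sketch of raising the heap on a standard Linux service install. The path and variable below are the stock defaults from the install script; your install may differ.]

```shell
# Sketch: raising the Solr heap on a Linux service install (stock paths assumed).
# The service script sources an include file; edit it and set SOLR_HEAP:
sudo grep -n "SOLR_HEAP" /etc/default/solr.in.sh

# In /etc/default/solr.in.sh, set (controls both -Xms and -Xmx):
#   SOLR_HEAP="8g"

# Restart the service so the new heap takes effect:
sudo service solr restart
```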
As an addendum: to me it looks as if the cores are simply not loaded, although
the configuration is correct and has not been changed (apart from enlarging
the heap).
Torsten
-Original Message-
From: Bunde Torsten
Sent: Friday, March 6, 2020 09:33
To: solr-user@luce
Is it still giving you OOMs? That was the original problem statement. If not,
then you need to look at your Solr logs to see what error is reported. NOTE: If
you’re still getting OOMs, then there won’t be anything obvious in the logs.
Best,
Erick
> On Mar 6, 2020, at 06:44, Bunde Torsten wrot
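[Editor's note: a hedged sketch of checking for OOM kills. On a default install started via bin/solr, an OOM handler writes a marker log to the Solr logs directory before killing the JVM; the log directory below is the stock default and may differ on your install.]

```shell
# Sketch: did the JVM die from an OutOfMemoryError? (default log paths assumed)
ls /var/solr/logs/solr_oom_killer-*.log 2>/dev/null

# Also inspect the last entries of the main log before the stop:
tail -n 100 /var/solr/logs/solr.log
grep -i "OutOfMemoryError" /var/solr/logs/solr.log
```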
This one can be a bit tricky. You’re not running out of overall memory, but you
are running out of memory to allocate stacks. Which implies that, for some
reason, you are creating a zillion threads. Do you have any custom code?
You can take a thread dump and see what your threads are doing, and
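[Editor's note: a minimal sketch of taking a thread dump and watching the thread count, as suggested above. The way the pid is found is an assumption; read it however fits your install.]

```shell
# Sketch: inspecting thread usage of a running Solr JVM (pid lookup is a guess).
SOLR_PID=$(pgrep -f start.jar | head -n 1)

# Count live threads; a number that grows steadily over hours suggests a
# thread leak (each thread also consumes stack memory):
ls "/proc/$SOLR_PID/task" | wc -l

# Full thread dump with the JDK tools (either works):
jstack "$SOLR_PID" > solr-threads.txt
# jcmd "$SOLR_PID" Thread.print > solr-threads.txt
```

Repeating the thread count a few minutes apart makes a leak obvious without reading the full dump.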
Didn’t we talk about reloading the collections that share the schema after the
schema change via the collections API RELOAD command?
Best,
Erick
> On Mar 6, 2020, at 05:34, Joe Obernberger
> wrote:
>
> Hi All - any ideas on this? Anything I can try?
>
> Thank you!
>
> -Joe
>
>> On 2/26/2
Hi Erick,
We have custom code consisting of schedulers that run delta imports on our
cores; I have added that custom code as a jar and placed it in
server/solr-webapp/WEB-INF/lib. Basically we are fetching the JNDI datasource
configured in jetty.xml (Oracle) and creating connection objec
I assume you recompiled the jar file? Re-using the same one compiled against 5x
is unsupported; nobody will be able to help until you recompile.
Once you’ve done that, if you still have the problem you need to take a thread
dump to see if your custom code is leaking threads, that’s my number one
Thank you Erick - I have no record of that, but will absolutely give the
API RELOAD a shot! Thank you!
-Joe
On 3/6/2020 10:26 AM, Erick Erickson wrote:
> Didn’t we talk about reloading the collections that share the schema after the
> schema change via the collections API RELOAD command?
> Best,
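[Editor's note: a sketch of the RELOAD suggestion above. The host, port, and collection names are placeholders; substitute the collections that share the updated schema.]

```shell
# Sketch: reload every collection that shares the changed schema/configset
# via the Collections API (names and host are placeholders).
for c in collection1 collection2 collection3; do
  curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=$c"
done
```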
We are looking at upgrading our SolrCloud instances from 7.2 to the most recent
version of Solr, 8.4.1 at this time. The last time we upgraded a major Solr
release we were able to upgrade the index files to the newer version, which
prevented us from having an outage. Subsequently we've reindexed a
Hi Webster,
When we upgraded from 7.5 to 8.1 we ran into a very strange issue:
https://lucene.472066.n3.nabble.com/Stored-field-values-don-t-update-after-7-gt-8-upgrade-td4442934.html
We ended up having to do a full re-index to solve this issue, but if you're
going to do this upgrade I would lov
You're best off doing a full reindex to a single Solr 8.x cloud node and then,
when done, start taking down 7.x nodes, upgrading them to 8.x, and adding them
to the new cluster. Upgrading indexes has so many potential issues,
> On Mar 6, 2020, at 9:21 PM, lstusr 5u93n4 wrote:
>
> Hi Webster,
>
> When
When you say “reindexed”, how exactly was that done? Because if you didn’t
start from an empty index, you will have to re-index from scratch to use 8x.
Starting with 6.x, a marker is written whenever a segment is created indicating
what version was used. Whenever two or more segments are merged,
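[Editor's note: the per-segment version markers described above can be inspected with Solr's segments handler. A hedged sketch; core name, host, and port are placeholders.]

```shell
# Sketch: list the segments of a core and the Lucene version that wrote each
# one (core name and host are placeholders).
curl "http://localhost:8983/solr/mycore/admin/segments?wt=json"
# Each segment entry includes a "version" field; segments still carrying an
# old-version marker are a sign the index needs a from-scratch reindex
# rather than an in-place upgrade.
```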