Hi All,
I am using Solr 7.5 with a master-slave architecture.
I am getting:
"PERFORMANCE WARNING: Overlapping onDeckSearchers=2"
continuously in my master logs for all cores. Please help me resolve this.
Thanks & Regards,
Akreeti Agarwal
Thanks, it worked for me.
(I specify a custom Solr home, copy my config files and custom Solr plugins to the container, and boot
in SolrCloud mode.)
All things being the same, if I just change the version from 8.2.0-slim to 8.1.1-slim,
then I do not get any such warning.
On Tue, Aug 20, 2019 at 5:01 AM Furkan KAMACI
wrote:
> Hi Arnold,
>
> Such errors may arise due to file permission issues. I can run the latest
> version of Solr via the Docker image without any errors. Could you
> write which
Hi,
I am getting the following warning in the Solr admin UI logs; I did not get this
warning in Solr 8.1.1.
Please note that I am using the Solr Docker slim image from here -
https://hub.docker.com/_/solr/
Unable to load jetty, not starting JettyAdminServer
The warning isn't what's affecting performance; it's just an indication that
you're committing too often. Technically, you're opening searchers too often.
Searchers are opened for several reasons:
1> your autocommit with openSearcher=true interval expires
2> your soft commit interval expires
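To see what those intervals actually are on a live core, the Config API exposes the effective updateHandler settings, and the autoCommit/autoSoftCommit times are among its editable properties. A minimal sketch, assuming a core named "mycore" on the default port:

  # Inspect the effective autoCommit / autoSoftCommit settings
  curl "http://localhost:8983/solr/mycore/config/updateHandler"

  # Lengthen the soft-commit interval to 60 seconds via the Config API
  curl -X POST -H "Content-Type: application/json" \
    -d '{"set-property": {"updateHandler.autoSoftCommit.maxTime": 60000}}' \
    "http://localhost:8983/solr/mycore/config"

Whether lengthening these intervals is appropriate depends on how fresh the index needs to be; the point is simply to open new searchers less often than they take to warm.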
Hello,
We have Apache Solr version 6.2.1 installed on a server, and for a few days we
have been getting this warning in the Apache Solr log, which has affected the
performance of Solr queries and added latency to our app:
SolrCore [user_details] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
So we have followed
Our application runs on Tomcat. We found that when we deploy to Tomcat using
Jenkins or Ansible (a "hot" deployment), the ZK log problem starts. The only
solution we've been able to find is to bounce Tomcat.
Has anyone found the reason for THADC's case? I'm having the same issue with
my app deployed on WildFly. The log files are going crazy exactly the way
THADC has described.
(Actually, I was having a worse situation before I put the
`-Dzookeeper.sasl.client=false` system property into WildFly. Prior to that
Subject: Re: Open file limit warning when starting solr
Hello,
How did you install Solr? Have you followed these instructions?:
https://lucene.apache.org/solr/guide/7_0/taking-solr-to-production.html#taking-solr-to-production
solr-7.6.0$ bin/solr start
*** [WARN] *** Your open file limit is currently 1024.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in
your profile or solr.in.sh
Waiting up to 180 seconds to see Solr running on port 8983
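If the goal is only to silence the check (not to raise the limit), the warning itself names the switch. A minimal sketch, assuming a service-style install whose include file is /etc/default/solr.in.sh (the path varies; a tarball install keeps solr.in.sh in its bin/ directory instead):

  # Disable Solr's ulimit startup checks; this suppresses the warning
  # but does NOT raise the actual open-file limit
  echo 'SOLR_ULIMIT_CHECKS=false' | sudo tee -a /etc/default/solr.in.sh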
> the link that you sent.
> Should I uninstall and re-install?
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 5:45 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
Subject: Re: Open file limit warning when starting solr
Hello,
Strange... the Solr user is created during the installation... Which user is
your Solr running as?
> cat /etc/init.d/solr |grep -i "RUNAS="
>
Have you followed all the info in the link I sent? Because they talk about
* soft nofile 65000
* hard nofile 65000
The result is the same
-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: Wednesday, December 12, 2018 5:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr
Hello,
I mean change to the solr user using *su solr*
# sysctl -a|grep -i fs.file-max
> fs.file-max = 810202
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 4:04 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr
Hello,
The *su solr* command is important, because you change to the Solr user before
checking the limits again; then it shows its limits. Are you running the daemon
as the solr user?
Another command to check is:
> # sysctl -a|grep -i fs.file-max
> Sent: Wednesday, December 12, 2018 3:31 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> What output do you get with these commands?:
>
> > root@solr-temp01:/# ulimit -n
> > 1024
> > root@solr-temp01:/# su solr
rony@rony-VirtualBox:~$ ulimit -n
1024
rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
1024
-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: Wednesday, December 12, 2018 3:31 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr
Hi Daniel and thanks for the prompt reply. I tried that but I'm still getting
the file limit warning.
-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: Wednesday, December 12, 2018 12:14 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr
If you no longer wish to see this warning, set
> SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
> *** [WARN] *** Your Max Processes Limit is currently 15058.
> It should be set to 65000 to avoid operational disruption.
> If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in
> your profile or solr.in.sh
Hello, When launching Solr (Ubuntu 16.04) I'm getting:
*** [WARN] *** Your open file limit is currently 1024.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set
SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
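To actually raise the limits on a typical Linux install (rather than hide the warning), the usual route, and what the "* hard nofile 65000" lines earlier in this thread are aiming at, is /etc/security/limits.conf plus a fresh session for the Solr user. A minimal sketch, assuming Solr runs as a user named "solr":

  # /etc/security/limits.conf (or a file under /etc/security/limits.d/)
  solr  soft  nofile  65000
  solr  hard  nofile  65000
  solr  soft  nproc   65000
  solr  hard  nproc   65000

  # Verify from a fresh session as the solr user (new limits only apply to new logins)
  su - solr -c 'ulimit -n; ulimit -u'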
Hello,
we have a Solr setup with a pair of replicated Solr servers and a
three-node Zookeeper front end. This configuration is replicated in several
environments. In one environment, we frequently receive the following
zookeeper-related warning against each of the three webapps that have
Hi,
Thank you for your response. I managed to find the solution. On the client side,
I had to set the -Dzookeeper.sasl.client=false system property to disable SASL
authentication.
On Wed, Jan 31, 2018 at 6:15 PM, Shawn Heisey wrote:
> On 1/31/2018 9:07 AM, Tamás Barta wrote:
>
>> I'm using Solr 6.6.2 a
Hi,
I'm using Solr 6.6.2 and I use Zookeeper to handle SolrCloud. In the Java
client I use SolrJ this way:
*client = new CloudSolrClient.Builder().withZkHost(zkHostString).build();*
In the log I see the following:
*WARN [org.apache.zookeeper.SaslClientCallbackHandler] Could not login:
the client is being asked for a password
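For reference, the property just has to reach the JVM that runs the ZooKeeper client code. A minimal sketch (the jar name and file locations are hypothetical):

  # Standalone SolrJ client
  java -Dzookeeper.sasl.client=false -jar my-solrj-app.jar

  # On a Solr node, the same flag can be appended to SOLR_OPTS in solr.in.sh
  SOLR_OPTS="$SOLR_OPTS -Dzookeeper.sasl.client=false"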
Hi,
Can someone respond on this please? Or, can you direct me to the right
contact who may know about these issues.
Regards,
Ritesh
From: "Ritesh" <rved...@rediffmail.com>
Sent: Tue, 19 Dec 2017 18:06:13
To: <solr-user@lucene.apache.org>
Subject: Re: recurring Solr warning message
On 12/19/2017 5:36 AM, Ritesh wrote:
Hello, Can you help with the below issue please? My Solr box keeps giving
warnings about every 30 seconds:
WARN null ServletHandler /solr/sitecore/select
org.apache.solr.common.SolrException: application/x-www-form-urlencoded
invalid: missing key
It looks li
on this? Regards, Ritesh
From: GitHub Staff <supp...@github.com>
Sent: Thu, 14 Dec 2017 00:05:54
To: Ritesh <rved...@rediffmail.com>
Subject: Re: recurring Solr warning message
I see the same issue with Firefox so it's not strictly browser dependent. I
also have one installation that doesn't have the problem.
The JSON in the endpoint clearly has some duplicates
"jmx":{
"bootclasspath":"/usr/java/jdk1.8.0_51/jre/lib/resources.jar:/usr/java/jdk1.8.0_51/jre/lib/rt.jar:
So, from looking at those errors + a bit of Googling, it's complaining that
there are duplicate values in the Args list:
- Repeater: arg in commandLineArgs, Duplicate key: string:-XX:+UseGCLogFileRotation,
  Duplicate value: -XX:+UseGCLogFileRotation
- Repeater: arg in commandLineArgs, Duplicate key
I found that my boss's Solr admin console did display the Args; it is the only
install I have that does...
I do see errors in both consoles. I see more errors on the ones that don't
display Args.
Here are the errors that only show up when Args doesn't display:
Error: [ngRepeat:dupes] Duplicates in a repeater are not allowed.
For what it's worth, all of our Solr installations install Solr as a service.
On Tue, Nov 14, 2017 at 12:43 PM, Webster Homer
wrote:
I am using chrome Version 62.0.3202.94 (Official Build) (64-bit)
I only see a little icon and the word "Args" with nothing displayed. I
just checked with Firefox (version 56.0.2) and I see the same thing.
Args is not clickable
On Tue, Nov 14, 2017 at 9:53 AM, Erick Erickson
wrote:
> Webster:
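Since the errors in this thread point at duplicated JVM arguments, one quick check is to dump the running Solr process's command line and look for repeated tokens. A minimal sketch, assuming a single Solr process started via Jetty's start.jar:

  # Print any command-line tokens that appear more than once in the Solr JVM
  ps -o command= -p "$(pgrep -f start.jar | head -n 1)" | tr ' ' '\n' | sort | uniq -d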
On 5/17/2017 9:15 AM, Jason Gerlowski wrote:
> A strawman new message could be: "Performance warning: Overlapping
> onDeskSearchers=2; consider reducing commit frequency if performance
> problems encountered"
>
> Happy to create a JIRA/patch for this; just wanted to get
Subject: Re: Performance warning: Overlapping onDeskSearchers=2 solr
Also, what is your autoSoftCommit setting? That also opens up a new searcher.
On Wed, May 17, 2017 at 8:15 AM, Jason Gerlowski
<gerlowsk...@gmail.com> wrote:
> Hey Shawn, others.
>
> This i
documents.
Below are some more config details in solrconfig.xml
20
200
false
2
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 17 May 2017 08:51 PM
To: solr-user
Subject: Re: Performance warning: Overlapping onDeskSearchers=2 solr
+1 to change to new message
A strawman new message could be: "Performance warning: Overlapping
onDeskSearchers=2; consider reducing commit frequency if performance
problems encountered"
On Wed, May 17, 2017 at 1:15 PM, Mike Drob wrote:
> You're committing too frequently
will help because I don't know
what your current settings are. Or if you are using manual commits.
Mike
On Wed, May 17, 2017, 4:58 AM Srinivas Kashyap
wrote:
error messages is to examine their commit settings. Is
there any other possible cause for those messages? If not, can we
consider changing the log/exception error message to be more explicit
about the cause?
A strawman new message could be: "Performance warning: Overlapping
onDeskSearchers=2
Hi All,
We are using Solr version 5.2.1 and are currently experiencing the below warning
in the Solr Logging Console:
Performance warning: Overlapping onDeskSearchers=2
We also encounter:
org.apache.solr.common.SolrException: Error opening new searcher. exceeded
limit of maxWarmingSearchers=2, try again later
for a language-specific field we will always search both the user's
language AND English.
We have a lot of testing to do, but I noticed this warning in the Solr logs
that I don't understand.
CommandHandler
Query: +(+((+search_en_root_name:tris +search_en_root_name:缓
+search
Dave and All,
The below exception is not happening anymore when I change the startup port to
something other than the one I had in the original startup. If I originally
started without SSL enabled and then start up on the same port with SSL
enabled, that is when this warning happens. But I really need to use the
original port that I had. Any suggestion for getting around this?
“Brittle” means that if the node fails, your cluster cannot change until the
node is back up. Single point of failure.
The whole point of Zookeeper is redundant protection against failure. So
running a single Zookeeper node is just wrong.
A Zookeeper ensemble needs to be an odd number of instances.
It's a brittle ZK configuration. A typical ZK quorum is three nodes for
most production systems. One is fine, though, for development provided the
system it's on is not overloaded.
On Mon, Feb 27, 2017 at 6:43 PM, Rick Leir wrote:
Hi Mike
We are using a single ZK node, I think. What problems should we expect?
Thanks -- Rick
When you transition to an external zookeeper, you'll need at least 3 ZK
nodes. One is insufficient outside of a development environment. That's a
general requirement for any system that uses ZK.
On Sun, Feb 26, 2017 at 7:14 PM, Satya Marivada
wrote:
> May I ask about the port scanner running? Ca
I don't know about your network setup, but a port scanner can sometimes be an IT
security device that, well, scans ports looking to see if they're open.
> On Feb 26, 2017, at 7:14 PM, Satya Marivada wrote:
May I ask about the port scanner running? Can you please elaborate?
Sure, I will try to move out to an external Zookeeper.
On Sun, Feb 26, 2017 at 7:07 PM Dave wrote:
You shouldn't use the embedded Zookeeper with Solr; it's just for development,
not anywhere near worthy of being out in production. Otherwise, it looks like
you may have a port scanner running. In any case, don't use the ZK that comes
with Solr.
> On Feb 26, 2017, at 6:52 PM, Satya Marivada wrote:
Hi All,
I have configured Solr with SSL and enabled HTTP authentication. It is all
working fine for the Solr admin page and the indexing and querying processes. One
bothering thing is that it is filling up the logs every second saying "no
authority". I have configured the host name, port, and authentication parameters
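To move off the embedded Zookeeper, the usual pattern is a small external ensemble (three nodes for most production setups, as noted above) that every Solr node points at. A minimal sketch with hypothetical host names:

  # Each Zookeeper node's zoo.cfg lists the whole ensemble, e.g.
  #   server.1=zk1:2888:3888
  #   server.2=zk2:2888:3888
  #   server.3=zk3:2888:3888

  # Start Solr in cloud mode against the external ensemble instead of the embedded one
  bin/solr start -c -z "zk1:2181,zk2:2181,zk3:2181"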
Thanks Erick
Regards,
Prateek Jain
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 21 November 2016 04:32 PM
To: solr-user
Subject: Re: solr | performance warning
_when_ are you seeing this? I see this on startup upon occasion, and I _think_
there
You're committing too frequently. "Too frequently" means your autowarm interval is longer than
your commit interval. It's usually best to just let autocommit handle this BTW.
This is totally on a per-core basis. You won't get this warning if you commit
to coreA and coreB simultaneously, only if y
Hi All,
I am observing the following error in the logs; any clues about this?
2016-11-06T23:15:53.066069+00:00@solr@@ org.apache.solr.core.SolrCore:1650 -
[my_custom_core] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
A quick web search suggests that it could be a case of too-frequent commits. I
On 9/28/2016 8:27 AM, KRIS MUSSHORN wrote:
> My solr 5.4.1 solrconfig.xml is set up thus:
>
> <directoryFactory class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
> <lockType>${solr.lock.type:native}</lockType>
> <unlockOnStartup>false</unlockOnStartup>
>
> yet I get a warning on starting the core...
> 2016-09-28 14:24:06.049 WARN (coreLoadExecutor-6-thread-1) [ ] o.a.s.c.Config
> Solr no longer supports forceful unlocking via the 'unlockOnStartup' option.
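Since the option is no longer supported, the usual fix is simply to delete the unlockOnStartup element from solrconfig.xml. A quick way to find any leftovers (the data path is hypothetical; it depends on the install):

  # Locate deprecated unlockOnStartup elements across core configs
  grep -rn "unlockOnStartup" /var/solr/data/*/conf/solrconfig.xml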
> When I made the change outlined in the patch on SOLR-8145 to my bin/solr
> script, the warning disappeared. That was not the intended effect of
> the patch, but I'm glad to have the mystery solved.
>
> Thank you for mentioning the problem so we could track it down.
You
https://github.com/eclipse/jetty.project/blob/ac24196b0d341534793308d585161381d5bca4ac/jetty-start/src/main/java/org/eclipse/jetty/start/Main.java#L446
>
> Doesn't look like there's an immediate workaround. Darn.
After a discussion on the jetty-user list, I have learned that although
March 22, 2016 10:41 AM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/22/2016 11:32 AM, Aswath Srinivasan (TMS) wrote:
> Thank you Shawn for taking time and responding.
>
> Unfortunately, this is not the case. My heap is not even going p
My heap is not even going past 50%, and I
have a heap of 10 GB on an instance that I just installed as a standalone
version and was only trying out these:
• Install a standalone Solr 5.3.2 on my PC
• Index some 10 DB records
• Hit core reload / call commit frequently in quick intervals
• See the o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping onDeckSearchers
On 22/03/16 15:16, Shawn Heisey wrote:
> This message is not coming from Solr. It's coming from Jetty. Solr
> uses Jetty, but uses it completely unchanged.
Ah, you're right. Here's the offending code:
https://github.com/eclipse/jetty.project/blob/ac24196b0d341534793308d585161381d5bca4ac/jetty-start/src/main/java/org/eclipse/jetty/start/Main.java#L446
Hey folks,
When I start 5.5.0 (on RHEL), the following entry is added to
server/logs/solr-8983-console.log:
WARNING: System properties and/or JVM args set. Consider using
--dry-run or --exec
I can't quite figure out what's causing this. Any clues on how to get
rid of it?
Thanks,
- Bram
On 3/21/2016 6:49 PM, Aswath Srinivasan (TMS) wrote:
>>> Thank you for the responses. Collection crashes as in, I'm unable to open
>>> the core tab in the Solr console. Search is not returning. None of the pages
>>> open in the Solr admin dashboard.
>>>
>>> I do understand how and why this issue occurs a
ion or delete the data folder.
Again, I know how to avoid this issue, but if it still happens, what can be
done to avoid a complete reindexing?
Thank you,
Aswath NS
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 4:19 PM
To: solr-user@lucene.apache.org
If you're seeing a crash, then that's a distinct problem from the WARN -- it
might be related to the warning, but it's not identical -- Solr doesn't always
(or even normally) crash in the "Overlapping onDeckSearchers"
situation
That is what I hoped for. But I cou
: What I'm wondering is, what should one do to fix this issue when it
: happens. Is there a way to recover after the WARN appears?
It's just a warning that you have a sub-optimal situation from a
performance standpoint -- either committing too fast, or warming too much.
It'
-Original Message-
From: Aswath Srinivasan (TMS) [mailto:aswath.sriniva...@toyota.com]
Sent: Monday, March 21, 2016 11:52 AM
To: solr-user@lucene.apache.org
Subject: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
Fellow developers,
PERFORMANCE WARNING: Overlapping onDeckSearchers=2
I'm seei
Fellow developers,
PERFORMANCE WARNING: Overlapping onDeckSearchers=2
I'm seeing this warning often and whenever I see this, the collection crashes.
The only way to overcome this is by deleting the data folder and reindexing.
In my observation, this WARN comes when I hit frequent hard commits
Re-posting. Does anyone have any idea about this question? Thanks.
Steve
On Mon, Mar 7, 2016 at 5:15 PM, Steven White wrote:
Hi folks,
In Solr's solr-8983-console.log I see the following (about 50 in a span of
24 hours while indexing is ongoing):
WARNING: Couldn't flush user prefs:
java.util.prefs.BackingStoreException: Couldn't get file lock.
What does it mean? Should I worry about it?
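For what it's worth, that message comes from java.util.prefs (the JVM's preferences store) rather than from Lucene or Solr itself. A commonly suggested workaround, offered here as an assumption rather than something confirmed in this thread, is to give the user running Solr a writable prefs directory, or to point the prefs root somewhere writable:

  # Give the solr user its own writable prefs directory
  sudo -u solr mkdir -p ~solr/.java/.userPrefs
  sudo -u solr chmod -R 700 ~solr/.java/.userPrefs

  # Or point the JVM at a writable location via SOLR_OPTS in solr.in.sh
  SOLR_OPTS="$SOLR_OPTS -Djava.util.prefs.userRoot=/var/solr/.java"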
Hello,
I am sharing a warning image; please find/check it here:
<http://lucene.472066.n3.nabble.com/file/n4252110/abc.png>
Could anyone have an idea about the above warning?
Hello,
Could anyone explain why I am getting the below warning in my SolrCloud system?
WARN null PeerSync "no frame of reference to tell if we've missed updates"
Is there any issue, could it create a problem, and how do I resolve it?
Thanks
Mugeesh
Perfect, I'll remove the block and check if the warning will be gone.
Thanks.
--
Gian Maria Ricci
Cell: +39 320 0136949
-Original Message-
From: Alessandro Benedetti [mailto:abenede...@apache.org]
Sent: Tuesday, 12 January 2016 10:43
To: solr-user@lucene.apache.org
Subject: Re: WArning in SolrCloud logs
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, 12 January 2016 02:33
> To: solr-user
> Subject: Re: WArning in SolrCloud logs
>
> Just show us the solrconfig.xml file, particularly anything referring to
> replication; it's easier t
00:00:10
2
64
--
Gian Maria Ricci
Cell: +39 320 0136949
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, 12 January 2016 02:33
To: solr-user
Subject: Re: WArning in SolrCloud logs
Actually, that is a collection I've created by uploading into Zookeeper a
configuration I used for a single node, with a replication handler activated to
back up the core. I did not send any master/slave config actually; I just
created the collection using the Collections API, and the warning is immediately there.
used).
But this is supposed to happen automatically.
The strip of code that causes the warning:
if (enableMaster || enableSlave) {
  if (core.getCoreDescriptor().getCoreContainer().getZkController() != null) {
    LOG.warn("SolrCloud is enabled for core " + core.getName() + "
I've configured three nodes in SolrCloud; everything seems OK, but in the log I
see this kind of warning:
SolrCloud is enabled for core xxx_shard3_replica1 but so is old-style
replication. Make sure you intend this behavior, it usually indicates a
mis-configuration. Master setting is true
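As discussed above (removing the replication block), the warning goes away once the old master/slave replication settings are removed from the configset the SolrCloud collection uses. A minimal sketch of checking and re-uploading the configset (configset name, directory, and ZK host are hypothetical; recent Solr releases ship bin/solr zk upconfig, older ones ship zkcli.sh instead):

  # Find the old-style replication handler in the local copy of the configset
  grep -n '"/replication"' myconf/solrconfig.xml

  # After removing the master/slave block from solrconfig.xml, push the configset
  # back to Zookeeper and reload the collection
  bin/solr zk upconfig -n myconf -d ./myconf -z zk1:2181
  curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"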
On Thu, Dec 17, 2015 at 8:00 AM, Midas A wrote:
>
> org.apache.solr.update.CommitTracker._scheduleCommitWithinIfNeeded(CommitTracker.java:118)
>
It seems like you specify commitWithin; that's legal but seems unusual and
doubtful with DIH.
> > rejected from java.util.concurrent.ScheduledThreadPo
Are you sending documents from one client or many?
Looks like an exhaustion of some sort of pool related to Commit within,
which I assume you are using.
Regards,
Alex
On 16 Dec 2015 4:11 pm, "Midas A" wrote:
> Getting following warning while indexing ..Anybody please tell me
I am getting the following warning while indexing. Could anybody please tell me the reason?
java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@9916a67
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79f8b5f[Terminated
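Since the stack trace in this thread comes out of CommitTracker scheduling a commitWithin, it helps to confirm whether the indexing client is actually requesting one, as Alexandre assumed above. A minimal sketch of an update request that does (core name and data file are hypothetical):

  # Index documents and ask Solr to commit them within 10 seconds
  curl "http://localhost:8983/solr/mycore/update?commitWithin=10000" \
    -H "Content-Type: application/json" \
    --data-binary @docs.json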