(the VPC
is private), so it is not a connection problem.
Is there any way to sync the leader info between Solr and ZK? Also, I want
to know if there is a way to force a leader change (FORCELEADER doesn't
work when Solr refuses to change the leader, because it says that a
leader already exists).
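For reference, the call being discussed is the Collections API FORCELEADER
action; a minimal sketch, where host, port, collection and shard names are
placeholders rather than values from this cluster:

  curl "http://localhost:8983/solr/admin/collections?action=FORCELEADER&collection=mycollection&shard=shard1"

FORCELEADER only takes effect when the shard has no active leader, which
matches the refusal described above: if Solr believes a leader already
exists, the request is rejected.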
and the cluster was created simply by creating a Zookeeper
cluster, connecting the Solr nodes to that ZK cluster, importing the
collections and adding replicas manually to every collection.
Also, I've upgraded that cluster from Solr 6 to Solr 7.1 and later to Solr
7.2.1.
Thanks and greetings!
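For context, attaching Solr nodes to an existing ZooKeeper ensemble is a
start-time parameter; a sketch with placeholder ZK hostnames:

  # each node joins SolrCloud by pointing at the same ZooKeeper ensemble
  bin/solr start -c -z "zk1:2181,zk2:2181,zk3:2181"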
--
Hi,
On Tue, Oct 23, 2018 at 10:18, Charlie Hull ()
wrote:
> On 23/10/2018 02:57, Daniel Carrasco wrote:
> > Hello,
> >
> > I have a Solr cluster created with 7 machines on AWS instances. The
> > Solr version is 7.2.1 (b2b6438b37073bee1
ng, and searches, the cluster would become
> unresponsive due to Solr waiting for numerous I/O operations which were
> being throttled. Solr can be very I/O intensive, especially when you can't
> cache the entire index in memory.
>
> Thanks,
> Chris
>
>
> On Tue, Oct 23, 2018 at 5:40
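If anyone wants to confirm that kind of I/O throttling, watching the disks
while indexing usually shows it; a sketch, assuming the sysstat package is
installed:

  # sustained high %util and await during indexing point to I/O saturation
  iostat -x 5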
ssion. The data is stored
on SSD disks with XFS (faster than EXT4).
I'll take a look at the links tomorrow at work.
Thanks!!
Greetings!!
On Tue, Oct 23, 2018, 23:48, Shawn Heisey wrote:
> On 10/23/2018 7:15 AM, Daniel Carrasco wrote:
> > Hello,
> >
> > Thanks for
> -XX:MaxGCPauseMillis=200 \
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
> "
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
> > On Oct 23, 2018, at 9:51 PM, Daniel Carrasco wrote:
> >
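For anyone reading along: in a standard install those flags usually go into
the GC_TUNE variable of solr.in.sh; a sketch using only the flags quoted
above, with the values assumed rather than recommended:

  # bin/solr.in.sh (or /etc/default/solr.in.sh on service installs)
  GC_TUNE="-XX:MaxGCPauseMillis=200 \
    -XX:+UseLargePages \
    -XX:+AggressiveOpts"

Note that -XX:+AggressiveOpts is deprecated on newer JDKs, so it may need to
be dropped there.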
replicas without
control (it added as many as 12 replicas on the new node), and the test2
collection keeps just one replica. If I delete the test collection and
repeat the process of adding the new node, then test2 shows the same
behavior and creates a lot of replicas on the new node.
Delete trigger works just
"replica": "<2", "shard": "#EACH", "node": "#ANY"}]
}'
Greetings!
On Mon, Nov 26, 2018 at 18:24, Daniel Carrasco ()
wrote:
> Hello,
>
> I'm trying to create an autoscaling cluster with node_added_trigger
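The policy fragment above looks like the tail of a set-cluster-policy
request; a sketch of what the full call might be, with host and endpoint
assumed:

  curl -X POST "http://localhost:8983/api/cluster/autoscaling" \
    -H 'Content-Type: application/json' -d '{
    "set-cluster-policy": [
      {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
    ]
  }'

That rule asks the autoscaling framework to keep fewer than 2 replicas of
each shard on any single node, which is exactly what the runaway replica
creation described above violates.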
L?
Thanks!!
--
_____
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
_
recovery times compared to NRT, so
we're thinking about keeping some NRT replicas instead of TLOG.
Is having two NRT nodes and the rest TLOG a good setup, or is it better to
go with all TLOG nodes?
Thanks!
--
_____
Daniel Carrasco Marín
Ingeniería para la Innov
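For readers wondering how such a mixed layout is declared, the Collections
API takes the replica types directly; the collection name and counts below
are made up for the example:

  # create a collection with 2 NRT and 5 TLOG replicas per shard
  curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycol&numShards=1&nrtReplicas=2&tlogReplicas=5"

  # or add a TLOG replica to an existing shard
  curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycol&shard=shard1&type=TLOG"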
the
> > leader, or about the increase in recovery times compared to NRT, so
> > we're thinking about keeping some NRT replicas instead of TLOG.
> >
> > Is having two NRT nodes and the rest TLOG a good setup, or is it better
> > to go with all TLOG nodes?
> >
> > Thanks!
> >
> > --
--
_
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
_
. I tried that but I'm still
> getting the file limit warning.
>
> -----Original Message-----
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 12:14 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when star
increase it.
Greetings!
On Wed, Dec 12, 2018 at 14:44, Armon, Rony () wrote:
> rony@rony-VirtualBox:~$ ulimit -n
> 1024
> rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
> 1024
>
> -----Original Message-----
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wed
sysctl -a | grep -i fs.file-max
> fs.file-max = 810202
>
> -----Original Message-----
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 4:04 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
asterisk, but you
> can put solr instead of the asterisk to change the limit for the solr user
> only, just like in my first message.
>
> Greetings!
>
>
>
> On Wed, Dec 12, 2018 at 15:47, Armon, Rony ()
> wrote:
>
> > Tried it as well...
> >
> > rony@rony-Virtual
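The limit under discussion lives in /etc/security/limits.conf; a sketch,
with 65000 as a commonly used value rather than anything from this thread:

  # /etc/security/limits.conf -- per-user limits for the account running Solr
  solr  soft  nofile  65000
  solr  hard  nofile  65000

Note that fs.file-max is the kernel-wide ceiling while ulimit -n reports the
per-process limit set here, and limits.conf is only applied to new login
sessions; a shell started from a desktop session may not pick it up at all,
which would explain ulimit -n still showing 1024.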
the link that you sent.
> Should I uninstall and re-install?
>
> -----Original Message-----
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 5:45 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting s
to uncheck that
box, then all data is cleared and I have to import everything again.
Thanks!!
--
_____
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
_
uld just change it. There's no need to
> rebuild Solr etc
>
> BTW, the mail server is pretty aggressive about stripping attachments,
> your (presumed) screenshot is blank
>
> Best,
> Erick
>
> On Mon, Jan 15, 2018 at 2:30 AM, Daniel Carrasco
> wrote:
>
responses and get only
whether it is healthy or recovering? Of course, if it is dead I get no
response, so that's easy.
Thanks and greetings!!
--
_____
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
_
node died unexpectedly, its replicas
> can't set the state when shutting down. So you also have to check
> whether the replica's node is in the "live_nodes" znode.
>
> Best,
> Erick
>
> On Thu, Feb 1, 2018 at 4:34 AM, Daniel Carrasco
wrote:
>> Hello,
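A sketch of that two-part check through the Collections API, with a made-up
collection name:

  curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=mycol"

  # In the JSON response, a replica is healthy only if its "state" is
  # "active" AND its "node_name" appears in the top-level "live_nodes"
  # list; a recovering replica reports "recovering" directly in "state".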
to be synced again).
My configuration is: 8 Solr nodes on v7.1.0 and ZooKeeper v3.4.11. All
nodes are standalone (I'm not using Hadoop).
Thanks and greetings!
--
_________
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
T
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 12 Feb 2018, at 11:16, Daniel Carrasco wrote:
> >
> > Hello,
> >
> > We're usin
updates will be available on all nodes
at the same time, right?
Thanks!!
>
> Regards,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 12 Feb 2018, at 13:13, Daniel Carrasco wrote:
> >
> > Hello,
> >
> > 2018-02-12 12:32 GMT+01:00 Emir Arnautović <emir.arnauto...@sematext.com>:
> >
> >> Hi Daniel,
> >> Maybe it is Monday and I am still not warmed up, but your
Solr are Java, so they use
> > HttpSolrClient/HttpSolrServer from SolrJ, connecting to the load
> balancer.
> >
> > For SolrCloud, if your clients are Java, you don't need a load balancer,
> > because the client (CloudSolrClient in SolrJ) talks to the entire cluster
onsume.run(ExecuteProduceConsume.java:136)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
java.lang.Thread.run(Thread.java:748)\n",
"code":500}}
After re
15b94 - janhoy - 2019-05-28 23:37:48),
I have 11 nodes in NRT mode and all looks healthy.
Thanks, and greetings.
--
_
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
_