Re: Solr architecture diagram

2011-04-07 Thread David MARTIN
Hi,

Thank you for this contribution. Such a diagram could be useful in the
official documentation.
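
For readers wondering where SolrJ's two client implementations (asked about
further down the thread) would sit on such a diagram, a minimal sketch of
both entry points using the 1.4/3.x-era API (path and core name are
illustrative):

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.core.CoreContainer;

    public class SolrJClients {
        public static void main(String[] args) throws Exception {
            // Remote client: an HTTP client for a Solr webapp running in an
            // app-server, so it sits outside the Solr instance on the diagram.
            SolrServer remote = new CommonsHttpSolrServer("http://localhost:8983/solr");

            // Embedded client: runs a core in-process, no app-server at all.
            System.setProperty("solr.solr.home", "/path/to/solr/home");
            CoreContainer container = new CoreContainer.Initializer().initialize();
            SolrServer embedded = new EmbeddedSolrServer(container, "collection1");
        }
    }

Both implement the same SolrServer interface, which is why they could occupy
the same slot in a client-side diagram.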

David

On Thu, Apr 7, 2011 at 12:15 PM, Jeffrey Chang  wrote:

> This is awesome; thank you!
>
> On Thu, Apr 7, 2011 at 6:09 PM, Jan Høydahl  wrote:
>
> > Hi,
> >
> > Glad you liked it. You'd like to model the inner architecture of SolrJ as
> > well, would you? Perhaps that should be a separate diagram.
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> >  On 6. apr. 2011, at 12.06, Stevo Slavić wrote:
> >
> > > Nice, thank you!
> > >
> > > Wish there was something similar or extra to this one depicting where
> > > SolrJ's CommonsHttpSolrServer and EmbeddedSolrServer fit in.
> > >
> > > Regards,
> > > Stevo.
> > >
> > > On Wed, Apr 6, 2011 at 11:44 AM, Jan Høydahl 
> > wrote:
> > >> Hi,
> > >>
> > >> At Cominvent we've often had the need to visualize the internal
> > architecture of Apache Solr in order to explain both the relationships of
> > the components as well as the flow of data and queries. The result is a
> > conceptual architecture diagram, clearly showing how Solr relates to the
> > app-server, how cores relate to a Solr instance, how documents enter
> through
> > an UpdateRequestHandler, through an UpdateChain and Analysis and into the
> > Lucene index etc.
> > >>
> > >> The drawing is created using Google draw, and the original is shared
> on
> > Google Docs. We have licensed the diagram under the permissive Creative
> > Commons "CC-by" license which lets you use, modify and re-distribute the
> > diagram, even commercially, as long as you attribute us with a link.
> > >>
> > >> Check it out at http://ow.ly/4sOTm
> > >> We'd love your comments.
> > >>
> > >> --
> > >> Jan Høydahl, search solution architect
> > >> Cominvent AS - www.cominvent.com
> > >>
> > >>
> >
> >
>


Correctly importing and producing null in search results

2012-08-27 Thread David Martin
Smart Folks:

I use JDBC to produce simple XML entities such as this one:


<entity>
  <entity_type>AWARDTYPE</entity_type>
  <movie_id>0</movie_id>
  <award_type_id>31</award_type_id>
  <award_id>1</award_id>
  <id>awardtypes::31:1</id>
</entity>


The XML entities are stored in a file and loaded by the
FileListEntityProcessor.
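
A data-config.xml sketch of that setup (paths, entity names, and the
forEach expression are illustrative and depend on the real file layout):

    <dataConfig>
      <dataSource type="FileDataSource" encoding="UTF-8" />
      <document>
        <entity name="f" processor="FileListEntityProcessor"
                baseDir="/data/feeds" fileName=".*\.xml"
                rootEntity="false" dataSource="null">
          <entity name="rec" processor="XPathEntityProcessor"
                  url="${f.fileAbsolutePath}" forEach="/entity">
            <field column="entity_type" xpath="/entity/entity_type" />
            <field column="movie_id" xpath="/entity/movie_id" />
            <field column="id" xpath="/entity/id" />
          </entity>
        </entity>
      </document>
    </dataConfig>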

In this case, the "movie_id" element has a value of zero because the JDBC
getString("movie_id") method returned null.  I can search Solr for
entities of this type (i.e. query on "entity_type:AWARDTYPE") and get back
the appropriate result set.  Then, I want to transform the result set into
JSON objects with fields that map to XML elements.

Today, I have to teach the JSON mapping that it should convert 0 to
JSONObject.NULL on a case-by-case basis -- I actually keep a mapping
document around that dictates whether a zero should be handled this way.

In some cases though, a zero may be legitimate where null values are also
legit.  Sure, I could always change the zero to a less likely integer or
such... 

===
But don't Solr and the Data Import Handler have a better way to read a
null value from an XML entity during import, AND to represent it in search
results?  Do I need a different approach depending on my field's type?
===

I apologize if this is an asked and answered question.  None of my web
searches turned up an answer.

Thanks,

David



Large XML file sizes error out parsing the file size as an Integer

2012-08-29 Thread David Martin
Folks:

One of our files of XML entities for import is almost 7GB in size.

When trying to import, we error out with the exception below.  6845266984 is 
the exact size of the input file in bytes.

Shouldn't the file size be a long?  Has anybody else experienced this problem?

We plan on dividing this file into smaller pieces, but if there's another 
solution I'd love to hear it.
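
For what it's worth, the trace below bottoms out in Integer.parseInt inside
TrieField, which suggests fileSize is declared with an int-based type in
schema.xml. A sketch of the long-based alternative (type name and attributes
follow the stock example schema, so this is an assumption about the actual
schema; changing the type requires reindexing):

    <!-- schema.xml: a 64-bit trie type for values beyond 2^31 - 1 -->
    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8"
               omitNorms="true" positionIncrementGap="0"/>

    <field name="fileSize" type="tlong" indexed="true" stored="true"/>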

Thanks,

David Martin

From: Desktop <dmar...@netflix.com>
Date: Wednesday, August 29, 2012 3:17 PM
Subject: contract item assets exception

Aug 29, 2012 10:04:03 PM org.apache.solr.handler.dataimport.SolrWriter upload
WARNING: Error creating document : SolrInputDocument[{fileSize=fileSize(1.0)={6845266984}, created_by=created_by(1.0)={CHILO}, id=id(1.0)={movie::70018848:country_code-NO:contract_id-9979:ccm_asset_id-369161014}, movie_id=movie_id(1.0)={70018848}, is_required=is_required(1.0)={0}, bcp_47_code=bcp_47_code(1.0)={nn}, element_category_id=element_category_id(1.0)={3}, updated_by=updated_by(1.0)={SYSADMIN}, last_updated=last_updated(1.0)={2012-08-29T19:25:21.585Z}, entity_type=entity_type(1.0)={CONTRACT_ITEM_ASSET}, country_code=country_code(1.0)={NO}, ccm_asset_id=ccm_asset_id(1.0)={369161014}}]
org.apache.solr.common.SolrException: ERROR: [doc=movie::70018848:country_code-NO:contract_id-9979:ccm_asset_id-369161014] Error adding field 'fileSize'='6845266984'
at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:333)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:60)
at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:115)
at org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:66)
at org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:293)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:723)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:709)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:619)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:327)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:225)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:375)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:445)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:426)
Caused by: java.lang.NumberFormatException: For input string: "6845266984"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:461)
at java.lang.Integer.parseInt(Integer.java:499)
at org.apache.solr.schema.TrieField.createField(TrieField.java:407)
at org.apache.solr.schema.SchemaField.createField(SchemaField.java:103)
at org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:203)
at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:286)
... 12 more



Re: Correctly importing and producing null in search results

2012-08-30 Thread David Martin
Erick:

Thanks for your reply. Simply omitting the null fields is an intriguing
idea, and I will test this out.
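
For the record, a minimal sketch of such a transformer (the class and field
names here are ours, not an existing helper; it is wired in with
transformer="com.example.ZeroToNullTransformer" on the DIH entity):

    package com.example;

    import java.util.Map;

    import org.apache.solr.handler.dataimport.Context;
    import org.apache.solr.handler.dataimport.Transformer;

    // Drops movie_id when it holds the 0-used-as-null marker, so the indexed
    // document simply lacks the field and search results show no value.
    // (It cannot tell a genuine 0 from a null marker; that distinction would
    // have to be made upstream, before null becomes 0 in the XML.)
    public class ZeroToNullTransformer extends Transformer {
        @Override
        public Object transformRow(Map<String, Object> row, Context context) {
            Object v = row.get("movie_id");
            if (v != null && "0".equals(v.toString())) {
                row.remove("movie_id");
            }
            return row;
        }
    }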

Why not use the JSON response writer?  Two reasons:  our clients dictate a
particular JSON schema that changes on a query-by-query basis.  The
schemas can be quite complex.  Also, we roll our responses as a JAXB DTO
so that our web service can supply responses in either JSON or XML.  I
think either requirement means having to do some "manual" post-processing
of the Solrj responses, right?

Thanks,

David

On 8/29/12 6:15 PM, "Erick Erickson"  wrote:

>If I'm reading this right, you're kind of stuck. Solr/DIH don't have any
>way to reach out to your mapping file and "do the right thing"
>
>A couple of things come to mind.
>Use a Transformer in DIH to simply remove the field from the document
>you're indexing. Then the absence of the field in the result set is NULL,
>and 0 is 0. You could also do this in SolrJ.
>
>And I have to ask why you transform output into JSON when you could
>use the JSON response writer.
>
>Best
>Erick
>
>On Mon, Aug 27, 2012 at 6:04 PM, David Martin  wrote:
>> Smart Folks:
>>
>> I use JDBC to produce simple XML entities such as this one:
>>
>> 
>> <entity>
>>   <entity_type>AWARDTYPE</entity_type>
>>   <movie_id>0</movie_id>
>>   <award_type_id>31</award_type_id>
>>   <award_id>1</award_id>
>>   <id>awardtypes::31:1</id>
>> </entity>
>> 
>>
>> The XML entities are stored in a file and loaded by the
>> FileListEntityProcessor.
>>
>> In this case, the "movie_id" element has a value of zero because the
>>JDBC
>> getString("movie_id") method returned null.  I can search Solr for
>> entities of this type (i.e. query on "entity_type:AWARDTYPE") and get
>>back
>> the appropriate result set.  Then, I want to transform the result set
>>into
>> JSON objects with fields that map to XML elements.
>>
>> Today, I have to teach the JSON mapping that it should convert 0 to
>> JSONObject.NULL on a case-by-case basis -- I actually keep a mapping
>> document around that dictates whether a zero should be handled this way.
>>
>> In some cases though, a zero may be legitimate where null values are
>>also
>> legit.  Sure, I could always change the zero to a less likely integer or
>> such...
>>
>> ===
>> But don't Solr and the Data Import Handler have a better way to read a
>> null value from an XML entity during import, AND to represent it in
>>search
>> results?  Do I need a different approach depending on my field's type?
>> ===
>>
>> I apologize if this is an asked and answered question.  None of my web
>> searches turned up an answer.
>>
>> Thanks,
>>
>> David
>>
>



Re: Encountering a roadblock with my Solr schema design...use dedupe?

2010-01-16 Thread David MARTIN
I'm really interested in reading the answer to this thread, as my problem is
rather the same. Maybe the main difference is the huge number of SKUs per
product that I may have.
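
For reference, a minimal solrconfig.xml sketch of the
SignatureUpdateProcessorFactory chain Hoss describes in the quote below
(chain name, field list, and signature field are illustrative):

    <updateRequestProcessorChain name="dedupe">
      <processor class="solr.processor.SignatureUpdateProcessorFactory">
        <bool name="enabled">true</bool>
        <str name="signatureField">signature</str>
        <!-- keep duplicates as distinct documents; collapse at query time -->
        <bool name="overwriteDupes">false</bool>
        <str name="fields">name,description</str>
        <str name="signatureClass">solr.processor.Lookup3Signature</str>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory" />
      <processor class="solr.RunUpdateProcessorFactory" />
    </updateRequestProcessorChain>

The signature field must also be declared in schema.xml; field collapsing
is then done on that field at query time.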


David

On Thu, Jan 14, 2010 at 2:35 AM, Kelly Taylor  wrote:

>
> Hoss,
>
> Would you suggest using dedup for my use case; and if so, do you know of a
> working example I can reference?
>
> I don't have an issue using the patched version of Solr, but I'd much
> rather
> use the GA version.
>
> -Kelly
>
>
>
> hossman wrote:
> >
> >
> > : Dedupe is completely the wrong word. Deduping is something else
> > : entirely - it is about trying not to index the same document twice.
> >
> > Dedup can also certainly be used with field collapsing -- that was one of
> > the initial use cases identified for the SignatureUpdateProcessorFactory
> > ... you can compute an 'expensive' signature when adding a document,
> index
> > it, and then FieldCollapse on that signature field.
> >
> > This gives you "query time deduplication" based on a value computed when
> > indexing (the canonical example is multiple URLs referencing the "same"
> > content but with slightly different boilerplate markup).  You can use a
> > Signature class that recognizes the boilerplate and computes an identical
> > signature value for each URL whose content is "the same" but still index
> > all of the URLs and their content as distinct documents ... so use cases
> > where people only want "distinct" URLs work using field collapse, but by
> > default
> > all matching documents can still be returned and searches on text in the
> > boilerplate markup also still work.
> >
> >
> > -Hoss
> >
> >
> >
>
>
>


Re: tomcat support

2010-01-21 Thread David MARTIN
I haven't got any information about the Tomcat/Solr compatibility
matrix, but you can easily have several instances of Tomcat running together,
each with a different version (and with different JVM versions too). It
may also be simpler to fine-tune such a dedicated instance.
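
A sketch of such a dedicated instance (paths and versions are illustrative):

    # one shared Tomcat install per version, one CATALINA_BASE per app
    export CATALINA_HOME=/opt/apache-tomcat-5.5.28
    export CATALINA_BASE=/opt/tomcat-solr   # its own conf/, logs/, webapps/
    export JAVA_HOME=/opt/jdk1.6            # each instance can pick its JVM
    $CATALINA_HOME/bin/startup.sh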

David

2010/1/21, Adamsky, Robert :
>
> Have been running solr 1.3 on tomcat 5.0.28 without issue.
>
> Went to use 1.4 and it doesn't load, causing the server not to start.
> It does show a few Solr log messages along the way, but only at INFO level.
>
> Does Solr 1.4 still support tomcat 5.0.28?
>
> I did try it with tomcat 5.5.28 without issue but the upgrade path is not
> the direction I can easily take (have prod systems already with other
> apps using that version).
>
> Any ideas?


Re: Beyond Basic Faceted Search (SOLR-236|SOLR-64|SOLR-792)

2010-01-25 Thread David MARTIN
Hi Kelly,

Did you succeed in using these patches? It seems I've got the same need as
you: being able to collapse all product variations (SKUs) under a single line
(the product).

As I'm beginning today with the field collapse patch, I'm still looking for
the best solution for this need.

Maybe someone here can give some tips to solve this (I suppose) common
need?
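
For reference, with the SOLR-236 patch applied a collapsing request looks
roughly like this (parameter names varied between versions of the patch,
and product_id is an illustrative grouping field):

    http://localhost:8983/solr/select?q=shirt&collapse.field=product_id&collapse.max=1

That should return one representative SKU per product_id value.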

David

On Thu, Jan 21, 2010 at 7:00 PM, Kelly Taylor  wrote:

>
> I'm currently using the latest SOLR-236 patch (12/24/2009) and
> field-collapsing seems to be giving me the desired results, but I'm
> wondering if I should focus more on a tree view of my catalog data instead,
> as described in "Beyond Basic Faceted Search"
>
> Is it possible that either or both of the patches for SOLR-792 or SOLR-64
> provide something like this? Below is a link to the paper, followed by an
> excerpt under the "CORRELATED FACETS" section.
>
> http://nadav.harel.org.il/papers/p33-ben-yitzhak.pdf
>
> Excerpt:
> "...model each product as a tree, in which the leaves represent specific
> instantiations, and where the attributes corresponding to each leaf are the
> union of attributes on the unique path from the root of the tree to the
> leaf. In other words, each node of the tree shares its attributes (text and
> associated metadata) with all its descendants. When we factor out common
> attributes of leaf nodes to intermediate nodes, this representation avoids
> significant duplication of text and metadata that are common to many
> variations of each product."
>
> -Kelly
>
>


Re: solr1.5

2010-01-27 Thread David MARTIN
Good question indeed: like many others, I guess, I'm waiting for patch 236
(the collapse thing :) ).

David

On Tue, Jan 26, 2010 at 4:24 PM, Matthieu Labour wrote:

> Hi
> quick question:
> Is there any release date scheduled for solr 1.5 with all the wonderful
> patches (StreamingUpdateSolrServer etc ...)?
> Thank you !
>


[solr-user] : Web B2C interface integration with Solr

2010-04-12 Thread David MARTIN
Hi everyone,

Some of you may have an idea about the best solution to achieve this simple
(in theory at least) goal:
I want a B2C website that uses Solr as its search engine service, but:
- it shouldn't expose Solr in an explicit manner (I mean no clear URLs pointing
to the Solr instance)
- keep control of SEO (be able to format facets in order to have clean URLs
for robots, and search results with clean URLs too)
- be able to build a web page based on several different responses coming
from different requests to Solr (let's imagine a page having some
contextualized propositions, a default request result, and a collection of
contextualized promotions)
- be able to

My question is: what's the most elegant and efficient way to build
that kind of presentation layer in front of Solr? My technical constraints
are: Java rules here :)
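
For what it's worth, a minimal SolrJ sketch of the usual pattern: the webapp
queries Solr server-side and renders its own URLs, so browsers and robots
never see the Solr instance (host and field names are illustrative):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class SearchFacade {
        private final SolrServer solr;

        public SearchFacade(String internalUrl) throws Exception {
            // Solr stays on an internal host behind the web tier
            solr = new CommonsHttpSolrServer(internalUrl);
        }

        public QueryResponse search(String userQuery) throws Exception {
            SolrQuery q = new SolrQuery(userQuery);
            q.setFacet(true);
            q.addFacetField("brand"); // format facet links as clean URLs
            return solr.query(q);
        }
    }

A page built from several Solr responses is then just several search()
calls aggregated server-side before rendering.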

Thanks for your suggestions.

David


RE: How to raise open file limits

2020-11-04 Thread DAVID MARTIN NIETO
Hi,

You need to change the ulimit parameters in your OS configuration.
I believe the problem you have is with:
max user processes  (-u) 4096
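
To inspect and raise the limits, roughly (65000 is the value Solr's start
script asks for; run as the user that starts Solr):

    ulimit -a            # show all current limits for this shell
    ulimit -n 65000      # raise open files for the current session only

    # Persistent change: entries in /etc/security/limits.conf (or a
    # limits.d file), e.g.:
    #   solr  soft  nofile  65000
    #   solr  hard  nofile  65000
    # These are applied by pam_limits at login, so they only take effect
    # in a NEW login session; starting Solr from an old shell keeps the
    # old limits.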

Kind regards.

David Martín Nieto
Analista Funcional
Calle Cabeza Mesada 5
28031, Madrid
T: +34 667 414 432
T: +34 91 779 56 98| Ext. 3198
E-mail: dmart...@viewnext.com | Web: www.viewnext.com





From: James Rome 
Sent: Wednesday, November 4, 2020 16:03
To: solr-user@lucene.apache.org 
Subject: How to raise open file limits

I am new to solr. I have solr installed in my home directory
(/home/jar/solr).

But when I start the tutorial, I get an open files limit error.

  $ ./bin/solr start -e cloud
*** [WARN] *** Your open file limit is currently 1024.
  It should be set to 65000 to avoid operational disruption.
  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to
false in your profile or solr.in.sh

I made a limits file as follows:

/etc/security/limits.d # cat solr.conf
solr soft nofile 65000
solr hard nofile 65000
solr soft nproc  65000
solr hard nproc  65000
jar  soft nofile 65000
jar  hard nofile 65000
jar  soft nproc  65000
jar  hard nproc  65000

But this does not seem to solve the issue.

Also, my ultimate goal is to only index one directory and to serve it to
my Drupal site. Is there a way to run solr as a service so that it
restarts on boot? Can you please point me to how to do this?

--
James A. Rome
https://jamesrome.net



RE: How to raise open file limits

2020-11-04 Thread DAVID MARTIN NIETO
And also this one, at least:

open files  (-n) 1024

David Martín Nieto
Analista Funcional
Calle Cabeza Mesada 5
28031, Madrid
T: +34 667 414 432
T: +34 91 779 56 98| Ext. 3198
E-mail: dmart...@viewnext.com | Web: www.viewnext.com







RE: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread DAVID MARTIN NIETO
I believe Solr doesn't have this configuration; you need a load balancer
configured in that mode for this.

Kind regards.



From: Kaushal Shriyan 
Sent: Monday, January 11, 2021 11:32
To: solr-user@lucene.apache.org 
Subject: Apache Solr in High Availability Primary and Secondary node.

Hi,

We are running Apache Solr 8.7.0 search service on CentOS Linux release
7.9.2009 (Core).

Is there a way to set up the Solr search service in High Availability Mode
in the Primary and Secondary node? For example, if the primary node is down
secondary node will take care of the service.

Best Regards,

Kaushal


RE: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread DAVID MARTIN NIETO
Hi again,

I don't know about those products, but with Apache something like this can work:

https://stackoverflow.com/questions/6381749/apache-httpd-mod-proxy-balancer-with-active-passive-setup/11083458
https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html
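
A sketch of the active/passive idea from those links (host names are
illustrative; status=+H marks the second member as a hot standby that only
receives traffic when the first is unavailable):

    <Proxy balancer://solrcluster>
        # primary: takes all traffic while it is up
        BalancerMember http://solr-primary:8983/solr
        # hot standby: only used when the primary is down
        BalancerMember http://solr-secondary:8983/solr status=+H
    </Proxy>
    ProxyPass        /solr balancer://solrcluster
    ProxyPassReverse /solr balancer://solrcluster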

Kind regards.



David Martín Nieto
Analista Funcional
Calle Cabeza Mesada 5
28031, Madrid
T: +34 667 414 432
T: +34 91 779 56 98| Ext. 3198
E-mail: dmart...@viewnext.com | Web: www.viewnext.com





From: Kaushal Shriyan 
Sent: Monday, January 11, 2021 12:02
To: solr-user@lucene.apache.org 
Subject: Re: Apache Solr in High Availability Primary and Secondary node.

On Mon, Jan 11, 2021 at 4:11 PM DAVID MARTIN NIETO 
wrote:

> I believe Solr doesn't have this configuration; you need a load balancer
> configured in that mode for this.
>
> Kind regards.
>
>
Thanks, David, for the quick response. Is there a use case for using HAProxy,
the Nginx web server, or any other application to load balance both Solr
primary and secondary nodes?

Best Regards,

Kaushal


Dynamic starting or stopping of zookeepers in a cluster

2021-02-18 Thread DAVID MARTIN NIETO


Hi all,

We have a Solr cluster with 4 Solr servers and 5 ZooKeepers in HA mode.
We've tested whether our cluster can maintain the service with only half of
the cluster, in case of disaster or similar, and we have a problem with the
ZooKeeper config and its static configuration.

The start script of the 4 Solr servers contains a list of the 5 ip:port pairs
of the 5 ZooKeepers of the cluster, so when we "lose" half of the machines
(we have 2 ZooKeepers on one machine and 3 on another), in the worst case we
lose 3 of these 5 ZooKeepers. We can start a sixth ZooKeeper (to have 3 with
half of the cluster stopped), but to add it to the Solr servers we need to
stop and restart them with a new list of ip:port pairs that includes it, and
that's not automatic or dynamic.

Does somebody know another configuration or workaround to have a dynamic list
of ZooKeepers and start or stop some of them without changing the config and
restarting the Solr servers?

Kind regards and thanks a lot.




RE: Dynamic starting or stopping of zookeepers in a cluster

2021-02-24 Thread DAVID MARTIN NIETO
One doubt about it:

In order to have a highly available zookeeper, you must have at least
three separate physical servers for ZK.  Running multiple zookeepers on
one physical machine gains you nothing ... because if the whole machine
fails, you lose all of those zookeepers.  If you have three physical
servers, one can fail with no problems.  If you have five separate
physical servers running ZK, then two of the machines can fail without
taking the cluster down.

If I'm not mistaken, the number of ZooKeepers must be odd. Having 3 ZooKeepers
on 3 different machines, if we temporarily lost one of the three machines, we
would have only two running, which is an even number. Would it be advisable in
this case to start a third one on one of the 2 active machines, or with only
two ZooKeepers would there be no blockages in their internal votes?

About the dynamic reconfiguration, many thanks. We have Solr 8.2, but the
ZooKeepers are on version 3.4.2; we're going to test version 3.5 and the
dynamic reconfiguration of ZooKeepers to avoid this problem.
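
A sketch of what we plan to test, following the ZooKeeper 3.5 reconfig docs
(host names and paths are illustrative):

    # zoo.cfg (static part, ZooKeeper 3.5+)
    standaloneEnabled=false
    reconfigEnabled=true
    dynamicConfigFile=/etc/zookeeper/zoo.cfg.dynamic

    # zoo.cfg.dynamic -- the membership, changeable at runtime
    server.1=zk1:2888:3888:participant;2181
    server.2=zk2:2888:3888:participant;2181
    server.3=zk3:2888:3888:participant;2181

    # from zkCli.sh: add a fourth node without restarting anything
    reconfig -add server.4=zk4:2888:3888:participant;2181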

Many thanks.
Kind regards.



From: Joe Lerner 
Sent: Friday, February 19, 2021 18:56
To: solr-user@lucene.apache.org 
Subject: Re: Dynamic starting or stopping of zookeepers in a cluster

This is solid information. *How about the application, which uses
SOLR/Zookeeper?*

Do we have to follow this guidance, to make the application ZK config aware:

https://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html#ch_reconfig_rebalancing


Or could we leave it as is, as long as the ZK ensemble keeps the same
IPs?

Thanks!

Joe



