OK, you can set it as a sysvar when starting Solr. Or you can change your
solrconfig.xml to either use the classic schema (schema.xml) or take the
add-unknown-fields... processor out of the update request processor chain. You
can also set a cluster property, IIRC. Better to use one of the supported
options...
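A sketch of two of those supported options, assuming the property behind the add-unknown-fields chain is `update.autoCreateFields` as in the default configset (the collection name and port below are placeholders):

```shell
# Option 1: pass the toggle as a system variable when starting Solr in cloud mode
bin/solr start -c -Dupdate.autoCreateFields=false

# Option 2: set it per collection with the config tool, which writes the
# user property through Solr rather than by editing files in ZK directly
bin/solr config -c mycollection -p 8983 \
  -action set-user-property -property update.autoCreateFields -value false
```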
Hi Jörn/Erick/Shawn, thanks for your responses.
@Jörn - much appreciated for the heads-up on Kerberos authentication; it's
something we haven't really considered at the moment, though moving to
production this may well be the case. With regards to the Solr nodes, 3 is
something we are looking at as a minimum, when
If you have a properly secured cluster, e.g. with Kerberos, then you should not
update files in ZK directly. Use the corresponding Solr REST interfaces; then
you are also less likely to mess something up.
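For example, a field can be added through the Schema API instead of hand-editing the managed-schema file in ZK (the collection and field names here are placeholders):

```shell
# Add a field via the Schema API; Solr updates the schema stored in ZK itself
curl -X POST -H 'Content-type:application/json' \
  --data '{"add-field": {"name": "title", "type": "text_general", "stored": true}}' \
  http://localhost:8983/solr/mycollection/schema
```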
If you want to have HA you should have at least 3 Solr nodes and replicate the
collection to all three.
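A minimal sketch of creating such a collection with the Collections API, assuming three live nodes and a configset already uploaded under the name `myconfig` (both names are placeholders):

```shell
# One shard with a replica on each of the 3 nodes, so the collection
# stays available if a single node goes down
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&replicationFactor=3&collection.configName=myconfig"
```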
Having custom core.properties files is “fraught”. First of all, that file can
be re-written. Second, the collections ADDREPLICA command will create a new
core.properties file. Third, any mistakes you make when hand-editing the file
can have grave consequences.
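For illustration, this is the kind of ADDREPLICA call that generates a fresh core.properties, which is why hand edits to that file do not survive (collection, shard, and node names are placeholders):

```shell
# Solr creates the new core directory and writes its core.properties itself
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=host2:8983_solr"
```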
What change exactly do you want to
On 9/3/2019 7:22 AM, Porritt, Ian wrote:
We have a schema which I have managed to upload to ZooKeeper along with
the solrconfig; how do I get the system to recognise both a lib/.jar
extension and a custom core.properties file? I bypassed the issue of the
core.properties by amending the update.a
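For what it's worth, a configset is normally (re-)uploaded with the zk tool, and extra jars are usually referenced from solrconfig.xml rather than through core.properties; the paths and names below are placeholders:

```shell
# Upload (or overwrite) the configset in ZooKeeper
bin/solr zk upconfig -n myconfig -d /path/to/conf -z localhost:2181

# Jars can then be pulled in from solrconfig.xml with a <lib/> directive, e.g.:
#   <lib dir="/opt/solr/extra-libs" regex=".*\.jar" />
```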
Hi,
I am relatively new to Solr, especially Solr Cloud, and have been using it for
a few days now. I think I have set up Solr Cloud correctly, but would like
some guidance to ensure I am doing it right. Ideally I want to be able
to process 40 million documents in production via Solr Cloud.