Hi, I wanted to hear if anyone has successfully run SolrCloud on
Kubernetes and can share challenges and limitations.
I can't find many up-to-date GitHub projects; it would be great if you could
point out blog posts or other useful links.
Thanks in advance.
Hi Lars,
we are running Solr in Kubernetes, and after some initial problems it is
running quite stably now.
Here is the setup we chose for Solr:
- a separate service for external traffic to Solr (called “solr”)
- a StatefulSet for Solr with 3 replicas, with another service (called
“solr-discovery”)
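A minimal sketch of the two Services just described, assuming Solr's default port 8983, an `app: solr` selector label, and that the discovery service is headless (as is usual for a StatefulSet) — all of these are assumptions beyond what is stated above:

```shell
# Sketch only: port 8983 and the app=solr selector are assumptions.
cat > solr-services.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: solr              # external traffic to Solr
spec:
  selector:
    app: solr
  ports:
    - port: 8983
---
apiVersion: v1
kind: Service
metadata:
  name: solr-discovery    # service used by the StatefulSet
spec:
  clusterIP: None         # headless: gives each pod a stable DNS name
  selector:
    app: solr
  ports:
    - port: 8983
EOF
echo "wrote $(grep -c 'kind: Service' solr-services.yaml) Service definitions"
```

The StatefulSet would then reference `serviceName: solr-discovery` so that each of the three replicas gets a stable DNS name.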
Thanks for pointing out the mistake. The script runs after I corrected the
":" to ";".
However, I am now getting the following error:
Merging...
Exception in thread "main" org.apache.lucene.index.IndexNotFoundException: no segments* file found in HardlinkCopyDirectoryWrapper(MMapDirectory@C:\s
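For context on the ":" vs ";" fix: Java's classpath separator is ":" on Unix but ";" on Windows, which matches the `C:\` path in the error. A hedged sketch of what such a merge invocation could look like — the jar names and index directories below are placeholders, and the command is printed rather than executed here:

```shell
# Windows uses ';' as the Java classpath separator; Unix uses ':'.
# Jar names and index paths are placeholders, not taken from the thread.
MERGE_CMD='java -cp "lucene-core.jar;lucene-misc.jar" org.apache.lucene.misc.IndexMergeTool merged-index index1 index2'
echo "$MERGE_CMD"   # printed only; running it needs the Lucene jars on hand
```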
Setting loadOnStartup=false won't work for you in the long run,
although it does provide something of a hint. Setting this to false
means the core at that location simply has its coreDescriptor read and
stashed away in memory. The first time you _use_ that core, an attempt
will be made to load it.
Approach 2 is sufficient. You do have to ensure that you don't copy
over the write.lock file, however, as you may not be able to start
replicas if it's there.
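A small sketch of the copy step that skips write.lock; the directory names and file names here are placeholders for illustration:

```shell
# Demo setup: fake index directories; real paths will differ.
mkdir -p source_index target_index
touch source_index/segments_1 source_index/_0.cfs source_index/write.lock

# Copy every file except write.lock.
for f in source_index/*; do
  case "$(basename "$f")" in
    write.lock) ;;                 # skip the lock file
    *) cp "$f" target_index/ ;;
  esac
done
ls target_index
```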
There's a relatively little-known third option. You can (ab)use the
replication API "fetchindex" command, see:
https://cwiki.apache.org/conf
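What such a fetchindex call could look like as a curl command; the host names and core name below are assumptions, and the command is only printed here rather than executed:

```shell
# Sketch only: hosts and core names are placeholders.
SOURCE_URL="http://solr-b:8983/solr/mycore/replication"
TARGET_URL="http://solr-a:8983/solr/mycore/replication"

# The target replica pulls the index from the source.
FETCH_CMD="curl \"${TARGET_URL}?command=fetchindex&masterUrl=${SOURCE_URL}\""
echo "$FETCH_CMD"   # printed instead of run, since this is a sketch
```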
Thanks Erick. Can you explain a bit more about the write.lock file? So far I
have been copying it over from B to A and haven't seen issues starting the
replica.
On Sat, Aug 26, 2017 at 9:25 AM, Erick Erickson
wrote:
> Approach 2 is sufficient. You do have to ensure that you don't copy
> over the write.lock file
write.lock is used whenever a core (replica) wants to, well, write to
the index. Each individual replica makes sure that only one thread
writes to the index at a time. If two threads were to write to an index,
there's a very good chance the index would be corrupted, so it's a
safeguard against two or more threads writing at once.
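Given the above, a copied-over write.lock can be cleaned up before the replica is started, as long as no Solr process is still using that index. A sketch, where the directory name is a placeholder:

```shell
# Placeholder index directory; removing the lock is only safe when no
# process currently holds the index open.
INDEX_DIR="replica_index_demo"
mkdir -p "$INDEX_DIR"
touch "$INDEX_DIR/write.lock"    # simulate a lock file copied over from B

if [ -f "$INDEX_DIR/write.lock" ]; then
  echo "stale write.lock found; removing before startup"
  rm "$INDEX_DIR/write.lock"
fi
```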