Hi Katherine, we hit exactly the same issue; we also need to protect our passwords.
Anyone with access to the Solr server can run "ps -elf|grep java" to grep the
Solr command line, and it shows all the passwords in plain text.
The bin/solr shell script sets 10 related system properties:
SOLR_SSL_OPTS=" -Dsolr.jett
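The exposure described above can be illustrated with a short sketch. The command line below is a simulated string (the property names `solr.jetty.keystore.password` and `javax.net.ssl.keyStorePassword` and the value `secret123` are placeholders for illustration); on a real host the same grep against `ps -elf` output recovers the password:

```shell
# Hypothetical illustration: the Solr JVM command line carries SSL
# passwords in plain text, so any local user can recover them via ps.
SIMULATED_CMDLINE='java -Dsolr.jetty.keystore.password=secret123 -Djavax.net.ssl.keyStorePassword=secret123 -jar start.jar'

# On a real host this would be:  ps -elf | grep [j]ava
echo "$SIMULATED_CMDLINE" | grep -o 'keyStorePassword=[^ ]*'
# prints: keyStorePassword=secret123
```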
We have been following this wiki to enable ZooKeeper ACL control:
https://cwiki.apache.org/confluence/display/solr/ZooKeeper+Access+Control#ZooKeeperAccessControl-AboutZooKeeperACLs
It works fine for the Solr service itself, but when you try to
use scripts/cloud-scripts/zkcli.sh to put a zNode, it thr
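For reference, a sketch of the setup the linked ZooKeeper Access Control page describes: the same credentials/ACL provider classes that Solr itself uses have to reach the zkcli.sh JVM as well (typically by editing the script to include the variable below), otherwise its zNode operations are rejected. The usernames and passwords here are placeholders:

```shell
# Sketch, following the ZooKeeper Access Control wiki page.
# zkDigestUsername/zkDigestPassword etc. are placeholder credentials.
export SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider \
  -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider \
  -DzkDigestUsername=admin-user -DzkDigestPassword=CHANGEME \
  -DzkDigestReadonlyUsername=readonly-user -DzkDigestReadonlyPassword=CHANGEME"

# zkcli.sh needs to pass $SOLR_ZK_CREDS_AND_ACLS on its java invocation
# before a command like this will succeed against ACL-protected zNodes:
scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd put /mynode 'data'
```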
Hi all,
we are running Solr 4.7 on the IBM J9 JVM (Java 7), with max heap set to
32 GB and 64 GB of system RAM.
JVM parameters: -Xgcpolicy:balanced -verbose:gc -Xms12228m -Xmx32768m
-XX:PermSize=128m -XX:MaxPermSize=512m
We faced one issue here: we set the zkClient timeout to 30 seconds. By
using the balanced GC po
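For context, a sketch of how the 30-second zkClient timeout mentioned above is typically set at startup in Solr 4.x (the value is in milliseconds; it can also be configured as `zkClientTimeout` in solr.xml; the `start.jar` invocation and GC flags here just mirror our setup above):

```shell
# Sketch: 30-second ZooKeeper client timeout as a startup system property.
java -DzkClientTimeout=30000 \
  -Xgcpolicy:balanced -verbose:gc -Xms12228m -Xmx32768m \
  -jar start.jar
```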
Our Solr 4.7 server recently reported the WARN message below, followed by a
long GC pause. Sometimes it forces the Solr server to disconnect
from the ZK server.
Solr 4.7.0, got this warning message:
WARN - 2015-10-19 02:23:24.503;
org.apache.solr.search.grouping.CommandHandler; Query: +(+owner
y releases the lease, so that other cores may claim it.
>
> Perhaps that explains the confusion?
>
> Shai
>
> On Mon, Sep 21, 2015 at 4:36 PM, Jeff Wu wrote:
>
> > Hi Shalin, thank you for the response.
> >
> > We waited longer than the ZK session timeout t
ore it tells
us "tlog replay"
2015-09-21 9:07 GMT-04:00 Shalin Shekhar Mangar :
> Hi Jeff,
>
> Comments inline:
>
> On Mon, Sep 21, 2015 at 6:06 PM, Jeff Wu wrote:
> > Our environment runs Solr 4.7. We recently hit a core recovery failure and
> > then it retries to r
imeout to a
> lower value but then it makes the cluster more sensitive to GC pauses
> which can also trigger new leader elections.
>
> On Mon, Sep 21, 2015 at 5:55 PM, Jeff Wu wrote:
> > Our environment still runs Solr 4.7. We recently noticed something in a test.
> When
> > we
Our environment runs Solr 4.7. We recently hit a core recovery failure, and
it then retried to recover from the tlog.
We noticed that after the 20:05:22 "Recovery failed" message, the Solr server
waited a long time before it started tlog replay. During that time, we had
about 32 cores doing such tlog replay. The service
Our environment still runs Solr 4.7. We recently noticed this in a test: when
we stopped one Solr server (solr02, via an OS shutdown), all the cores of
solr02 were shown as "down", but a few cores still remained leaders. After
that, we quickly saw that all the other servers were still sending requests
to t