Re: Authorization Non-Admin user - SOLR

2018-08-27 Thread Jan Høydahl
Hi,

The mailing list does not accept attachments, please copy/paste or use a file 
sharing service.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 27 Aug 2018, at 05:05, Rathor, Piyush (US - Philadelphia) wrote:
> 
> Hi Jan,
> 
> Please find attached security.json file.
> Please let me know if you need anything else.
> 
> Thanks & Regards
> Piyush Rathor
> Consultant
> Please consider the environment before printing.
> 
> -Original Message-
> From: Jan Høydahl  
> Sent: Friday, August 24, 2018 7:45 PM
> To: solr-user@lucene.apache.org
> Subject: [EXT] Re: Authorization Non-Admin user - SOLR
> 
> Please share your security.json for us to be able to tell whether you 
> configured something wrong
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>> On 24 Aug 2018, at 21:28, Rathor, Piyush (US - Philadelphia) wrote:
>> 
>> Hi Team,
>> 
>> We are implementing authorization in Solr version 7.3.0. We are able to 
>> create a non-admin user, but the user still has admin access (access to cores, 
>> access to create fields).
>> Can you please let us know how we can remove access to cores and access to 
>> create fields from a non-admin user using authorization?
>> 
>> Also, can you please let me know where I can check the latest updates on 
>> this issue?
>> 
>> Thanks & Regards
>> Piyush Rathor
>> Consultant
>> 
>> This message (including any attachments) contains confidential information 
>> intended for a specific individual and purpose, and is protected by law. If 
>> you are not the intended recipient, you should delete this message and any 
>> disclosure, copying, or distribution of this message, or the taking of any 
>> action based on it, by you is strictly prohibited.
>> 
>> v.E.1
> 



Re: Multiple solr instances per host vs Multiple cores in same solr instance

2018-08-27 Thread Jan Høydahl
Hi,

I would start with one instance per host and add more shards to that one. As 
long as you stay below a 32GB heap (so compressed object pointers stay enabled), 
this is the preferred setup.
It is a common mistake to think that you need more JVM heap than necessary. In 
fact you should try to minimize your heap and leave more free RAM for the OS 
page cache.
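
For reference, the heap for a stock Solr install is set in solr.in.sh; a minimal sketch (the value is an assumption to adapt to your index size):

  # solr.in.sh -- keep the heap as small as your workload allows,
  # and well under 32GB so compressed object pointers stay enabled
  SOLR_HEAP="8g"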

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 27 Aug 2018, at 05:09, Wei wrote:
> 
> Thanks Shawn. When using multiple Solr instances per host, is there any way
> to prevent SolrCloud from putting multiple replicas of the same shard on
> the same host?
> I see it makes sense if we can split into multiple instances with
> smaller heap sizes. Besides that, do you think multiple instances will be
> able to get better CPU utilization on a multi-core server?
> 
> Thanks,
> Wei
> 
> On Sun, Aug 26, 2018 at 4:37 AM Shawn Heisey  wrote:
> 
>> On 8/26/2018 12:00 AM, Wei wrote:
>>> I have a question about the deployment configuration in solr cloud.  When
>>> we need to increase the number of shards in solr cloud, there are two
>>> options:
>>> 
>>> 1.  Run multiple solr instances per host, each with a different port and
>>> hosting a single core for one shard.
>>> 
>>> 2.  Run one solr instance per host, and have multiple cores(shards) in
>> the
>>> same solr instance.
>>> 
>>> Which would be better performance wise? For the first option I think JVM
>>> size for each solr instance can be smaller, but deployment is more
>>> complicated? Are there any differences for cpu utilization?
>> 
>> My general advice is to only have one Solr instance per machine.  One
>> Solr instance can handle many indexes, and usually will do so with less
>> overhead than two or more instances.
>> 
>> I can think of *ONE* exception to this -- when a single Solr instance
>> would require a heap that's extremely large. Splitting that into two or
>> more instances MIGHT greatly reduce garbage collection pauses.  But
>> there's a caveat to the caveat -- in my strong opinion, if your Solr
>> instance is so big that it requires a huge heap and you're considering
>> splitting into multiple Solr instances on one machine, you very likely
>> need to run each of those instances on *separate* machines, so that each
>> one can have access to all the resources of the machine it's running on.
>> 
>> For SolrCloud, when you're running multiple instances per machine, Solr
>> will consider those to be completely separate instances, and you may end
>> up with all of the replicas for a shard on a single machine, which is a
>> problem for high availability.
>> 
>> Thanks,
>> Shawn
>> 
>> 
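
For reference, regarding Wei's first question: SolrCloud's rule-based replica placement can keep replicas of a shard apart at collection-creation time. A sketch, assuming the legacy placement-rule syntax of Solr 6/7 (collection name and counts are placeholders):

  # for a given shard, keep fewer than 2 replicas on any one host
  http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=2&rule=shard:*,replica:<2,host:*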



Re: SOLR zookeeper connection timeout during startup is hardcoded to 10000ms

2018-08-27 Thread Dominique Bejean
Hi,

We are also experiencing time-out issues from time to time.

I sent this message one month ago, by mistake, to the dev list.

Why are hardcoded values used only in the ZkClientClusterStateProvider.java file
when there are existing parameters for these time-outs?

Regards

Dominique



We are experiencing an issue related to a ZK timeout.

Stacktrace is :

ERROR 19 juin 2018 06:24:07,152 - h.concurrent.ConcurrentService:67   -
Erreur dans l'attente de la fin de l'exécution d'un thread
ERROR 19 juin 2018 06:24:07,152 - h.concurrent.ConcurrentService:68   -
org.apache.solr.common.SolrException:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper
xxx.xxx.xxx.xxx:2181 within 10000 ms
ERROR 19 juin 2018 06:24:07,152 -   api.batch.Lanceur:98   -
org.apache.solr.common.SolrException:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper
xxx.xxx.xxx.xxx:2181 within 10000 ms
java.util.concurrent.ExecutionException:
org.apache.solr.common.SolrException:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper
xxx.xxx.xxx.xxx:2181 within 10000 ms
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 ...
Caused by: org.apache.solr.common.SolrException:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper
xxx.xxx.xxx.xxx:2181 within 10000 ms
 at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:182)
 at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)
 at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:106)
 at
org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:226)
 at
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:121)
...


In solr.xml, we have:
<int name="zkClientTimeout">${zkClientTimeout:30000}</int>

In solr.in.sh, we have:
#ZK_CLIENT_TIMEOUT="15000"
or
ZK_CLIENT_TIMEOUT="30000"

So zkClientTimeout should be 30000.

In the source code of ZkClientClusterStateProvider.java, I see zkClientTimeout
is hardcoded to 10000! Is it normal that the configuration is not used?

lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java

int zkConnectTimeout = 10000;
int zkClientTimeout = 10000;

...

zk = new ZkStateReader(zkHost, zkClientTimeout, zkConnectTimeout);


Regards.
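
For reference, until these hardcoded values are made configurable, the timeouts can be overridden on the SolrJ side before connecting. A minimal sketch, assuming the setters CloudSolrClient exposes in SolrJ 7.x (verify they exist in your version):

  // Hypothetical workaround: raise the ZK timeouts before connecting.
  CloudSolrClient client = new CloudSolrClient.Builder()
      .withZkHost("zk1:2181,zk2:2181,zk3:2181")
      .build();
  client.setZkConnectTimeout(30000); // ms allowed to establish the ZK connection
  client.setZkClientTimeout(30000);  // ZK session timeout in ms
  client.connect();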

On Fri, 24 Aug 2018 at 20:15, dshih wrote:

> Sorry, yes 10,000 ms.
>
> We have a single test cluster (out of probably hundreds) where one node
> hits
> this consistently.  I'm not sure what kind of issues (network?) that node
> is
> having.
>
> Generally though, we ship SOLR as part of our product, and we cannot
> control
> our customers' hardware and setup besides listing minimum requirements.
> While I think this issue will probably be extremely rare, we would
> definitely prefer to be able to say: "well, if you can't fix your hardware
> issue, try increasing this timeout setting".
>
> Thanks,
> Danny
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread zhenyuan wei
Thanks for your answer! @Erick Erickson
So, it's not recommended to run Solr on a network filesystem (like HDFS) now? Maybe
because of crash errors or performance problems.
I had a look at SOLR-8335 and SOLR-8169; there is no good solution for this
yet, and maybe manual removal is the best option?


On Mon, Aug 27, 2018 at 11:41 AM, Erick Erickson wrote:

> Because HDFS doesn't follow the file semantics that Solr expects.
>
> There's quite a bit of background here:
> https://issues.apache.org/jira/browse/SOLR-8335
>
> Best,
> Erick
> On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei  wrote:
> >
> > Hi all,
> > I found an exception when running Solr on HDFS. The details:
> > Solr was running on HDFS, with document updates running continuously;
> > then kill -9 the Solr JVM, or reboot/shut down the Linux OS, then restart
> all.
> > The exception  appears like:
> >
> > 2018-08-26 22:23:12.529 ERROR
> >
> (coreContainerWorkExecutor-2-thread-1-processing-n:cluster-node001:8983_solr)
> > [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
> > startup
> > org.apache.solr.common.SolrException: Unable to create core
> > [collection002_shard56_replica_n110]
> > at
> >
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1061)
> > at
> > org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
> > at
> >
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at
> >
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> > at java.lang.Thread.run(Thread.java:834)
> > Caused by: org.apache.solr.common.SolrException: Index dir
> > 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
> > 'collection002_shard56_replica_n110' is already locked. The most likely
> > cause is another Solr server (or another solr core in this server) also
> > configured to use this directory; other possible causes may be specific
> to
> > lockType: hdfs
> > at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1009)
> > at org.apache.solr.core.SolrCore.<init>(SolrCore.java:864)
> > at
> >
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1040)
> > ... 7 more
> > Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
> > 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
> > 'collection002_shard56_replica_n110' is already locked. The most likely
> > cause is another Solr server (or another solr core in this server) also
> > configured to use this directory; other possible causes may be specific
> to
> > lockType: hdfs
> > at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
> > at org.apache.solr.core.SolrCore.<init>(SolrCore.java:955)
> > ... 9 more
> >
> >
> > In fact, printing out the HDFS-API-level exception stack, it reports:
> >
> > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
> > /solr/collection002/core_node17/data/index/write.lock for client
> > 192.168.0.12 already exists
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2563)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
> > at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
> > at
> >
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> > at
> >
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at javax.security.auth.Subject.doAs(Subject.java:422)
> > at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
> > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
> >
> > at
> sun.reflect.GeneratedConstructorAccessor140.newInstance(Unknown
> > Source)
> > at
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(

Re: Multiple solr instances per host vs Multiple cores in same solr instance

2018-08-27 Thread Bernd Fehling

My tests with many combinations (instance, node, core) on a 3-server cluster
with SolrCloud showed that the highest performance comes from multiple Solr
instances, with shards and replicas placed by rules so that you get the advantage
of preferLocalShards=true.

The disadvantage is the handling of the system, which means setup, starting
and stopping, setting up the shards and replicas with rules, and so on.

I tested with a 3x3 SolrCloud (3 shards, 3 replicas).
A 3x3 system with one instance and 3 cores per host could handle up to 30 QPS.
A 3x3 system with multiple instances (different ports, single core and shard per
instance) could handle 60 QPS on the same hardware with the same data.

Also, the single-instance-per-server setup has spikes in the response time graph
which are not seen with a multi-instance setup.

Tested about 2 months ago with SolrCloud 6.4.2.

Regards,
Bernd


On 26.08.2018 at 08:00, Wei wrote:

Hi,

I have a question about the deployment configuration in solr cloud.  When
we need to increase the number of shards in solr cloud, there are two
options:

1.  Run multiple solr instances per host, each with a different port and
hosting a single core for one shard.

2.  Run one solr instance per host, and have multiple cores(shards) in the
same solr instance.

Which would be better performance wise? For the first option I think JVM
size for each solr instance can be smaller, but deployment is more
complicated? Are there any differences for cpu utilization?

Thanks,
Wei



Re: Multiple solr instances per host vs Multiple cores in same solr instance

2018-08-27 Thread Jan Høydahl
What was your bottleneck when maxing out at 30 QPS on the 3-node cluster?
I expect such tests to vary quite a bit between use cases, so a good approach is 
to do just as you did: benchmark on your specific data and usage.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 27 Aug 2018, at 10:45, Bernd Fehling wrote:
> 
> My tests with many combinations (instance, node, core) on a 3-server cluster
> with SolrCloud showed that the highest performance comes from multiple Solr
> instances, with shards and replicas placed by rules so that you get the
> advantage of preferLocalShards=true.
> 
> The disadvantage is the handling of the system, which means setup, starting
> and stopping, setting up the shards and replicas with rules, and so on.
> 
> I tested with a 3x3 SolrCloud (3 shards, 3 replicas).
> A 3x3 system with one instance and 3 cores per host could handle up to 30 QPS.
> A 3x3 system with multiple instances (different ports, single core and shard per
> instance) could handle 60 QPS on the same hardware with the same data.
> 
> Also, the single-instance-per-server setup has spikes in the response time 
> graph
> which are not seen with a multi-instance setup.
> 
> Tested about 2 months ago with SolrCloud 6.4.2.
> 
> Regards,
> Bernd
> 
> 
> On 26.08.2018 at 08:00, Wei wrote:
>> Hi,
>> I have a question about the deployment configuration in solr cloud.  When
>> we need to increase the number of shards in solr cloud, there are two
>> options:
>> 1.  Run multiple solr instances per host, each with a different port and
>> hosting a single core for one shard.
>> 2.  Run one solr instance per host, and have multiple cores(shards) in the
>> same solr instance.
>> Which would be better performance wise? For the first option I think JVM
>> size for each solr instance can be smaller, but deployment is more
>> complicated? Are there any differences for cpu utilization?
>> Thanks,
>> Wei



Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread Walter Underwood
I accidentally put my Solr indexes on NFS once about ten years ago.
It was 100X slower. I would not recommend that.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Aug 27, 2018, at 1:39 AM, zhenyuan wei  wrote:
> 
> Thanks for your answer! @Erick Erickson
> So, it's not recommended to run Solr on a network filesystem (like HDFS) now? Maybe
> because of crash errors or performance problems.
> I had a look at SOLR-8335 and SOLR-8169; there is no good solution for this
> yet, and maybe manual removal is the best option?
> 
> 
> On Mon, Aug 27, 2018 at 11:41 AM, Erick Erickson wrote:
> 
>> Because HDFS doesn't follow the file semantics that Solr expects.
>> 
>> There's quite a bit of background here:
>> https://issues.apache.org/jira/browse/SOLR-8335
>> 
>> Best,
>> Erick
>> On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei  wrote:
>>> 
>>> Hi all,
>>> I found an exception when running Solr on HDFS. The details:
>>> Solr was running on HDFS, with document updates running continuously;
>>> then kill -9 the Solr JVM, or reboot/shut down the Linux OS, then restart
>> all.
>>> The exception  appears like:
>>> 
>>> 2018-08-26 22:23:12.529 ERROR
>>> 
>> (coreContainerWorkExecutor-2-thread-1-processing-n:cluster-node001:8983_solr)
>>> [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
>>> startup
>>> org.apache.solr.common.SolrException: Unable to create core
>>> [collection002_shard56_replica_n110]
>>>at
>>> 
>> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1061)
>>>at
>>> org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
>>>at
>>> 
>> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>>>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>at
>>> 
>> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>>>at
>>> 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
>>>at
>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
>>>at java.lang.Thread.run(Thread.java:834)
>>> Caused by: org.apache.solr.common.SolrException: Index dir
>>> 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
>>> 'collection002_shard56_replica_n110' is already locked. The most likely
>>> cause is another Solr server (or another solr core in this server) also
>>> configured to use this directory; other possible causes may be specific
>> to
>>> lockType: hdfs
>>>at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1009)
>>>at org.apache.solr.core.SolrCore.<init>(SolrCore.java:864)
>>>at
>>> 
>> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1040)
>>>... 7 more
>>> Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
>>> 'hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/' of core
>>> 'collection002_shard56_replica_n110' is already locked. The most likely
>>> cause is another Solr server (or another solr core in this server) also
>>> configured to use this directory; other possible causes may be specific
>> to
>>> lockType: hdfs
>>>at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:746)
>>>at org.apache.solr.core.SolrCore.<init>(SolrCore.java:955)
>>>... 9 more
>>> 
>>> 
>>> In fact, printing out the HDFS-API-level exception stack, it reports:
>>> 
>>> Caused by: org.apache.hadoop.fs.FileAlreadyExistsException:
>>> /solr/collection002/core_node17/data/index/write.lock for client
>>> 192.168.0.12 already exists
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2563)
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
>>>at
>>> 
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:623)
>>>at
>>> 
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
>>>at
>>> 
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>>at
>>> 
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>>at java.security.AccessController.doPrivileged(Native Method)
>>>at javax.security.auth.Subject.doAs(Subject.java:422)
>>>at
>>> 
>> org.apache.hadoop.security.UserGroup

Issue with adding an extra Solr Slave

2018-08-27 Thread Zafar Khurasani
Hi,

I'm running Solr 5.3 in one of our applications. Currently, we have one Solr 
Master and one Solr slave running on AWS EC2 instances. I'm trying to add an 
additional Solr slave. I'm using an Elastic LoadBalancer (ELB) in front of my 
Slaves. I see the following error in the logs after adding the second slave,


java version "1.8.0_121"

Solr version: 5.3.0 1696229


org.apache.solr.common.SolrException: Core with core name [xxx-xxx-] does 
not exist.
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:770)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:240)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:194)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:675)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:443)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)


Also, when I hit the Solr Admin UI, I can only see my core intermittently. I 
have to refresh the page multiple times before it appears. What's the right 
way to add a slave to my existing setup?

FYI - the Solr Replication section in solrconfig.xml is exactly the same for 
both the Slaves.
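
For reference: if the admin UI is reached through the ELB, each refresh may land on a different slave, which would explain the core appearing only intermittently; it likely exists on one slave but not the other. Note also that each slave's masterUrl should point at the master directly, not at the load balancer. A sketch of a typical slave replication section, with host and core names as placeholders:

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://master-host:8983/solr/corename/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>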

Thanks,
Zafar Khurasani



Re: [help!] java.lang.NoSuchMethodError: org.apache.solr.client.solrj.request.schema.SchemaRequest

2018-08-27 Thread yx zhou
Hi Steve

Thank your reply.
My solr server is from http://archive.apache.org/dist/lucene/solr/7.0.1/,
 and my SolrJ client is


<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>7.0.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.noggit</groupId>
            <artifactId>noggit</artifactId>
        </exclusion>
    </exclusions>
</dependency>


Thanks


On Fri, Aug 24, 2018 at 4:50 PM Steve Rowe  wrote:

> Hi,
>
> SchemaRequestJSONWriter class was removed in SOLR-12455[1], but this
> change has not been released yet (will be released with Solr 7.5).  I’m
> guessing you’re using code built against branch_7x or master?  If so, then
> one solution is to build against the released source for any 7.x version.
>
> [1] https://issues.apache.org/jira/browse/SOLR-12455
>
> --
> Steve
> www.lucidworks.com
>
> > On Aug 24, 2018, at 6:47 PM, yx zhou  wrote:
> >
> > Got the following error when I try to delete a field with the Schema API, on
> > Solr 7.0.1, cloud mode
> >
> > java.lang.NoSuchMethodError:
> >
> org.apache.solr.client.solrj.request.schema.SchemaRequest$SchemaRequestJSONWriter.writeString(Ljava/lang/String;)V
> >
> > at
> >
> org.apache.solr.client.solrj.request.schema.SchemaRequest$SchemaRequestJSONWriter.write(SchemaRequest.java:824)
> > at
> >
> org.apache.solr.client.solrj.request.schema.SchemaRequest$Update.getContentStreams(SchemaRequest.java:711)
> > at
> >
> org.apache.solr.client.solrj.request.RequestWriter.getContentStreams(RequestWriter.java:51)
> > at
> >
> org.apache.solr.client.solrj.impl.BinaryRequestWriter.getContentStreams(BinaryRequestWriter.java:53)
> > at
> >
> org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:330)
> > at
> >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> > at
> >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
> > at
> >
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
> > at
> >
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
> > at
> >
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
> > at
> >
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
> > at
> >
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
> > at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
> > at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
> > at
> org.janusgraph.diskstorage.solr.SolrUtil.deleteField(SolrUtil.java:286)
> > at org.janusgraph.diskstorage.solr.SolrUtil.fieldsSync(SolrUtil.java:249)
> >
> > My code is
> >
> > public static void deleteField(CloudSolrClient client, String
> > collection, String fieldName)
> >throws IOException, SolrServerException {
> >SchemaRequest.DeleteField deleteFieldRequest = new
> > SchemaRequest.DeleteField(fieldName);
> >
> >client.setDefaultCollection(collection);
> >SchemaResponse.UpdateResponse deleteFieldResponse =
> > deleteFieldRequest.process(client);
> > }
>
>


Re: [help!] java.lang.NoSuchMethodError: org.apache.solr.client.solrj.request.schema.SchemaRequest

2018-08-27 Thread yx zhou
I found the root cause; it is this setting:

<exclusion>
    <groupId>org.noggit</groupId>
    <artifactId>noggit</artifactId>
</exclusion>

It has a duplicate (version conflict) in my dependencies.
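
For reference, a quick way to see which dependencies pull in conflicting noggit versions, assuming a Maven build:

  mvn dependency:tree -Dincludes=org.noggit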


On Mon, Aug 27, 2018 at 11:22 AM yx zhou  wrote:

> Hi Steve
>
> Thank your reply.
> My solr server is from
> http://archive.apache.org/dist/lucene/solr/7.0.1/,   and my SolrJ client
> is
>
> <dependency>
>     <groupId>org.apache.solr</groupId>
>     <artifactId>solr-solrj</artifactId>
>     <version>7.0.1</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.slf4j</groupId>
>             <artifactId>jcl-over-slf4j</artifactId>
>         </exclusion>
>         <exclusion>
>             <groupId>org.noggit</groupId>
>             <artifactId>noggit</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
>
> Thanks
>
>
> On Fri, Aug 24, 2018 at 4:50 PM Steve Rowe  wrote:
>
>> Hi,
>>
>> SchemaRequestJSONWriter class was removed in SOLR-12455[1], but this
>> change has not been released yet (will be released with Solr 7.5).  I’m
>> guessing you’re using code built against branch_7x or master?  If so, then
>> one solution is to build against the released source for any 7.x version.
>>
>> [1] https://issues.apache.org/jira/browse/SOLR-12455
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>> > On Aug 24, 2018, at 6:47 PM, yx zhou  wrote:
>> >
>> > Got the following error when I try to delete a field with the Schema API,
>> on
>> > Solr 7.0.1, cloud mode
>> >
>> > java.lang.NoSuchMethodError:
>> >
>> org.apache.solr.client.solrj.request.schema.SchemaRequest$SchemaRequestJSONWriter.writeString(Ljava/lang/String;)V
>> >
>> > at
>> >
>> org.apache.solr.client.solrj.request.schema.SchemaRequest$SchemaRequestJSONWriter.write(SchemaRequest.java:824)
>> > at
>> >
>> org.apache.solr.client.solrj.request.schema.SchemaRequest$Update.getContentStreams(SchemaRequest.java:711)
>> > at
>> >
>> org.apache.solr.client.solrj.request.RequestWriter.getContentStreams(RequestWriter.java:51)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.BinaryRequestWriter.getContentStreams(BinaryRequestWriter.java:53)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:330)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
>> > at
>> >
>> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
>> > at
>> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
>> > at
>> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
>> > at
>> org.janusgraph.diskstorage.solr.SolrUtil.deleteField(SolrUtil.java:286)
>> > at
>> org.janusgraph.diskstorage.solr.SolrUtil.fieldsSync(SolrUtil.java:249)
>> >
>> > My code is
>> >
>> > public static void deleteField(CloudSolrClient client, String
>> > collection, String fieldName)
>> >throws IOException, SolrServerException {
>> >SchemaRequest.DeleteField deleteFieldRequest = new
>> > SchemaRequest.DeleteField(fieldName);
>> >
>> >client.setDefaultCollection(collection);
>> >SchemaResponse.UpdateResponse deleteFieldResponse =
>> > deleteFieldRequest.process(client);
>> > }
>>
>>


Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread Shawn Heisey

On 8/26/2018 7:47 PM, zhenyuan wei wrote:

 I found an exception when running Solr on HDFS. The details:
Solr was running on HDFS, with document updates running continuously;
then kill -9 the Solr JVM, or reboot/shut down the Linux OS, then restart everything.


If you use "kill -9" to stop a Solr instance, the lockfile will get left 
behind and you may have difficulty starting Solr back up on ANY kind of 
filesystem until you delete the file in each core's data directory.  The 
filename defaults to "write.lock" if you don't change it.


Thanks,
Shawn
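
For an index on HDFS, the leftover lock file can be removed with the HDFS CLI once no Solr instance is using that index — a sketch, reusing the path from the stack trace earlier in this thread:

  hdfs dfs -rm hdfs://hdfs-cluster/solr/collection002/core_node113/data/index/write.lock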



Re: Permission Denied when trying to connect to Solr running on a different server

2018-08-27 Thread cyndefromva
It was a config issue. SELinux on the machine was not allowing Apache to
talk to port 8983. I verified this by temporarily turning off enforcement
(setenforce 0). Once I did this I was able to run searches as expected. I
then turned enforcement back on (setenforce 1) and added a rule for port
8983:

semanage port -a -t http_port_t -p tcp 8983
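
To confirm the rule took effect, something like this should now list 8983 under http_port_t:

  semanage port -l | grep http_port_t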





--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


RE: Authorization Non-Admin user - SOLR

2018-08-27 Thread Rathor, Piyush (US - Philadelphia)
Please find it pasted below:


--

{
  "authentication":{
"blockUnknown":true,
"class":"solr.BasicAuthPlugin",
"credentials":{
  "solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= 
Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c=",
  "tom":"a5fzuaXihDcKzl2W8Q26NtPlyhQL2gKxsOThYUfa9/U= 
23iTm91z9aGXGcdjSJAMUnLoVglY40J8GGEE5jt+Gsg="},
"":{"v":0}},
  "authorization":{
"class":"solr.RuleBasedAuthorizationPlugin",
"permissions":[
  {
"name":"security-edit",
"role":"admin",
"index":1},
  {
"name":"collection-mgr",
"path":"/person/update",
"params":{"action":"CREATE"},
"role":"xz",
"index":2},
  {
"name":"update",
"role":"dev",
"index":3}],
"user-role":{
  "solr":[
"admin",
"dev"],
  "harry":"dev",
  "tom":"xz"},
"":{"v":0}}}



---
Thanks
Piyush


From: Jan Høydahl 
Sent: 27 August 2018 12:52
To: solr-user@lucene.apache.org
Subject: [EXT] Re: Authorization Non-Admin user - SOLR

Hi,

The mailing list does not accept attachments, please copy/paste or use a file 
sharing service.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 27 Aug 2018, at 05:05, Rathor, Piyush (US - Philadelphia) wrote:
>
> Hi Jan,
>
> Please find attached security.json file.
> Please let me know if you need anything else.
>
> Thanks & Regards
> Piyush Rathor
> Consultant
> Please consider the environment before printing.
>
> -Original Message-
> From: Jan Høydahl 
> Sent: Friday, August 24, 2018 7:45 PM
> To: solr-user@lucene.apache.org
> Subject: [EXT] Re: Authorization Non-Admin user - SOLR
>
> Please share your security.json for us to be able to tell whether you 
> configured something wrong
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>> On 24 Aug 2018, at 21:28, Rathor, Piyush (US - Philadelphia) wrote:
>>
>> Hi Team,
>>
>> We are implementing authorization in Solr version 7.3.0. We are able to 
>> create a non-admin user, but the user still has admin access (access to cores, 
>> access to create fields).
>> Can you please let us know how we can remove access to cores and access to 
>> create fields from a non-admin user using authorization?
>>
>> Also, can you please let me know where I can check the latest updates on 
>> this issue?
>>
>> Thanks & Regards
>> Piyush Rathor
>> Consultant
>>
>> This message (including any attachments) contains confidential information 
>> intended for a specific individual and purpose, and is protected by law. If 
>> you are not the intended recipient, you should delete this message and any 
>> disclosure, copying, or distribution of this message, or the taking of any 
>> action based on it, by you is strictly prohibited.
>>
>> v.E.1
>



unique() in the JSON facets doesn’t count all the different values in a field

2018-08-27 Thread Alfonso Muñoz-Pomer Fuentes
Hi all,

We’re running a SolrCloud 7.1 instance in our service and I’ve come across 
a disagreement when trying to count the distinct values a field has:

Using the JSON facets API with unique():
3385

Using the JSON facets API with terms:
3388

Using the stats component:
countDistinct   3388
cardinality 3356

My biggest surprise is that the unique function in the JSON facets API doesn’t 
get the value right. Is this to be expected, in the same way that 
cardinality is an approximation?

If this is a bug (unless there’s something very basic I’m missing, I think it 
is), how should I report it? It’s the first time I’ve seen a disagreement 
between unique and terms, and I don’t know how to reproduce it unless it’s with 
our specific collection.

Many thanks in advance.

--
Alfonso Muñoz-Pomer Fuentes
Senior Lead Software Engineer @ Expression Atlas Team
European Bioinformatics Institute (EMBL-EBI)
European Molecular Biology Laboratory
Tel:+ 44 (0) 1223 49 2633
Skype: amunozpomer



Re: Multiple solr instances per host vs Multiple cores in same solr instance

2018-08-27 Thread Wei
Thanks Bernd.  Do you have preferLocalShards=true in both cases? Do you
notice CPU/memory utilization difference between the two deployments? How
many servers did you use in total?  I am curious what's the bottleneck for
the one instance and 3 cores configuration.

Thanks,
Wei

On Mon, Aug 27, 2018 at 1:45 AM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:

> My tests with many combinations (instance, node, core) on a 3-server
> cluster
> with SolrCloud showed that the highest performance comes from multiple Solr
> instances, with shards and replicas placed by rules so that you get the
> advantage of preferLocalShards=true.
>
> The disadvantage is the handling of the system, which means setup,
> starting
> and stopping, setting up the shards and replicas with rules, and so on.
>
> I tested with a 3x3 SolrCloud (3 shards, 3 replicas).
> A 3x3 system with one instance and 3 cores per host could handle up to
> 30 QPS.
> A 3x3 system with multiple instances (different ports, single core and shard
> per
> instance) could handle 60 QPS on the same hardware with the same data.
>
> Also, the single-instance-per-server setup has spikes in the response time
> graph
> which are not seen with a multi-instance setup.
>
> Tested about 2 months ago with SolrCloud 6.4.2.
>
> Regards,
> Bernd
>
>
> On 26.08.2018 at 08:00, Wei wrote:
> > Hi,
> >
> > I have a question about the deployment configuration in solr cloud.  When
> > we need to increase the number of shards in solr cloud, there are two
> > options:
> >
> > 1.  Run multiple solr instances per host, each with a different port and
> > hosting a single core for one shard.
> >
> > 2.  Run one solr instance per host, and have multiple cores(shards) in
> the
> > same solr instance.
> >
> > Which would be better performance wise? For the first option I think JVM
> > size for each solr instance can be smaller, but deployment is more
> > complicated? Are there any differences for cpu utilization?
> >
> > Thanks,
> > Wei
> >
>


Re: unique() in the JSON facets doesn’t count all the different values in a field

2018-08-27 Thread Alfonso Muñoz-Pomer Fuentes
I just found out reading the Solr ref. guide for 7.1 that:
> • JSON Facet API now uses hyper-log-log for numBuckets cardinality 
> calculation and calculates cardinality

Unfortunately the HTML ref. guide for version 7.1 didn’t contain any 
documentation regarding the JSON facet API. I’ve checked in later versions that 
unique returns approximate values for cardinalities higher than 100 
(https://lucene.apache.org/solr/guide/7_2/json-facet-api.html#AggregationFunctions).

Please dismiss the previous email!
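
For reference, a minimal JSON request comparing the two estimators side by side — collection and field names are placeholders:

  curl http://localhost:8983/solr/mycollection/query -d '
  {
    "query": "*:*",
    "limit": 0,
    "facet": {
      "distinct_unique": "unique(myfield)",
      "distinct_hll": "hll(myfield)"
    }
  }'

unique() returns exact counts up to 100 distinct values and estimates above that, while hll() always uses HyperLogLog.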



> On 27 Aug 2018, at 21:38, Alfonso Muñoz-Pomer Fuentes  
> wrote:
> 
> Hi all,
> 
> We’re running a SolrCloud 7.1 instance in our service and I’ve come across 
> a disagreement when trying to count the distinct values a field has:
> 
> Using the JSON facets API with unique():
> 3385
> 
> Using the JSON facets API with terms:
> 3388
> 
> Using the stats component:
> countDistinct 3388
> cardinality   3356
> 
> My biggest surprise is that the unique function in the JSON facets API doesn’t 
> get the value right. Is this to be expected, in the same way that 
> cardinality is an approximation?
> 
> If this is a bug (unless there’s something very basic I’m missing, I think it 
> is), how should I report it? It’s the first time I’ve seen a disagreement 
> between unique and terms, and I don’t know how to reproduce it unless it’s 
> with our specific collection.
> 
> Many thanks in advance.
> 
> --
> Alfonso Muñoz-Pomer Fuentes
> Senior Lead Software Engineer @ Expression Atlas Team
> European Bioinformatics Institute (EMBL-EBI)
> European Molecular Biology Laboratory
> Tel:+ 44 (0) 1223 49 2633
> Skype: amunozpomer
> 

--
Alfonso Muñoz-Pomer Fuentes
Senior Lead Software Engineer @ Expression Atlas Team
European Bioinformatics Institute (EMBL-EBI)
European Molecular Biology Laboratory
Tel:+ 44 (0) 1223 49 2633
Skype: amunozpomer



cloud disk space utilization

2018-08-27 Thread Kudrettin Güleryüz
Hi,

We have six Solr nodes with ~1TiB of disk space on each, mounted as ext4. The
indexers sometimes update the collections, and create new ones if an update
wouldn't be faster than indexing from scratch. (Up to around 5 million documents
are indexed for each collection.) On average there are around 130
collections on this SolrCloud. Collection sizes vary from 1GiB to 150GiB.

Preferences set:

  "cluster-preferences":[{
  "maximize":"freedisk",
  "precision":10}
,{
  "minimize":"cores",
  "precision":1}
,{
  "minimize":"sysLoadAvg",
  "precision":3}],

* Is it possible to run out of disk space on one of the nodes while
others still have plenty? I observe some getting close to ~80%
utilization while others stay at ~60%.
* Would this difference be due to differences in collection index size, or due
to an error on my side in coming up with a useful policy/preferences?
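
For reference: cluster preferences only rank candidate nodes when a new replica is placed; they never move existing replicas, so utilization can drift apart as collections grow unevenly. If a hard floor is wanted, the 7.x autoscaling framework also takes cluster-policy clauses — a sketch, assuming that release's policy syntax (verify against the ref guide before use):

  "cluster-policy":[{
      "replica":0,
      "freedisk":"<100",
      "strict":false}]

This would discourage placing any replica on a node with less than 100GB of free disk.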

Thank you


Re: [help!] java.lang.NoSuchMethodError: org.apache.solr.client.solrj.request.schema.SchemaRequest

2018-08-27 Thread Steve Rowe
Thanks for letting us know about the source of the problem.

--
Steve
www.lucidworks.com

> On Aug 27, 2018, at 3:15 PM, yx zhou  wrote:
> 
> I found the root cause; it is this setting:
> 
> <exclusion>
>     <groupId>org.noggit</groupId>
>     <artifactId>noggit</artifactId>
> </exclusion>
> 
> It has a duplicate (version conflict) in my dependencies.
> 
> 
> On Mon, Aug 27, 2018 at 11:22 AM yx zhou  wrote:
> 
>> Hi Steve
>> 
>>Thank your reply.
>>My solr server is from
>> http://archive.apache.org/dist/lucene/solr/7.0.1/,   and my SolrJ client
>> is
>> 
>> <dependency>
>>     <groupId>org.apache.solr</groupId>
>>     <artifactId>solr-solrj</artifactId>
>>     <version>7.0.1</version>
>>     <exclusions>
>>         <exclusion>
>>             <groupId>org.slf4j</groupId>
>>             <artifactId>jcl-over-slf4j</artifactId>
>>         </exclusion>
>>         <exclusion>
>>             <groupId>org.noggit</groupId>
>>             <artifactId>noggit</artifactId>
>>         </exclusion>
>>     </exclusions>
>> </dependency>
>> 
>> 
>> Thanks
>> 
>> 
>> On Fri, Aug 24, 2018 at 4:50 PM Steve Rowe  wrote:
>> 
>>> Hi,
>>> 
>>> SchemaRequestJSONWriter class was removed in SOLR-12455[1], but this
>>> change has not been released yet (will be released with Solr 7.5).  I’m
>>> guessing you’re using code built against branch_7x or master?  If so, then
>>> one solution is to build against the released source for any 7.x version.
>>> 
>>> [1] https://issues.apache.org/jira/browse/SOLR-12455
>>> 
>>> --
>>> Steve
>>> www.lucidworks.com
>>> 
 On Aug 24, 2018, at 6:47 PM, yx zhou  wrote:
 
 Got the following error when I try to delete a field with the Schema API,
>>> on
 Solr 7.0.1, cloud mode
 
 java.lang.NoSuchMethodError:
 
>>> org.apache.solr.client.solrj.request.schema.SchemaRequest$SchemaRequestJSONWriter.writeString(Ljava/lang/String;)V
 
 at
 
>>> org.apache.solr.client.solrj.request.schema.SchemaRequest$SchemaRequestJSONWriter.write(SchemaRequest.java:824)
 at
 
>>> org.apache.solr.client.solrj.request.schema.SchemaRequest$Update.getContentStreams(SchemaRequest.java:711)
 at
 
>>> org.apache.solr.client.solrj.request.RequestWriter.getContentStreams(RequestWriter.java:51)
 at
 
>>> org.apache.solr.client.solrj.impl.BinaryRequestWriter.getContentStreams(BinaryRequestWriter.java:53)
 at
 
>>> org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:330)
 at
 
>>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
 at
 
>>> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
 at
 
>>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
 at
 
>>> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
 at
 
>>> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
 at
 
>>> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
 at
 
>>> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
 at
>>> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
 at
>>> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
 at
>>> org.janusgraph.diskstorage.solr.SolrUtil.deleteField(SolrUtil.java:286)
 at
>>> org.janusgraph.diskstorage.solr.SolrUtil.fieldsSync(SolrUtil.java:249)
 
 My code is
 
 public static void deleteField(CloudSolrClient client, String
 collection, String fieldName)
   throws IOException, SolrServerException {
   SchemaRequest.DeleteField deleteFieldRequest = new
 SchemaRequest.DeleteField(fieldName);
 
   client.setDefaultCollection(collection);
   SchemaResponse.UpdateResponse deleteFieldResponse =
 deleteFieldRequest.process(client);
 }
>>> 
>>> 



Re: An exception when running Solr on HDFS,why a solr server can not recognize the write.lock file is created by itself before?

2018-08-27 Thread zhenyuan wei
@Shawn Heisey  Yeah, deleting the "write.lock" files manually worked in the end.
@Walter Underwood  Is there any recent performance evaluation of Solr on HDFS vs.
a local FS?

On Tue, Aug 28, 2018 at 4:10 AM, Shawn Heisey wrote:

> On 8/26/2018 7:47 PM, zhenyuan wei wrote:
> >  I found an exception when running Solr on HDFS. The details:
> > Solr was running on HDFS, with document updates running continuously;
> > then kill -9 the Solr JVM, or reboot/shut down the Linux OS, then restart
> all.
>
> If you use "kill -9" to stop a Solr instance, the lockfile will get left
> behind and you may have difficulty starting Solr back up on ANY kind of
> filesystem until you delete the file in each core's data directory.  The
> filename defaults to "write.lock" if you don't change it.
>
> Thanks,
> Shawn
>
>


“solr.data.dir” can only config a single directory

2018-08-27 Thread zhenyuan wei
Hi all,
 I found that “solr.data.dir” can only be configured with a single directory. I
think it is necessary to support multiple directories, such as
”solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", to cope with one disk becoming overloaded
or hitting its capacity limit. Is there a reason this is not supported?


Best,
TinsWzy


Re: “solr.data.dir” can only config a single directory

2018-08-27 Thread Shawn Heisey

On 8/27/2018 8:29 PM, zhenyuan wei wrote:

  I found that “solr.data.dir” can only be configured with a single directory. I
think it is necessary to support multiple directories, such as
”solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", to cope with one disk becoming overloaded
or hitting its capacity limit. Is there a reason this is not supported?


Nobody has written the code to support it.  It would very likely not be 
easy code to write.  Supporting one directory for that setting is pretty 
easy ... it would require changing a LOT of existing code to support 
more than one.


Thanks,
Shawn



Re: “solr.data.dir” can only config a single directory

2018-08-27 Thread Christopher Schultz

Shawn,

On 8/27/18 22:37, Shawn Heisey wrote:
> On 8/27/2018 8:29 PM, zhenyuan wei wrote:
> >> I found that “solr.data.dir” can only be configured with a single directory.
> >> I think it is necessary to support multiple directories, such as 
> >> ”solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", to cope with one
> >> disk becoming overloaded or hitting its capacity limit. Is there a reason
> >> this is not supported?
> 
> Nobody has written the code to support it.  It would very likely
> not be easy code to write.  Supporting one directory for that
> setting is pretty easy ... it would require changing a LOT of
> existing code to support more than one.

Also, there are better ways to do this:

- multi-node Solr with sharding
- LVM or similar with multi-disk volumes
- ZFS surely has something for this
- buy a bigger disk (disk is cheap!)
- etc.

-chris


Spring Content Error in Plugin

2018-08-27 Thread Zimmermann, Thomas
Hi,

We have a custom Java plugin that leverages UpdateRequestProcessorFactory 
to push data to multiple cores when a single core is written to. We are 
building the plugin with Maven, deploying it to /solr/lib, and sourcing the jar 
via a lib directive in our Solr config. It currently works correctly in our 
Solr 5.x cluster.

In Solr 7, when attempting to create the core, the plugin fails with the 
long stack trace later in this post, but it seems to boil down to Solr not 
finding the Spring Context jar (Caused by: java.lang.ClassNotFoundException: 
org.springframework.context.ConfigurableApplicationContext).

The class is imported in the source file, and Maven has a dependency to bring 
it in. The Maven build works perfectly; the dependency resolves and the jar is 
generated.

Any ideas on a starting point for tracking this down? I've dug through a bunch 
of Stack Overflow posts with the same issue but not directly tied to Solr, and 
had no luck.

Thanks!

POM



<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>org.springframework.context</artifactId>
    <version>3.2.2.RELEASE</version>
</dependency>
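
For reference, one common cause: a jar dropped into solr/lib only sees classes on Solr's own classpath plus the other jars in that directory, so a plugin that does not ship its dependencies fails exactly like this. Two options to try: copy spring-context and its transitive dependencies into solr/lib alongside the plugin, or bundle everything into a single jar — a sketch with the Maven shade plugin (version is an assumption):

  <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.1.1</version>
      <executions>
          <execution>
              <phase>package</phase>
              <goals><goal>shade</goal></goals>
          </execution>
      </executions>
  </plugin>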




Error

ERROR - 2018-08-28 03:15:54.253; [c:vignette_de s:shard1 r:core_node5 
x:vignette_de_shard1_replica_n2] org.apache.solr.handler.RequestHandlerBase; 
org.apache.solr.common.SolrException: Error CREATEing SolrCore 
'vignette_de_shard1_replica_n2': Unable to create core 
[vignette_de_shard1_replica_n2] Caused by: 
org.springframework.context.ConfigurableApplicationContext

at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1084)

at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:94)

at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:380)

at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)

at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)

at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)

at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)

at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)

at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)

at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)

at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)

at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)

at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)

at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)

at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)

at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)

at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)

at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)

at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)

at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)

at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)

at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)

at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)

at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)

at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)

at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)

at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)

at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)

at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

at org.eclipse.jetty.server.Server.handle(Server.java:531)

at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)

at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)

at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)

at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)

at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)

at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)

at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)

at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)

at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)

at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(Re

Re: “solr.data.dir” can only config a single directory

2018-08-27 Thread zhenyuan wei
Does an issue for this already exist? All my Solr instances (30 or more in the
future) may use this feature, because every host will have 4 or more
mounted disks.
I am interested in watching this feature, and in making a contribution to it
in some way if possible.

On Tue, Aug 28, 2018 at 10:38 AM, Shawn Heisey wrote:

> On 8/27/2018 8:29 PM, zhenyuan wei wrote:
> >   I found that “solr.data.dir” can only be configured with a single directory. I
> > think it is necessary to support multiple directories, such as
> > ”solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", to cope with one disk
> > becoming overloaded
> > or hitting its capacity limit. Is there a reason this is not supported?
>
> Nobody has written the code to support it.  It would very likely not be
> easy code to write.  Supporting one directory for that setting is pretty
> easy ... it would require changing a LOT of existing code to support
> more than one.
>
> Thanks,
> Shawn
>
>


Re: “solr.data.dir” can only config a single directory

2018-08-27 Thread zhenyuan wei
@Christopher Schultz
So you mean that one 4TB disk is the same as four 1TB disks?
HDFS, Cassandra, and ES can do this; multiple data paths may maximize indexing
throughput in some cases.
(See the linked articles for some explanation.)


On Tue, Aug 28, 2018 at 11:16 AM, Christopher Schultz wrote:

>
> Shawn,
>
> On 8/27/18 22:37, Shawn Heisey wrote:
> > On 8/27/2018 8:29 PM, zhenyuan wei wrote:
> >> I found that “solr.data.dir” can only be configured with a single directory.
> >> I think it is necessary to support multiple directories, such as
> >> ”solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", to cope with one
> >> disk becoming overloaded or hitting its capacity limit. Is there a reason
> >> this is not supported?
> >
> > Nobody has written the code to support it.  It would very likely
> > not be easy code to write.  Supporting one directory for that
> > setting is pretty easy ... it would require changing a LOT of
> > existing code to support more than one.
>
> Also, there are better ways to do this:
>
> - multi-node Solr with sharding
> - LVM or similar with multi-disk volumes
> - ZFS surely has something for this
> - buy a bigger disk (disk is cheap!)
> - etc.
>
> -chris
>


Re: “solr.data.dir” can only config a single directory

2018-08-27 Thread Erick Erickson
Every _replica_ can point to a different disk. When you do an
ADDREPLICA, you can supply whatever data path you desire. And
you can have as many replicas per Solr instance as makes sense.
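
A sketch of this, assuming the dataDir parameter of the Collections API (names and paths are placeholders; check the ref guide for your version):

  http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host1:8983_solr&dataDir=/mnt/disk2/solr/mycoll_shard1_data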

Best,
Erick
On Mon, Aug 27, 2018 at 8:48 PM zhenyuan wei  wrote:
>
> @Christopher Schultz
> So you mean that one 4TB disk is the same as four 1TB disks?
> HDFS, Cassandra, and ES can do this; multiple data paths may maximize indexing
> throughput in some cases.
> (See the linked articles for some explanation.)
>
>
> Christopher Schultz  于2018年8月28日周二 上午11:16写道:
>
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA256
> >
> > Shawn,
> >
> > On 8/27/18 22:37, Shawn Heisey wrote:
> > > On 8/27/2018 8:29 PM, zhenyuan wei wrote:
> > >> I found the  “solr.data.dir” can only config a single directory.
> > >> I think it is necessary to be config  multi dirs,such as
> > >> ”solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3" , due to one
> > >> disk overload or capacity limitation.  Any reason to support why
> > >> not do so?
> > >
> > > Nobody has written the code to support it.  It would very likely
> > > not be easy code to write.  Supporting one directory for that
> > > setting is pretty easy ... it would require changing a LOT of
> > > existing code to support more than one.
> >
> > Also, there are better ways to do this:
> >
> > - - multi-node Solr with sharding
> > - - LVM or similar with multi-disk volumes
> > - - ZFS surely has something for this
> > - - buy a bigger disk (disk is cheap!)
> > - - etc.
> >
> > - -chris
> > -BEGIN PGP SIGNATURE-
> > Comment: Using GnuPG with Thunderbird - https://www.enigmail.net/
> >
> > iQIzBAEBCAAdFiEEMmKgYcQvxMe7tcJcHPApP6U8pFgFAluEvn8ACgkQHPApP6U8
> > pFgTTg//ayed4AXtocVrB6e/ZK0eWz5/E1Q7Oa7kF0c34l0MH6BIe4iOHDmrR+J9
> > A+t6SzVQqURMrDE8plg/xbPTlyGF8wGrEjZUZF4fpWlgnY/qNYxl5S9zJ1hPgBh7
> > fCKkb+LuLGdZMM4oORfCYtMgpDjOnLihHmDTfkrvZzyZwOQGeFpgEZDZKFYAjcur
> > wqIGTMTTWfSCoPQgQzvI8Husq7Rs75BEc+mAkaPOL0LvT9PQDEPEXXt3Kf5vXgM+
> > Eet1ymltZM/Xz+V/em/oeumCoCE18uxi9seuDhTpHRLjS9tCBbPWA0NmobriY3ct
> > GskwCnsFDAeGjG/7dcA/zmB8BK4t6JpUvI+OcJU5dvQczpQbhB9WT4GQUiME9Tvr
> > RjBES53HoEEKA8gb0kiuPN1pE2MSX8vO3uKpQtzVS2MOmuOeV/IebrnP/zLTll18
> > awtWWbPmzaAGAUfXL2ExK0+ism0o31i46CNfLfBBM8jh3lkc2HNdz5TLe8YfN3Sp
> > Tj0HfmYynhtH1CggOAcI1M4PIEbIGfoywX/ICSGHnLwfQoDUnBmjqXhGkFUIstWk
> > Dcntx+4E4NRny6zDZfg5UMjWYyo+fOVSoaDf6dfgBWIB1I3xPn5Dt0In7+oRtZ9i
> > Xlkw6DSaSZZ5caBqjaF278xj7IwEw2zipLPWB7hVCcUhKuJBbDY=
> > =rbrT
> > -END PGP SIGNATURE-
> >


Re: Multiple solr instances per host vs Multiple cores in same solr instance

2018-08-27 Thread Bernd Fehling

There was no real bottleneck.
I just started with 30 QPS and after that simply doubled the QPS.
But as you mentioned, I used my specific data and analysis, and also
used the SWD (German keyword norm data) dictionary for querying.

Regards,
Bernd


On 27.08.2018 at 15:41, Jan Høydahl wrote:

What was your bottleneck when maxing out at 30 QPS on the 3-node cluster?
I expect such tests to vary quite a bit between use cases, so a good approach is
to do just as you did: benchmark on your specific data and usage.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 27 Aug 2018, at 10:45, Bernd Fehling wrote:

My tests with many combinations (instance, node, core) on a 3-server cluster
with SolrCloud showed that the highest performance comes from multiple Solr
instances, with shards and replicas placed by rules so that you benefit
from preferLocalShards=true.

The disadvantage is the handling of the system: setup, starting
and stopping, setting up the shards and replicas with rules, and so on.

I tested with a 3x3 SolrCloud (3 shards, 3 replicas).
A 3x3 system with one instance and 3 cores per host could handle up to 30 QPS.
A 3x3 system with multiple instances (different ports, single core and shard per
instance) could handle 60 QPS on the same hardware with the same data.

Also, the single-instance-per-server setup shows spikes in the response time
graph which are not seen with a multi-instance setup.

Tested about 2 months ago with SolrCloud 6.4.2.

Regards,
Bernd


On 26.08.2018 at 08:00, Wei wrote:

Hi,
I have a question about the deployment configuration in solr cloud.  When
we need to increase the number of shards in solr cloud, there are two
options:
1.  Run multiple solr instances per host, each with a different port and
hosting a single core for one shard.
2.  Run one solr instance per host, and have multiple cores(shards) in the
same solr instance.
Which would be better performance wise? For the first option I think JVM
size for each solr instance can be smaller, but deployment is more
complicated? Are there any differences for cpu utilization?
Thanks,
Wei
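
To make the two options concrete, a minimal sketch of the multi-instance
variant benchmarked above, assuming the stock bin/solr script; ports, paths,
and the ZooKeeper address are examples, and each -s home directory needs its
own solr.xml:

    # instance 1 on port 8983, solr home on disk 1
    bin/solr start -cloud -p 8983 -s /mnt/disk1/solr/home -z zk1:2181
    # instance 2 on port 8984, solr home on disk 2
    bin/solr start -cloud -p 8984 -s /mnt/disk2/solr/home -z zk1:2181

    # query preferring replicas local to the receiving node, as in the
    # preferLocalShards=true setup described above
    curl "http://localhost:8983/solr/mycoll/select?q=*:*&preferLocalShards=true"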





Re: “solr.data.dir” can only config a single directory

2018-08-27 Thread zhenyuan wei
But this is not a common way to do it; I mean, nobody wants to run
ADDREPLICA by hand after the collection has been created.

Erick Erickson wrote on Tue, Aug 28, 2018 at 1:24 PM:

> Every _replica_ can point to a different disk. When you do an
> ADDREPLICA, then you can supply whatever path to the data
> you desire. And you can have as many replicas per Solr instance
> as makes sense.
>
> Best,
> Erick
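
For completeness, a hedged sketch of scripting per-disk placement at creation
time with the existing API, assuming the CREATE command's createNodeSet=EMPTY
option followed by one ADDREPLICA per disk (all names and paths are examples):

    # create the collection with no replicas anywhere...
    curl -G "http://localhost:8983/solr/admin/collections" \
         -d action=CREATE -d name=mycoll -d numShards=2 \
         -d createNodeSet=EMPTY
    # ...then place each shard's data on its own disk
    curl -G "http://localhost:8983/solr/admin/collections" \
         -d action=ADDREPLICA -d collection=mycoll -d shard=shard1 \
         -d node=host1:8983_solr -d dataDir=/mnt/disk1/shard1
    curl -G "http://localhost:8983/solr/admin/collections" \
         -d action=ADDREPLICA -d collection=mycoll -d shard=shard2 \
         -d node=host1:8983_solr -d dataDir=/mnt/disk2/shard2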
>