Enabling SSL in Solr server (single mode or cloud mode): getting errors & how to add parameters to the service script

2017-01-02 Thread Behera, Pranaya P
Hi,
 I have followed the documentation on a fresh machine to enable SSL in the
server. It is an EC2 instance running CentOS 7. I have installed Solr and it
is working fine. But as soon as I modify /etc/default/solr.in.sh to add the
SSL-related variables, the server never starts. Here is the command used to
get it up and running, but alas, no result so far.

[centos@ip-xx-xxx-xx-xxx ~]$ sudo bash ./install_solr_service.sh solr-6.2.1.tgz

Extracting solr-6.2.1.tgz to /opt

Installing symlink /opt/solr -> /opt/solr-6.2.1 ...

Installing /etc/init.d/solr script ...

Installing /etc/default/solr.in.sh ...

Waiting up to 30 seconds to see Solr running on port 8983 [/]
Started Solr server on port 8983 (pid=6683). Happy searching!

Found 1 Solr nodes:

Solr process 6683 running on port 8983
{
  "solr_home":"/var/solr/data",
  "version":"6.2.1 43ab70147eb494324a1410f7a9f16a896a59bc6f - shalin - 
2016-09-15 05:20:53",
  "startTime":"2017-01-02T07:56:25.414Z",
  "uptime":"0 days, 0 hours, 0 minutes, 10 seconds",
  "memory":"82.3 MB (%16.8) of 490.7 MB"}

Service solr installed.
[centos@ip-xx-xxx-xx-xxx ~]$ ps -ef | grep solr
solr      6683     1 15 01:56 ?        00:00:02 java -server -Xms512m -Xmx512m 
-XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 
-XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark 
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000 
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -verbose:gc 
-XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
-XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution 
-XX:+PrintGCApplicationStoppedTime -Xloggc:/var/solr/logs/solr_gc.log 
-Djetty.port=8983 -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=UTC 
-Djetty.home=/opt/solr/server -Dsolr.solr.home=/var/solr/data 
-Dsolr.install.dir=/opt/solr 
-Dlog4j.configuration=file:/var/solr/log4j.properties -Xss256k 
-XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs -jar 
start.jar --module=http
centos    6856  1837  0 01:56 pts/0    00:00:00 grep --color=auto solr
[centos@ip-xx-xxx-xx-xxx ~]$ cd /opt/solr
[centos@ip-xx-xxx-xx-xxx solr]$ cd server/etc/
[centos@ip-xx-xxx-xx-xxx etc]$ ls
jetty-https.xml  jetty-http.xml  jetty-ssl.xml  jetty.xml  webdefault.xml
[centos@ip-xx-xxx-xx-xxx etc]$ ls
jetty-https.xml  jetty-http.xml  jetty-ssl.xml  jetty.xml  webdefault.xml
[centos@ip-xx-xxx-xx-xxx etc]$ sudo keytool -genkeypair -alias solr-ssl -keyalg 
RSA -keysize 2048 -keypass secret -storepass secret -validity  -keystore 
solr-ssl.keystore.jks -ext SAN=DNS:localhost,IP:xx.xxx.xxx.xxx,IP:127.0.0.1 
-dname "CN=zksolr, OU=Search, O=OK, L=Newyork, ST=Newyork, C=USA"
[centos@ip-xx-xxx-xx-xxx etc]$ ls -al
total 60
drwxr-xr-x.  2 root docker  4096 Jan  2 02:02 .
drwxr-xr-x. 11 root docker  4096 Jan  2 01:56 ..
-rw-r--r--.  1 root docker  3055 Sep 13 20:26 jetty-https.xml
-rw-r--r--.  1 root docker  2684 Sep 13 20:26 jetty-http.xml
-rw-r--r--.  1 root docker  2449 Jul 14 12:13 jetty-ssl.xml
-rw-r--r--.  1 root docker  9389 Sep 14 14:26 jetty.xml
-rw-------.  1 root docker  2258 Jan  2 02:02 solr-ssl.keystore.jks
-rw-r--r--.  1 root docker 24425 Jul 14 12:13 webdefault.xml
[centos@ip-xx-xxx-xx-xxx etc]$ sudo keytool -importkeystore -srckeystore 
solr-ssl.keystore.jks -destkeystore solr-ssl.keystore.p12 -srcstoretype jks 
-deststoretype pkcs12
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
Entry for alias solr-ssl successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or 
cancelled
[centos@ip-xx-xxx-xx-xxx etc]$ sudo openssl pkcs12 -in solr-ssl.keystore.p12 
-out solr-ssl.pem
Enter Import Password:
MAC verified OK
Enter PEM pass phrase:
[centos@ip-xx-xxx-xx-xxx etc]$ ls -al
total 68
drwxr-xr-x.  2 root docker  4096 Jan  2 02:03 .
drwxr-xr-x. 11 root docker  4096 Jan  2 01:56 ..
-rw-r--r--.  1 root docker  3055 Sep 13 20:26 jetty-https.xml
-rw-r--r--.  1 root docker  2684 Sep 13 20:26 jetty-http.xml
-rw-r--r--.  1 root docker  2449 Jul 14 12:13 jetty-ssl.xml
-rw-r--r--.  1 root docker  9389 Sep 14 14:26 jetty.xml
-rw-------.  1 root docker  2258 Jan  2 02:02 solr-ssl.keystore.jks
-rw-------.  1 root docker  2608 Jan  2 02:02 solr-ssl.keystore.p12
-rw-------.  1 root docker  1662 Jan  2 02:03 solr-ssl.pem
-rw-r--r--.  1 root docker 24425 Jul 14 12:13 webdefault.xml
[centos@ip-xx-xxx-xx-xxx etc]$ vi /etc/default/solr.in.sh
[centos@ip-xx-xxx-xx-xxx etc]$ sudo vi /etc/default/solr.in.sh
[centos@ip-xx-xxx-xx-xxx etc]$ sudo service solr stop
Sending stop command to Solr running on port 8983 ... waiting 5 seconds to 
allow Jetty process 6683 to stop gracefully.
[centos@ip-xx-xxx-xx-xxx etc]$ sudo service solr start
Waiting up to 30 seconds to see Solr running on port 8983 [-]  Stil
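For reference, the SSL variables the Solr 6.x documentation has you add to
/etc/default/solr.in.sh look roughly like this (a sketch only: the keystore
path and the "secret" passwords are assumptions matching the keytool command
above, not values from the original message):

```shell
# /etc/default/solr.in.sh -- SSL settings for Solr 6.x (sketch; path and
# passwords below are assumptions matching the keytool command above).
SOLR_SSL_KEY_STORE=/opt/solr/server/etc/solr-ssl.keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=/opt/solr/server/etc/solr-ssl.keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=secret
# Optional client-auth settings:
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=false
```

One thing worth checking in the session above: the keystore was created with
sudo, so per the ls output it is owned by root with mode rw-------. The solr
service runs as the solr user, which therefore cannot read the keystore, and
that alone can keep the server from starting.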

Re: Solr MapReduce Indexer Tool is failing for empty core name.

2017-01-02 Thread Erick Erickson
MRIT has never worked by specifying cores on the command line,
it's always been about collections. It reads the data from ZK and then
tries to create temporary cores to index into. Eventually, these
temporary cores are merged into the final index for the shard.

So a couple of wild shots in the dark.

- there was an issue a while ago about naming conventions. While the
period in your collection name _should_ be OK, what happens if you
use a name without a period?

- It's actually not recommended to just plop 4x configs into 6x
Solr; rather, start with the 6x configs and add your modifications. In
particular, if you still have 4x as your luceneMatchVersion I don't quite
know what happens. It's vaguely possible that this error is what bubbled back
up due to something wonky like this. This is unlikely, frankly, since you say
that all the rest of your operations are OK, but it may be worth trying.

- Failing all that I wonder if you have some old jars in your classpath.

- Finally, I'd try to start with a fresh dummy collection and do the simplest
config I could and build up from there.

Best,
Erick



On Sun, Jan 1, 2017 at 11:51 PM, Manan Sheth  wrote:
> Hi All,
>
> Please help me out if anyone has executed solr map reduce indexer tool with 
> solr 6. This is still failing and no hints for the error shown in below mail 
> thread.
>
> Thanks,
> Manan Sheth
> 
> From: Manan Sheth
> Sent: Friday, December 16, 2016 2:52 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr MapReduce Indexer Tool is failing for empty core name.
>
> That's what I presume, and it should start utilizing the collection only. The 
> collection param has already been specified, and it should take all details 
> from there only. Also, the core-to-collection change happened in Solr 4. The 
> map reduce indexer for Solr 4.10 is working correctly with this, but not 
> for Solr 6.
> 
> From: Reth RM 
> Sent: Friday, December 16, 2016 12:45 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr MapReduce Indexer Tool is failing for empty core name.
>
> The primary difference has been Solr to SolrCloud in later versions,
> starting from Solr 4.0. And what happens if you try starting Solr in
> stand-alone mode? SolrCloud does not consider 'core' anymore; it considers
> 'collection' as the param.
>
>
> On Thu, Dec 15, 2016 at 11:05 PM, Manan Sheth 
> wrote:
>
>> Thanks Reth. As noted this is the same map reduce based indexer tool that
>> comes shipped with the solr distribution by default.
>>
>> It only take the zk_host details and extracts all required information
>> from there only. It does not have core specific configurations. The same
>> tool released with solr 4.10 distro is working correctly, it seems to be
>> some issue/ changes from solr 5 onwards. I have tested it for both solr 5.5
>> & solr 6.2.1 and the behaviour remains same for both.
>>
>> Thanks,
>> Manan Sheth
>> 
>> From: Reth RM 
>> Sent: Friday, December 16, 2016 12:21 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr MapReduce Indexer Tool is failing for empty core name.
>>
>> It looks like the command line tool that you are using to initiate the
>> index process is expecting some name for the solr-core via the respective
>> command line param. Use -help on the command line tool that you are using,
>> check the solr-core-name parameter key, and pass that also with some value.
>>
>>
>> On Tue, Dec 13, 2016 at 5:44 AM, Manan Sheth 
>> wrote:
>>
>> > Hi All,
>> >
>> >
>> > While working on a migration project from Solr 4 to Solr 6, I need to
>> > reindex my data using Solr map reduce Indexer tool in offline mode with
>> > avro data.
>> >
>> > While executing the map reduce indexer tool shipped with Solr 6.2.1, it is
>> > throwing an error of "cannot create core with empty name value". The Solr
>> > instances are running fine, with new indexes being added and modified
>> > correctly. Below is the command that was fired:
>> >
>> >
>> > hadoop --config /etc/hadoop/conf jar /home/impadmin/solr-6.2.1/dist/solr-map-reduce-*.jar \
>> >   -D 'mapred.child.java.opts=-Xmx500m' \
>> >   -libjars `echo /home/impadmin/solr6lib/*.jar | sed 's/ /,/g'` \
>> >   --morphline-file /home/impadmin/app_quotes_morphline_actual.conf \
>> >   --zk-host 172.26.45.71:9984 \
>> >   --output-dir hdfs://impetus-i0056.impetus.co.in:8020/user/impadmin/MapReduceIndexerTool/output5 \
>> >   --collection app.quotes \
>> >   --log4j src/test/resources/log4j.properties \
>> >   --verbose \
>> >   "hdfs://impetus-i0056.impetus.co.in:8020/user/impadmin/MapReduceIndexerTool/5d63e0f8-afc1-483e-bd3f-d508c885d794-00"
>> >
>> >
>> > Below is the complete snapshot of error trace:
>> >
>> >
>> > Failed to initialize record writer for org.apache.solr.hadoop.
>> > MapReduceIndexerTool/MorphlineMapper, attempt_1479795440861_0343_r_
>> > 00_0
>> > at org.apache.solr.hadoop.SolrRec

Lib Directives in SolrConfig when using SolrCloud

2017-01-02 Thread Bob Cook
I've created some custom plugins that I put in a lib directory at the same
level as the conf directory for the core, which is loaded via the lib
directives in solrconfig.xml:

/solrdata/CoreName/conf
/solrdata/CoreName/lib
/solrdata/CoreName/core.properties

This works great when using standalone Solr. But when I use SolrCloud, I get
an error when creating the collection, as it can't find the plugin libraries.
Now that ZooKeeper is managing the conf directory (configuration files, i.e.
solrconfig.xml), what is the relative path in solrconfig.xml pointing to?

How do I specify the relative path, or do I need to specify an absolute
path? Or do I need to put the libraries somewhere else?

I did get this to work if I put them in the "sharedLib" directory as defined
by solr.xml, but I don't want other collections to reference the sharedLib;
they need to reference their own.

Thanks,

Bob


Re: Facet Null Pointer Exception with upgraded indexes

2017-01-02 Thread Damien Kamerman
Try docValues="false" in your v6 schema.xml. You may need to upgrade
again.

On 31 December 2016 at 07:59, Mikhail Khludnev  wrote:

> Hi Andreu,
>
> I think it can't facet text field anymore per se
> https://issues.apache.org/jira/browse/SOLR-8362.
>
> On Fri, Dec 30, 2016 at 5:07 PM, Andreu Marimon  wrote:
>
> > Hi,
> >
> > I'm trying to update from Solr 4.3 to 6.3. We are doing a two-step
> > migration and, during the first step, we upgraded the indexes from 4.3 to
> > 5.5, which is the newest version we can get without errors using the
> > Lucene IndexUpgrader tool. As far as I know, 6.3 should be able to read
> > indexes generated with 5.5.
> >
> > Our problem is that, despite loading the cores and data correctly, every
> > query returns a NullPointerException during the facet counting. We can get
> > the results anyway, but the facets are not properly set, and this error
> > appears in the response:
> >
> > "error":{
> > "metadata":[
> >   "error-class","org.apache.solr.common.SolrException",
> >   "root-error-class","java.lang.NullPointerException"],
> > "msg":"Exception during facet.field: fa_source",
> > "trace":"org.apache.solr.common.SolrException: Exception during
> > facet.field: fa_source\n\tat
> > org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(
> > SimpleFacets.java:766)\n\tat
> > java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
> > org.apache.solr.request.SimpleFacets$2.execute(
> > SimpleFacets.java:699)\n\tat
> > org.apache.solr.request.SimpleFacets.getFacetFieldCounts(
> > SimpleFacets.java:775)\n\tat
> > org.apache.solr.handler.component.FacetComponent.
> > getFacetCounts(FacetComponent.java:321)\n\tat
> > org.apache.solr.handler.component.FacetComponent.
> > process(FacetComponent.java:265)\n\tat
> > org.apache.solr.handler.component.SearchHandler.handleRequestBody(
> > SearchHandler.java:295)\n\tat
> > org.apache.solr.handler.RequestHandlerBase.handleRequest(
> > RequestHandlerBase.java:153)\n\tat
> > org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)\n\tat
> > org.apache.solr.servlet.HttpSolrCall.execute(
> HttpSolrCall.java:654)\n\tat
> > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)\n\tat
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:303)\n\tat
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:254)\n\tat
> > org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> > doFilter(ServletHandler.java:1668)\n\tat
> > org.eclipse.jetty.servlet.ServletHandler.doHandle(
> > ServletHandler.java:581)\n\tat
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > ScopedHandler.java:143)\n\tat
> > org.eclipse.jetty.security.SecurityHandler.handle(
> > SecurityHandler.java:548)\n\tat
> > org.eclipse.jetty.server.session.SessionHandler.
> > doHandle(SessionHandler.java:226)\n\tat
> > org.eclipse.jetty.server.handler.ContextHandler.
> > doHandle(ContextHandler.java:1160)\n\tat
> > org.eclipse.jetty.servlet.ServletHandler.doScope(
> > ServletHandler.java:511)\n\tat
> > org.eclipse.jetty.server.session.SessionHandler.
> > doScope(SessionHandler.java:185)\n\tat
> > org.eclipse.jetty.server.handler.ContextHandler.
> > doScope(ContextHandler.java:1092)\n\tat
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > ScopedHandler.java:141)\n\tat
> > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> > ContextHandlerCollection.java:213)\n\tat
> > org.eclipse.jetty.server.handler.HandlerCollection.
> > handle(HandlerCollection.java:119)\n\tat
> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> > HandlerWrapper.java:134)\n\tat
> > org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat
> > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat
> > org.eclipse.jetty.server.HttpConnection.onFillable(
> > HttpConnection.java:244)\n\tat
> > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
> > AbstractConnection.java:273)\n\tat
> > org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat
> > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(
> > SelectChannelEndPoint.java:93)\n\tat
> > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> > produceAndRun(ExecuteProduceConsume.java:246)\n\tat
> > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(
> > ExecuteProduceConsume.java:156)\n\tat
> > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
> > QueuedThreadPool.java:654)\n\tat
> > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(
> > QueuedThreadPool.java:572)\n\tat
> > java.lang.Thread.run(Thread.java:745)\nCaused by:
> > java.lang.NullPointerException\n\tat
> > org.apache.solr.request.DocValuesFacets.getCounts(
> > DocValuesFacets.java:117)\n\tat
> > org.apache.solr.request.SimpleFacets.getTermCounts(
> > SimpleFacets.java:530)\n\tat
> > org.apache.solr.request.SimpleFacets.getTermCounts(
> > SimpleFacets.java:380)\n\tat
> > org.ap

Re: Lib Directives in SolrConfig when using SolrCloud

2017-01-02 Thread Shawn Heisey
On 1/2/2017 7:07 PM, Bob Cook wrote:
> I've created some custom plugins that I put in a lib directory at the
> same level as the conf directory for the core which is loaded via the
> lib directives in solrconfig.xml /solrdata/CoreName/conf
> /solrdata/CoreName/lib /solrdata/CoreName/core.properties This works
> great when using Solr. But when I use SolrCloud, I get an error when
> creating the collection as it can't find the plugin libraries. Now
> that Zookeeper is managing the conf directory (configuration files,
> i.e. solrconfig.xml), what is the relative path in solrconfig.xml
> pointing to? 

I actually do not know what the path for lib directives is relative to
when running SolrCloud.  Most things in a core config are relative to
the location of the config file itself, but in this case, the config
file is not on the filesystem at all, it's in zookeeper, and I don't
think Solr can use jars in zookeeper.  Also there's a size limit of one
megabyte for items in zookeeper without special config, and increasing
the limit is not recommended.

> how do I specify the relative path, or do I need to specify an
> absolute path? Or do I need to put the libraries somewhere else? 

My recommendation:  Do not use "lib" directives, or any other kind of
jar-loading related config including sharedLib, at all.  On each
machine, create a "lib" directory in the solr home and place all your
extra jars there.  The solr home is where solr.xml lives and where the
core directories normally go.  All jars in this lib directory will be
loaded exactly once at startup and made available to all cores, without
*any* config.
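That suggestion can be sketched as a couple of shell commands (illustrative
only: the /tmp path and jar name are placeholders so the sketch needs no
root; in a real service install the solr home is typically /var/solr/data):

```shell
# Sketch of the "lib directory inside the solr home" approach. The path and
# jar name here are placeholders, not values from this thread.
SOLR_HOME=/tmp/demo-solr-home
mkdir -p "$SOLR_HOME/lib"
touch "$SOLR_HOME/lib/my-plugin.jar"  # stand-in for copying your real plugin jar
ls "$SOLR_HOME/lib"
# Jars in $SOLR_HOME/lib are loaded once at startup, so restart Solr
# (e.g. sudo service solr restart) after adding them.
```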

You can also create a lib directory inside the core's instanceDir, which
is also automatically loaded by Solr without config, but this presents a
bit of a catch-22 problem where you can't create the lib directory until
the core exists, but can't get the core to work properly until the lib
directory exists.

There is a feature in SolrCloud for loading jars from a distributed
collection.  It doesn't work everywhere in Solr, but classes specified
in solrconfig.xml, for things like request handlers, should work.

https://cwiki.apache.org/confluence/display/solr/Blob+Store+API

There is an enhancement request filed to allow the schema to use jars
from the blob store, but it hasn't received much attention yet:

https://issues.apache.org/jira/browse/SOLR-9175

Thanks,
Shawn