Hello,

I configured Solr to use HDFS, which in turn is configured to use S3N. I
used the information from this issue for the configuration:
https://issues.apache.org/jira/browse/SOLR-9952
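
For reference, the S3N side of the setup lives in Hadoop's core-site.xml
and looks roughly like this (a minimal sketch; the credential values are
placeholders, not my real keys):

    <!-- core-site.xml: minimal S3N wiring for Hadoop 2.7.3 (placeholder credentials) -->
    <configuration>
      <property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>YOUR_AWS_ACCESS_KEY_ID</value>
      </property>
      <property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
      </property>
    </configuration>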

Here is the command I used to start Solr with HDFS:

    bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory \
      -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=s3n://amar-hdfs/solr \
      -Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop \
      -a "-XX:MaxDirectMemorySize=2g"
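
The same settings could also go into solr.in.sh so they apply on every
start; a sketch of the equivalent, assuming the stock solr.in.sh layout:

    # solr.in.sh -- equivalent of the command-line flags above (sketch)
    SOLR_OPTS="$SOLR_OPTS \
      -Dsolr.directoryFactory=HdfsDirectoryFactory \
      -Dsolr.lock.type=hdfs \
      -Dsolr.hdfs.home=s3n://amar-hdfs/solr \
      -Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop \
      -XX:MaxDirectMemorySize=2g"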

I am able to create a core with the following core.properties:
    #Written by CorePropertiesLocator
    #Thu Apr 06 23:08:57 UTC 2017
    name=amar-s3
    loadOnStartup=false
    transient=true
    configSet=base-config
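
For reference, a core with these properties can be created through the
CoreAdmin API with something like the call below (a sketch that mirrors
the properties above, assuming the default port):

    # create the core (sketch; parameters mirror core.properties above)
    curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=amar-s3&configSet=base-config&transient=true&loadOnStartup=false"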

I am able to ingest messages into Solr and query the content. Everything
seems fine up to this point, and I can see the data directory on S3.
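
Both ingestion and querying work with plain HTTP calls like these (the
sample document is made up; the core name is as above):

    # index a sample document (hypothetical payload) and commit
    curl "http://localhost:8983/solr/amar-s3/update?commit=true" \
      -H "Content-Type: application/json" \
      -d '[{"id":"1","message":"hello from s3n"}]'

    # query it back
    curl "http://localhost:8983/solr/amar-s3/select?q=*:*&wt=json"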

However, the problem appears when I restart the Solr server: the core is no
longer loaded, even when I access or query it. Here is what the CoreAdmin
API returns for all cores:
    <response>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">617</int>
      </lst>
      <lst name="initFailures"/>
      <lst name="status">
        <lst name="aggregator-core">...</lst>
        <lst name="amar-s3">
          <str name="name">amar-s3</str>
          <str name="instanceDir">/Users/apalavalli/solr/solr-deployment/server/solr/amar-s3</str>
          <str name="dataDir">data/</str>
          <str name="config">solrconfig.xml</str>
          <str name="schema">schema.xml</str>
          <str name="isLoaded">false</str>
        </lst>
      </lst>
    </response>
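
That response is from the standard CoreAdmin status call (default port
assumed):

    curl "http://localhost:8983/solr/admin/cores?action=STATUS&wt=xml"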

I don't see any issues reported in the logs either, but the admin UI shows
this error:

[screenshot of the error in the Solr admin UI; inline image not available]


I am not sure what the problem is. It happens when I ingest more than 40K
messages into the core before restarting the Solr server.

I am using Hadoop 2.7.3 with the S3N filesystem. Please help me resolve
this issue.

Thanks and regards,
Amar
