[ https://issues.apache.org/jira/browse/SOLR-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17260758#comment-17260758 ]

Gézapeti commented on SOLR-14251:
---------------------------------

Yupp, that would be the 'proper' solution, but it would require changes to 
multiple interfaces around SolrCloudManager. Currently the whole logic here 
assumes that storage is not something that can be shared, and requests this 
information from the NodeStateProviders. I don't have a strong opinion on 
whether a core should be able to tell how much disk space is available to it, 
or whether that should logically be delegated to something else.

Also, we got lost in the discussion about whether we should take HDFS quotas 
into account.
I'd like to introduce a workaround now, as our customers are hitting this issue 
and we'd like to unblock them.
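To make the mismatch concrete, here is a minimal, stdlib-only Java sketch of the kind of local free-space probe the current check boils down to. The class and method names are illustrative only, not Solr's actual code:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalDiskCheck {

    // Returns the usable space, in GiB, of the file store backing dataDir.
    // This mirrors the *kind* of local-FS check SplitShardCmd performs.
    static double usableGiB(Path dataDir) throws IOException {
        FileStore store = Files.getFileStore(dataDir);
        return store.getUsableSpace() / (1024.0 * 1024 * 1024);
    }

    public static void main(String[] args) throws IOException {
        double freeGiB = usableGiB(Paths.get("."));
        // With an HDFS-backed index this number only describes the local
        // volume (e.g. the ~5GB Openshift disk in the report below), so a
        // split can be rejected even though HDFS has plenty of space.
        System.out.printf("local usable: %.2f GiB%n", freeGiB);
    }
}
```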

> Shard Split on HDFS 
> --------------------
>
>                 Key: SOLR-14251
>                 URL: https://issues.apache.org/jira/browse/SOLR-14251
>             Project: Solr
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 8.4
>            Reporter: Johannes Brucher
>            Priority: Major
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Shard Split on an HDFS index evaluates local disk space instead of HDFS space.
> When performing a shard split on an index that is stored on HDFS, 
> SplitShardCmd nevertheless evaluates the free disk space on the local file 
> system of the server where Solr is installed.
> SplitShardCmd assumes that its main phase (when the Lucene index is being 
> split) always executes on the local file system of the shard leader, and 
> indeed SplitShardCmd.checkDiskSpace() checks the local file system's free 
> disk space - even though the actual data is written to the HDFS Directory, so 
> it (almost) doesn't affect the local FS (except for the core.properties file).
> See also: 
> [https://lucene.472066.n3.nabble.com/HDFS-Shard-Split-td4449920.html]
> My setup to reproduce the issue:
>  * Solr deployed on OpenShift with a local disk of about 5GB
>  * HDFS configured in solrconfig.xml with
> {code:java}
> <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
>     <str name="solr.hdfs.home">hdfs://path/to/index/</str>
> ...
> {code}
>  * Split command:
> {code:java}
> .../admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1&async=1234
> {code}
>  * Response:
> {code:java}
> {
>   "responseHeader":{"status":0,"QTime":32},
>   "Operation splitshard caused exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: not enough free disk space to perform index split on node <solr instance>:8983_solr, required: 294.64909074269235, available: 5.4632568359375",
>   "exception":{
>     "msg":"not enough free disk space to perform index split on node <solr instance>:8983_solr, required: 294.64909074269235, available: 5.4632568359375",
>     "rspCode":500},
>   "status":{"state":"failed","msg":"found [1234] in failed tasks"}
> }
> {code}
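The workaround direction discussed in the comment above could look something like the following stdlib-only Java sketch: only enforce the local free-space check when the index actually lives on the local file system. Class and method names are illustrative, not Solr's real API; the numbers come from the response above.

```java
public class SplitDiskCheckSketch {

    // Illustrative version of a disk-space check that skips enforcement
    // when the index is on shared storage such as HDFS, where the local
    // volume only receives the core.properties file.
    static void checkDiskSpace(double requiredGiB, double availableGiB,
                               boolean sharedStorage) {
        if (sharedStorage) {
            return; // HDFS holds the index; local free space is irrelevant
        }
        if (availableGiB < requiredGiB) {
            throw new IllegalStateException(
                "not enough free disk space to perform index split, required: "
                + requiredGiB + ", available: " + availableGiB);
        }
    }

    public static void main(String[] args) {
        // Numbers from the report: ~294.6 GiB required, ~5.5 GiB local.
        checkDiskSpace(294.64909074269235, 5.4632568359375, true); // passes
        try {
            checkDiskSpace(294.64909074269235, 5.4632568359375, false);
        } catch (IllegalStateException expected) {
            System.out.println("local-FS check rejects the split, as in the report");
        }
    }
}
```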



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
