1. The exception and the change in behavior on the move to 4.6 seem like 
they could be a bug we want to investigate. 

2. Solr storing data on HDFS in other ways seems like a separate issue / 
improvement. 

3. You shouldn't try to force more than one core to use the same index on 
HDFS. This would be bad. 

4. You really want to use the solr.hdfs.home setting described in the 
documentation IMO. 
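
For reference, solr.hdfs.home is configured on the directory factory in 
solrconfig.xml. A minimal sketch (the hdfs:// URI and the confdir path are 
placeholders for your cluster):

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <!-- Base HDFS directory; each core's data dir is created under it -->
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
  <!-- Directory holding the Hadoop client configuration files -->
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
</directoryFactory>
```

With solr.hdfs.home set, Solr derives each core's index and tlog location 
under that one base directory, rather than relying on per-core dataDir 
settings.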

- Mark

> On Dec 26, 2013, at 1:56 PM, Greg Walters <greg.walt...@answers.com> wrote:
> 
> Mark,
> 
> I'd be happy to, but some clarification first: should this issue be about 
> creating cores with overlapping names and the stack trace that YouPeng 
> initially described, about Solr's behavior when storing data on HDFS, or 
> about YouPeng's other thread ("Maybe a bug for solr 4.6 when create a new 
> core") that looks like it might be a near duplicate of this one?
> 
> Thanks,
> Greg
> 
>> On Dec 26, 2013, at 12:40 PM, Mark Miller <markrmil...@gmail.com> wrote:
>> 
>> Can you file a JIRA issue?
>> 
>> - Mark
>> 
>>> On Dec 24, 2013, at 2:57 AM, YouPeng Yang <yypvsxf19870...@gmail.com> wrote:
>>> 
>>> Hi users
>>> 
>>> Solr supports writing and reading its index and transaction log files
>>> on the HDFS distributed filesystem.
>>> I am curious whether there are any further improvements planned for the
>>> integration with HDFS.
>>> Solr's native replication makes multiple copies of the master node's
>>> index. Because HDFS already replicates data natively, there should be no
>>> need for that. Would it be enough for multiple cores in SolrCloud to
>>> share the same index directory on HDFS?
>>> 
>>> 
>>> The above supposition is what I want to achieve when integrating
>>> SolrCloud with HDFS (Solr 4.6).
>>> To keep our application highly available, we still have to use Solr
>>> replication, with some tricks.
>>> 
>>> First, note that Solr's index directory is made up of
>>> collectionName/coreNodeName/data/index
>>> collectionName/coreNodeName/data/tlog
>>> So to achieve this, we want to create multiple cores that use the same
>>> HDFS index directory.
>>> 
>>> I have tested this with Solr 4.4 by explicitly specifying the same
>>> coreNodeName.
>>> 
>>> For example:
>>> Step 1: create a core with name=core1, shard=core_shard1,
>>> collection=clollection1 and coreNodeName=core1.
>>> Step 2: create another core with name=core2, shard=core_shard1,
>>> collection=clollection1 and coreNodeName=core1.
>>> The two cores share the same shard, collection and coreNodeName. As a
>>> result, they will get the same index data, which is stored in the HDFS
>>> directories:
>>> hdfs://myhdfs/clollection1/core1/data/index
>>> hdfs://myhdfs/clollection1/core1/data/tlog
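>>> The two CREATE calls above can be sketched as CoreAdmin requests (the
>>> host and port are placeholders; the names follow the example above, and
>>> the parameters are the standard CoreAdmin CREATE ones):
>>>
>>> ```shell
>>> SOLR=http://localhost:8080/solr
>>> # First core, with explicit shard and coreNodeName
>>> curl "$SOLR/admin/cores?action=CREATE&name=core1&collection=clollection1&shard=core_shard1&coreNodeName=core1"
>>> # Second core, reusing the same coreNodeName so it resolves to the same
>>> # hdfs://myhdfs/clollection1/core1/data directory
>>> curl "$SOLR/admin/cores?action=CREATE&name=core2&collection=clollection1&shard=core_shard1&coreNodeName=core1"
>>> ```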
>>> 
>>> Unfortunately, when we upgraded to the newly released Solr 4.6, the
>>> above approach failed: we could no longer create a core with both an
>>> explicit shard and an explicit coreNodeName.
>>> The exceptions are shown in [1].
>>> Can someone give some help?
>>> 
>>> 
>>> Regards
>>> [1]------------------------------------------------------------------------------------------------------------------
>>> 64893635 [http-bio-8080-exec-1] INFO  org.apache.solr.cloud.ZkController
>>> - publishing core=hdfstest3 state=down
>>> 64893635 [http-bio-8080-exec-1] INFO  org.apache.solr.cloud.ZkController
>>> - numShards not found on descriptor - reading it from system property
>>> 64893698 [http-bio-8080-exec-1] INFO  org.apache.solr.cloud.ZkController
>>> - look for our core node name
>>> 
>>> 
>>> 
>>> 64951227 [http-bio-8080-exec-17] INFO  org.apache.solr.core.SolrCore
>>> - [reportCore_201208] webapp=/solr path=/replication
>>> params={slave=false&command=details&wt=javabin&qt=/replication&version=2}
>>> status=0 QTime=107
>>> 
>>> 
>>> 65213770 [http-bio-8080-exec-1] INFO  org.apache.solr.cloud.ZkController
>>> - waiting to find shard id in clusterstate for hdfstest3
>>> 65533894 [http-bio-8080-exec-1] ERROR org.apache.solr.core.SolrCore
>>> - org.apache.solr.common.SolrException: Error CREATEing SolrCore
>>> 'hdfstest3': Could not get shard id for core: hdfstest3
>>>  at
>>> org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:535)
>>>  at
>>> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:152)
>>>  at
>>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>>>  at
>>> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:662)
>>>  at
>>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
>>>  at
>>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
>>>  at
>>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
>>>  at
>>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>>>  at
>>> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
>>>  at
>>> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
>>>  at
>>> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
>>>  at
>>> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
>>>  at
>>> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947)
>>>  at
>>> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
>>>  at
>>> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
>>>  at
>>> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009)
>>>  at
>>> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
>>>  at
>>> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>  at java.lang.Thread.run(Thread.java:722)
>>> Caused by: org.apache.solr.common.SolrException: Could not get shard id for
>>> core: hdfstest3
>>>  at
>>> org.apache.solr.cloud.ZkController.waitForShardId(ZkController.java:1302)
>>>  at
>>> org.apache.solr.cloud.ZkController.doGetShardIdAndNodeNameProcess(ZkController.java:1248)
>>>  at
>>> org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1399)
>>>  at
>>> org.apache.solr.core.CoreContainer.preRegisterInZk(CoreContainer.java:942)
>>>  at
>>> org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:492)
>>>  ... 20 more
> 
