Hi,

I have a case where I have a file on my local Unix file system and not in the
Hadoop file system.

For example, I have abc.xml at /home/cloudera/abc.xml on my Cloudera VMware image.

Now, on the Hadoop side, I create a Solr collection named test10 according to
the schema of abc.xml,

and using post.jar I post the file. The index then gets created under
"/solr/test10/core_node1/data/index" and I can search it.
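
For reference, the steps I ran looked roughly like this (I am reconstructing
from memory, so the exact config directory path, port, and solrctl/post.jar
options may be a bit off; treat it as a sketch):

  solrctl instancedir --generate /home/cloudera/test10_config
  (edit conf/schema.xml in that directory to match the fields of abc.xml)
  solrctl instancedir --create test10 /home/cloudera/test10_config
  solrctl collection --create test10
  java -Durl=http://localhost:8983/solr/test10/update -jar post.jar /home/cloudera/abc.xml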

Now here are my queries. I can find the path where the index gets created, but
if I use post.jar, does it also copy the actual file ("abc.xml" in this case)
from the local file system to the Hadoop file system?

If yes, then where in the Hadoop file system does it copy it? The relevance of
the question is that I want to figure out where it gets copied, so that I can
make sure the file is not deleted from that path in the Hadoop file system,
and so that, if I need it for a different use case, I know where the file is
lying in the Hadoop file system and can access it without using Solr search.

Thanks and Regards
Aniruddh
