bq: But when indexing a document in one shard, it gets reflected in every
shard of that collection

This is a misunderstanding (and I'm being a bit pedantic here). Each shard
contains a distinct portion of the entire corpus; any given document is
indexed to exactly one shard. Say you have 1M docs and 2 shards. Each shard
will have very close to 500K documents.

If a shard has multiple _replicas_, each replica has a copy of the doc.
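
For example (the collection name here is just a placeholder), in SolrCloud
mode you can create a two-shard collection with two replicas per shard with
something like:

    bin/solr create -c mycollection -shards 2 -replicationFactor 2

With 1M docs you'd then see roughly 500K docs on each shard, and each shard's
half of the index would be stored twice, once per replica.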

Please take the time to work through the Solr tutorials; much will become
clearer. You don't need any kind of extensive setup; you can see how things
run on any machine you have.
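
If memory serves, the cloud example bundled with Solr, started with
something like:

    bin/solr start -e cloud

walks you through spinning up a couple of Solr nodes on a single machine and
creating a sharded, replicated collection, so you can watch where documents
land without needing any extra hardware.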

Best,
Erick

On Wed, Jan 6, 2016 at 5:19 AM, Binoy Dalal <binoydala...@gmail.com> wrote:
> The machines part may have been a bit misleading. I am sorry for that. What
> I actually meant was shards. Now, you can have multiple shards hosted on a
> single machine or multiple machines as in the example I gave.
>
> "I have to make sure that all those machines have solr server or gateway
> should be deplyed ?"
>
> Yes, you do need a Solr process running on every machine across which you
> plan to distribute your index.
>
> "And what multiple JVM processes run behind a solr server running?"
>
> If you mean how many JVMs are running for a single Solr server, the answer
> is one.
> "Then what is a Solr instance?"
> One Solr process on your machine.
>
> On Wed, 6 Jan 2016, 18:33 vidya <vidya.nade...@tcs.com> wrote:
>
>> Hi
>> You described that sharding is to distribute data over multiple machines.
>> Do I have to make sure that a Solr server or gateway is deployed on all of
>> those machines?
>> And how many JVM processes run behind a running Solr server?
>> I wanted to know what a node is. -> I understood it as a machine with a
>> Solr server deployed.
>> Then what is a Solr instance?
>>
>> Am I correct? If not, please help me.
>>
>> Thanks in advance
>>
> --
> Regards,
> Binoy Dalal
