Also, I'm not sure about your domain, but you may want to double-check
whether you really need all 350 fields to be searchable and stored. Often,
when you weigh that requirement against the higher cost of the hardware it
demands, you find you can reduce the number of searchable/stored fields.
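
For instance, a minimal SolrJ sketch of adding a field that is stored for
retrieval but not indexed (assuming SolrJ 6.x; the URL, collection, and
field name here are hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.schema.SchemaRequest;

    public class LeanField {
        public static void main(String[] args) throws Exception {
            // Hypothetical core URL -- adjust for your install.
            SolrClient client = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/mycollection").build();

            // Stored for retrieval but NOT indexed: it costs disk for
            // the stored copy only, and adds nothing to the search-side
            // index structures.
            Map<String, Object> attrs = new HashMap<>();
            attrs.put("name", "raw_payload");
            attrs.put("type", "string");
            attrs.put("stored", true);
            attrs.put("indexed", false);
            new SchemaRequest.AddField(attrs).process(client);

            client.close();
        }
    }

Every field you can mark indexed=false (or drop entirely) is index size
you don't pay for across 300 million documents.
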
Thanks,
Susheel
On 6/2/2016 1:28 AM, Selvam wrote:
> We need to run a heavy SOLR with 300 million documents, with each
> document having around 350 fields. The average length of the fields
> will be around 100 characters, and there may be date and integer fields
> as well. Now we are not sure whether to have a single server or run
> multiple servers (…
Hi,
On a side note, we also need all 350 fields to be stored and indexed.
On Thu, Jun 2, 2016 at 12:58 PM, Selvam wrote:
> Hello all,
>
> We need to run a heavy SOLR with 300 million documents, with each
> document having around 350 fields. The average length of the fields will
> be around 100 characters, and there may be date and integer fields as
> well. Now we are not sure whether to have a single server or run multiple
> servers (…
Hello all,
We need to run a heavy SOLR with 300 million documents, with each document
having around 350 fields. The average length of the fields will be around
100 characters, and there may be date and integer fields as well. Now we
are not sure whether to have a single server or run multiple servers (…
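
A rough back-of-the-envelope from the numbers above (my own arithmetic,
assuming roughly one byte per character):

    350 fields/doc x ~100 chars  ~=  35 KB of raw field data per doc
    35 KB/doc x 300M docs        ~=  ~10.5 TB raw, before any index
                                     structures or replication

At that scale a single server looks like a hard sell.
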
It's possible that enu will be routed to shard1 while deu goes to shard2,
and esp and chs get indexed in either of them. Or, all of them can
potentially end up getting indexed in the same shard, either 1 or 2,
leaving one shard under-utilized.
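
For reference, here is how the routing key comes into play at index time;
a sketch assuming SolrJ 6.x, with a made-up ZooKeeper address and
collection name. With the default compositeId router, Solr hashes the part
of the id before "!" to pick the shard, which is exactly why two prefixes
can land on the same shard:

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class RouteByLanguage {
        public static void main(String[] args) throws Exception {
            CloudSolrClient client = new CloudSolrClient.Builder()
                    .withZkHost("zk1:2181").build();
            client.setDefaultCollection("content");

            // All "enu!" ids hash to the same shard, but nothing forces
            // "enu" and "deu" onto *different* shards -- both prefixes
            // may fall into the same shard's hash range.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "enu!doc-12345");
            doc.addField("language", "enu");
            client.add(doc);
            client.commit();
            client.close();
        }
    }

If you need a guaranteed language-to-shard mapping, the implicit router,
where the client names the target shard explicitly, is the usual
alternative.
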
> The index can grow up to half a TB from its current state. Honestly, my
> perception of a "big" index is still vague :-) . All I'm trying to make
> sure is that the decision I take is scalable in the long term and will
> be able to sustain the growth without compromising performance.

20M docs is actually a very small collection by the "usual" Solr
standards, unless they're _really_ large documents, e.g. large books.

Actually, I wouldn't even shard to begin with; it's unlikely to be
necessary, and it adds inevitable overhead. If you _must_ shard, just go
with <1>, but again I…
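
To illustrate the "don't shard to begin with" suggestion above, a minimal
sketch, again assuming SolrJ 6.x with made-up ZooKeeper, collection, and
configset names: one shard, two replicas, so you keep redundancy without
any distributed-query overhead:

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class CreateUnsharded {
        public static void main(String[] args) throws Exception {
            CloudSolrClient client = new CloudSolrClient.Builder()
                    .withZkHost("zk1:2181").build();

            // One shard, two replicas: no routing to reason about,
            // still redundant. Reshard later only if numbers demand it.
            CollectionAdminRequest
                    .createCollection("content", "myconf", 1, 2)
                    .process(client);

            client.close();
        }
    }
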
Hi,

I'm trying to figure out the best way to design/allocate shards for our
Solr Cloud environment. Our current index has around 20 million documents
in 10 languages. Around 25-30% of the content is in English; the rest is
almost equally distributed among the remaining languages. Till now, we had
to…