Thanks, Alessandro. We can attempt to put together such a blog, and I can
volunteer to start with bullets/headings. I also agree that we can't come
up with a definitive answer, as mentioned in other places, but we can at
least attempt to consolidate all this knowledge into one place.
On Thu, 2015-12-10 at 14:43 -0500, Susheel Kumar wrote:
> I like the details here, Erick, on how you broke memory into different
> parts. I feel if we can combine a lot of this knowledge from your various
> posts, the sizing blog above, the Solr wiki pages, and Uwe's article on
> MMap/heap, consolidate it, and present it in a
Susheel, this is a very good idea.
I am a bit busy at the moment, so I doubt I can contribute a blog post,
but it would be great if anyone has the time.
If not, I will add it to my backlog and sooner or later I will do it :)
Furthermore, the latest observations from Erick are pure gold, and I agree
I like the details here, Erick, on how you broke memory into different
parts. I feel if we can combine a lot of this knowledge from your various
posts, the sizing blog above, the Solr wiki pages, and Uwe's article on
MMap/heap, consolidate it, and present it in a single place, it may help a
lot of new folks/folks struggling with
I object to the question. And the advice. And... ;).
Practically, IMO the guidance that "the entire index should
fit into memory" is misleading, especially for newbies.
Let's break it down:
1> "the entire index". What's this? The size on disk?
90% of that size on disk may be stored data, which
uses very little memory
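To make Erick's point concrete, here is a minimal Java sketch (assuming a
local core whose index lives under a hypothetical data/index directory)
that sums Lucene segment files by extension. Stored-field data lives in
the .fdt files, which consume disk space but need very little JVM heap:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.stream.Stream;

    // Sums Lucene index file sizes by extension so you can see how much
    // of "the index on disk" is stored-field data (.fdt) versus the
    // structures actually used for searching.
    public class IndexSizeByExtension {
        public static void main(String[] args) throws IOException {
            // Hypothetical path: point at a core's data/index directory.
            Path indexDir = Paths.get(args.length > 0 ? args[0] : "data/index");
            Map<String, Long> sizeByExt = new TreeMap<>();
            try (Stream<Path> files = Files.list(indexDir)) {
                files.filter(Files::isRegularFile).forEach(f -> {
                    String name = f.getFileName().toString();
                    int dot = name.lastIndexOf('.');
                    String ext = (dot >= 0) ? name.substring(dot) : "(none)";
                    try {
                        sizeByExt.merge(ext, Files.size(f), Long::sum);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
            long total = sizeByExt.values().stream()
                    .mapToLong(Long::longValue).sum();
            for (Map.Entry<String, Long> e : sizeByExt.entrySet()) {
                System.out.printf("%-8s %,15d bytes (%.1f%%)%n", e.getKey(),
                        e.getValue(),
                        total == 0 ? 0.0 : 100.0 * e.getValue() / total);
            }
            System.out.printf("total    %,15d bytes%n", total);
        }
    }

If .fdt dominates the total, "the entire index must fit into memory" is
clearly an overestimate of the heap you actually need.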
Thanks, Jack, for the quick reply. By Replica / Shard I mean that on a
given machine there may be two or more replicas, and all of them may not
fit into memory.
On Wed, Dec 9, 2015 at 11:00 AM, Jack Krupansky
wrote:
> Yes, there are nuances to any general rule. It's just a starting point, and
> your
Yes, there are nuances to any general rule. It's just a starting point, and
your own testing will confirm the specific details for your specific app
and data. For example, maybe you don't query all fields commonly, so each
field-specific index may not require memory, or may not require it as
commonly. And,
Hi Jack,
Just to add: the OS disk cache will still make queries performant even
though the entire index can't be loaded into memory. How much more latency
there is, compared to when the index is completely loaded into memory, may
vary depending on index size etc. I am trying to clarify this here because
a lot of folks take
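Here is an illustrative Java sketch of that disk-cache effect: it reads
the same file twice through a memory map and times each pass. After the
first pass the pages are usually in the OS page cache, so the second pass
is much faster (the file name is a hypothetical placeholder, and the first
pass may already be warm if the file was touched recently):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Times a cold read vs. a page-cache-warm read of the same file.
    public class PageCacheDemo {
        public static void main(String[] args) throws IOException {
            // Hypothetical file; any large index segment file will do.
            Path file = Paths.get(args.length > 0 ? args[0] : "data/index/_0.fdt");
            for (int pass = 1; pass <= 2; pass++) {
                long start = System.nanoTime();
                long sum = 0;
                try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                    // A single map is capped at 2 GB; enough for a sketch.
                    long len = Math.min(ch.size(), Integer.MAX_VALUE);
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, len);
                    while (buf.hasRemaining()) {
                        sum += buf.get();  // touch every page
                    }
                }
                System.out.printf("pass %d: %d ms (sum %d)%n",
                        pass, (System.nanoTime() - start) / 1_000_000, sum);
            }
        }
    }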
@Upayavira,
could you provide any link showing that this issue has been resolved?
>> So long as your joined-to collection is replicated across every box
Where can I find this related link or an example?
> Actually, I need to join on my core; that is why I am going to SolrCloud
> (join is not supported in SolrCloud).
>
> Is there any alternate way of doing it?
Generally, you will be resource limited (memory, CPU) rather than by some
arbitrary numeric limit (like 2 billion).
My personal general recommendation is a practical limit of 100 million
documents per machine/node. Depending on your data model and actual data,
that number could be higher or lower.
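As a back-of-the-envelope check, here is a small Java sketch that turns a
document count into a minimum shard count using that rule of thumb (the
1.5 billion corpus size is a hypothetical figure for illustration):

    // Sharding math using Jack's ~100M docs/node rule of thumb and
    // Lucene's hard per-core cap (document IDs are Java ints).
    public class ShardMath {
        static final long HARD_LIMIT_PER_CORE = Integer.MAX_VALUE; // 2,147,483,647
        static final long PRACTICAL_LIMIT_PER_NODE = 100_000_000L; // rule of thumb

        public static void main(String[] args) {
            long totalDocs = 1_500_000_000L; // hypothetical corpus size
            long shards = (totalDocs + PRACTICAL_LIMIT_PER_NODE - 1)
                    / PRACTICAL_LIMIT_PER_NODE; // ceiling division
            System.out.printf("%,d docs -> at least %d shards at the practical "
                    + "limit; hard cap is %,d docs per core%n",
                    totalDocs, shards, HARD_LIMIT_PER_CORE);
        }
    }

For 1.5 billion documents that works out to at least 15 shards, well
before the per-core hard limit ever comes into play.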
Thanks, Toke Eskildsen.
Actually, I need to join on my core; that is why I am going to SolrCloud
(join is not supported in SolrCloud).
Is there any alternate way of doing it?
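For reference, on a single (non-cloud) core the join being discussed is
expressed with Solr's {!join} query parser. A minimal SolrJ sketch,
assuming a newer SolrJ client and hypothetical field names (parent_id, id)
and core URL:

    import java.io.IOException;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    // Runs a single-core join: returns documents whose "id" matches the
    // "parent_id" of documents matching the inner query (type:child).
    public class JoinQuerySketch {
        public static void main(String[] args)
                throws SolrServerException, IOException {
            try (HttpSolrClient client = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/mycore").build()) {
                SolrQuery q = new SolrQuery("{!join from=parent_id to=id}type:child");
                QueryResponse rsp = client.query(q);
                System.out.println("matches: " + rsp.getResults().getNumFound());
            }
        }
    }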
On Tue, 2015-12-08 at 05:18 -0700, Mugeesh Husain wrote:
> Two simple questions regarding capacity:
>
> 1.) How many documents can we store in a single core (capacity of core
> storage)?
There is a hard limit of 2 billion documents per core (Lucene document IDs
are Java ints).
> 2.) How many cores can we create on a single server (single node cluster)?