10 billion documents on 12 cores is over 800M documents/shard at best.
This is _very_ aggressive for a shard. Could you give more information
about your setup?
I've seen 250M docs fit in 12G memory. I've also seen 10M documents
strain 32G of memory. Details matter a lot. The only way I've been
abl
On 8/18/2015 2:30 AM, Daniel Collins wrote:
> I think this is expected. As Shawn mentioned, your hard commits have
> openSearcher=false, so they flush changes to disk, but don't force a
> re-open of the active searcher.
> By contrast softCommit, sets openSearcher=true, the point of softCommit is
>
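[Editor's note: the commit setup Daniel and Shawn are describing corresponds to this shape in solrconfig.xml; the interval values below are illustrative, not taken from the thread.]

```xml
<!-- Hard commit: flushes index changes to disk regularly, but with
     openSearcher=false it does NOT open a new searcher, so it does
     not invalidate the per-searcher caches. -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: opens a new searcher so newly added documents become
     visible; opening that searcher is what flushes the caches. -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
```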
> -----Original Message-----
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: 17 August 2015 19:02
> To: solr-user@lucene.apache.org
> Subject: Re: Solr Caching (documentCache) not working
>
>
>
> On 8/17/2015 7:04 AM, Maulin Rathod wrote:
>
> > We have observed t
On 8/17/2015 7:04 AM, Maulin Rathod wrote:
> We have observed that intermittently querying becomes slower when the
> documentCache becomes empty. The documentCache is getting flushed whenever a new
>
On Mon, Aug 17, 2015 at 4:36 PM, Daniel Collins wrote:
> we had to turn off
> ALL the Solr caches (warming is useless at that kind of frequency
Warming and caching are related, but different. Caching still
normally makes sense without warming, and Solr is generally written
with the assumption th
On Mon, Aug 17, 2015 at 11:36 PM, Daniel Collins
wrote:
> Just to open the can of worms, it *can* be possible to have very low commit
> times, we have 250ms currently and are in production with that. But it
> does come with pain (no such thing as a free lunch!), we had to turn off
> ALL the Solr
Just to open the can of worms, it *can* be possible to have very low commit
times, we have 250ms currently and are in production with that. But it
does come with pain (no such thing as a free lunch!), we had to turn off
ALL the Solr caches (warming is useless at that kind of frequency, it will
tak
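[Editor's note: a sketch of what such a near-real-time setup might look like in solrconfig.xml. The 250ms figure comes from Daniel's message; the cache elements shown, using size="0" to effectively disable each cache, are an assumption about how "turning off ALL the Solr caches" could be expressed.]

```xml
<!-- Very aggressive soft-commit interval: a new searcher every ~250ms. -->
<autoSoftCommit>
  <maxTime>250</maxTime>
</autoSoftCommit>

<!-- With searchers this short-lived, warming a cache costs more than it
     saves; size="0" effectively disables each cache. -->
<filterCache class="solr.FastLRUCache" size="0" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="0" autowarmCount="0"/>
<documentCache class="solr.LRUCache" size="0"/>
```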
On 8/17/2015 7:04 AM, Maulin Rathod wrote:
> We have observed that intermittently querying becomes slower when the
> documentCache becomes empty. The documentCache is getting flushed whenever a new
> document is added to the collection.
>
> Is there any way by which we can ensure that newly added documents
Great explanation and article.
Yes, this buffer for merges seems very small, and still optimized. That's
impressive.
Manuel:
First off, anything that Mike McCandless says about low-level
details should override anything I say. The memory savings
he's talking about there are actually something he tutored me
in once on a chat.
The savings there, as I understand it, aren't huge. For large
sets I think it's a 25% savings
Alright, thanks Erick. For the question about memory usage of merges, taken
from Mike McCandless's blog:
The big thing that stays in RAM is a logical int[] mapping old docIDs to
new docIDs, but in more recent versions of Lucene (4.x) we use a much more
efficient structure than a simple int[] ... see
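[Editor's note: as a rough illustration of the idea behind that more efficient structure. This is a toy sketch, not Lucene's actual packed-ints code: because the old-to-new docID mapping is nearly monotonic and nearly linear, you can store only small deltas from a linear prediction instead of full 32-bit values per document.]

```python
# Toy sketch of monotonic compression for an old->new docID mapping.
# Not Lucene code: real Lucene bit-packs the deltas; here we just show
# that the deltas stay tiny even when the mapping itself is large.

def pack_monotonic(values):
    """Store values as (slope, base, deltas), where deltas are small."""
    n = len(values)
    slope = (values[-1] - values[0]) / (n - 1) if n > 1 else 0.0
    base = values[0]
    deltas = [v - round(base + slope * i) for i, v in enumerate(values)]
    return slope, base, deltas

def unpack_at(packed, i):
    """Recover the i-th value from the packed representation."""
    slope, base, deltas = packed
    return round(base + slope * i) + deltas[i]

# A merge mapping where every 3rd doc was deleted: new IDs grow by ~2/3.
old_to_new = [i - i // 3 for i in range(10)]
packed = pack_monotonic(old_to_new)
assert all(unpack_at(packed, i) == v for i, v in enumerate(old_to_new))
print(max(abs(d) for d in packed[2]))  # the deltas stay tiny
```

The point: each delta needs only a couple of bits, versus 32 bits per entry for a raw int[].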
Inline
On Thu, Jul 11, 2013 at 8:36 AM, Manuel Le Normand
wrote:
> Hello,
> As a result of frequent Java OOM exceptions, I am trying to investigate the
> Solr JVM heap memory usage further.
> Please correct me if I am mistaken; this is my understanding of the usages for
> the heap (per replica on a Solr
On Apr 17, 2013, at 3:09 PM, Furkan KAMACI wrote:
> I've just started to read about Solr caching. I want to learn one thing.
> Let's assume that I have given 4 GB of RAM to my Solr application and I have
> 10 GB of RAM. When the Solr caching mechanism starts to work, does it use memory
> from that 4 GB part
4.0 is significantly more efficient memory-wise, both in the usage and
number of objects allocated. See:
http://searchhub.org/dev/2012/04/06/memory-comparisons-between-solr-3x-and-trunk/
Erick
On Sun, Sep 30, 2012 at 12:25 AM, varun srivastava
wrote:
> Hi Erick,
> You mentioned for 4.0 memory
Hi Erick,
You mentioned that the 4.0 memory pattern is much different from 3.x. Can you
elaborate on whether it's worse or better? Does 4.0 tend to use more memory
for a similar index size compared to 3.x?
Thanks
Varun
On Sat, Sep 29, 2012 at 1:58 PM, Erick Erickson wrote:
> Well, I haven't had exp
Well, I haven't had experience with JDK7, so I'll skip that part...
But about caches. First, as far as memory is concerned, be
sure to read Uwe's blog about MMapDirectory here:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
As to the caches.
Be a little careful here. Get
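[Editor's note: Uwe's post boils down to letting the OS page cache serve the index from RAM outside the JVM heap. In solrconfig.xml this corresponds to the directoryFactory setting; the element below is a sketch for readers who want to pin the choice explicitly, since the standard factory already picks MMapDirectory on 64-bit JVMs.]

```xml
<!-- MMapDirectory maps index files into virtual memory, served by the
     OS page cache outside the JVM heap -- so the heap can stay modest
     while "free" RAM still effectively caches the index. -->
<directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>
```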
On 3/31/2012 4:30 AM, Suneel wrote:
Hello friends,
I am using DIH for Solr indexing. I have 60 million records in SQL which
need to be uploaded to Solr. I started it; caching works smoothly and memory
consumption is normal, but after some time memory consumption incrementally
goes up and the process
Hello friends,
I am using DIH for Solr indexing. I have 60 million records in SQL which
need to be uploaded to Solr. I started it; caching works smoothly and memory
consumption is normal, but after some time memory consumption incrementally
goes up and the process reaches more than 6 GB. That's the reason
There are now two excellent books: "Lucene In Action 2" and "Solr 1.4
Enterprise Search Server" that describe the inner workings of these
technologies and how they fit together.
Otherwise, Solr and Lucene knowledge is only available in fragmented
form across many wiki pages, bug reports and emails
Is there any way to analyze or see which documents are getting cached
by the documentCache -
On Wed, Sep 23, 2009 at 8:10 AM, satya wrote:
> First of all, thanks a lot for the clarification. Is there any way to see
> how this cache is working internally and what objects are being sto
First of all, thanks a lot for the clarification. Is there any way to see
how this cache is working internally, what objects are being stored,
and how much memory it is consuming, so that we can get a clear picture
in mind? And how can we test performance through the cache?
On Tue, Sep 22, 2009 at
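[Editor's note: to make satya's question concrete, here is a toy sketch in plain Python (not Solr internals) of what an LRU documentCache conceptually does: map an internal docID to its stored fields, count hits and lookups, and evict the least-recently-used entry when full. Solr exposes the real hit/lookup/eviction counters on the admin statistics page.]

```python
from collections import OrderedDict

class LRUDocumentCache:
    """Toy LRU cache keyed by internal docID (conceptual sketch only)."""
    def __init__(self, size):
        self.size = size
        self.data = OrderedDict()
        self.hits = self.lookups = 0

    def get(self, doc_id, load_from_index):
        self.lookups += 1
        if doc_id in self.data:
            self.hits += 1
            self.data.move_to_end(doc_id)   # mark most-recently-used
            return self.data[doc_id]
        doc = load_from_index(doc_id)        # simulate a disk fetch
        self.data[doc_id] = doc
        if len(self.data) > self.size:
            self.data.popitem(last=False)    # evict least-recently-used
        return doc

cache = LRUDocumentCache(size=2)
fetch = lambda doc_id: {"id": doc_id, "title": f"doc {doc_id}"}
cache.get(1, fetch); cache.get(2, fetch); cache.get(1, fetch)
print(cache.hits, cache.lookups)   # 1 hit out of 3 lookups
```

When a commit opens a new searcher, internal docIDs can change, so this kind of cache cannot be carried over: it starts empty, which is exactly the post-commit slowdown Maulin observed.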
> 1) Then do you mean, if we delete a particular doc, that it is going to be
> deleted from the cache also?
When you delete a document and then COMMIT your changes, new caches will be
warmed up (and prepopulated with some key-value pairs from the old instances),
etc.:
- this one won't be 'prepopulated'
1) Then do you mean, if we delete a particular doc, that it is going to be
deleted from the cache also?
2) In Solr, is the cache storing the entire document in memory, or only
references to documents in memory?
And how can we test this caching, after all?
I'll be thankful for an elaboration.
Solr's caches should be transparent - they should only speed up
queries, not change the result of queries.
-Yonik
http://www.lucidimagination.com
On Tue, Sep 22, 2009 at 9:45 AM, satyasundar jena wrote:
> I configured the filter cache in solrconfig.xml as below:
> class="solr.FastLRUCache"
>
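[Editor's note: satya's configuration snippet is cut off; for reference, a typical filterCache declaration has this shape. The sizes here are illustrative defaults, not satya's actual values.]

```xml
<!-- Caches filter (fq) results as docID sets, keyed by the filter query.
     autowarmCount entries are re-executed against each new searcher. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```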