Thank you for helping out.

Further inquiry: I am committing records to my Solr implementation, but
they are not showing up in my search results. I am searching on the
default id field. Could this be because I don't have enough memory, so
Solr is taking a long time before the indexed documents actually become
searchable?

I also looked at the Solr log when I sent in my curl commit with my
record (which I still cannot see in the Solr instance even after sending
it repeatedly), but it didn't throw an error.

This is the response I got when inserting that record:

{"responseHeader":{"status":0,"QTime":57}}

Thank you.

Sid.

On Tue, Oct 6, 2015 at 3:21 PM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 10/6/2015 8:18 AM, Siddhartha Singh Sandhu wrote:
> > I have a few questions about optimize. Is the search index fully
> > searchable after a commit?
>
> If openSearcher is true on the commit, then changes to the index
> (additions, replacements, deletions) will be visible when the commit
> completes.
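>
> For example, this is roughly what an explicit commit that opens a new
> searcher looks like (a sketch only; the core name and port are
> placeholders, not taken from your setup):
>
>   curl 'http://localhost:8983/solr/mycore/update' \
>        -H 'Content-Type: text/xml' \
>        --data-binary '<commit openSearcher="true"/>'
>
> An explicit commit uses openSearcher=true by default, so if your commits
> are really completing, the documents should be visible right after.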
>
> > How much time does one have to wait in case of a hard commit for the
> > index to be available?
>
> This is impossible to answer.  It will take as long as it takes, and the
> time will depend on many factors, so it is nearly impossible to
> predict.  The only way to know is to try it ... and the number you get
> on one test may be very different than what you actually see once the
> system is in production.
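>
> If you want predictable visibility instead of relying on manual commits,
> one common pattern (a sketch for solrconfig.xml; the intervals are only
> examples) is a hard autoCommit that does not open a searcher, paired
> with a soft autoCommit that does:
>
>   <autoCommit>
>     <maxTime>60000</maxTime>            <!-- hard commit every 60s -->
>     <openSearcher>false</openSearcher>
>   </autoCommit>
>   <autoSoftCommit>
>     <maxTime>5000</maxTime>             <!-- new searcher every 5s -->
>   </autoSoftCommit>
>
> With something like that in place, documents become searchable within a
> few seconds of being indexed, without waiting on a hard commit.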
>
> > I have an index of 180G. Do I need to hit the optimize on this chunk?
> > This is a single core. Say I cannot get in a cloud env because of
> > cost, but this is a fairly large Amazon machine where I have given
> > Solr 12G of memory.
>
> Whatever RAM is left over after you give 12GB to Java for Solr will be
> used automatically by the operating system to cache index data on the
> disk.  Solr is completely reliant on that caching for good performance.
> A perfectly ideal system for that index and heap size would have 192GB
> of RAM, which is enough to entirely cache the index.  I personally
> wouldn't expect good performance with less than 96GB.  Some systems with
> a 180GB index and a 12GB heap might be OK with 64GB total memory, while
> others with the same size index will require more.
>
>
> https://lucidworks.com/blog/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/
>
> If the index is on SSD, then RAM is *slightly* less important, and
> performance usually goes up with SSD ... but an SSD cannot completely
> replace RAM, because RAM is much faster.  With SSD, you can get away
> with less RAM than you can on a spinning disk system, but depending on a
> bunch of factors, it may not be a LOT less RAM.
>
> https://wiki.apache.org/solr/SolrPerformanceProblems
>
> Optimizing the index is almost never necessary with recent versions.  In
> almost all cases optimizing will get you a performance increase, but it
> comes at a huge cost in terms of resource utilization to DO the
> optimize.  While the optimize is happening performance will likely be
> worse, possibly a LOT worse.  Newer versions of Solr (Lucene) have
> closed the gap on performance with non-optimized indexes, so it doesn't
> gain you as much in performance as it did in earlier versions.
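>
> If you do decide to try it, an optimize is just another update request,
> something like (core name is a placeholder):
>
>   curl 'http://localhost:8983/solr/mycore/update?optimize=true'
>
> Just be prepared for it to rewrite the whole 180GB index on disk while
> it runs.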
>
> Thanks,
> Shawn
>
>
