Yes, it is Amazon EC2 indeed.

To expand on that: this Solr deployment was working fine, handling the same
load, on a 34 GB instance with EBS storage for quite some time. To reduce
the time taken by a commit, I shifted it to a 30 GB SSD instance. It
certainly performed better on writes and commits. But since last week I
have been facing this problem of infinite back-to-back commits. Unable to
resolve it, I have finally switched back to a 34 GB machine with EBS
storage, and now the commits are working fine, though slowly.
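For reference, the hard-commit settings described in this thread would look
roughly like the following in solrconfig.xml. This is a sketch built from
the figures quoted below, not my actual config; in particular,
openSearcher=false is an assumption here, not something from the thread.

```xml
<!-- Sketch of the autoCommit block matching the numbers in this thread:
     flush every 400,000 docs or every 25 minutes (1,500,000 ms),
     whichever comes first. -->
<autoCommit>
  <maxDocs>400000</maxDocs>
  <maxTime>1500000</maxTime>   <!-- milliseconds: 25 minutes -->
  <!-- Assumption: keep hard commits from reopening a searcher on every
       flush; use a separate soft commit for visibility if needed. -->
  <openSearcher>false</openSearcher>
</autoCommit>
```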

Any thoughts?
On 6 Feb 2014 23:00, "Shawn Heisey" <s...@elyograg.org> wrote:

> On 2/6/2014 9:56 AM, samarth s wrote:
> > Size of index = 260 GB
> > Total Docs = 100mn
> > Usual writing speed = 50K per hour
> > autoCommit-maxDocs = 400,000
> > autoCommit-maxTime = 1,500,000 ms (25 mins)
> > merge factor = 10
> >
> > M/c memory = 30 GB, Xmx = 20 GB
> > Server - Jetty
> > OS - Cent OS 6
>
> With 30GB of RAM (is it Amazon EC2, by chance?) and a 20GB heap, you
> have about 10GB of RAM left for caching your Solr index.  If that server
> has all 260GB of index, I am really surprised that you have only been
> having problems for a short time.  I would have expected problems from
> day one.  Even if it only has half or one quarter of the index, there is
> still a major discrepancy in RAM vs. index size.
>
> You either need more memory or you need to reduce the size of your
> index.  The size of the indexed portion generally has more of an impact
> on performance than the size of the stored portion, but they do both
> have an impact, especially on indexing and committing.  With regular
> disks, it's best to have at least 50% of your index size available to
> the OS disk cache, but 100% is better.
>
> http://wiki.apache.org/solr/SolrPerformanceProblems#OS_Disk_Cache
>
> If you are already using SSD, you might think there can't be
> memory-related performance problems ... but you still need a pretty
> significant chunk of disk cache.
>
> https://wiki.apache.org/solr/SolrPerformanceProblems#SSD
>
> Thanks,
> Shawn
>
>
