On 8/4/2015 3:30 PM, Jay Potharaju wrote:
> For the last few days I have been trying to correlate the timeouts with GC.
> I noticed in the GC logs that a full GC takes a long time once in a while. Does
> this mean that the JVM memory is set too high, or too low?
> 1973953.560: [GC 4474277K->33004
For the last few days I have been trying to correlate the timeouts with GC.
I noticed in the GC logs that a full GC takes a long time once in a while. Does
this mean that the JVM memory is set too high, or too low?
[GC 4730643K->3552794K(4890112K), 0.0433146 secs]
1973853.751: [Full GC 3552794K->
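For reference, in a line like [GC 4730643K->3552794K(4890112K), 0.0433146 secs],
the first two numbers are heap used before and after the collection, the figure
in parentheses is the total heap, and the last is the pause length. To get more
detail than these one-liners, flags along these lines enable fuller GC logging
on an Oracle/OpenJDK 7 or 8 JVM (worth double-checking against your JVM version;
the log path is a placeholder):

  # append to the Solr JVM options, e.g. in solr.in.sh
  -Xloggc:/var/log/solr/gc.log        # where to write the GC log
  -XX:+PrintGCDetails                 # per-generation sizes for each collection
  -XX:+PrintGCDateStamps              # wall-clock timestamps instead of JVM uptime
  -XX:+PrintGCApplicationStoppedTime  # total stop-the-world time per pause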
Yeah, separating the data by month or year is good and can really help in this case.
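A rough sketch of what that could look like with the Collections API; the
collection, config, and alias names here are made up, and an alias keeps
queries pointing at one stable name as new monthly collections are added:

  # create a collection for one month
  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=docs_2015_08&numShards=1&replicationFactor=2&collection.configName=docs_conf'

  # point a stable alias at the current set of monthly collections
  curl 'http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=docs&collections=docs_2015_07,docs_2015_08'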
Bill Bell
Sent from mobile
> On Aug 2, 2015, at 5:29 PM, Jay Potharaju wrote:
>
> Shawn,
> Thanks for the feedback. I agree that increasing the timeout might alleviate
> the timeout issue. The main problem with increasing t
There are two things that are likely to cause the timeouts you are
seeing, I'd say.
Firstly, your server is overloaded - that can be handled by adding
additional replicas.
However, it doesn't seem like this is the case, because the second query
works fine.
Secondly, you are hitting garbage collection pauses.
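If it does turn out to be load after all, adding a replica is a single
Collections API call, roughly like this (collection, shard, and node names
are placeholders):

  curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=docs&shard=shard1&node=host2:8983_solr'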
Shawn,
Thanks for the feedback. I agree that increasing the timeout might alleviate
the timeout issue. The main problem with increasing the timeout is the
detrimental effect it will have on the user experience, so I can't
increase it.
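One middle ground I am looking at is the timeAllowed query parameter, which,
as I read the docs, returns whatever matched within the limit and marks the
response with partialResults=true instead of failing outright, e.g. (collection
name is a placeholder):

  # cap query work at 2 seconds
  curl 'http://localhost:8983/solr/docs/select?q=*:*&timeAllowed=2000'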
I have looked at the queries that threw errors; next time I try it
On 8/2/2015 8:29 AM, Jay Potharaju wrote:
> The document contains around 30 fields and has stored set to true for
> almost 15 of them. And these stored fields are queried and updated all the
> time. You will notice that the deleted documents are almost 30% of the
> docs. And it has stayed around t
The document contains around 30 fields and has stored set to true for
almost 15 of them. And these stored fields are queried and updated all the
time. You will notice that the deleted documents are almost 30% of the
docs. And it has stayed around that percentage and has not come down.
I did try optimizing
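Since a full optimize rewrites the entire index, an alternative I have seen
suggested is expungeDeletes on a commit, which merges away only segments
carrying a high proportion of deletes (collection name is a placeholder):

  curl 'http://localhost:8983/solr/docs/update?commit=true&expungeDeletes=true'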
On 8/1/2015 6:49 PM, Jay Potharaju wrote:
> I currently have a single collection with 40 million documents and index
> size of 25 GB. The collection gets updated every n minutes and as a result
> the number of deleted documents is constantly growing. The data in the
> collection is an amalgamation
40 million docs isn't really very many by modern standards,
although if they're huge documents then that might be an issue.
So is this a single shard or multiple shards? If you're really facing
performance issues, simply making a new collection with more
than one shard (independent of how many replicas
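For example, an existing shard can be split in place, or a fresh collection
created with more shards and reindexed into; the names below are placeholders:

  # split shard1 of the existing collection into two sub-shards
  curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=docs&shard=shard1'

  # or build a new collection with more shards and reindex into it
  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=docs_v2&numShards=4&replicationFactor=2&collection.configName=docs_conf'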
Hi
I currently have a single collection with 40 million documents and index
size of 25 GB. The collection gets updated every n minutes and as a result
the number of deleted documents is constantly growing. The data in the
collection is an amalgamation of more than 1,000 customer records. The
numb