On 5/22/2015 3:15 AM, Angel Todorov wrote:
> Thanks for the feedback, guys. What I am going to try now is deploying my
> Solr server on a physical machine with more RAM, and checking out this
> scenario there. I have some suspicion it could well be a hypervisor issue,
> but let's see. Just for the r
Hi Angel,
a while ago I had issues with a VMware VM - somehow snapshots were created
regularly, which dragged down the machine. So I think it is a good idea to
baseline the performance on a physical box before moving to VMs, production
boxes, or whatever is thrown at you.
Cheers,
Siegfried Goeschl
Thanks for the feedback, guys. What I am going to try now is deploying my
Solr server on a physical machine with more RAM, and checking out this
scenario there. I have some suspicion it could well be a hypervisor issue,
but let's see. Just for the record - I've noticed those issues on a Win
2008R2 V
bq: Which is logical, as the index grows and the time needed to put something
into it is log(n)
Not really. Solr indexes to segments; each segment is a fully
consistent "mini index".
When a segment gets flushed to disk, a new one is started. Of course
there'll be a _little bit_ of added overhead, but it sho
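For reference, a minimal Lucene sketch of that segment behavior - the index
path, field name, and the 64 MB RAM buffer below are illustrative assumptions,
not values taken from this thread:

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;

    public class SegmentFlushSketch {
        public static void main(String[] args) throws Exception {
            IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
            // When buffered documents exceed this size, Lucene flushes them
            // to disk as a new segment and starts buffering the next one.
            cfg.setRAMBufferSizeMB(64.0);
            try (IndexWriter writer = new IndexWriter(
                    FSDirectory.open(Paths.get("/tmp/index")), cfg)) {
                for (int i = 0; i < 1_000_000; i++) {
                    Document doc = new Document();
                    doc.add(new TextField("body", "document " + i, Field.Store.NO));
                    writer.addDocument(doc);   // may trigger a segment flush
                }
                writer.commit();               // make all flushed segments durable and visible
            }
        }
    }

The point being that addDocument only appends to the current in-memory
segment, so the cost of a single add should not grow with the overall index
size; the extra work shows up in flushes, merges, and commits.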
Hi Angel,
We also noticed that kind of performance degradation in our workloads.
Which is logical, as the index grows and the time needed to put something
into it is log(n).
On Thursday, May 21, 2015, Angel Todorov wrote:
> Hi Shawn,
>
> Thanks a bunch for your feedback. I've played with the heap
Hi Shawn,
Thanks a bunch for your feedback. I've played with the heap size, but I
don't see any improvement. Even if I index, say, a million docs, and the
throughput is about 300 docs per sec, and then I shut down Solr completely
- after I start indexing again, the throughput drops below 30
On 5/21/2015 2:07 AM, Angel Todorov wrote:
> I'm crawling a file system folder and indexing 10 million docs, and I am
> adding them in batches of 5000, committing every 50,000 docs. The problem I
> am facing is that after each commit, the number of documents indexed per
> second gets lower and lower.
>
Hi guys,
I'm crawling a file system folder and indexing 10 million docs, and I am
adding them in batches of 5000, committing every 50,000 docs. The problem I
am facing is that after each commit, the number of documents indexed per
second gets lower and lower.
If I do not commit at all, I can index thos
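For reference, a minimal SolrJ sketch of the batching pattern described above
- batches of 5000 with a hard commit every 50,000 docs. The Solr URL, core
name, and field names are assumptions for illustration, not taken from the
actual setup:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchIndexer {
        public static void main(String[] args) throws Exception {
            try (SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/files").build()) {
                List<SolrInputDocument> batch = new ArrayList<>();
                for (int i = 0; i < 10_000_000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", "doc-" + i);
                    doc.addField("body_txt", "content of document " + i);
                    batch.add(doc);

                    if (batch.size() == 5_000) {       // send one batch of 5000
                        solr.add(batch);
                        batch.clear();
                    }
                    if ((i + 1) % 50_000 == 0) {       // hard commit every 50,000 docs
                        solr.commit();
                    }
                }
                if (!batch.isEmpty()) {                // flush the final partial batch
                    solr.add(batch);
                }
                solr.commit();
            }
        }
    }

Each solr.add call sends one batch over HTTP; solr.commit is the step that
makes those batches visible and is the point where the reported slowdown is
being observed.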