On 2/4/09 3:44 PM, "Chris Hostetter" wrote:
> I don't think the Query class implementations themselves changed in
> any way that would have made them larger -- but if you switched from the
> standard parser to dismax parser, or started using lots of boost
> queries, or started using prefix or wil
: >> Aha! I bet that the full Query object became a lot more complicated
: >> between Solr 1.1 and 1.3. That would explain why we did 4X as much GC
: >> after the upgrade.
I don't think the Query class implementations themselves changed in
any way that would have made them larger -- but if you s
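For a rough sense of the difference, a dismax request with boost and phrase
parameters along these lines (the field names are made up and the handler
setup is assumed) parses into a tree of DisjunctionMaxQuery and BooleanQuery
clauses rather than a single TermQuery:

    http://localhost:8983/solr/select?defType=dismax
        &q=ipod+video
        &qf=name^2.0+description
        &pf=name^1.5
        &bq=inStock:true^5.0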
On 2/4/09 3:17 PM, "Mark Miller" wrote:
> Walter Underwood wrote:
>> Aha! I bet that the full Query object became a lot more complicated
>> between Solr 1.1 and 1.3. That would explain why we did 4X as much GC
>> after the upgrade.
>>
>> Items evicted from cache are tenured, so they contribute t
Walter Underwood wrote:
Aha! I bet that the full Query object became a lot more complicated
between Solr 1.1 and 1.3. That would explain why we did 4X as much GC
after the upgrade.
Items evicted from cache are tenured, so they contribute to the full GC.
With an HTTP cache in front, there is hard
Aha! I bet that the full Query object became a lot more complicated
between Solr 1.1 and 1.3. That would explain why we did 4X as much GC
after the upgrade.
Items evicted from cache are tenured, so they contribute to the full GC.
With an HTTP cache in front, there is hardly anything left to be
cac
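One way to confirm that cache churn is what feeds the full collections is to
turn on GC logging and watch the tenuring distribution. A hedged example for
a HotSpot JVM of that vintage (the log path and start command are only
placeholders):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -XX:+PrintTenuringDistribution -Xloggc:gc.log \
         -jar start.jar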
On Wed, Feb 4, 2009 at 5:52 PM, Walter Underwood wrote:
> I have not had the time to pin it down, but I suspect that items
> evicted from the query result cache contain a lot of objects.
> Are the keys a full parse tree? That could be big.
Yes, keys are full Query objects.
It would be non-trivial
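To get a feel for why that matters, here is a much-simplified sketch -- not
Solr's actual QueryResultKey, just an illustration -- of a result-cache key
that holds on to the whole parsed query plus its filters:

    import java.util.List;

    import org.apache.lucene.search.Query;

    // Simplified illustration of a query-result-cache key: it keeps the
    // full parsed Query tree plus every filter query (the real Solr key
    // also folds in the sort), so each evicted entry drags a sizeable
    // object graph through the old generation.
    public class SimpleResultKey {
        private final Query query;        // full parse tree, not the raw query string
        private final List<Query> filters;
        private final int hc;             // hash code cached at construction time

        public SimpleResultKey(Query query, List<Query> filters) {
            this.query = query;
            this.filters = filters;
            int h = query.hashCode();
            if (filters != null) {
                h = 31 * h + filters.hashCode();
            }
            this.hc = h;
        }

        @Override
        public int hashCode() {
            return hc;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof SimpleResultKey)) return false;
            SimpleResultKey other = (SimpleResultKey) o;
            return hc == other.hc
                && query.equals(other.query)
                && (filters == null ? other.filters == null
                                    : filters.equals(other.filters));
        }
    }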
On 2/4/09 2:48 PM, "Mark Miller" wrote:
> If there are spots in Lucene/Solr that are producing so much garbage
> that we can't keep up, perhaps work can be done to address this upon
> pinpointing the issues.
>
> - Mark
I have not had the time to pin it down, but I suspect that items
evicted fro
Walter Underwood wrote:
Also, only use as much heap as you really need. A larger heap
means longer GCs.
Right. Ideally you want to figure out how to get the long pauses down.
There is a lot of fiddling that you can do to improve gc times.
On a multiprocessor machine you can parallelize collec
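As a hedged example of that fiddling (flag names are for a HotSpot JVM; the
heap size, thread count, and start command are placeholders), sizing the heap
to what is actually needed and switching to a parallel/concurrent collector
looks like:

    java -Xms2g -Xmx2g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:ParallelGCThreads=4 \
         -jar start.jar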
This is when a load balancer helps. The requests sent around the
time that the GC starts will be stuck on that server, but later
ones can be sent to other servers.
We use a "least connections" load balancing strategy. Each connection
represents a request in progress, so this is the same as equaliz
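The thread doesn't say which load balancer is in use; as one sketch, HAProxy
expresses that policy as leastconn (hostnames and ports are examples):

    backend solr
        balance leastconn
        server solr1 10.0.1.1:8983 check
        server solr2 10.0.1.2:8983 check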
On Wed, Feb 4, 2009 at 3:12 PM, Otis Gospodnetic
wrote:
> I'd be curious if you could reproduce this in Jetty
All application threads are blocked... it's going to be the same in
Jetty or Tomcat or any other container that's pure Java. There is an
OS level listening queue that has a certain d
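On Linux, the depth of that listening queue is the backlog the container asks
for (Tomcat's acceptCount), capped by the kernel; assuming Linux, the cap can
be inspected or raised with:

    sysctl net.core.somaxconn           # show the current cap
    sysctl -w net.core.somaxconn=1024   # raise it (example value)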
Wojtek,
I'm not familiar with the details of Tomcat configuration, but this definitely
sounds like a container issue, closely related to the JVM.
Doing a thread dump for the Java process (the JVM your Tomcat runs in) while
the GC is running will show you which threads are blocked and in turn th
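For reference, with <pid> standing in for your container's Java process, a
thread dump can be taken either way:

    jstack <pid>      # prints the dump to stdout
    kill -3 <pid>     # SIGQUIT; Tomcat writes the dump to catalina.out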
That is the expected behaviour: all application threads are paused
during GC (the CMS collector is an exception; there are smaller pauses,
but the application threads mostly continue to run). The number of
connections that could end up being queued would depend on your
acceptCount setting in th
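For context, acceptCount sits on the HTTP Connector in Tomcat's server.xml;
a sketch with placeholder values:

    <Connector port="8080" protocol="HTTP/1.1"
               maxThreads="150"
               acceptCount="100"
               connectionTimeout="20000" />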