Thank you for the replies!
Because of everyone's insight, I was able to deduce that the problem was in
our configuration.
Our heap size was 10 GB, so I don't think that was the problem since we only
have about 900k documents. When we took a closer look at our schema, 2 of the
relevant fields have ShingleFilterFactory in their analysis chains.
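For anyone curious, the term blow-up is easy to reproduce with a quick Lucene
test. This is only a rough sketch -- the whitespace tokenizer and the 2-5
shingle sizes below are placeholders, not our actual analysis chain -- but it
shows how a paragraph-sized q value multiplies into far more query tokens once
shingles are built:

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.analysis.shingle.ShingleFilter;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class ShingleBlowup {
        public static void main(String[] args) throws Exception {
            // Paste a paragraph-sized "q" here to see the effect.
            String q = "the quick brown fox jumps over the lazy dog again and again";

            WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
            tokenizer.setReader(new StringReader(q));

            // 2-5 shingle sizes are made-up values, not our real schema settings.
            TokenStream ts = new ShingleFilter(tokenizer, 2, 5);
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);

            ts.reset();
            int count = 0;
            while (ts.incrementToken()) {
                count++;   // every shingle becomes one more query term
            }
            ts.end();
            ts.close();
            System.out.println("tokens after shingling: " + count);
        }
    }

Twelve input words already come out as 50 query tokens here (unigrams plus
every 2-5 word shingle), and the growth scales with the max shingle size times
the query length, which is why a small paragraph hurts so much more than a
short query.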
512M was the default heap for Java 1.1. We never changed the default. So no
size was “chosen”.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On May 3, 2019, at 10:11 PM, Shawn Heisey wrote:
>
On 5/3/2019 1:37 PM, Erick Erickson wrote:
> We already do warnings for ulimits, so memory seems reasonable. Along the same
> vein, does starting with 512M make sense either?
> Feel free to raise a JIRA, but I won’t have any time to work on it….

Done.

https://issues.apache.org/jira/browse/SOLR-13
Shawn:
We already do warnings for ulimits, so memory seems reasonable. Along the same
vein, does starting with 512M make sense either?
Feel free to raise a JIRA, but I won’t have any time to work on it….
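A minimal sketch of the kind of heap check this would amount to -- the 512 MB
threshold and the wording of the warning are placeholders, not anything Solr
actually ships:

    public class HeapWarning {
        public static void main(String[] args) {
            // maxMemory() reports a little less than -Xmx, so compare against
            // the nominal value rather than an exact byte count.
            long maxHeapMb = Runtime.getRuntime().maxMemory() >> 20;
            if (maxHeapMb <= 512) {
                System.err.println("WARNING: max heap is only " + maxHeapMb
                        + " MB; the out-of-the-box 512m default is rarely enough"
                        + " for a production index.");
            }
        }
    }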
> On May 3, 2019, at 3:27 PM, Walter Underwood wrote:
>
We run very long queries with an 8 GB heap. 30 million documents in 8 shards
with an average query length of 25 terms.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On May 3, 2019, at 6:49 PM, Shawn Heisey wrote:
>
On 5/3/2019 2:32 AM, solrnoobie wrote:
> So whenever we have long q values (from a sentence to a small paragraph), we
> encounter some heap problems (OOM), and I guess this is normal?
> So my question is: how should we handle this type of problem? Of course we
> could always limit the size of the query.
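One way to cap that on the client side, before the request ever reaches Solr,
is to trim the pasted text down to a fixed number of terms. A rough sketch
with SolrJ -- the 25-term cap and the truncateTerms helper are purely
illustrative, not an established API:

    import java.util.Arrays;
    import org.apache.solr.client.solrj.SolrQuery;

    public class CappedQuery {
        // Hypothetical helper: keep only the first maxTerms whitespace-separated terms.
        static String truncateTerms(String input, int maxTerms) {
            String[] terms = input.trim().split("\\s+");
            if (terms.length <= maxTerms) {
                return input;
            }
            return String.join(" ", Arrays.copyOfRange(terms, 0, maxTerms));
        }

        public static void main(String[] args) {
            String pastedParagraph = "whatever the user pasted into the search box";
            SolrQuery query = new SolrQuery();
            query.setQuery(truncateTerms(pastedParagraph, 25)); // 25 is an arbitrary cap
            query.setRows(10);
            System.out.println(query);  // send with SolrClient.query(...) as usual
        }
    }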