Steve Rowe [sar...@gmail.com] wrote:
> 1 Lakh (aka Lac) = 10^5 is written as 1,00,000
>
> It’s used in Bangladesh, India, Myanmar, Nepal, Pakistan, and Sri Lanka,
> roughly 1/4 of the world’s population.
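The convention quoted above groups the last three digits and then pairs of digits after that. A hypothetical helper (not from the thread) makes the rule concrete:

```java
public class LakhFormat {
    // Format a non-negative number with Indian digit grouping:
    // last three digits together, then groups of two
    // (e.g. 200000 -> "2,00,000").
    static String lakh(long n) {
        String s = Long.toString(n);
        if (s.length() <= 3) return s;
        StringBuilder out = new StringBuilder(s.substring(s.length() - 3));
        String head = s.substring(0, s.length() - 3);
        // Walk the remaining digits right-to-left, inserting a comma every two.
        for (int i = head.length(); i > 0; i -= 2) {
            int start = Math.max(0, i - 2);
            out.insert(0, head.substring(start, i) + ",");
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(lakh(100000));   // 1,00,000 (one lakh)
        System.out.println(lakh(200000));   // 2,00,000
    }
}
```

So the 2,00,000 documents in the original message are two lakh, i.e. 200,000 in Western grouping.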
Yet it still causes confusion and distracts from the issue. Let's just stick to
metric, okay?
On Jul 25, 2014, at 9:13 AM, Shawn Heisey wrote:
> On 7/24/2014 7:53 AM, Ameya Aware wrote:
> The odd location of the commas at the start of this thread makes it hard
> to understand exactly what numbers you were trying to say.
On Jul 24, 2014, at 9:32 AM, Ameya Aware wrote:
> I am in the process of indexing around 2,00,000 documents.
On 7/24/2014 7:53 AM, Ameya Aware wrote:
> I did not make any other change than this... the rest of the settings are
> default.
>
> Do I need to set a garbage collection strategy?
The collector chosen and its tuning params can have a massive impact
on performance, but it will make no difference at all to an OutOfMemoryError.
On Thu, Jul 24, 2014 at 9:58 AM, Marcello Lorenzi
<mlore...@sorint.it>
wrote:
I think that with a large heap it is suggested to monitor the garbage
collection behavior and try to adopt a strategy adapted to your performance.
> On Thu, Jul 24, 2014 at 9:58 AM, Marcello Lorenzi
> wrote:
>
>> I think that with a large heap it is suggested to monitor the garbage
>> collection behavior and try to adopt a strategy adapted to your performance.
ooh ok.
So you want to say that since I am using a large heap but didn't set my
garbage collection, that's why I am getting the java heap space error?
Hi
I am in the process of indexing around 2,00,000 documents.
I have increased the java heap space to 4 GB using the below command:
java -Xmx4096M -Xms4096M -jar start.jar
Still, after indexing around 15000 documents it gives a java heap space error
again.
Any fix for this?
Thanks,
Ameya
You may want to change your solr startup script such that it creates a
heap dump on OOM. Add -XX:+HeapDumpOnOutOfMemoryError as an option.
The heap dump can be nicely analyzed with http://www.eclipse.org/mat/.
Just increasing -Xmx is a workaround that may help to get around it for a
while.
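Putting the flags from this thread together, the start command from the original message would become something like the following (the dump path is illustrative; by default the dump is written to the JVM's working directory):

```
java -Xmx4096M -Xms4096M \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/solr \
     -jar start.jar
```

The resulting `.hprof` file can then be opened in MAT as described above.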
Hello!
Yes, just edit your Jetty configuration file and add -Xmx and -Xms
parameters. For example, the file you may be looking for is
/etc/default/jetty.
--
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
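For the Debian-style Jetty packaging Rafał mentions, the heap flags typically go into a JAVA_OPTIONS line; the exact variable name varies by distribution and Jetty version, so treat this as a sketch:

```
# /etc/default/jetty -- variable name may differ per packaging
JAVA_OPTIONS="-Xmx4096m -Xms4096m"
```

After editing, restart Jetty and check the `ps` output for the java process to confirm the flags were actually applied.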
So can I get over this exception by increasing the heap size somewhere?
Thanks,
Ameya
On Tue, Jul 22, 2014 at 2:00 PM, Shawn Heisey wrote:
> On 7/22/2014 11:37 AM, Ameya Aware wrote:
> > I am running into a java heap space issue. Please see the log below.
>
> All we have here is an out of memory exception.
On 7/22/2014 11:37 AM, Ameya Aware wrote:
> I am running into a java heap space issue. Please see the log below.
All we have here is an out of memory exception. It is impossible to
know *why* you are out of memory from the exception. With enough
investigation, we could determine the area of code where the error occurred.
Hi
I am running into a java heap space issue. Please see the log below.
ERROR - 2014-07-22 11:38:59.370; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at
org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:790)
I'm embarrassed (but hugely relieved) to say that the script I had for
starting Jetty had a bug in the way it set java options! So, my heap
start/max was always set at the default. I did end up using jconsole and
learned quite a bit from that too.
Thanks for your help Yonik :)
Matt
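A quick way to catch the bug Matt describes (a startup script silently dropping the java options) is to ask the JVM what it actually got. A minimal sketch:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reflects the effective -Xmx; if your script's flags
        // were dropped, this reports the JVM default instead of ~4096 MB.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Effective max heap: " + maxMb + " MB");
    }
}
```

jconsole (as Matt used) or `jinfo -flag MaxHeapSize <pid>` will show the same information for an already-running process.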
On Sat, Jan 16, 2010 at 11:04 AM, Matt Mitchell wrote:
> These are single valued fields. Strings and integers. Is there more specific
> info I could post to help diagnose what might be happening?
Faceting on either should currently take ~24MB (6M docs @ 4 bytes per
doc + size_of_unique_values)
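Yonik's ~24MB figure is essentially one 4-byte ordinal per document; a back-of-envelope check (assuming the field-cache style faceting he is describing):

```java
public class FacetMemEstimate {
    public static void main(String[] args) {
        long docs = 6_000_000L;   // documents in Matt's index
        long bytesPerDoc = 4L;    // one int ord per document
        long bytes = docs * bytesPerDoc;
        // 24,000,000 bytes ~= 24 MB (decimal), plus the unique values themselves
        System.out.println(bytes + " bytes");
    }
}
```

With only ~20 unique values in the field, the size_of_unique_values term is negligible, so the estimate is dominated by the per-document array.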
These are single valued fields. Strings and integers. Is there more specific
info I could post to help diagnose what might be happening?
Thanks!
Matt
On Sat, Jan 16, 2010 at 10:42 AM, Yonik Seeley
wrote:
> On Sat, Jan 16, 2010 at 10:01 AM, Matt Mitchell
> wrote:
> > I have an index with more than 6 million docs.
On Sat, Jan 16, 2010 at 10:01 AM, Matt Mitchell wrote:
> I have an index with more than 6 million docs. All is well, until I turn on
> faceting and specify a facet.field. There are only about 20 unique values for
> this particular facet throughout the entire index.
Hmmm, that doesn't sound right..
I have an index with more than 6 million docs. All is well, until I turn on
faceting and specify a facet.field. There are only about 20 unique values for
this particular facet throughout the entire index. I was able to make things
a little better by using facet.method=enum. That seems to work.