On Mon, Oct 29, 2012 at 7:04 AM, Shawn Heisey wrote:
> They are indeed Java options. The first two control the maximum and
> starting heap sizes. NewRatio controls the relative size of the young and
> old generations, making the young generation considerably larger than it is
> by default. The ...
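Shawn's exact flag values are cut off above, but the description (maximum heap, starting heap, NewRatio) maps onto standard HotSpot options. A minimal sketch of such a start command, with example sizes that are assumptions for illustration rather than the values from the truncated message:

```shell
# Illustrative JVM options for starting Solr; heap sizes are placeholders.
HEAP_OPTS="-Xmx4096M -Xms4096M"   # maximum and starting heap sizes
GC_OPTS="-XX:NewRatio=1"          # young generation gets half the heap (default NewRatio is larger, so young gen is smaller)
echo "java $HEAP_OPTS $GC_OPTS -jar start.jar"
```

Tune the actual numbers to your machine and index size; the point is only which knobs the reply is referring to.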
On 10/28/2012 2:28 PM, Dotan Cohen wrote:
On Fri, Oct 26, 2012 at 11:04 PM, Shawn Heisey wrote:
Warming doesn't seem to be a problem here -- all your warm times are zero,
so I am going to take a guess that it may be a heap/GC issue. I would
recommend starting with the following additional arguments to your JVM.
On Fri, Oct 26, 2012 at 11:04 PM, Shawn Heisey wrote:
> Warming doesn't seem to be a problem here -- all your warm times are zero,
> so I am going to take a guess that it may be a heap/GC issue. I would
> recommend starting with the following additional arguments to your JVM.
> Since I have no idea ...
On 10/26/2012 9:41 AM, Dotan Cohen wrote:
On the dashboard of the GUI, it lists all the JVM arguments. Include those.
Click Java Properties and gather the "java.runtime.version" and
"java.specification.vendor" information.
After one of the long update times, pause/stop your indexing application ...
On Fri, Oct 26, 2012 at 4:02 PM, Shawn Heisey wrote:
>
> Taking all the information I've seen so far, my bet is on either cache
> warming or heap/GC trouble as the source of your problem. It's now specific
> information gathering time. Can you gather all the following information
> and put it in ...
On 10/26/2012 7:16 AM, Dotan Cohen wrote:
I spoke too soon! Whereas three days ago when the index was new, 500
records could be written to it in <3 seconds, now that operation is
taking a minute and a half, sometimes longer. I ran optimize() but
that did not help the writes. What can I do to improve the write performance?
I spoke too soon! Whereas three days ago when the index was new, 500
records could be written to it in <3 seconds, now that operation is
taking a minute and a half, sometimes longer. I ran optimize() but
that did not help the writes. What can I do to improve the write
performance?
Even opening the L...
On Wed, Oct 24, 2012 at 4:33 PM, Walter Underwood wrote:
> Please consider never running "optimize". That should be called "force merge".
>
Thanks. I have been letting the system run for about two days already
without an optimize. I will let it run a week, then merge to see the
effect.
--
Dotan
Please consider never running "optimize". That should be called "force merge".
wunder
On Oct 24, 2012, at 3:28 AM, Dotan Cohen wrote:
> On Tue, Oct 23, 2012 at 3:07 PM, Erick Erickson
> wrote:
>> Maybe you've been looking at it but one thing that I didn't see on a fast
>> scan was that maybe the commit bit is the problem.
On Tue, Oct 23, 2012 at 3:07 PM, Erick Erickson wrote:
> Maybe you've been looking at it but one thing that I didn't see on a fast
> scan was that maybe the commit bit is the problem. When you commit,
> eventually the segments will be merged and a new searcher will be opened
> (this is true even if you're NOT optimizing). ...
Maybe you've been looking at it but one thing that I didn't see on a fast
scan was that maybe the commit bit is the problem. When you commit,
eventually the segments will be merged and a new searcher will be opened
(this is true even if you're NOT optimizing). So you're effectively committing
every ...
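One way to commit less often than once per batch, in line with Erick's point about each commit eventually triggering merges and a new searcher, is Solr's commitWithin parameter on the update request. A sketch; the host, core name, and file are placeholders:

```shell
# Sketch: let Solr fold the hard commit into a 60-second window instead of
# issuing an explicit commit after every 50-document batch.
UPDATE_URL="http://localhost:8983/solr/collection1/update?commitWithin=60000"
echo "curl '$UPDATE_URL' -H 'Content-Type: application/json' --data-binary @batch.json"
```

With this, several batches sent inside the window share one commit, so searchers are reopened far less often.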
On Tue, Oct 23, 2012 at 3:52 AM, Shawn Heisey wrote:
> As soon as you make any change at all to an index, it's no longer
> "optimized." Delete one document, add one document, anything. Most of the
> time you will not see a performance increase from optimizing an index that
> consists of one large ...
On 10/22/2012 3:11 PM, Dotan Cohen wrote:
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wrote:
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
On Mon, Oct 22, 2012 at 10:44 PM, Walter Underwood
wrote:
> Lucene already did that:
>
> https://issues.apache.org/jira/browse/LUCENE-3454
>
> Here is the Solr issue:
>
> https://issues.apache.org/jira/browse/SOLR-3141
>
> People over-use this regardless of the name. In Ultraseek Server, it was
> called "force merge" and we had to tell people to stop doing that nearly e...
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wrote:
> First, stop optimizing. You do not need to manually force merges. The system
> does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
> might be the cause of your problem.
>
Thanks. Looking at the index statistics ...
On Mon, Oct 22, 2012 at 4:39 PM, Michael Della Bitta
wrote:
> Has the Solr team considered renaming the optimize function to avoid
> leading people down the path of this antipattern?
If it were never the right thing to do, it could simply be removed.
The problem is that it's sometimes the right thing to do ...
Lucene already did that:
https://issues.apache.org/jira/browse/LUCENE-3454
Here is the Solr issue:
https://issues.apache.org/jira/browse/SOLR-3141
People over-use this regardless of the name. In Ultraseek Server, it was called
"force merge" and we had to tell people to stop doing that nearly e...
Has the Solr team considered renaming the optimize function to avoid
leading people down the path of this antipattern?
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn't a ...
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
Second, the OS will use the "extra" memory for file buffers, which really helps
performance, so you might ...
On Mon, Oct 22, 2012 at 9:22 PM, Mark Miller wrote:
> Perhaps you can grab a snapshot of the stack traces when the 60 second
> delay is occurring?
>
> You can get the stack traces right in the admin ui, or you can use
> another tool (jconsole, visualvm, jstack cmd line, etc)
>
Thanks. I've refactored ...
Perhaps you can grab a snapshot of the stack traces when the 60 second
delay is occurring?
You can get the stack traces right in the admin ui, or you can use
another tool (jconsole, visualvm, jstack cmd line, etc)
- Mark
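Mark's suggestion can be scripted from the command line. This sketch only prints the commands rather than running them; `pgrep -f start.jar` assumes Solr was launched via Jetty's start.jar, and the JDK's jstack is assumed to be on PATH:

```shell
# Sketch: print commands that take three stack dumps a few seconds apart,
# so a thread that stays stuck across dumps stands out.
DUMPS=3
for i in $(seq 1 $DUMPS); do
  echo "jstack -l \$(pgrep -f start.jar) > stacks-$i.txt && sleep 5"
done
```

Comparing the dumps taken during a slow commit usually shows which thread (merging, warming, GC) is holding things up.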
On Mon, Oct 22, 2012 at 1:47 PM, Dotan Cohen wrote:
> On Mon, Oct 22, 2012 ...
On Mon, Oct 22, 2012 at 7:29 PM, Shawn Heisey wrote:
> On 10/22/2012 9:58 AM, Dotan Cohen wrote:
>>
>> Thank you, I have gone over the Solr admin panel twice and I cannot find
>> the cache statistics. Where are they?
>
>
> If you are running Solr4, you can see individual cache autowarming times
> here, assuming your core is named collection1 ...
On 10/22/2012 9:58 AM, Dotan Cohen wrote:
Thank you, I have gone over the Solr admin panel twice and I cannot
find the cache statistics. Where are they?
If you are running Solr4, you can see individual cache autowarming times
here, assuming your core is named collection1:
http://server:port/...
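The same cache statistics, including warmupTime, are also exposed over HTTP by the core's mbeans handler, which can be easier than clicking through the admin UI. Host, port, and core name below are placeholders:

```shell
# Sketch: fetch cache stats as JSON and look for warmupTime under each cache.
STATS_URL="http://localhost:8983/solr/collection1/admin/mbeans?stats=true&cat=CACHE&wt=json"
echo "curl '$STATS_URL'"
```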
On Mon, Oct 22, 2012 at 5:27 PM, Mark Miller wrote:
> Are you using Solr 3X? The occasional long commit should no longer
> show up in Solr 4.
>
Thank you Mark. In fact, this is the production release of Solr 4.
--
Dotan Cohen
http://gibberish.co.il
http://what-is-what.com
On Mon, Oct 22, 2012 at 5:02 PM, Rafał Kuć wrote:
> Hello!
>
> You can check if the long warming is causing the overlapping
> searchers. Check Solr admin panel and look at cache statistics, there
> should be warmupTime property.
>
Thank you, I have gone over the Solr admin panel twice and I cannot
find the cache statistics. Where are they?
Are you using Solr 3X? The occasional long commit should no longer
show up in Solr 4.
- Mark
On Mon, Oct 22, 2012 at 10:44 AM, Dotan Cohen wrote:
> I've got a script writing ~50 documents to Solr at a time, then
> committing. Each of these documents is no longer than 1 KiB of text,
> some much less ...
Hello!
You can check if the long warming is causing the overlapping
searchers. Check Solr admin panel and look at cache statistics, there
should be warmupTime property.
Lowering the autowarmCount should lower the time needed to warm up,
however you can also look at your warming queries (if you have ...
When Solr is slow, I'm seeing these in the logs:
[collection1] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
[collection1] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
Googling, I found this in the FAQ:
"Typically the way to avoid this error is to either ...
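The knobs this FAQ entry and the earlier replies refer to live in the <query> section of solrconfig.xml. A sketch with illustrative values, not a recommendation:

```xml
<query>
  <!-- Smaller autowarmCount means shorter warmup after each commit. -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="64"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
  <!-- Raising this only masks the symptom; committing less often is the usual fix. -->
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
```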