Thanks for that. I'm sorry this isn't really Solr-related, but how can I
monitor swapping if I can't rely on the output of the free command?

Do you think I could still achieve any significant improvements by going
through the performance tuning advice on the wiki? 

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley
Sent: 11 January 2007 20:32
To: solr-user@lucene.apache.org
Subject: Re: Performance tuning

On 1/11/07, Stephanie Belton <[EMAIL PROTECTED]> wrote:
> This is the output of the free command:
>
> [EMAIL PROTECTED] root2]# free -m
>              total       used       free     shared    buffers     cached
> Mem:          2007       1888        119          0         86        814
> -/+ buffers/cache:        986       1020
> Swap:         1992        207       1784
>
> We normally have no swapping at all on this server and since last night
> (when Solr was deployed on the site) it's been going up.

That may be fine... swap in use != swapping.
The OS may be swapping out some processes that haven't been used in a
long time to free up more memory for disk cache (notice 814M cached).
This is a good thing.
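To tell swap *usage* apart from active swapping, one common approach (not mentioned in the thread itself) is to watch the `si`/`so` columns of `vmstat`, which report pages swapped in from and out to disk per interval; sustained nonzero values there mean real swapping, while a static "used" figure in `free` usually just means idle pages were parked. A minimal sketch; the helper function is hypothetical and only illustrates the rule of thumb:

```shell
# Watch swap activity every 5 seconds; the "si"/"so" columns report
# KB swapped in / out per second. Sustained nonzero values = real swapping.
#   vmstat 5
# A tiny illustrative helper classifying a single si/so sample:
check_swapping() {
  si=$1
  so=$2
  if [ "$si" -gt 0 ] || [ "$so" -gt 0 ]; then
    echo "swapping"
  else
    echo "not swapping"
  fi
}
check_swapping 0 0    # prints "not swapping"
```

With that in place, swap "used" sitting at ~207M while `si`/`so` stay at zero would match Yonik's reading: pages were swapped out once and simply never needed again.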

> Here is an extract of the top command output sorted by memory usage, does
> each of the processes really take up 566M???

No, older versions of Linux show each thread as a separate process.
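This is easy to sanity-check: every one of those per-thread lines reports the full size of the single shared JVM address space, so summing the RSS column massively overcounts. A rough illustration using made-up sample data shaped like the `top` listing quoted below (PID, RSS in MB):

```shell
# Each LinuxThreads "process" line reports the full RSS of the shared
# JVM address space; the real footprint is one copy, not the sum.
printf '12201 561\n12202 561\n12203 561\n' |
  awk '{ n++; rss = $2 } END { printf "%d thread entries, actual RSS ~%dMB (not %dMB)\n", n, rss, n * rss }'
```

So the box holds roughly one ~561M Java process, not two dozen of them.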

> CPU usage is low because we are outside of peak time, but during the day
> it's at 40% when it used to be just 20%:

Full-text search is CPU intensive.  An average peak of 40% seems
acceptable.  If the load gets too high, you can scale out by adding
multiple servers behind a load balancer.

-Yonik

> 20:14:16  up 45 days, 21:47,  1 user,  load average: 1.06, 1.14, 1.11
> 167 processes: 166 sleeping, 1 running, 0 zombie, 0 stopped
> CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
>            total    8.8%    0.0%    0.3%   0.1%     0.2%    6.9%   83.2%
>            cpu00    7.9%    0.0%    0.3%   0.7%     0.9%    6.9%   82.8%
>            cpu01    8.5%    0.0%    0.3%   0.0%     0.0%    6.9%   84.0%
>            cpu02    9.9%    0.0%    0.1%   0.0%     0.0%    6.9%   82.8%
>            cpu03    9.0%    0.0%    0.6%   0.0%     0.2%    7.0%   83.2%
> Mem:  2055300k av, 1914588k used,  140712k free,       0k shrd,   89032k buff
>                    1326540k actv,  301236k in_d,   30788k in_c
> Swap: 2040244k av,  212948k used, 1827296k free,   843380k cached
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
> 12201 root      15   0  566M 561M 13276 S     0.0 27.9   0:02   0 java
> 12203 root      15   0  566M 561M 13276 S     0.0 27.9   4:48   2 java
> 12204 root      16   0  566M 561M 13276 S     0.0 27.9   4:45   1 java
> 12205 root      15   0  566M 561M 13276 S     0.0 27.9   4:45   0 java
> 12206 root      15   0  566M 561M 13276 S     0.0 27.9   4:46   2 java
> 12207 root      15   0  566M 561M 13276 S     0.0 27.9   8:35   2 java
> 12208 root      16   0  566M 561M 13276 S     0.0 27.9  15:53   1 java
> 12209 root      16   0  566M 561M 13276 S     0.0 27.9  27:30   1 java
> 12210 root      21   0  566M 561M 13276 S     0.0 27.9   0:00   1 java
> 12211 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   0 java
> 12212 root      15   0  566M 561M 13276 S     0.0 27.9   0:17   1 java
> 12213 root      15   0  566M 561M 13276 S     0.0 27.9   0:15   2 java
> 12214 root      21   0  566M 561M 13276 S     0.0 27.9   0:00   3 java
> 12215 root      15   0  566M 561M 13276 S     0.0 27.9   0:33   2 java
> 12217 root      21   0  566M 561M 13276 S     0.0 27.9   0:00   3 java
> 12218 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   2 java
> 12219 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   1 java
> 12220 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   2 java
> 12221 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   0 java
> 12222 root      25   0  566M 561M 13276 S     0.0 27.9 297:21   2 java
> 12223 root      15   0  566M 561M 13276 S     0.0 27.9   0:13   3 java
> 12224 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   0 java
> 12225 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   3 java
> 12226 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   2 java
> 12227 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   1 java
> 12228 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   0 java
> 12229 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   1 java
> 12230 root      15   0  566M 561M 13276 S     0.0 27.9   0:00   1 java
> Etc...
>
> On the server we also have a website running under mod_perl; it's been
> running for a year, and until now CPU usage peaked at 20% and memory at
> around 28%, with no swapping.
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley
> Sent: 11 January 2007 15:12
> To: solr-user@lucene.apache.org
> Subject: Re: Performance tuning
>
> On 1/11/07, Stephanie Belton <[EMAIL PROTECTED]> wrote:
>
> > Solr is now up and running on our production environment and working
> > great. However it is taking up a lot of extra CPU and memory (CPU usage
> > has doubled and memory is swapping). Is there any documentation on
> > performance tuning? There seems to be a lot of useful info in the server
> > output but I don't understand it.
>
> Swapping if it's constant isn't good...  How much memory does this box
> have, and what is the heap size of the JVM?  Are there other things
> running on this box?
>
> Solr does warming of caches by default to make complex queries that
> hit a new snapshot of the index fast.  This takes up CPU in bursts,
> but is normally nothing to worry about unless you have other apps
> running on the same box that need CPU.  Because of this warming, CPU
> usage of a Solr collection isn't directly related to query traffic at
> all times.
>
>
> -Yonik
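One way to answer Yonik's heap-size question above is to look at the JVM launch flags for an explicit `-Xmx` cap; if none is set, the HotSpot default maximum heap of that era is quite small, and an explicit cap helps keep Solr's heap plus mod_perl plus the OS page cache inside the 2GB of RAM. A sketch; the launch line and flag values are hypothetical, not advice from the thread:

```shell
# A hypothetical Solr launch line; pull out the -Xmx cap to compare
# against physical RAM (the 256m/512m values here are illustrative).
cmd='java -Xms256m -Xmx512m -jar start.jar'
xmx=$(echo "$cmd" | sed -n 's/.*-Xmx\([0-9]*[mMgG]\).*/\1/p')
echo "max heap: $xmx"   # prints "max heap: 512m"
```

In practice this means running something like `ps -ef | grep java` on the box and checking whether an `-Xmx` appears at all.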

