rebecca,
you probably need to dig into your queries, but if you want to force/preload
the index into memory you could try doing something like
cat `find /path/to/solr/index` > /dev/null
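a small variant of the same idea, in case the backtick form trips over directories or a very long argument list (the path is the same placeholder as above):
find /path/to/solr/index -type f -exec cat {} + > /dev/null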
if you haven't already reviewed the following, you might take a look here
https://wiki.apache.org/solr/SolrP
meant to type "JMX or sflow agent"
also should have mentioned you want to be running a very recent JDK
____
From: Boogie Shafer
Sent: Tuesday, February 24, 2015 18:03
To: solr-user@lucene.apache.org
Subject: Re: how to debug solr performance d
rebecca,
i would suggest making sure you have some gc logging configured so you have
some visibility into the JVM, esp if you don't already have JMX for sflow agent
configured to give you external visibility of those internal metrics
the options below just print out the gc activity to a log
-
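the option list itself was cut off in this excerpt; the same flags appear in full later in this thread, roughly (log path is a placeholder):
-Xloggc:/path/to/logs/gc.log
-verbose:gc
-XX:+PrintGCDateStamps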
In the abstract, it sounds like you are seeing the difference between tuning
for latency vs tuning for throughput
My hunch would be you are seeing more (albeit individually quicker) GC events
with your new settings during the rebuild
I imagine that in most cases a solr rebuild is relatively ra
i think we can agree that simply *knowing* when the OOM occurs is the minimal
requirement; triggering an alert (email, etc) would be the first thing to get
into your script
once you know when the OOM conditions are occurring you can start to get to the
root cause or remedy (ad
This will do:
> kill -9 `ps aux | grep -v grep | grep tomcat6 | awk '{print $2}'`
>
> pkill should also work
>
> On Tuesday 14 October 2014 07:02:03 Yago Riveiro wrote:
> > Boogie,
> >
> >
> >
> >
> > Any example for java_error.sh scri
a really simple approach is to have the OOM generate an email
e.g.
1) create a simple script (call it java_oom.sh) and drop it in your tomcat bin
dir
echo `date` | mail -s "Java Error: OutOfMemory - $HOSTNAME" not...@domain.com
2) configure your java options (in setenv.sh or similar) to tr
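step 2 was cut off in this snippet; presumably it means wiring the script into the standard hotspot hook, e.g. in setenv.sh (the script path here is an assumption):
JAVA_OPTS="$JAVA_OPTS -XX:OnOutOfMemoryError=/path/to/tomcat/bin/java_oom.sh"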
when you say performance is very poor, what is happening at the system level?
e.g.
are cpu's pegged out?
is there a lot of IO wait?
is the storage busy?
is the network busy?
some easy tools to watch this stuff live if you aren't sure and don't have
full-on system monitoring agents installed
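the tool list itself was truncated in this excerpt; the usual suspects for each of those questions would be something like the following (iostat and sar come from the sysstat package):
top              # overall cpu, load, per-process usage
vmstat 1         # run queue, io wait, swapping, sampled every second
iostat -x 1      # per-device utilization and latency
sar -n DEV 1     # network interface throughput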
you might want to take a look at the rpm building scripts i have here
https://github.com/boogieshafer/jetty-solr-rpm
gives an example of taking the included jetty and tweaking it in a few ways to
make it more production-ready by adding an init script, configuring JMX, tuning
logging and putting
you will probably also want to get some better visibility into what is going on
with your JVM and GC
the easiest way is to enable some GC logging options. the following additional
options will give you a good deal of information in the gc logs
-Xloggc:$JETTY_LOGS/gc.log
-verbose:gc
-XX:+PrintGCDateStamps
aman,
if you don't trust the tomcat bits repackaged by heliosearch, perhaps the best
step for you is to look at the heliosearch packaging and configs in a test
environment so you can diff out the deltas between how they set up tomcat to
work with solr and the regular distribution you mig
shawn,
as you discovered, the double logging problem is because additivity is enabled
by default and you have the solr events defined specifically AND in the root logger
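whichever logging framework is in play, the fix is the same idea; a sketch in log4j 1.x properties style (logger and appender names here are made up for illustration):
log4j.rootLogger=INFO, rootfile
log4j.logger.org.apache.solr=INFO, solrfile
# without the next line, solr events also bubble up to rootfile and get logged twice
log4j.additivity.org.apache.solr=false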
--
on your log4jv2 v logback investigation
logback IS "log4j done right" and the way to go. it's here today and works.
http://logback.
start solr
On 12/9/2013 10:29 AM, Boogie Shafer wrote:
> you may want to start by updating both your solr and JVM to more recent
> releases. looks like you are running solr 4.3.0 and java 6 u31 in your trace.
>
> i would suggest trying with solr 4.5.1 and java 7 u45.
There are bug
you may want to start by updating both your solr and JVM to more recent
releases. looks like you are running solr 4.3.0 and java 6 u31 in your trace.
i would suggest trying with solr 4.5.1 and java 7 u45.
From: Wukang Lin
Sent: Monday, December 09, 201
it's worth pointing out there are init scripts for jetty which can be pulled
from its regular distribution site and added to a solr installation with only
minor modifications
i do this with my rpm build process (i just pushed the updates for the 4.5.1
release)
https://github.com/boogieshafer/jetty-
some basic tips.
-try to create enough shards that you can get the size of each index portion on
the shard closer to the amount of RAM you have on each node (e.g. if you have a
~140GB index on 16GB nodes, try doing 12-16 shards; see the arithmetic sketch
after these tips)
-start with just the initial shards, add replicas later when you have
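to make the arithmetic in that first tip explicit: 140GB / 12 shards ≈ 11.7GB per shard, and 140GB / 16 shards ≈ 8.75GB per shard, either of which the OS page cache on a 16GB node can hold almost entirely.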
To: solr-user@lucene.apache.org
Subject: Re: Regarding monitoring the solr
On 8/19/2013 11:10 AM, Boogie Shafer wrote:
> the not often mentioned stats URL is another interface which you could scrape
> for stats (although i just noticed this url doesn't seem to work in my 4.4.0
> test environment
re: monitoring performance trends, a free option we use which is lightweight and
works well at collecting the general java stats info out of solr is the sflow
agent for java. in concert with a host sflowd setup you can gather the jvm and
system stats at decently dense intervals (default is 30s)
riptor_count4096
________
From: Boogie Shafer
Sent: Friday, August 16, 2013 14:26
To: solr-user@lucene.apache.org
Subject: RE: external zookeeper with SolrCloud
the mntr command can give that info if you hit the leader of the zk quorum
e.g. in the example for that command on the link you can see that it's a 5-member
zk ensemble (zk_followers 4) and that all followers are synced
(zk_synced_followers 4)
you would obviously need to query for the zk leader
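a sketch of what that looks like in practice (zk1 is a placeholder hostname; "stat" tells you whether the node you hit is currently the leader):
echo stat | nc zk1 2181 | grep Mode
echo mntr | nc zk1 2181 | grep -E 'zk_followers|zk_synced_followers'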
good stuff
here is a more recent version of the same resource as they have added a few new
commands in the recent releases of zookeeper
http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#sc_zkCommands
From: Walter Underwood
Sent: Friday, August
it's a little frustrating to see the smug responses to your query
and it's fair to say the solr security situation could be *improved*
this JIRA ticket is worth reading
https://issues.apache.org/jira/browse/SOLR-4470
in short
-it is possible to restrict access to solr nodes using connection filte
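the rest of that list is cut off; as one concrete flavor of the connection-filtering idea, plain IP filtering in front of the solr port could look like this (port and subnet are assumptions):
iptables -A INPUT -p tcp --dport 8983 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8983 -j DROP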
there is an rpm build framework for building a jetty powered solr rpm here if
you are interested
https://github.com/boogieshafer/jetty-solr-rpm
it's currently set up for solr 4.3.0 + the built-in jetty example + jetty start
script and configs + jmx + logging via the logback framework
edit the build script
Naming appender as [FILE]
14:11:15,774 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - Will use
gz compression
On Mon, May 20, 2013 at 10:11 AM, Shawn Heisey wrote:
> On 5/20/2013 10:44 AM, Boogie Shafer wrote:
>
>> BUT i haven't figured out what i need to do to get the logging eve
i have logging working for the most part with logback 1.0.13 and slf4j
1.7.5 under solr 4.3.0 (or previously under solr 4.2.1)
with two exceptions, i'm very happy with the setup as i can get all the
jetty request logs, and various solr service events logged out with
rotation, etc
BUT i haven't fig
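the rest of that message is cut off, but for reference, a minimal sketch of the kind of logback rolling appender such a setup typically uses (file names, pattern and retention are assumptions, not the exact config from that message):
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>logs/solr.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{HH:mm:ss,SSS} %-5level [%thread] %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
<root level="INFO">
  <appender-ref ref="FILE" />
</root>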
i think the use case here is more of a management one.
-wanting to explicitly configure a specific node as leader (the
reasons for this could vary)
-wanting to gracefully/safely move a leader role from a specific node
without going thru an actual election process (as was mentioned
previously, why