Shalin Shekhar Mangar wrote on 02/25/2010 07:38:39 AM:
> On Thu, Feb 25, 2010 at 5:34 PM, gunjan_versata wrote:
>
> >
> > We are using SolrJ to handle commits to our solr server.. All runs fine..
> > But whenever the commit happens, the server becomes slow and stops
> > responding.. thereby result
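A common mitigation for commit-time slowdowns is to batch adds and commit
once per batch rather than per document. A minimal SolrJ sketch of that
pattern follows (the URL, field names, and batch size are illustrative, not
from this thread; it assumes the SolrJ 1.3/1.4-era API):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchedIndexer {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; point this at your own Solr instance.
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Buffer many adds, then issue a single commit: every commit flushes
        // the in-memory index to disk and warms a new searcher, so committing
        // per document multiplies that cost.
        for (int i = 0; i < 1000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            doc.addField("text", "example body " + i);
            server.add(doc);
        }
        server.commit(); // one flush + one new searcher for the whole batch
    }
}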
Otis Gospodnetic wrote on 01/22/2010 12:20:45 AM:
> I'm missing the bigger context of this thread here, but from the
> snippet below - sure, commits cause in-memory index to get written
> to disk, that causes some IO, and that *could* affect search *if*
> queries are running on the same box. Wh
ysee...@gmail.com wrote on 01/20/2010 02:24:04 PM:
> On Wed, Jan 20, 2010 at 2:18 PM, Jerome L Quinn wrote:
> > This is essentially the same problem I'm fighting with. Once in a while,
> > commit causes everything to freeze, causing add commands to time out.
>
ysee...@gmail.com wrote on 01/19/2010 06:05:45 PM:
> On Tue, Jan 19, 2010 at 5:57 PM, Steve Conover wrote:
> > I'm using latest solr 1.4 with java 1.6 on linux. I have a 3M
> > document index that's 10+GB. We currently give solr 12GB of ram to
> > play in and our machine has 32GB total.
> >
> >
Lance Norskog wrote on 01/16/2010 12:43:09 AM:
> If your indexing software does not have the ability to retry after a
> failure, you might wish to change the timeout from 20 seconds to, say,
> 5 minutes.
I can make it retry, but I have somewhat real-time processes doing these
updates. Does an
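For what it's worth, here is a sketch of that retry-with-backoff pattern in
SolrJ, bounded so a near-real-time caller can still give up (the URL,
attempt count, and backoff values are illustrative assumptions; the
setSoTimeout() call follows Lance's 5-minute suggestion):

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class RetryingAdd {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
        server.setSoTimeout(300000); // 5 minutes, per the suggestion above

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");

        long backoff = 1000; // start at 1s, double after each failure
        for (int attempt = 1; attempt <= 5; attempt++) {
            try {
                server.add(doc);
                return; // success
            } catch (SolrServerException e) {
                // server busy (e.g. mid-commit); pause, then retry
            } catch (IOException e) {
                // transient network error; pause, then retry
            }
            Thread.sleep(backoff);
            backoff *= 2;
        }
        throw new IOException("add failed after 5 attempts");
    }
}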
Otis Gospodnetic wrote on 01/14/2010 10:07:15 PM:
> See those "waitFlush=true,waitSearcher=true" ? Do things improve if
> you make them false? (not sure how with autocommit without looking
> at the config and not sure if this makes a difference when
> autocommit triggers commits)
Looking at Dir
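For reference, SolrJ exposes those same two flags as arguments to commit();
a minimal sketch (hypothetical URL):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class NonBlockingCommit {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // commit(waitFlush, waitSearcher): with both false the client call
        // returns without waiting for the flush to finish or the new
        // searcher to warm. The server still does the same work, so this
        // unblocks clients rather than reducing server-side load.
        server.commit(false, false);
    }
}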
Hi, folks,
I am using Solr 1.3 pretty successfully, but am running into an issue that
hits once in a long while. I'm still using 1.3 since I have some custom
code I will have to port forward to 1.4.
My basic setup is that I have data sources continually pushing data into
Solr, around 20K adds
Otis Gospodnetic wrote on 11/13/2009 11:15:43 PM:
> Let's take a step back. Why do you need to optimize? You said: "As
> long as I'm not optimizing, search and indexing times are satisfactory." :)
>
> You don't need to optimize just because you are continuously adding
> and deleting documents
Lance Norskog wrote on 11/13/2009 11:18:42 PM:
> The 'maxSegments' feature is new with 1.4. I'm not sure that it will
> cause any less disk I/O during optimize.
It could still be useful to manage the "too many open files" problem that
rears its ugly head on occasion.
> The 'mergeFactor=2' id
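If it helps anyone reading along: SolrJ 1.4 appears to expose maxSegments as
a third argument to optimize(). A sketch, assuming that overload (the URL
and segment count are illustrative):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class PartialOptimize {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // optimize(waitFlush, waitSearcher, maxSegments): merging down to a
        // handful of segments keeps the open-file count in check without
        // the full single-segment rewrite of a complete optimize.
        server.optimize(true, true, 4);
    }
}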
ysee...@gmail.com wrote on 11/13/2009 09:06:29 AM:
> On Fri, Nov 13, 2009 at 6:27 AM, Michael McCandless wrote:
> > I think we sorely need a Directory impl that down-prioritizes IO
> > performed by merging.
>
> It's unclear if this case is caused by IO contention, or the OS cache
> of the hot p
Mark Miller wrote on 11/12/2009 07:18:03 PM:
> Ah, the pains of optimization. It's kind of just how it is. One solution
> is to use two boxes and replication - optimize on the master, and then
> queries only hit the slave. Out of reach for some though, and adds many
> complications.
Yes, in my us
Hi, everyone, this is a problem I've had for quite a while,
and have basically avoided optimizing because of it. However,
eventually we will get to the point where we must delete as
well as add docs continuously.
I have a Solr 1.3 index with ~4M docs at around 90G. This is a single
instance run
Mark Miller wrote on 01/26/2009 04:30:00 PM:
> Just a point or two I missed: with such a large index (not doc size large,
> but content wise), I imagine a lot of your 16GB of RAM is being used by
> the system disk cache - which is good. Another reason you don't want to
> give too much RAM to the JVM.
"Lance Norskog" wrote on 01/20/2009 02:16:47 AM:
> "Lance Norskog"
> 01/20/2009 02:16 AM
> Java 1.5 has thread-locking bugs. Switching to Java 1.6 may cure this
> problem.
Thanks for taking time to look at the problem. Unfortunately, this is
happening on Java 1.6, so I can't put the blame there.
I suspect I'll add a watchdog, no matter what's causing the problem here.
> However, you should figure out why you are running out of memory. You
> don't want to use more resources than you have available if you can help it.
Definitely. That's on the agenda :-)
Thanks,
Julian Davchev wrote on 01/20/2009 10:07:48 AM:
>
> I get SEVERE: Lock obtain timed out
>
> Hi,
> Any documents or something I can read on how locks work and how I can
> control them. When do they occur, etc.?
> Cause only way I got out of this mess was rest
Hi, all.
I'm running solr 1.3 inside Tomcat 6.0.18. I'm running a modified query
parser, tokenizer, highlighter, and have a CustomScoreQuery for dates.
After some amount of time, I see solr stop responding to update requests.
When crawling through the logs, I see the following pattern:
Jan 12,
Hi, all. Are there any plans for putting together a bugfix release? I'm
not looking for particular bugs, but would like to know if bug fixes are
only going to be done mixed in with new features.
Thanks,
Jerry Quinn