Ackermann
>
>
> James Brady wrote:
>
>> Yeah I was thinking T would be SolrRequestHandler too. Eclipse's debugger
>> can't tell me...
>>
>
> You could try disassembling, or Eclipse can open classes in a very rudimentary
> view when there is no source attached.
Yeah I was thinking T would be SolrRequestHandler too. Eclipse's debugger
can't tell me...
Lots of other handlers are created with no problem before my plugin falls
over, so I don't think it's a problem with T not being what we expected.
Do you know of any working examples of plugins I can downl
2. copy that class file's directory tree
(com/jmsbrdy/LiveCoresHandler.class) to a "lib" in the root of my jetty
install
3. add "lib" to Jetty's class path
4. add the Solr JARs from the exploded war to Jetty's class path
5. start the server
Can you see any p
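
In the meantime, a quick client-side smoke test shows whether the handler
is being reached at all - a minimal sketch, assuming Jetty on
localhost:8983 and the handler registered as "/livecores" (both
assumptions):

    # smoke_test.py - check whether the custom handler responds at all
    import urllib2

    try:
        print urllib2.urlopen('http://localhost:8983/solr/livecores').read()[:500]
    except urllib2.HTTPError, e:
        # A 500 whose body mentions ClassCastException points at a
        # classloader or interface problem, not at the handler mapping.
        print 'HTTP %d: %s' % (e.code, e.read()[:500])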
rth trying to put it in
> the WEB-INF/lib of the solr.war
>
>
> On Tue, Aug 4, 2009 at 5:35 PM, James Brady
> wrote:
> > Hi, the LiveCoresHandler is in the default package - the behaviour's the
> > same if I have it in a properly namespaced package too...
> >
Noble Paul നോബിള് नोब्ळ्
> what is the package of LiveCoresHandler ?
> I guess the requestHandler name should be name="/livecores"
>
> On Tue, Aug 4, 2009 at 5:04 PM, James Brady
> wrote:
> > Solr version: 1.3.0 694707
> >
> > solrconfig.xml:
> >
t pieces in your solrconfig.xml and the
> request handler class you have created?
>
> Cheers
> Avlesh
>
> On Mon, Aug 3, 2009 at 10:51 PM, James Brady wrote:
>
> > Hi,
> > Thanks for your suggestions!
> >
> > I'm sure I have the class name right -
> > "solr.LiveCoresHandler".
>
> It should be the fully qualified class name -
> com.foo.path.to.LiveCoresHandler instead.
>
> Moreover, I am damn sure that you did not forget to drop your jar into
> solr.home/lib. Checking once again might not be a bad idea :)
>
> Cheers
> Av
Hi,
I'm creating a custom request handler to return a list of live cores in
Solr.
On startup, I get this exception for each core:
Jul 31, 2009 5:20:39 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.ClassCastException: LiveCoresHandler
at
org.apache.solr.core.RequestHandler
RA issue requesting the feature for future versions
> -
> but that's prob the best solution for 1.3 - I'm not seeing the functionality
> there.
>
> --
> - Mark
>
> http://www.lucidimagination.com
>
> On Sat, Jul 18, 2009 at 9:02 PM, James Brady wrote:
>
> > The So
The Solr application I'm working on has many concurrently active cores - of
the order of 1000s at a time.
The management application depends on being able to query Solr for the
current set of live cores, a requirement I've been satisfying using the
STATUS core admin handler method.
However, once
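
For reference, the STATUS approach amounts to something like this - a
sketch, with the host, port and admin path as assumptions:

    # list_cores.py - enumerate live cores via the CoreAdmin STATUS action
    import urllib2
    from xml.etree import ElementTree

    url = 'http://localhost:8983/solr/admin/cores?action=STATUS'
    root = ElementTree.parse(urllib2.urlopen(url)).getroot()
    for lst in root.findall('lst'):
        if lst.get('name') == 'status':
            # each live core is one <lst name="..."> under "status"
            print [core.get('name') for core in lst.findall('lst')]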
Hi,
The lastModified field in the Solr status seems to only be updated when a
commit/optimize operation takes place.
Is there any way to determine when a core has been changed, including any
uncommitted add operations?
Thanks,
James
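
I don't know of a Solr-side answer for uncommitted adds in 1.3, but one
client-side workaround (not a Solr feature - the names below are made up
for illustration) is to record the time of every add yourself:

    # dirty_tracker.py - remember when each core last received an add,
    # independently of commits
    import time

    _last_add = {}

    def record_add(core_name):
        # call this alongside every update request sent to that core
        _last_add[core_name] = time.time()

    def changed_since(core_name, timestamp):
        return _last_add.get(core_name, 0) > timestamp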
n and across segments, so now I'm at a loss as to why it's not catching
> your case. Any of these indexes small enough to post somewhere I could
> access?
>
> Mike
>
>
> James Brady wrote:
>
> Hi, my indices sometimes become corrupted - normally when Solr has
Hi, my indices sometimes become corrupted - normally when Solr has to be
KILLed - these are not normally too much of a problem, as
Lucene's CheckIndex tool can usually detect missing / broken segments and
fix them.
However, I now have a few indices throwing errors like this:
INFO: [core4] webapp=/
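
Where CheckIndex does apply, it can at least be scripted across cores - a
sketch; the jar and index paths are assumptions, and note that -fix drops
unreadable segments, so their documents are lost:

    # fix_indexes.py - run Lucene's CheckIndex over each core's index dir
    import glob
    import subprocess

    LUCENE_JAR = '/path/to/lucene-core.jar'  # hypothetical path

    for index_dir in glob.glob('/path/to/solr/*/data/index'):
        subprocess.call(['java', '-cp', LUCENE_JAR,
                         'org.apache.lucene.index.CheckIndex',
                         index_dir, '-fix'])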
intensive
production usage?
Thanks!
James
---------- Forwarded message ----------
From: James Brady
Date: 2009/1/30
Subject: Re: Separate error logs
To: solr-user@lucene.apache.org
Oh... I should really have found that myself :/
Thank you!
2009/1/30 Ryan McKinley
> check:
> http://wiki.apache.org/solr/SolrLogging
Great, thanks for that, Chris!
2009/2/3 Chris Hostetter
>
> : Hi, no, the date_added field was one per document.
>
> i would suggest adding multiValued="false" to your "date" fieldType so
> that Solr can enforce that for you -- otherwise we can't be 100% sure.
>
> if it really is only a single va
Hi, no, the date_added field was one per document.
2009/2/1 Erik Hatcher
> Is your date_added field multiValued, and have you assigned multiple values
> to some documents?
>
> Erik
>
>
> On Jan 31, 2009, at 4:12 PM, James Brady wrote:
>
> Hi, I'm
Hi, I'm following the recipe here:
http://wiki.apache.org/solr/SolrRelevancyFAQ#head-b1b1cdedcb9cd9bfd9c994709b4d7e540359b1fd
for boosting recent documents: bf=recip(rord(date_added),1,1000,1000)
On some of my servers I've started getting errors like this:
SEVERE: java.lang.RuntimeException: there
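
For context, the recipe plugs into a dismax request via the bf parameter -
a sketch, with the host and the example query as assumptions:

    # recent_boost.py - send the recipe's boost function as a dismax bf param
    import urllib
    import urllib2

    params = urllib.urlencode({
        'q': 'apache',                                # hypothetical user query
        'qt': 'dismax',
        'bf': 'recip(rord(date_added),1,1000,1000)',  # the recipe above
    })
    print urllib2.urlopen('http://localhost:8983/solr/select?' + params).read()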
Oh... I should really have found that myself :/
Thank you!
2009/1/30 Ryan McKinley
> check:
> http://wiki.apache.org/solr/SolrLogging
>
> You can configure whatever flavor of logger to write errors to a separate log
>
>
>
> On Jan 30, 2009, at 4:36 PM, James Brady wrote:
>
>
Hi all, What's the best way for me to split Solr/Lucene error messages off to
a separate log?
Thanks
James
h the
superset of all possible terms in the end.
However, index size growth probably continues at roughly half the speed of
its growth during the "filling up" period.
2009/1/26 Ryan McKinley
>
> On Jan 25, 2009, at 6:06 PM, James Brady wrote:
>
> Hi, I have a number
Hi, I have a number of indices that are supposed to maintain "windows" of
indexed content - the last month's worth of data, for example.
At the moment, I'm cleaning out old documents with a simple cron job making
requests like:
date_added:[* TO NOW-30DAYS]
I was expecting disk usage to plateau p
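
For reference, the cron job amounts to a delete-by-query plus a commit - a
sketch, with the host and core path as assumptions:

    # prune_old_docs.py - delete documents older than 30 days, then commit
    import urllib2

    def post(xml):
        req = urllib2.Request('http://localhost:8983/solr/update', xml,
                              {'Content-Type': 'text/xml'})
        return urllib2.urlopen(req).read()

    post('<delete><query>date_added:[* TO NOW-30DAYS]</query></delete>')
    post('<commit/>')

One thing to keep in mind: deletes only mark documents, and the disk space
only comes back as segments merge (or on optimize) - one reason usage can
keep growing past the expected plateau.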
Hi all, I have 20 indices, each ~10GB in size, being searched by a single
Solr slave instance (using the multicore features in a slightly old 1.2 dev
build)
I'm getting unpredictable but inevitable OutOfMemoryErrors from the slave,
and I have no more physical memory to throw at the problem (HotSpo
In the meantime, I had imagined that, although clumsy, federated
search could
be used for this purpose - posting the new documents to a group of
servers
('latest updates servers') with a very limited number of documents and very
fast
"reload / refresh" times, and sending them again (on a work qu
Hi,
The product I'm working on requires new documents to be searchable
very quickly (inside 60 seconds is my goal). The corpus is also going
to grow very large, although it is perfectly partitionable by user.
The approach I tried first was to have write-only masters and read-
only slaves wi
Hi, there was some talk on JIRA about whether Multicore would be able
to manage tens of thousands of cores, and dynamically create hundreds
every day:
https://issues.apache.org/jira/browse/SOLR-350?
focusedCommentId=12571282#action_12571282
The issue of multicore configuration was left open
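
For concreteness, dynamic creation goes through the CoreAdmin CREATE
action - a sketch; the URL, core name and pre-provisioned instanceDir are
all assumptions:

    # create_core.py - create a core on the fly via the CoreAdmin handler
    import urllib
    import urllib2

    def create_core(name, instance_dir):
        # instanceDir must already exist with a conf/ directory inside it
        params = urllib.urlencode({'action': 'CREATE', 'name': name,
                                   'instanceDir': instance_dir})
        return urllib2.urlopen(
            'http://localhost:8983/solr/admin/cores?' + params).read()

    create_core('user12345', 'user12345')  # hypothetical core name / dir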
ot myself in the foot once or twice before
getting it right, but this is what I'm good at: never stop trying :)
However, it is nice to start playing at least on the right side of the
football field, so a little push in the back would be really helpful.
Kindly
//Marcus
On Fri
Hi, we have an index of ~300GB, which is at least approaching the
ballpark you're in.
Lucky for us, to coin a phrase we have an 'embarrassingly
partitionable' index so we can just scale out horizontally across
commodity hardware with no problems at all. We're also using the
multicore featu
Hi,
I'm seeing a problem mentioned in Solr-42, Highlighting problems with
HTMLStripWhitespaceTokenizerFactory:
https://issues.apache.org/jira/browse/SOLR-42
I'm indexing HTML documents, and am getting reams of "Mark invalid"
IOExceptions:
SEVERE: java.io.IOException: Mark invalid
at
r log? Anything in the snapinstaller
log?
Bill
On Thu, May 1, 2008 at 8:35 PM, James Brady <[EMAIL PROTECTED]> wrote:
Hi Ryan, thanks for that!
I have one outstanding question: when I take a snapshot on the
master,
snappull and snapinstall on the slave, the new index is not b
d to 'solr' at present). This module name is set in the
slave scripts.conf
James
On 29 Apr 2008, at 13:44, Ryan McKinley wrote:
On Apr 29, 2008, at 3:09 PM, James Brady wrote:
Hi all,
I'm aiming to use the new multicore features in development
versions of Solr. My ideal setu
Depending on your application, it might be useful to take control of
the queueing yourself: it was for me!
I needed quick turnarounds for submitting a document to be indexed,
which Solr can't guarantee right now. To address it, I wrote a
persistent queueing server, accessed by XML-RPC, whic
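
A minimal sketch of that shape of server, stdlib-only (the Solr URL and
the port are assumptions, and a real version would persist the queue to
disk):

    # queue_server.py - accept documents over XML-RPC, feed Solr from a worker
    import Queue
    import threading
    import urllib2
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    q = Queue.Queue()

    def enqueue(doc_xml):
        # returns immediately, so callers get a fast acknowledgement
        q.put(doc_xml)
        return True

    def worker():
        while True:
            req = urllib2.Request('http://localhost:8983/solr/update',
                                  q.get(), {'Content-Type': 'text/xml'})
            urllib2.urlopen(req).read()

    t = threading.Thread(target=worker)
    t.setDaemon(True)
    t.start()

    server = SimpleXMLRPCServer(('localhost', 9000))
    server.register_function(enqueue)
    server.serve_forever()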
Hi all,
I'm aiming to use the new multicore features in development versions
of Solr. My ideal setup would be to have master / slave servers on the
same machine, snapshotting across from the 'write' to the 'read'
server at intervals.
This was all fine with Solr 1.2, but the rsync & snappul
Hi all,
In the latest trunk version, default='true' doesn't have the effect I
would have expected when running in multicore mode.
The example multicore.xml has:
But queries such as
/solr/select?q=*:*
and
/solr/admin/
are executed against core1, not core0 as I would have expected: it
seems
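
For what it's worth, addressing each core explicitly in the URL sidesteps
the default-core question entirely - a sketch, host and core names
assumed:

    # Query each core by its explicit path instead of the default core
    import urllib
    import urllib2

    params = urllib.urlencode({'q': '*:*'})
    for core in ('core0', 'core1'):
        print urllib2.urlopen(
            'http://localhost:8983/solr/%s/select?%s' % (core, params)).read()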
ld be
great.
James
Begin forwarded message:
From: James Brady <[EMAIL PROTECTED]>
Date: 8 March 2008 19:41:56 PST
To: solr-user@lucene.apache.org
Subject: Favouring recent matches
Hello all,
In Lucene in Action (replicated here: http://www.theserverside.com/tt/articles/article.tss?l=
Hello all,
In Lucene in Action (replicated here: http://www.theserverside.com/tt/articles/article.tss?l=ILoveLucene)
, the theserverside.com team says "The date boost has been really important
for us".
I'm looking for some advice on the best way to actually implement this
- the only way I can s
, too, see Recent
changes log on the Wiki to quickly find them.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
----- Original Message -----
From: James Brady <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Friday, February 29, 2008 1:11:07 AM
Subject:
com/ -- Lucene - Solr - Nutch
----- Original Message -----
From: James Brady <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Wednesday, February 27, 2008 10:08:02 PM
Subject: Strategy for handling large (and growing) index:
horizontal partitioning?
Hi all,
Our current setup is a ma
ndex files? Optimizing
can never be faster than that, since it must read every byte and write
a whole new set. Disc speed may be your bottleneck.
You could also look at disc access rates in a monitoring tool.
Is there read contention between the master and slave for the same
disc?
wunder
On 2/2
Hi all,
Our current setup is a master and slave pair on a single machine,
with an index size of ~50GB.
Query and update times are still respectable, but commits are taking
~20% of time on the master, while our daily index optimise can take up to
4 hours...
Here's the most relevant part of solr
Unfortunately, you cannot hard link across mount points.
Snapshooter uses "cp -lr", which, on my Linux machine at least, fails
with:
cp: cannot create link `/mnt2/myuser/linktest': Invalid cross-device
link
James
On 23 Feb 2008, at 14:34, Brian Whitman wrote:
Will the hardlink snapshot sc
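
The failure is easy to reproduce directly, since "cp -l" is just the
link() call underneath - a sketch with made-up mount points:

    # crossdev_link.py - hard links fail across filesystems with EXDEV
    import errno
    import os

    try:
        os.link('/mnt/source/file', '/mnt2/dest/file')
    except OSError, e:
        if e.errno == errno.EXDEV:
            print 'Invalid cross-device link - links cannot span filesystems'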
Hi,
Currently, the solr.py Python binding casts all key and value
arguments blindly to strings. The following changes deal with Unicode
properly and respect multi-valued parameters passed in as lists:
131a132,142
>     def __makeField(self, lst, f, v):
>         if not isinstance(f, basestring):
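
The rest of the diff is cut off above, but the idea, in the Python 2 idiom
of the era, is roughly the following sketch (names and escaping details
here are illustrative, not the verbatim patch):

    # Encode unicode explicitly, coerce other types, and expand list
    # values into one <field> element each
    from xml.sax.saxutils import escape

    def make_field(lst, f, v):
        if not isinstance(f, basestring):
            f = str(f)
        if isinstance(v, unicode):
            v = v.encode('utf-8')      # explicit encoding instead of str()
        elif not isinstance(v, basestring):
            v = str(v)
        lst.append('<field name="%s">%s</field>' % (escape(f), escape(v)))

    def add_field(lst, f, v):
        # multi-valued parameters passed as lists become repeated fields
        for value in (v if isinstance(v, (list, tuple)) else [v]):
            make_field(lst, f, value)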
reasonable strategy in general, and has anyone
got advice on the specific points I raise above?
Thanks,
James
On 12 Feb 2008, at 11:45, Mike Klaas wrote:
On 11-Feb-08, at 11:38 PM, James Brady wrote:
Hello,
I'm looking for some configuration guidance to help improve
performance of my app
search for a user-entered
term ("apache" above) and filtering by a user_id field.
This seems to be the case for every sort option except score asc and
score desc. Please tell me Solr doesn't sort all matching documents
before applying boolean filters?
James
Begin forwarded m
Hello,
I'm looking for some configuration guidance to help improve
performance of my application, which tends to do a lot more indexing
than searching.
At present, it needs to index around two documents / sec - a document
being the stripped content of a webpage. However, performance was so
Hi all,
So the Solr tutorial recommends batching operations to improve
performance by avoiding multiple costly commits.
To implement this, I originally had a couple of methods in my python
app reading from or writing to Solr, with a scheduled task blindly
committing every 15 seconds.
Howe
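
A sketch of that scheduled committer with stdlib threading - the interval
and URL are the values described above, plus a small guard so idle periods
skip the commit; everything else is assumed:

    # committer.py - commit every 15 seconds, but only if something was added
    import threading
    import time
    import urllib2

    dirty = threading.Event()  # writer code calls dirty.set() after each add

    def commit_loop():
        while True:
            time.sleep(15)
            if dirty.isSet():
                dirty.clear()
                req = urllib2.Request('http://localhost:8983/solr/update',
                                      '<commit/>',
                                      {'Content-Type': 'text/xml'})
                urllib2.urlopen(req).read()

    t = threading.Thread(target=commit_loop)
    t.setDaemon(True)
    t.start()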
Hi all,
I was passing Python unicode objects to solr.add and got this
sort of error:
...
File "/Users/jamesbrady/Documents/workspace/YelServer/yel/
solr.py", line 152, in add
self.__add(lst,fields)
File "/Users/jamesbrady/Documents/workspace/YelServer/yel/
solr.py", line 146