mplete solr stack to the
> update handler).
>
> -Yonik
> http://www.lucenerevolution.org -- Lucene/Solr User Conference, May
> 25-26, San Francisco
>
>
> On Wed, May 18, 2011 at 1:29 PM, Yonik Seeley
> wrote:
>> On Wed, May 18, 2011 at 1:24 PM, Paul Dlug wrote:
>

I updated to the latest branch_3x (r1124339) and I'm now getting the
error below when trying a delete by query or id. Adding documents with
the new format works, as do the commit and optimize commands. Possible
regression due to SOLR-2496?
curl 'http://localhost:8988/solr/update/json?wt=json' -H
'C
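
The command is truncated in the archive; the new-style JSON deletes it
would have been exercising look something like the following (the id
and query values here are placeholders, not from the thread):

  curl 'http://localhost:8988/solr/update/json?wt=json' \
    -H 'Content-type: application/json' \
    -d '{"delete": {"id": "SOLR1000"}}'

  curl 'http://localhost:8988/solr/update/json?wt=json' \
    -H 'Content-type: application/json' \
    -d '{"delete": {"query": "category:discontinued"}}'
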
On Wed, Sep 15, 2010 at 12:41 PM, Ken Krugler
wrote:
>
> On Sep 15, 2010, at 7:59am, Shawn Heisey wrote:
>
>> My index consists of metadata for a collection of 45 million objects, most
>> of which are digital images. The executives have fallen in love with
>> Google's color image search. Here's

Just reporting back, no issues on the latest branch_3x build with your
revert of the optimization.
--Paul
On Tue, Aug 3, 2010 at 9:22 AM, Paul Dlug wrote:
> Sure, I'm reindexing now, I'll let you know how it goes.
>
>
> --Paul
>
> On Tue, Aug 3, 2010 at 9:05 AM,

ting it now!!
>
> For the time being, I just disabled (committed to trunk & 3x) the
> optimization that's causing the bug. Can you update to 3x head (or
> trunk head), remove your current index, and try again?
>
> Mike
>
> On Tue, Aug 3, 2010 at 8:52 AM, Paul Dlug

On Mon, Aug 2, 2010 at 6:04 PM, Michael McCandless
wrote:
> This looks like the index corruption caused by a commit on Friday.
>
> See the thread I sent earlier with subject "heads up -- index
> corruption on Solr/Lucene trunk/3.x branch".
>
> Mike
>
> On Mon, Aug 2, 2010 at 6:

I'm running a recent build of branch_3x (r981609); queries with
multiple wildcards (e.g. a*b*c*) are failing with the exception below
in the log. These queries worked fine for me with Solr 1.4; is this a
known bug?
SEVERE: java.lang.IndexOutOfBoundsException: Index: 114, Size: 39
at java.util.ArrayL
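
The stack trace is cut off in the archive. A request of the shape
being described, as a repro sketch (the field name and port are
placeholders, not from the thread):

  curl 'http://localhost:8988/solr/select?q=name:a*b*c*'
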
2010-07-29 at 15:44 +0200, Paul Dlug wrote:
>> Is there a filter available that will remove large tokens from the
>> token stream? Ideally something configurable to a character limit? I
>> have a noisy data set that has some large tokens (in this case more
>> than 50 charact

Is there a filter available that will remove large tokens from the
token stream? Ideally something configurable to a character limit? I
have a noisy data set that has some large tokens (in this case more
than 50 characters) that I'd like to just strip. They're unlikely to
ever match a user query anyway.
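
One stock answer is solr.LengthFilterFactory, which keeps only tokens
whose length falls inside a configured range. A minimal sketch of an
analysis chain that strips anything over 50 characters (the fieldType
name and the min/max bounds here are illustrative):

  <fieldType name="text_capped" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.LengthFilterFactory" min="1" max="50"/>
    </analyzer>
  </fieldType>
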
On Thu, Jul 22, 2010 at 4:01 PM, Jonathan Rochkind wrote:
> I think the Synonym filter should actually do exactly what you want, no?
> http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
>
> Hmm, maybe not exactly what you want as you describe it. It comes close,
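For reference, the SynonymFilterFactory linked above is driven by a
synonyms file rather than inline markup; an index-time mapping that
also indexes a short variant at the same position might look like this
(the entry and file name are illustrative, not from the thread):

  # synonyms.txt -- right-hand-side terms land at the same position
  international => international, i

  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
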
Is there a tokenizer that supports providing variants of the tokens at
index time? I'm looking for something that could take a syntax like:
International|I Business|B Machines|M
Which would take each pipe-delimited token and preserve its position
so that phrase queries work properly. The above wo
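
The message is truncated here. There is no stock tokenizer for that
inline syntax; one approach is a small custom Lucene TokenFilter that
splits each token on '|' and emits the extra variants with a position
increment of zero, which is what lets phrase queries line up. A rough
sketch (the class name and details are mine, not from the thread):

  import java.io.IOException;
  import org.apache.lucene.analysis.TokenFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
  import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

  /** Splits tokens like "International|I" and indexes both parts at one position. */
  public final class PipeVariantFilter extends TokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PositionIncrementAttribute posIncAtt =
        addAttribute(PositionIncrementAttribute.class);
    private String[] pending;   // variants left to emit for the current token
    private int pendingIndex;

    public PipeVariantFilter(TokenStream input) {
      super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
      // Drain any queued variants first, stacked at the previous token's position.
      if (pending != null && pendingIndex < pending.length) {
        termAtt.setEmpty().append(pending[pendingIndex++]);
        posIncAtt.setPositionIncrement(0);
        return true;
      }
      if (!input.incrementToken()) {
        return false;
      }
      String[] parts = termAtt.toString().split("\\|");
      if (parts.length > 1) {
        termAtt.setEmpty().append(parts[0]); // first variant keeps its own increment
        pending = parts;
        pendingIndex = 1;
      } else {
        pending = null;  // no '|' in this token; pass it through unchanged
      }
      return true;
    }

    @Override
    public void reset() throws IOException {
      super.reset();
      pending = null;
      pendingIndex = 0;
    }
  }

Wrapped in a TokenFilterFactory, something like this could then be
dropped into the index-time analyzer chain for the field.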