Is UpdateProcessor triggered when updating an existing document, or for new
documents as well?
On Tue, Sep 27, 2011 at 6:00 AM, Chris Hostetter-3 [via Lucene] <
ml-node+s472066n3371110...@n3.nabble.com> wrote:
: Hi Erick, The problem I am trying to solve is to filter invalid entities.
: Users might misspell or enter a new entity name. These new/invalid entities
: need to pass through a KeepWordFilter so that they won't pollute our
: autocomplete results.
How are you doing autocomplete?
If you are using th...
Well ... DIH can.
And update processors can.
And of course client-side indexers.
But yeah... elbow grease required.
Erik
On Sep 25, 2011, at 16:32, Erick Erickson wrote:
> Not that I know of...
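To make the update-processor route above concrete, here is a minimal sketch of a custom processor that drops an entity field whose value is not on a keep list, so neither the stored nor the indexed form of the field gets polluted. The class name, the entity_name field, and the hard-coded keep list are made up for illustration; package names follow the Solr 3.x layout, and the factory would still need to be registered in an updateRequestProcessorChain in solrconfig.xml ahead of solr.RunUpdateProcessorFactory.

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

// Hypothetical example: check "entity_name" values against a keep list
// before the document is indexed, so invalid entities never reach either
// the stored or the indexed representation of the field.
public class KeepEntitiesUpdateProcessorFactory extends UpdateRequestProcessorFactory {

  // In a real factory the keep list would be loaded from a resource in init();
  // it is hard-coded here only to keep the sketch self-contained.
  private static final Set<String> KEEP =
      new HashSet<String>(Arrays.asList("solr", "lucene"));

  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
                                            SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object value = doc.getFieldValue("entity_name");
        // Drop the field when the entity is not on the keep list.
        if (value != null && !KEEP.contains(value.toString().toLowerCase())) {
          doc.removeField("entity_name");
        }
        super.processAdd(cmd); // hand the (possibly modified) doc down the chain
      }
    };
  }
}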
Hi Erick, The problem I am trying to solve is to filter invalid entities.
Users might misspell or enter a new entity name. These new/invalid entities
need to pass through a KeepWordFilter so that they won't pollute our
autocomplete results.
I was looking into Luke, and it does seem to solve my use case...
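The client-side indexer route mentioned above amounts to the same kind of check, done before the document is ever sent to Solr. A rough sketch using the SolrJ API of that era; the server URL, the field names, and the keep list are placeholders.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Hypothetical client-side indexer: apply the keep list before indexing,
// so unknown or misspelled entities never reach the index at all.
public class EntityIndexer {

  private static final Set<String> KEEP =
      new HashSet<String>(Arrays.asList("solr", "lucene"));

  public static void main(String[] args) throws Exception {
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

    String rawEntity = "lucene"; // value as entered by the user
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");

    // Only add the entity field if its value is on the keep list; otherwise
    // the document is indexed without it and cannot pollute autocomplete.
    if (KEEP.contains(rawEntity.toLowerCase())) {
      doc.addField("entity_name", rawEntity);
    }

    server.add(doc);
    server.commit();
  }
}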
No and no.
Hmmm, that's a bit terse. The split between stored and indexed
happens quite early in the update process; there's no way I know
of to use the tokenized stream as the input to your stored data.
And there's no out-of-the-box way to get the indexed tokens back.
For anything except very small...
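To illustrate why that is: the inverted index only records which terms occur in which documents, so reconstructing the tokens of a field for one document (roughly what Luke's reconstruct feature does) means walking every term in that field and probing each one, which is only practical for very small indexes. A rough sketch against the Lucene 3.x API; the index path, field name, and document id are placeholders, and the original token order and frequencies are not recovered without also consulting term positions.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.store.FSDirectory;

// Brute-force reconstruction of the indexed tokens of one field for one
// document: walk every term in the field and keep those that occur in the
// target document. The full term walk is what makes this impractical for
// anything but small indexes.
public class ReconstructField {

  public static List<String> indexedTokens(String indexPath, String field, int docId)
      throws Exception {
    List<String> tokens = new ArrayList<String>();
    IndexReader reader = IndexReader.open(FSDirectory.open(new File(indexPath)));
    try {
      // Seek to the first term of the field, then enumerate until the field changes.
      TermEnum terms = reader.terms(new Term(field, ""));
      try {
        do {
          Term t = terms.term();
          if (t == null || !t.field().equals(field)) {
            break; // ran past the last term of this field
          }
          TermDocs docs = reader.termDocs(t);
          try {
            while (docs.next()) {
              if (docs.doc() == docId) { // the term occurs in our document
                tokens.add(t.text());
                break;
              }
            }
          } finally {
            docs.close();
          }
        } while (terms.next());
      } finally {
        terms.close();
      }
    } finally {
      reader.close();
    }
    return tokens;
  }
}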