Steve,
Thank you, thank you so much. You guys are awesome.
Steve, how can I learn about the Lucene indexing process in more detail? E.g.,
after we send documents for indexing, which functions are called until the doc
is actually stored in the index files?
I would be thankful if you could guide me here.
With Regards
Aman Tandon
Aman,
Solr uses the same Token filter instances over and over, calling reset() before
sending each document through. Your code sets “exhausted” to true and then
never sets it back to false, so the next time the token filter instance is
used, its “exhausted” value is still true, so no input stream tokens are read
or emitted.
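To make that concrete, here is a minimal sketch of a concat filter along the lines Steve describes; the class name, field names, and overall structure are illustrative, since Aman's actual code isn't quoted in this thread. The reset() override that clears the flag is the fix:

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical reconstruction, not Aman's actual code.
public final class ConcatFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private boolean exhausted = false; // true once the concatenated token was emitted

  public ConcatFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (exhausted) {
      return false; // already emitted the single concatenated token
    }
    // Drain the wrapped stream, appending every term to one buffer.
    StringBuilder sb = new StringBuilder();
    while (input.incrementToken()) {
      sb.append(termAtt.buffer(), 0, termAtt.length());
    }
    exhausted = true;
    if (sb.length() == 0) {
      return false; // nothing to emit for an empty token stream
    }
    clearAttributes();
    termAtt.setEmpty().append(sb);
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();     // resets the wrapped stream
    exhausted = false; // the missing line: without it, a reused instance
                       // still believes it is exhausted and emits nothing
  }
}

With that override in place, a reused instance starts each document fresh, which is what Solr expects.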
Yes, I just saw.
With Regards
Aman Tandon
On Fri, Jun 19, 2015 at 10:39 AM, Steve Rowe wrote:
> Aman,
>
> My version won’t produce anything at all, since incrementToken() always
> returns false…
>
> I updated the gist (at the same URL) to fix the problem by returning true
> from incrementToken()
Hi Steve,
> you never set exhausted to false, and when the filter got reused, *it
> incorrectly carried state from the previous document.*
Thanks for replying, but I am not able to understand this.
With Regards
Aman Tandon
On Fri, Jun 19, 2015 at 10:25 AM, Steve Rowe wrote:
> Hi Aman,
Aman,
My version won’t produce anything at all, since incrementToken() always returns
false…
I updated the gist (at the same URL) to fix the problem by returning true from
incrementToken() once and then false until reset() is called. It also handles
the case when the concatenated token is zero-length.
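A small driver makes the reuse visible; this is a hypothetical example built on the ConcatFilter sketch above (the field name "f" and the sample text are arbitrary). One Analyzer instance reuses the same filter and calls reset() between documents:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical driver, not part of the original thread.
public class ConcatFilterDemo {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        return new TokenStreamComponents(source, new ConcatFilter(source));
      }
    };
    // One Analyzer means one reused filter instance; reset() is called
    // before each document, which is exactly where the bug showed up.
    for (String doc : new String[] {"red green blue", "one two three"}) {
      try (TokenStream ts = analyzer.tokenStream("f", doc)) {
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(term.toString()); // "redgreenblue", then "onetwothree"
        }
        ts.end();
      }
    }
  }
}

Without the reset() fix, the second document would print nothing, which is the "only the first document has data" symptom from the original post.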
Hi Aman,
The admin UI screenshot you linked to is from an older version of Solr - what
version are you using?
Lots of extraneous angle brackets and asterisks got into your email and made
for a bunch of cleanup work before I could read or edit it. In the future,
please put your code somewhere it won’t get mangled, e.g. in a gist.
Please help: what am I doing wrong here? Please guide me.
With Regards
Aman Tandon
On Thu, Jun 18, 2015 at 4:51 PM, Aman Tandon wrote:
> Hi,
>
> I created a *token concat filter* to concatenate all the tokens from the
> token stream. It creates the concatenated token as expected.
>
> But when I post XML containing more than 30,000 documents, only the first
> document gets data in that field.
Hi,
I created a *token concat filter* to concatenate all the tokens from the token
stream. It creates the concatenated token as expected.
But when I post XML containing more than 30,000 documents, only the first
document gets data in that field.
*Schema:*
required="false" omitNorms
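For reference, wiring such a filter into a schema would look roughly like the following; everything here is a hypothetical reconstruction (the factory class, tokenizer, and names are guesses), since only the required="false" and omitNorms attributes survive from the mangled snippet:

<!-- Hypothetical schema.xml reconstruction; a custom token filter is
     exposed to Solr through a TokenFilterFactory subclass. -->
<fieldType name="text_concat" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="com.example.ConcatFilterFactory"/>
  </analyzer>
</fieldType>

<field name="all_text" type="text_concat" indexed="true" stored="true"
       required="false" omitNorms="true"/>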