rlaehdals commented on issue #14645:
URL: https://github.com/apache/lucene/issues/14645#issuecomment-3264285969

   I noticed that `setMaxTokenLength` controls the maximum length of the tokens an analyzer emits, and I tried running the following test:
   
   ```
   public void testMaxTokenLengthNonDefault() throws Exception {
     StandardAnalyzer a = new StandardAnalyzer();
     a.setMaxTokenLength(5);
     assertAnalyzesTo(a, "ab cd toolong xy z",
         new String[] {"ab", "cd", "toolo", "ng", "xy", "z"});
     a.close();
   }
   ```
   
   ```
   org.junit.ComparisonFailure: term 2
   expected: toolo
   actual:   xy
   ```
   
   After lowering the max token length, I observed that instead of emitting "toolo" (the first five characters of "toolong") as term 2, the analyzer emits "xy": the over-long token appears to be dropped entirely because the internal buffer is expanded past the limit.
   
   Is this the correct and expected behavior, or does it indicate a potential issue?
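   To make the expected-vs-observed difference concrete, here is a minimal plain-Java sketch (no Lucene dependency; the class and method names are my own, hypothetical ones) contrasting a "split over-long tokens at the limit" policy, which is what the test above expects, with a "drop over-long tokens" policy, which matches the failure I'm seeing:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class MaxTokenLengthDemo {
     // Split each whitespace-delimited token into chunks of at most maxLen
     // characters, mirroring what the test expects ("toolong" -> "toolo", "ng").
     static List<String> splitPolicy(String input, int maxLen) {
       List<String> out = new ArrayList<>();
       for (String tok : input.split("\\s+")) {
         for (int i = 0; i < tok.length(); i += maxLen) {
           out.add(tok.substring(i, Math.min(i + maxLen, tok.length())));
         }
       }
       return out;
     }

     // Drop any token longer than maxLen entirely, which matches the observed
     // output where "toolong" disappears and term 2 becomes "xy".
     static List<String> dropPolicy(String input, int maxLen) {
       List<String> out = new ArrayList<>();
       for (String tok : input.split("\\s+")) {
         if (tok.length() <= maxLen) out.add(tok);
       }
       return out;
     }

     public static void main(String[] args) {
       String input = "ab cd toolong xy z";
       System.out.println(splitPolicy(input, 5)); // [ab, cd, toolo, ng, xy, z]
       System.out.println(dropPolicy(input, 5));  // [ab, cd, xy, z]
     }
   }
   ```

   The split policy produces exactly the token sequence the test asserts, while the drop policy reproduces the failure, so the question is which of these two the tokenizer is supposed to implement when the buffer grows past the limit.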


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

