You could develop an update processor to skip or trim long terms as you see fit. You can even code a script in JavaScript using the stateless script update processor.
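A rough sketch of what that could look like, assuming the stateless script update processor; the chain name, script filename, field name, and the 10000-character cap below are all made up for illustration:

```xml
<!-- solrconfig.xml: hypothetical chain "trim-long-values" -->
<updateRequestProcessorChain name="trim-long-values">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">trim-long-values.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

```javascript
// conf/trim-long-values.js -- hypothetical field name "text";
// depending on your Solr version, the other hook functions
// (processDelete, processCommit, ...) may also need to be defined.
function processAdd(cmd) {
  var doc = cmd.solrDoc;
  var v = doc.getFieldValue("text");
  if (v != null && v.length() > 10000) {
    // trim rather than skip; dropping the value entirely is
    // the other obvious choice
    doc.setField("text", v.substring(0, 10000));
  }
}
```

Documents then need to be indexed with `update.chain=trim-long-values` (or the chain made the default) for the script to run.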

Can you tell us more about the nature of your data? I mean, sometimes analyzer filters strip or fold accented characters anyway, so the character count versus UTF-8 byte count may be a non-problem.

-- Jack Krupansky

-----Original Message----- From: Michael Ryan
Sent: Tuesday, July 1, 2014 9:49 AM
To: solr-user@lucene.apache.org
Subject: Best way to fix "Document contains at least one immense term"?

In LUCENE-5472, Lucene was changed to throw an error if a term is too long, rather than just logging a message. I have fields with terms that are too long, but I don't care - I just want to ignore them and move on.

The recommended solution in the docs is to use LengthFilterFactory, but this limits the terms by the number of characters, rather than the number of UTF-8 bytes. So you can't just do something clever like set max=32766, due to the possibility of multibyte characters.
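To illustrate the character-versus-byte mismatch, here is a small Python demo (Python counts code points, while Java/Lucene count UTF-16 code units, but the underlying point is the same: one counted "character" can encode to several UTF-8 bytes):

```python
# Character count vs. UTF-8 byte count: each of these strings is a
# single code point, but their UTF-8 encodings differ in size.
for s in ["a", "\u00e9", "\u20ac", "\U0001d11e"]:   # a, é, €, 𝄞
    print(repr(s), len(s), len(s.encode("utf-8")))
# "a" is 1 byte, "é" is 2, "€" is 3, and "𝄞" is 4 bytes --
# so a character-based limit cannot guarantee a byte-based one.
```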

So, is there a way of using LengthFilterFactory to do this such that an error will never be thrown? I'm thinking I could use some max less than 32766 / 3, but I want to be absolutely sure that there is not some edge case that is going to break. I guess I could just set it to something sane like 1000. Or is there another more direct solution to this problem?
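For reference, a schema sketch along the lines described above (the field type name and tokenizer are illustrative; max=10922 is floor(32766 / 3), which stays under the byte limit on the assumption that each counted UTF-16 code unit encodes to at most 3 UTF-8 bytes):

```xml
<!-- schema.xml sketch: hypothetical field type "text_capped" -->
<fieldType name="text_capped" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- LengthFilter counts characters, not bytes, so the max is
         divided by 3 to cover worst-case multibyte expansion -->
    <filter class="solr.LengthFilterFactory" min="1" max="10922"/>
  </analyzer>
</fieldType>
```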

-Michael
