Thanks Hoss -- adding in the LengthFilterFactory did the trick.
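
In case it helps anyone else hitting this, here is roughly the shape of the change, as a sketch only (the field type name and limits are illustrative; LengthFilterFactory counts characters, so leaving headroom under the 32766-byte UTF-8 term limit is safest):

  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- drop tokens outside [min,max] chars instead of failing the whole document -->
      <filter class="solr.LengthFilterFactory" min="1" max="10000"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>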

-- Chris

On Mon, Sep 15, 2014 at 1:57 PM, Bryan Bende <bbe...@gmail.com> wrote:

> I ran into this problem as well when upgrading to Solr 4.8.1...
>
> We had a somewhat large binary field that was "indexed=false stored=true",
> but because of the copyField copying "*" to "text" it would hit the immense
> term issue.
>
> In our case we didn't need this field to be indexed (parts of it were
> already indexed in other fields) so we worked around it by breaking out
> individual copyField directives for only the fields we needed.
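>
> Roughly, that meant replacing the catch-all rule with explicit directives
> along these lines (field names here are only illustrative):
>
> <!-- before: everything, including the big binary field, was copied -->
> <copyField source="*" dest="text"/>
>
> <!-- after: only the fields we actually need searchable in "text" -->
> <copyField source="title" dest="text"/>
> <copyField source="body" dest="text"/>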
>
> On Mon, Sep 15, 2014 at 1:52 PM, Chris Hostetter <hossman_luc...@fucit.org>
> wrote:
>
>
> >
> > : SCHEMA:
> > : <field name="content" type="string" indexed="false" stored="true"
> > : required="true"/>
> > :
> > : LOGS:
> > : Caused by: java.lang.IllegalArgumentException: Document contains at least
> > : one immense term in field="content" (whose UTF8 encoding is longer than the
> >
> > I don't think you are using the schema.xml you think you are ... that
> > exception is *very* specific to the *INDEXED* terms.  It has nothing to do
> > with the stored value.
> >
> >
> > This change in behavior (from silently ignoring massive terms, to
> > propagating an error) was explicitly noted in the upgrade steps for 4.8...
> >
> > https://lucene.apache.org/solr/4_8_0/changes/Changes.html#v4.8.0.upgrading_from_solr_4.7
> >
> > In previous versions of Solr, Terms that exceeded Lucene's MAX_TERM_LENGTH
> > were silently ignored when indexing documents. Beginning with Solr 4.8, an
> > error will be generated when attempting to index a document with a term
> > that is too large. If you wish to continue to have large terms ignored,
> > use "solr.LengthFilterFactory" in all of your Analyzers. See LUCENE-5472
> > for more details.
> >
> >
> > -Hoss
> > http://www.lucidworks.com/
> >
>
