The data set (number of documents) is not large - around 100k. The number of
fields would be at most 10, and the average indexed field is about 200 characters.
I tried creating multiple indexed fields using copyField.
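The copyField setup I tried looks roughly like the sketch below (the field and
type names here are just placeholders, not my actual schema):

  <field name="name" type="text_general" indexed="true" stored="true"/>
  <field name="name_exact" type="string" indexed="true" stored="false"/>
  <!-- copy the incoming value into a second field indexed a different way -->
  <copyField source="name" dest="name_exact"/>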
Let me see how the performance turns out with EdgeNGramTokenFilter or
EdgeNGramTokenizer.
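For the n-gram idea, I'm picturing a field type roughly like the one below
(minGramSize/maxGramSize are just a first guess, and name_ngram is a
placeholder field):

  <fieldType name="text_edge_ngram" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- index-time edge n-grams so partial/prefix input matches each token -->
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <!-- query side stays un-grammed so the user's input is matched as typed -->
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <field name="name_ngram" type="text_edge_ngram" indexed="true" stored="false"/>
  <copyField source="name" dest="name_ngram"/>

The index-time n-grams will of course grow the index compared to plain prefix
queries, which is part of what I want to measure.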
: I have the following use case. I can implement a solution, but
: performance suffers. I need a smarter way of doing this.
: Use Case :
: Incoming data has two fields which have values like 'WAL MART STORES INC'
: and 'wal-mart-stores-inc'.
: Users can search the data either in 'wal