Then you'll have to scrub the data on the way in.
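One way to do that on the Solr side is an update request processor chain, e.g. something
like this in solrconfig.xml (just a sketch; the chain name, field name and pattern are
assumptions, adjust them to whatever you need to strip):

<updateRequestProcessorChain name="scrub-tabs">
  <!-- strip tabs/newlines from the field value before it is indexed and stored -->
  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">myfield</str>
    <str name="pattern">[\t\r\n]+</str>
    <str name="replacement"></str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

and then send your updates with update.chain=scrub-tabs (or make it the default chain).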
Or change the type to something like KeywordTokenizer and use
PatternReplaceCharFilter(Factory) to get rid of unwanted stuff.
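For the second option, the fieldType could look roughly like this (the type name and the
regex are assumptions; the charFilter runs before the tokenizer, so the whole value is
kept as a single token minus whatever the pattern removes):

<fieldType name="string_scrubbed" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <!-- strip tabs/newlines before tokenizing -->
    <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[\t\r\n]+" replacement=""/>
    <!-- keep the remaining value as one token, like a string field -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>

Since terms faceting works off the indexed terms, the facet buckets on such a field
should come out without the tabs.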
Best,
Erick
On Wed, Jul 12, 2017 at 7:07 PM, Zheng Lin Edwin Yeo
wrote:
> The field which I am bucketing is indexed using String field, and does not
> pass through any tokenizers.
The field which I am bucketing is indexed using String field, and does not
pass through any tokenizers.
Regards,
Edwin
On 12 July 2017 at 21:52, Susheel Kumar wrote:
> I checked on 6.6 and don't see any such issues. I assume the field you are
> bucketing on is string/keywordtokenizer not text/analyzed field.
I checked on 6.6 and don't see any such issues. I assume the field you are
bucketing on is string/keywordtokenizer not text/analyzed field.
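A terms facet request along these lines produces output of the shape below (the collection
and field names here are assumptions, only the facet label matches):

curl http://localhost:8983/solr/mycollection/query -d 'q=*:*&rows=0&json.facet={myfacet:{type:terms,field:myfield}}'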
===
"facets":{
"count":5,
"myfacet":{
"buckets":[{
"val":"A\t\t\t",
"count":2},
{
"val":"L\t\t\t"
Hi,
Would like to check: does the JSON facet output remove characters like \t from
its output?
Currently, we find that if a result is not in the last result set, characters
like \t are removed from its value in the output. However, if it is in the last
result set, the \t is not removed.
As there is