This case isn't worth bothering with custom code. Deriving terms during
analysis usually works fine.
This might be addressed with
https://lucene.apache.org/solr/guide/6_6/filter-descriptions.html#reversed-wildcard-filter
 or
https://lucene.apache.org/solr/guide/6_6/tokenizers.html#Tokenizers-PathHierarchyTokenizer.
I haven't thought it through deeply, but I believe it can be handled with the
standard analysis toolbox.
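
As a rough sketch only (the field type name is hypothetical, and the
analysis chain would certainly need tuning for the CSS-selector case),
a PathHierarchyTokenizer-based field type could derive hierarchical terms
from the element path at index time:

```xml
<!-- Hypothetical field type: splits a path like "body div.class1 div.b.a"
     into hierarchical tokens at index time. Names are illustrative. -->
<fieldType name="selector_path" class="solr.TextField">
  <analyzer type="index">
    <!-- Emits "body", "body div.class1", "body div.class1 div.b.a" -->
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter=" "/>
  </analyzer>
  <analyzer type="query">
    <!-- Query side keeps the input as a single token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>
```

Whether this matches the ".class1 .a.b" semantics (unordered class sets)
would need experimenting in the Analysis screen first.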


On Sun, May 5, 2019 at 10:28 PM alexpusch <a...@getjaco.com> wrote:

> Thanks for the quick reply.
>
> The real data is a representation of an HTML element, e.g. "body div.class1
> div.b.a". My goal is to match documents by CSS selector, i.e. ".class1 .a.b".
>
> The field I'm querying on is a tokenized text field. The post filter takes
> the doc value of the field (which is not tokenized, i.e. the whole string),
> transforms it a bit, and matches it against an input regex.
>
> My alternative is to create a non-tokenized copy field, create a custom
> filter that applies the transformation at index time, and use a regular regex
> query on it. Writing the post filter was my first choice, since I wanted to
> avoid copying data and I'm not sure about regular regex query performance.
> The collection I'm querying is quite large.
>
>
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>
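
For reference, the regex-query alternative described above could be expressed
with the standard Lucene query parser's regexp syntax against a string field
(the field name "selector_s" is hypothetical):

```
q=selector_s:/body div\.class1 div\..*/
```

Regexp queries on a large collection can be slow because they may scan many
terms, which is presumably why the post-filter route was considered first.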


-- 
Sincerely yours
Mikhail Khludnev
