1. Does LTR only support phrase matching of the complete user query when extracting a feature score? For example, efi.user_query='tv stand' matches the title feature only if the title contains the phrase "tv stand". By removing the quotes we are able to match at the term level, but the behaviour is not consistent when we change the order of the terms in the query: efi.user_query=tv stand gives a different feature score than efi.user_query=stand tv for the same title. Are we supposed to always wrap efi.user_query in single quotes and do phrase matching? If we do so, we miss out on term-level matches. Also, which request handler does this query go through? (A rough sketch of what we mean is below.)
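For reference, this is roughly the kind of feature definition and request we are talking about; the store, feature, field and model names are illustrative, and the actual phrase-vs-term behaviour presumably depends on which query parser the feature uses (e.g. {!field} analyzes the whole efi value and produces a phrase query when it yields multiple tokens, whereas something like {!dismax qf=title} matches per term):

  features.json (uploaded to the feature store):
  [
    {
      "store": "exampleStore",
      "name": "titleMatch",
      "class": "org.apache.solr.ltr.feature.SolrFeature",
      "params": { "q": "{!field f=title}${user_query}" }
    },
    {
      "store": "exampleStore",
      "name": "originalScore",
      "class": "org.apache.solr.ltr.feature.OriginalScoreFeature",
      "params": {}
    }
  ]

And the re-rank request, passing the raw query text through efi so the feature can resolve ${user_query}:

  /select?q=tv stand&fl=id,score,[features]&rq={!ltr model=exampleModel reRankDocs=100 efi.user_query='tv stand'}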
2. Generating training data using clickstream data
Please advise on the usage of clickstream data for training (in the absence of human judgements). Can we expect LTR to do a good job in terms of the weights learned when we use click data (implicit feedback)? A sketch of the training file we have in mind is at the end of this mail.

3. Newness challenge
Click data is generally good for popular items, so learning to rank new items seems to be a challenge with this approach. Any thoughts here?

4. The original score feature weight is still zero.
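For what it's worth, this is roughly the kind of training file we would generate from click logs for an external trainer such as RankLib (LETOR format; the query ids, feature values and grades below are made up, with grades binned from click-through rate, and the feature columns taken from extraction with fl=[features]):

  # <grade> qid:<queryId> <featureId>:<value> ... # <docId>
  3 qid:1 1:0.95 2:7.21 # doc42  ("tv stand", high CTR)
  1 qid:1 1:0.40 2:5.10 # doc77  ("tv stand", few clicks)
  2 qid:2 1:0.88 2:6.02 # doc13  ("bookshelf", moderate CTR)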