FYI: LucidWorks has a "Relevancy Workbench" tool that serves as a simple
UI designed explicitly for the purpose of comparing the result sets from
different queries or configurations.
Thanks for your valuable answers.
As a first approach I will evaluate (manually :( ) the hits that fall outside
the intersection set for each query in each system (a rough sketch of that
diff-and-judge step follows below). In any case, I will keep searching for
literature in the field.
Regards.
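
A minimal sketch of that diff-and-judge step, assuming the top-k document IDs
per query for each configuration have already been exported. The JSON file
names and their structure below are invented for illustration:

import json

def diff_hits(results_a, results_b, k=10):
    """For each query, list the hits that appear in only one system's top-k."""
    report = {}
    for query in results_a.keys() & results_b.keys():
        top_a = set(results_a[query][:k])
        top_b = set(results_b[query][:k])
        report[query] = {
            "only_in_a": sorted(top_a - top_b),
            "only_in_b": sorted(top_b - top_a),
            "overlap": len(top_a & top_b) / k,  # quick agreement measure
        }
    return report

if __name__ == "__main__":
    # results_a.json / results_b.json are assumed to map query -> [doc ids]
    with open("results_a.json") as fa, open("results_b.json") as fb:
        report = diff_hits(json.load(fa), json.load(fb), k=10)
    for query, info in sorted(report.items()):
        print(query, info["overlap"], info["only_in_a"], info["only_in_b"])

Judging only the "only_in_a" / "only_in_b" lists keeps the manual effort
proportional to how much the two configurations actually disagree.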
On Sun, Oct 20, 2013 at 10:55 PM, Doug Turnbull <dturnb...@opensourc...> wrote:
That's exactly what we advocate for in our Solr work. We call it "Test
Driven Relevancy". We work closely with content experts to help build
collaboration around search quality. (Disclaimer: yes, we build a product
around this, but the advice still stands regardless.)
http://www.opensourceconnection
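
To make that idea concrete, here is a rough, hypothetical sketch of what
"test driven relevancy" checks could look like against a local Solr instance.
The endpoint, core name, queries, and document IDs are all invented; only the
standard Solr select parameters (q, rows, fl, wt) are assumed.

import requests

SOLR_SELECT = "http://localhost:8983/solr/mycore/select"  # assumed core name

def top_ids(query, rows=10):
    """Return the doc ids of the top results for a query."""
    params = {"q": query, "rows": rows, "fl": "id", "wt": "json"}
    resp = requests.get(SOLR_SELECT, params=params, timeout=10)
    resp.raise_for_status()
    return [doc["id"] for doc in resp.json()["response"]["docs"]]

def test_known_item_search():
    # A content expert says doc "datasheet-42" must be in the top 3 for this
    # query; the test fails if a schema change breaks that expectation.
    assert "datasheet-42" in top_ids("acme widget datasheet", rows=3)

def test_no_spurious_matches():
    # The expert also says this competitor document should not appear in the
    # top 10 for this query.
    assert "competitor-99" not in top_ids("acme pump", rows=10)

Run with pytest; each failing assertion points at a query whose results the
content experts would consider broken.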
Let's assume that you have keywords to search and different configurations
for indexing. A/B testing is one technique you can use, as Erick mentioned.
If you want an automated comparison and do not have an oracle for A/B
testing, there is another way. If you have an ideal result list (relevance
judgments) for your queries, you can compute standard information retrieval
metrics such as precision or nDCG for each configuration and compare them
(see the sketch below).
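
A minimal sketch of that offline-metric comparison, assuming graded relevance
judgments are available. The judgments and the two result lists below are
hypothetical; the metric formulas (precision@k and nDCG@k) are standard.

import math

def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k results that are judged relevant."""
    return sum(1 for d in ranked_ids[:k] if d in relevant_ids) / k

def ndcg_at_k(ranked_ids, grades, k=10):
    """nDCG@k for graded judgments; grades maps doc id -> relevance grade."""
    dcg = sum(grades.get(d, 0) / math.log2(i + 2)
              for i, d in enumerate(ranked_ids[:k]))
    ideal = sorted(grades.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Compare two configurations on a single (made-up) query.
judgments = {"doc1": 3, "doc2": 2, "doc5": 1}        # the "ideal result" list
config_a = ["doc1", "doc3", "doc2", "doc4", "doc5"]
config_b = ["doc3", "doc1", "doc4", "doc2", "doc5"]

for name, ranking in [("A", config_a), ("B", config_b)]:
    print(name,
          round(precision_at_k(ranking, set(judgments), k=5), 2),
          round(ndcg_at_k(ranking, judgments, k=5), 3))

Averaging these per-query scores over a representative query set gives a
single number per configuration that can be tracked as the schema evolves.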
bq: How do you compare the quality of your
search result in order to decide which schema is better?
Well, that's actually a hard problem. There are the
various TREC data sets, but that's a generic solution, and nearly
every individual application of this generic thing called
"search" has its own version of "good" results.