Thanks for the input, guys.
I've decided to implement some unit tests for now, although we don't have a
clean data set to work from (sucks, I know).
We're going to keep track of a set of vital queries and ensure they don't
return 0 results, as we have a pretty decent level of confidence with Solr.
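A minimal sketch of that kind of smoke test, assuming SolrJ and JUnit are on
the classpath; the core URL and the query list are made-up placeholders:

    import static org.junit.Assert.assertTrue;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.junit.Test;

    public class VitalQueriesTest {
        // Hypothetical core URL and query list -- substitute your own.
        private static final String SOLR_URL = "http://localhost:8983/solr/products";
        private static final String[] VITAL_QUERIES = { "ipod", "laptop bag", "usb cable" };

        @Test
        public void vitalQueriesReturnResults() throws Exception {
            try (HttpSolrClient solr = new HttpSolrClient.Builder(SOLR_URL).build()) {
                for (String q : VITAL_QUERIES) {
                    long numFound = solr.query(new SolrQuery(q)).getResults().getNumFound();
                    // Each vital query must match at least one document.
                    assertTrue("query returned 0 results: " + q, numFound > 0);
                }
            }
        }
    }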
Mark,
In one project, with Lucene rather than Solr, I also use a smallish unit-test
sample index and run some queries against it.
It is very limited, but it is automatable.
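Roughly what that can look like, as a sketch against a recent Lucene
(ByteBuffersDirectory; older versions used RAMDirectory); the field name and
sample text are invented:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class SampleIndexTest {
        public static void main(String[] args) throws Exception {
            // Tiny fixture index, rebuilt from scratch on every run.
            Directory dir = new ByteBuffersDirectory();
            try (IndexWriter writer =
                    new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                Document doc = new Document();
                doc.add(new TextField("body", "ipod nano 8gb silver", Field.Store.YES));
                writer.addDocument(doc);
            }
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                TopDocs hits = searcher.search(
                        new QueryParser("body", new StandardAnalyzer()).parse("ipod"), 10);
                // The sample query must hit the fixture document.
                if (hits.scoreDocs.length == 0) throw new AssertionError("no hits for 'ipod'");
            }
        }
    }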
I find a better approach is to track precision and recall measures over real
user queries, release after release.
I could never fully apply this yet on a
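For reference: given the set of documents judged relevant for a query,
precision is the fraction of retrieved documents that are relevant, and
recall is the fraction of relevant documents that were retrieved. A tiny
helper, sketched here with hypothetical doc IDs:

    import java.util.List;
    import java.util.Set;

    public final class SearchMetrics {
        // precision = relevant retrieved / total retrieved
        static double precision(List<String> retrieved, Set<String> relevant) {
            if (retrieved.isEmpty()) return 0.0;
            long hits = retrieved.stream().filter(relevant::contains).count();
            return (double) hits / retrieved.size();
        }

        // recall = relevant retrieved / total relevant
        static double recall(List<String> retrieved, Set<String> relevant) {
            if (relevant.isEmpty()) return 0.0;
            long hits = retrieved.stream().filter(relevant::contains).count();
            return (double) hits / relevant.size();
        }

        public static void main(String[] args) {
            // Hypothetical judgment data for one query.
            List<String> retrieved = List.of("doc1", "doc2", "doc3");
            Set<String> relevant = Set.of("doc1", "doc3", "doc9");
            System.out.printf("precision=%.2f recall=%.2f%n",
                    precision(retrieved, relevant), recall(retrieved, relevant));
        }
    }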
Hi Mark,
What we're doing is using a bunch of acceptance tests with JBehave to
drive our testing. We run these in a clean-room environment, clearing
out the indexes before a test run and inserting the data we're
interested in. As well as tests to ensure things "just work", we have a
bunch of tests
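A rough sketch of that pattern, assuming JBehave driving SolrJ; the story
wording, core URL, and fixture data below are invented. A plain-text story
like:

    Scenario: a critical query still returns results

    Given a clean index with the standard fixture documents
    When I search for "ipod nano"
    Then at least 1 result is returned

backed by a steps class along these lines:

    import static org.junit.Assert.assertTrue;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrInputDocument;
    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;

    public class SearchSteps {
        // Hypothetical test core -- point this at your clean-room instance.
        private final HttpSolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/test").build();
        private QueryResponse response;

        @Given("a clean index with the standard fixture documents")
        public void cleanIndexWithFixtures() throws Exception {
            solr.deleteByQuery("*:*"); // wipe whatever the last run left behind
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("name", "ipod nano 8gb");
            solr.add(doc);
            solr.commit();
        }

        @When("I search for \"$query\"")
        public void search(String query) throws Exception {
            response = solr.query(new SolrQuery(query));
        }

        @Then("at least $min result is returned")
        public void assertAtLeast(long min) {
            assertTrue(response.getResults().getNumFound() >= min);
        }
    }

Clearing and reseeding in the Given step is what keeps each run independent
of whatever state the previous run left behind.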
Hey guys,
I'm wondering how people are managing regression testing, in particular with
things like text-based search.
For example, if you change how fields are indexed or change boosts in dismax,
how do you ensure that doesn't mean that critical queries start showing bad
data?
The obvious answer to me was using unit tests