: I have a field "searchname" with a boost of "3.0" during the
: document.add. Another field "text" is a copyField of several entries,

index time "field boosts", "document boosts", and document length are all
factored into the "fieldNorm" value at indexing time -- so if you want to
use field boosts you'll have to skip the omitNorms="true" suggestion i
gave you earlier.

: What I need is a search which will handle each document the same,
: regardless of the frequency and the size; it shall calculate the score
: only on the boost factors, so a document with a high boost factor and
: the same text in it as another one with a lower factor shall come
: before the others.

if you have your lengthNorm function returning 1.0 in all cases, then the
fieldNorm should be based entirely on your fieldBoost -- so i'm not sure
why you aren't getting the results you expect.

One thing you may not be realizing though is that the Similarity.lengthNorm
function is used when the documents are being indexed (unlike all the other
Similarity functions, which are used at query time), so if you changed the
Similarity class without rebuilding your index you won't see those norm
changes.

the main way to make sense of your scores is to look at the Explanation
output you get for each doc ... in Solr you can turn this on for the
Standard and DisMax request handlers by adding debugQuery=1 to your URLs
... a new block of space-indented text will be added for each doc
explaining why it got the score it did.
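to make that concrete, here's a rough sketch of what i'm describing,
written against the Lucene 2.x API (the class name, paths, field values,
and example URL below are all made up -- adapt them to your setup):

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.search.DefaultSimilarity;

  // a Similarity that ignores document length entirely, so the fieldNorm
  // is driven only by the index time field/document boosts
  public class FlatLengthSimilarity extends DefaultSimilarity {
    public float lengthNorm(String fieldName, int numTokens) {
      return 1.0f;
    }

    public static void main(String[] args) throws Exception {
      IndexWriter writer =
        new IndexWriter("/path/to/index", new StandardAnalyzer(), true);
      // the custom Similarity has to be in place *before* any docs are
      // added -- lengthNorm is baked into the norms at index time
      writer.setSimilarity(new FlatLengthSimilarity());

      Document doc = new Document();
      Field f = new Field("searchname", "some name",
                          Field.Store.YES, Field.Index.TOKENIZED);
      f.setBoost(3.0f);        // index time field boost, folded into the norm
      doc.add(f);
      writer.addDocument(doc); // norms (fieldBoost * lengthNorm) written here
      writer.close();
    }
  }

if you're indexing through Solr instead of raw Lucene, the equivalent is to
put the (fully qualified) class on Solr's classpath, register it with a
<similarity class="..."/> element in schema.xml, set the boost attribute on
the <field> (or <doc>) in your update message, and then reindex.  the
debugQuery output is just a matter of tacking the parameter onto a normal
query URL, something like:

  http://localhost:8983/solr/select?q=searchname:foo&debugQuery=1

(host, port, and query there are just placeholders for whatever your setup
uses.)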
index time "field boosts", "document boosts", and document length are all factored into the "fieldNorm" value at indexing time -- so if you want to use field boosts you'll have to skip the omitNorms="true" suggestion i gave you earlier. : What I need is, a search, which will handle each document the : same, regardless of the frequency and the size, it shall calculate : the score only on the boost factors, so a document with a hight : boostfactor and the same text in it as another one with less factor : shall be before the others. if you have your lengthNorm function returning 1.0 in all cases, then the fieldNorm should be based entirely on your fieldBoost -- so i'm not sure what you aren't getting the results you expect. One thing you may not be realizing though is that the Similarity.lengthNorm function is used when the documents are being index (unlike all the other Similarity functions that are used at query time) so if you changed the Similarity class without rebuilding your index you won't see those norm changes. the main way to make sense of your scores is to look at the Explanation output you get for each doc ... in Solr you can turn this on for the Standard and DisMax request handlers by adding debugQuery=1 to your URLs ... a new block of space indented text will be added for each doc explaining why it got the scores it did. If you still can't make sense of your scores after looking at the Explanation output, then you may want to followup on the java-user lucene list -- it's a much bigger audience then the solr list, and someone there might be able to spot your problem (including the query toString and the Explanations for your various docs will go a long way) -Hoss