through a Levenshtein algorithm implemented in C# code. The Levenshtein
distance is converted into a % match, and we then use the highest match so
long as it is above 85%.
Hope this makes it a little clearer what we are doing.
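For reference, the matching step above can be sketched like this (in Java here rather than our C#; the conversion `100 * (1 - distance / maxLength)` is an assumption about how the percentage is derived, since the exact formula wasn't shown):

```java
// Sketch of the described approach: classic dynamic-programming
// Levenshtein distance, converted to a percentage match. The
// 1 - distance/maxLength conversion is an assumed formula.
public class LevenshteinMatch {

    // Two-row DP computation of the Levenshtein edit distance.
    static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1,  // insertion
                                            prev[j] + 1),     // deletion
                                   prev[j - 1] + cost);       // substitution
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    // Percentage match; a candidate is accepted only if this is > 85.
    static double percentMatch(String a, String b) {
        int maxLen = Math.max(a.length(), b.length());
        if (maxLen == 0) return 100.0;
        return 100.0 * (1.0 - (double) distance(a, b) / maxLen);
    }

    public static void main(String[] args) {
        System.out.println(percentMatch("kitten", "sitting"));
    }
}
```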
On Thu, Mar 28, 2013 at 11:39 AM, Roman Chyla wrote:
> On Thu, Mar 28, 2013 at 12:27
> You
> might look into implementing a custom search component and register it as a
> first-component in your search handler (take a look at solrconfig.xml for
> how search handlers are configured, e.g. /browse).
>
> Cheers,
> Tim
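For reference, the first-components registration Tim describes looks roughly like this in solrconfig.xml (the component class and handler name below are made up for illustration; the structure follows the stock /browse handler):

```xml
<!-- Hypothetical custom component; the class name is an assumption. -->
<searchComponent name="fingerprintMatch"
                 class="com.example.FingerprintMatchComponent"/>

<!-- Run the custom component before the standard query components. -->
<requestHandler name="/segmentmatch" class="solr.SearchHandler">
  <arr name="first-components">
    <str>fingerprintMatch</str>
  </arr>
</requestHandler>
```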
>
>
> On Thu, Mar 28, 2013 at 9:43 AM, Mike Haas wrote:
> ie. remember)
> exact numbers, but my feeling is that you end up storing ~13% of document
> text (besides, it is a one token fingerprint, therefore quite fast to
> search for - you could even try one huge boolean query with 1024 clauses,
> ouch... :))
>
> roman
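Roman's "one huge boolean query" idea could be sketched as below (a hedged sketch only: the field name "fingerprint" and the query-string form are assumptions; 1024 is Lucene's default BooleanQuery clause limit, which is what the "ouch" refers to):

```java
// Sketch: OR together many one-token fingerprint lookups into a
// single query string. Field name "fingerprint" is an assumption.
import java.util.List;

public class FingerprintQuery {
    // Lucene's default maxClauseCount for a BooleanQuery.
    static final int MAX_CLAUSES = 1024;

    static String build(List<String> fingerprints) {
        if (fingerprints.size() > MAX_CLAUSES) {
            throw new IllegalArgumentException(
                "exceeds Lucene's default clause limit of " + MAX_CLAUSES);
        }
        StringBuilder q = new StringBuilder();
        for (String fp : fingerprints) {
            if (q.length() > 0) q.append(" OR ");
            q.append("fingerprint:").append(fp);
        }
        return q.toString();
    }
}
```

Whether a single 1024-clause query beats many small queries would need measuring; it trades round trips for clause-evaluation cost.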
>
> On
Hello. My company is currently thinking of switching over to Solr 4.2,
coming off of SQL Server. However, what we need to do is a bit weird.
Right now, we have ~12 million segments and growing. Usually these are
sentences but can be other things. These segments are what will be stored
in Solr. I’v