On Thursday 29 July 2010 14:00:21 Eric Grobler wrote:
> But faceting then looks like:
> molln
> munchen
> rossdorf
>
> How can I enable case-insensitive matching with German character folding, and
> still output properly formatted names in the facet result?
Just create another field without any filtering (e.g. a plain string field) and facet on that.
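A minimal schema.xml sketch of that approach (field names are hypothetical); copyField copies the raw input before analysis, so the string field keeps the original casing and umlauts:

<!-- analyzed field used for searching; "text_de" stands in for the folded type -->
<field name="name" type="text_de" indexed="true" stored="true"/>
<!-- unanalyzed copy used only for faceting: keeps the original values (e.g. "München") intact -->
<field name="name_facet" type="string" indexed="true" stored="false"/>
<copyField source="name" dest="name_facet"/>

Then facet with facet.field=name_facet while still searching against name.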
On Fri, Jun 27, 2008 at 1:54 AM, Chris Hostetter
<[EMAIL PROTECTED]> wrote:
> A basic technique that can be used to mitigate the risk of a possible CSRF
> attack like this is to configure your Servlet Container so that access to
> paths which can modify the index (i.e. /update, /update/csv, etc.) is restricted.
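As a hedged sketch of one way to do that (not from the original mail), a web.xml security constraint can require authentication for the update paths; the role name is hypothetical and must exist in the container's realm:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>index updates</web-resource-name>
    <!-- paths that can modify the index -->
    <url-pattern>/update</url-pattern>
    <url-pattern>/update/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- hypothetical role name -->
    <role-name>index-writer</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
</login-config>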
the synonym definition
reise,urlaub
is converted to
reis,urlaub,
which should then solve all problems.
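A sketch of an analyzer chain where this happens, assuming the synonym filter runs before a German Snowball stemmer (the synonyms file name is hypothetical):

<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <!-- synonyms.txt contains the line: reise,urlaub -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" expand="true"/>
  <!-- the stemmer then reduces the synonym tokens too, e.g. reise -> reis -->
  <filter class="solr.SnowballPorterFilterFactory" language="German"/>
</analyzer>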
Best regards
- Christian
--
Christian Vogler, Ph.D.
Institute for Language and Speech Processing
Athens, Greece
> If I've given different advice in the past, I'm sure I had a good reason
> for it -- possibly due to some aspect of those problems that is subtly
> different from yours ... if you can post links to the specific messages
> you're referring to, it might help jog my memory.
One thread is: http://www.nabb
Hi all,
does anyone have experience with running Solr on OpenJDK 6.0? Any data points,
positive or negative, would be appreciated. I am trying to decide whether to
switch to OpenJDK on Debian Lenny, or whether to stick with the non-free JDK
5.0 for the time being.
Best regards
- Christian
Hi Matt,
On Tue, Apr 28, 2009 at 4:24 AM, Matt Mitchell wrote:
> I've been toying with setting custom pre/post delimiters and then removing
> them in the client, but I thought I'd ask the list before I go too far with
> that idea :)
this is what I do. I define the custom highlight delimiters as
[
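The standard parameters for this are hl.simple.pre and hl.simple.post; a solrconfig.xml sketch with hypothetical marker strings:

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="hl">true</str>
    <!-- hypothetical delimiters; the client strips or replaces them -->
    <str name="hl.simple.pre">[[</str>
    <str name="hl.simple.post">]]</str>
  </lst>
</requestHandler>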
--
Christian Vogler, Ph.D.
Institute for Language and Speech Processing, Athens, Greece
Hi,
I am using Solr 1.2.0 with a custom compound word analyzer, which inserts the
decompositions into the token stream. Because I assume that when the user
queries for a compound word, he is interested only in whole-word matches, I
have it enabled only in my index analyzer chain.
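A schema.xml sketch of that setup, using the stock solr.DictionaryCompoundWordTokenFilterFactory as a stand-in for the custom analyzer (dictionary file name hypothetical); the decompounding filter appears only in the index-time chain:

<fieldType name="text_compound" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- inserts decompositions into the token stream at index time only -->
    <filter class="solr.DictionaryCompoundWordTokenFilterFactory" dictionary="compound-dict.txt"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>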
However, due
On Wednesday 27 February 2008 03:58:14 Chris Hostetter wrote:
> I'm not much of a highlighter expert, but this *seems* like it was probably
> intentional ... you are talking about the use case where you have a stored
> field and no term positions, correct? ... so in order to highlight, the
> highligh
On Monday 10 March 2008 19:34:09 Eric Falconnier wrote:
> I am beginning to use the python client from the subversion
> repository. Everything works well except if I want to pass a parameter
> with a dot to the search method of the SolrConnection class (for
> example facet.field). The solution I ha
On Monday 24 March 2008 01:01:59 Leonardo Santagada wrote:
> I have done some modifications to the Solr python client [1]; we kept
> the same license, so my work could be put back into Solr. I think if
> more people are interested we could improve the
> module a lot.
Have you taken a
On Friday 28 March 2008 21:44:29 Leonardo Santagada wrote:
> Well, his examples are in Brazilian Portuguese, not Spanish, and the
> biggest problem is that a Spanish stemmer is not going to work. I
> haven't found a pt_BR stemmer; have I overlooked something?
Try the Snowball Porter filter factory.
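That is, in the analyzer chain of your text field type; Snowball ships a Portuguese stemmer, though how well it copes with pt_BR specifics is another question:

<filter class="solr.SnowballPorterFilterFactory" language="Portuguese"/>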
Best regards
- Christian
--
Christian Vogler, Ph.D.
Institute for Language and Speech Processing, Athens, Greece
http://gri.gallaudet.edu/~cvogler/
[EMAIL PROTECTED]
On Wednesday 28 May 2008 01:37:57 Otis Gospodnetic wrote:
> If you have tokenized fields of variable size and you want the field length
> to affect the relevance score, then you do not want to omit norms.
> Omitting norms is good for fields where length is of no importance (e.g.
> gender="Male" vs
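In schema.xml terms, the distinction looks roughly like this (field names hypothetical):

<!-- variable-length text where length should affect scoring: keep norms -->
<field name="body" type="text" indexed="true" stored="true" omitNorms="false"/>
<!-- a fixed, length-free value such as gender: safe to omit norms -->
<field name="gender" type="string" indexed="true" stored="true" omitNorms="true"/>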