: I updated with a patch. Is it possible to get this in soon cuz I
: have a client waiting on this.
I've posted some comments about your patch.
At the moment, the committers have started focusing on getting 1.2
released. Even if this were a really popular issue, it's a non-trivial
change th
If facet.analyzer is true, analyze; if false, don't analyze.
The reason I suggest this: Chinese words are not separated by spaces, so if
the field is analyzed, the facet values will change.
For now I will use a map as a workaround, until there is a facet.analyzer option.
--
regards
jl
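A common way to get this behaviour without a facet.analyzer flag is to copyField the analyzed field into an untokenized string field and facet on the copy. A sketch of the schema.xml changes, with hypothetical field names:

```xml
<!-- Sketch only; "title" and "title_facet" are hypothetical names. -->
<field name="title" type="text" indexed="true" stored="true"/>
<!-- Untokenized copy: facet values come back exactly as indexed,
     so analyzed (e.g. Chinese) text is not split into tokens. -->
<field name="title_facet" type="string" indexed="true" stored="false"/>
<copyField source="title" dest="title_facet"/>
```

Faceting on title_facet then returns whole field values rather than analyzed tokens.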
Hi Yonik:
I updated with a patch. Is it possible to get this in soon cuz I
have a client waiting on this.
Thanks again
-John
On 5/22/07, John Wang <[EMAIL PROTECTED]> wrote:
Hi Yonik:
Thank you again for your help!
I created an improvement item in jira (SOLR-243) on this.
-Jo
On 25-May-07, at 2:49 AM, Burkamp, Christian wrote:
Thierry,
If you always start from scratch you could even reset the index
completely (i.e. delete the index directory). Solr will create a
new index automatically at startup.
This will also make indexing and optimizing much faster for an
This would require some storage when the index is built to map between the
internal field name and the "display name" ... since this is not a Lucene
concept, it would have to be a higher-level concept that Solr writes to disk
directly -- there are currently no concepts like this, but that doesn't
mean
I had a similar issue with a heavy use of dynamic fields. You first want to get
those spaces out of there. Lucene does not like spaces in field names. So, I
just replaced the space with a rarely used character (ASCII 8 or something like
that). I did this in my indexing. And then I just translate
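The encode/translate step described above can be sketched as follows (the separator character is an assumption; any character that never occurs in real names works):

```python
# Hypothetical sketch of the approach above: replace spaces in dynamic
# field names before indexing, and reverse the mapping for display.
SEP = "\x08"  # a rarely used control character standing in for a space

def solrify(display_name: str) -> str:
    """Make a human-readable name safe as a Lucene/Solr field name."""
    return display_name.replace(" ", SEP)

def displayify(field_name: str) -> str:
    """Recover the display name from the indexed field name."""
    return field_name.replace(SEP, " ")
```

The mapping round-trips, so "Download Speed (MB/sec)" can be shown to users exactly as it was entered.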
I would normally agree but the problem is that I'm making very heavy use
of the dynamic fields and therefore don't really know what a record
looks like. Ie the only thing that knows about the data is the input
data itself. I've added logic to 'solrify' the input field names as
they come to me in
Will Johnson wrote:
Has anyone done anything interesting to preserve display values for
field names. Ie my users would like to see
Download Speed (MB/sec): 5
As opposed to:
ds:5
The general model has been to think of Solr like SQL... it is only the
database - display choices should be
Has anyone done anything interesting to preserve display values for
field names. Ie my users would like to see
Download Speed (MB/sec): 5
As opposed to:
ds:5
There are options for doing fancy encoding of field names, but those seem
less than ideal. What I'd really like to do is at
: Anyone encounter a problem when changing their hostname? (via
: /etc/conf.d/hostname or just the hostname command) I'm getting this error
: when going to the admin screen, I have a feeling it's a simple fix. It
: seems to work when it thinks the machine's name is just 'localhost'.
i don't thi
We're controlling this with Tomcat configuration on our end. I'm not a
servlet-container guru, but I would imagine similar capabilities exist on
Jetty, et al.
-- j
On 5/24/07, Ryan McKinley <[EMAIL PROTECTED]> wrote:
Is there a good way to force an index to be read-only?
I could configure a
On 5/25/07, Ethan Gruber <[EMAIL PROTECTED]> wrote:
Posting utf8-example.xml is the first thing I tried when I ran into this
problem, and like the other files I had been working with, query results
come back with garbage characters in place of the Unicode characters.
After posting utf8-example.xml, try this query:
htt
Didn't somebody talk about providing Solr with a custom (subclass of)
IndexReader here on the list the other day? Perhaps then a ReadOnlyIndexWriter
with appropriately overridden delete methods might be one approach to this.
Or chmod -w? ;)
Otis
Posting utf8-example.xml is the first thing I tried when I ran into this
problem, and like the other files I had been working with, query results
come back with garbage characters in place of the Unicode characters.
On 5/25/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
On 5/25/07, Ethan Gruber <[EMAIL PROTECTED]> wrote:
Yes, it's definitely encoded in UTF-8. I'm going to attempt either today or
Tuesday to post the files to a solr index that is online (as opposed to
localhost as was my case a few days ago) using post.sh through SSH and let
you know how it turns
Just to be clear, [* TO *] does not necessarily return all
documents. It returns all documents that have a value in the
specified (or default) field. Be careful with that! *:*, however,
does match all documents.
Erik
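The distinction can be illustrated with a toy in-memory model (pure illustration, not Solr internals):

```python
# Toy model of the semantics: field:[* TO *] matches only documents
# that actually have a value in that field, while *:* matches all.
docs = [
    {"id": "1", "price": 10},
    {"id": "2"},  # has no price field
]

def match_all(documents):
    """Semantics of the *:* query: every document matches."""
    return list(documents)

def field_range_all(documents, field):
    """Semantics of field:[* TO *]: the field must be present."""
    return [d for d in documents if field in d]
```

With the two documents above, *:* matches both, but price:[* TO *] matches only the first.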
On May 25, 2007, at 5:49 AM, Burkamp, Christian wrote:
Thi
I think I had the same problem (the same error, at least) and submitted a
patch. The patch adds a new config option to use the NIO locking
facilities instead of the default Lucene locking. In the ~week since,
I haven't seen the issue after applying the patch (YMMV).
https://issues.apache.org/jira/b
Thanks Yonik.
Regards,
Doss.
On 5/25/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
On 5/24/07, Doss <[EMAIL PROTECTED]> wrote:
> Is it advisable to maintain a large amount of data in synonyms.txt file?
It's read into an in-memory map, so the only real impact is increased
RAM usage. There reall
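For reference, a sketch of what synonyms.txt entries look like — a comma-separated line declares an equivalence group, and => declares a one-way mapping:

```text
GB,gigabyte,gigabytes
television => tv
```

The whole file is loaded into the in-memory map at startup, which is why its size translates directly into RAM usage.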
Anyone encounter a problem when changing their hostname? (via
/etc/conf.d/hostname or just the hostname command) I'm getting this error
when going to the admin screen, I have a feeling it's a simple fix. It
seems to work when it thinks the machine's name is just 'localhost'.
org.apache.jasper.
Yes, it's definitely encoded in UTF-8. I'm going to attempt either today or
Tuesday to post the files to a solr index that is online (as opposed to
localhost as was my case a few days ago) using post.sh through SSH and let
you know how it turns out. That should definitely indicate whether or not
Hi, my name is Techan.
I want the equivalent of an RDBMS DISTINCT in Solr.
I want to be able to use it on any field.
However, I don't understand how to accomplish this in detail. Does anyone
know?
(I'm sorry about my English.)
Thierry,
If you always start from scratch you could even reset the index completely
(i.e. delete the index directory). Solr will create a new index automatically
at startup.
If you don't want to delete the files, another approach would be to use a
query that returns all documents. You do not nee
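A sketch of that delete-all approach, posting an update message over HTTP (the update URL is an assumption; adjust for your deployment, and note *:* rather than [* TO *] so that every document is matched):

```python
# Sketch: clear the index with a delete-by-query, then commit so the
# deletes become visible. The update URL is a hypothetical default.
import urllib.request

SOLR_UPDATE = "http://localhost:8983/solr/update"

def delete_all_xml() -> str:
    """Update message that deletes every document in the index."""
    return "<delete><query>*:*</query></delete>"

def post_xml(xml: str, url: str = SOLR_UPDATE) -> bytes:
    """POST one XML update message to Solr (needs a running server)."""
    req = urllib.request.Request(
        url,
        data=xml.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

After the delete, post `<commit/>` the same way to make the change visible to searchers.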
We always do a full delete before indexing, this is because for us that is
the only way to be sure that there are no documents in the index that don't
exist anymore.
So delete all, then add all.
To use the delete all, we did the following. We added a field called
dummyDelete. This field always c