On Mon, Jan 26, 2009 at 12:20 PM, Parisa wrote:
>
> Is there any solution for fixing this bug?
I don't think it is ...
On Mon, Jan 26, 2009 at 3:41 AM, Paul Libbrecht wrote:
>
> Is it common practice to use the maven war-overlay function so as to build,
> mostly, a solr webapp but with some added servlets and a few more classes
> (e.g. my own analyzers)?
Your own analyzers can be added w/o modifying the solr webapp itself.
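For reference, a minimal sketch of what such an added analyzer could look
like (class name and filter chain are invented for illustration); with Solr
1.3 the compiled jar can simply be dropped into <solrHome>/lib and referenced
by class name in schema.xml, with no war rebuild:

    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceTokenizer;

    // Hypothetical example: a trivial analyzer that lowercases
    // whitespace-separated tokens.
    public class MyAnalyzer extends Analyzer {
        public TokenStream tokenStream(String fieldName, Reader reader) {
            return new LowerCaseFilter(new WhitespaceTokenizer(reader));
        }
    }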
The number of documents varies - sometimes it increases, sometimes it
decreases - month to month.
However, the index size increases monotonically.
I was expecting some gradual growth, as I expect Lucene retains terms that
are no longer referenced from any documents, so you'll end up with the
superset of terms ...
On Jan 25, 2009, at 6:06 PM, James Brady wrote:
> Hi,
> I have a number of indices that are supposed to maintain "windows" of
> indexed content - the last month's worth of data, for example.
> At the moment, I'm cleaning out old documents with a simple cron job
> making requests like:
> date_added:[* TO NOW-30DAYS]
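To verify that theory, a SolrJ sketch (placeholder URL) that forces the
merge: optimize() rewrites the index, dropping data belonging only to
deleted documents, so the space held by unreferenced terms should come back:

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class OptimizeSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL; adjust to your instance.
            SolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            // optimize() merges segments and expunges deleted documents,
            // so disk usage should fall back after old docs are purged.
            server.optimize();
        }
    }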
Chris,
Sorry about avoiding the shingle part, but:
> ... boo boo bar car la la la car bar bar bar ...
>
> This too doesn't seem to happen if I disable bigram indexing.
I've seen the same thing with bigram tokens (not shingles) and reported it:
https://issues.apache.org/jira/browse/LUCENE-1489
Then I wrote ...
Hi,
I have a number of indices that are supposed to maintain "windows" of
indexed content - the last month's worth of data, for example.
At the moment, I'm cleaning out old documents with a simple cron job making
requests like:
date_added:[* TO NOW-30DAYS]
I was expecting disk usage to plateau ...
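For what it's worth, the same cron request expressed as a SolrJ sketch
(placeholder URL; the field name is the one from the message). Note disk
space is only reclaimed once the affected segments are merged away, e.g. by
the optimize() shown earlier:

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class PurgeOldDocs {
        public static void main(String[] args) throws Exception {
            SolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            // Same window as the cron request: everything older than 30 days.
            server.deleteByQuery("date_added:[* TO NOW-30DAYS]");
            server.commit();
        }
    }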
I don't know of any standard export/import tool -- I think Luke has
something, but it will be faster if you write your own.
Rather than id:[* TO *], just try *:* -- this should match all
documents without using a range query.
On Jan 25, 2009, at 3:16 PM, Ian Connor wrote:
> Hi,
> Given the ...
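A rough SolrJ sketch of the *:* approach - page through the match-all query
and re-save each document (placeholder URL, arbitrary page size). Note that
only stored fields come back, so this works as a reindexing strategy only if
every field you need is stored:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class DumpAllDocs {
        public static void main(String[] args) throws Exception {
            SolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            int rows = 500;  // arbitrary page size
            SolrQuery q = new SolrQuery("*:*");  // match-all, no range query
            q.setRows(rows);
            for (int start = 0; ; start += rows) {
                q.setStart(start);
                QueryResponse rsp = server.query(q);
                if (rsp.getResults().isEmpty()) {
                    break;
                }
                for (SolrDocument doc : rsp.getResults()) {
                    // re-save / reindex doc here (stored fields only)
                    System.out.println(doc.getFieldValue("id"));
                }
            }
        }
    }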
Is it common practice to use the maven war-overlay function so as to
build, mostly, a solr webapp but with some added servlets and a few
more classes (e.g. my own analyzers)?
thanks in advance
paul
Hi,
Given that the only real way to reindex is to save the document again, what
is the fastest way to extract all the documents from a Solr index to resave
them?
I have tried the id:[* TO *] trick; however, it takes a while once you get a
few thousand into the index. Are there any tools that will quickly ...
Hi Noble,
Great stuff, no problem. I really think the Solr development team is
excellent and takes pride in delivering high-quality software!
And we're going into production with a brand new Solr based system in a few
weeks as well, so I'm really happy that this is fixed now.
Bye,
Jaco.
I even tried the solr client (which communicates in binary) and a
Reader is converted... with toString()!
Yes, something of the sort you describe below is what I'm looking for.
I think a URL would be a safe bet for many applications.
paul
On 25 Jan 2009, at 14:39, Yonik Seeley wrote:
> I have ...
Yup, it is exactly this thing.
Setting it up in Tomcat's server.xml
solved the problem!
Otis Gospodnetic wrote:
>
> Sergey,
>
> Could it be the wrong character encoding set in your servlet container's
> config?
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
Sergey,
Could it be the wrong character encoding set in your servlet container's config?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
Hello! I've encountered the following problem:
When I try to run a test query with Russian letters from the Solr admin
interface, it does not work. Example:
I type in "преступление" (Russian for "crime") and get the following query:
...
пÑеÑÑÑпление
...
The analysis tool, though, works fine with this word: ...
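That garbled output is the classic signature of UTF-8 bytes being decoded as
a single-byte encoding; Tomcat decodes request URIs as ISO-8859-1 unless
URIEncoding="UTF-8" is set on the connector, which is presumably the
server.xml change mentioned above. A small standalone demonstration of the
effect:

    public class MojibakeDemo {
        public static void main(String[] args) throws Exception {
            String word = "преступление";
            // Encode correctly as UTF-8...
            byte[] utf8 = word.getBytes("UTF-8");
            // ...then decode as ISO-8859-1, as a mis-configured container
            // does: every Cyrillic letter becomes two Latin-1 characters,
            // producing mojibake like the query string above.
            System.out.println(new String(utf8, "ISO-8859-1"));
        }
    }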
On Sat, Jan 24, 2009 at 6:30 PM, Paul Libbrecht wrote:
> Is it good practice to post large solr update documents?
> (e.g. 100kb-2mb).
> Will solr do the necessary tricks to make the field use a reader instead of
> strings?
Solr will stream a *document* at a time from the input stream fine,
but it ...
Paul,
It's not just about merging the fields or resource usage. If you look
at the scenario below, the issue is that it mixes up my fields
(shipping and billing address), for instance. I can't merge them and
still keep the 'distinction' for search. Your case is a
'generalization' field.
Thanks,
I much appreciate the guidance. I think I will go with the single-field
approach for now. I will also take a look at the URL below and come
back if I have any ideas.
Guna
On Jan 25, 2009, at 12:49 AM, Shalin Shekhar Mangar wrote:
On Sun, Jan 25, 2009 at 2:05 PM, Gunaranjan Chandraraju wrote:
Guna,
it's really quite normal to duplicate content to be merged into a field.
We do this all the time, for example to have a field
"text-in-any-language" while a field "text-in-english" is also there,
and the queries boost matches in text-in-any-language less than in
text-in-english (if the user ...
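A sketch of what the query side of that setup might look like with dismax
(boost values invented; field names as in Paul's example):

    import org.apache.solr.client.solrj.SolrQuery;

    public class BoostedQuerySketch {
        public static SolrQuery build(String userInput) {
            SolrQuery q = new SolrQuery(userInput);
            q.set("defType", "dismax");
            // Invented boosts: matches in the English field count four
            // times as much as matches in the catch-all field.
            q.set("qf", "text-in-english^2.0 text-in-any-language^0.5");
            return q;
        }
    }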
On Sun, Jan 25, 2009 at 2:05 PM, Gunaranjan Chandraraju <
chandrar...@apple.com> wrote:
> Thanks
> This sounds redundant to me - to store the fields separately and then
> concat all of them to one copy field again.
>
Sometimes that may be the only way. For example, if you want to facet on
some of the fields ...
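A facet-query sketch along those lines (field name taken from the message
below); faceting needs the discrete field, while the concatenated copy is
only useful for full-text search:

    import org.apache.solr.client.solrj.SolrQuery;

    public class FacetSketch {
        public static SolrQuery build() {
            SolrQuery q = new SolrQuery("*:*");
            q.setFacet(true);
            // Facet on the discrete field, not the concatenated copy.
            q.addFacetField("address_state_1");
            return q;
        }
    }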
Thanks,
This sounds redundant to me - to store the fields separately and then
concat all of them into one copy field again.
My XML is like this: ...
I am currently using XPath or XSL to separate them into individual
indexed fields like address_state_1, address_type_1, etc. in Solr.
From what you ...
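For illustration, that client-side splitting might look roughly like this in
Java (element names and XPath expressions are invented, since the XML sample
did not survive the archive):

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.apache.solr.common.SolrInputDocument;
    import org.w3c.dom.Document;

    public class XmlToSolrSketch {
        public static SolrInputDocument convert(String file) throws Exception {
            Document xml = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(file);
            XPath xpath = XPathFactory.newInstance().newXPath();
            SolrInputDocument doc = new SolrInputDocument();
            // Invented paths: map the first address block to numbered fields.
            doc.addField("address_state_1",
                    xpath.evaluate("/record/address[1]/state", xml));
            doc.addField("address_type_1",
                    xpath.evaluate("/record/address[1]/@type", xml));
            return doc;
        }
    }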