Hello, I am beginning with Solr and I have a problem with delete-by-query.
If I run the query against Solr it gives me the results I expect, but when the
same query is sent by XML as a delete, Solr doesn't erase the documents from the index.
The query contains "!·$%&=¿^¨Ç_;^Ǩ_;;;[EMAIL PROTECTED]@[EMAIL PROTECTED]"
Hi,
I'm currently in the middle of converting my index from the old
spellchecker request handler to the spellcheck component. My index has a
category field and my frontend only allows searching in one category at a
time, so I have a spellchecker request handler for each category in order
to present
I figured it out myself.
The parameter is called "spellcheck.dictionary". This needs to be set to the
desired spellchecker name.
I'll update the wiki with this information.
Best regards,
Stefan
-Original Message-
From: Stefan Oestreicher [mailto:[EMAIL PROTECTED]
Sent: Wed
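For anyone following along, a minimal sketch of what the per-category setup might look like in solrconfig.xml; the spellchecker names and field names here are illustrative, not from the original mail:

```xml
<!-- one <lst name="spellchecker"> per category; the "name" value is
     what spellcheck.dictionary selects at query time -->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">books</str>
    <str name="field">spell_books</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">music</str>
    <str name="field">spell_music</str>
  </lst>
</searchComponent>
```

A query would then select one dictionary with, e.g., `spellcheck=true&spellcheck.dictionary=books`.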
are you sure you committed after the 'delete' ?
On Wed, Sep 10, 2008 at 2:26 PM, Athok <[EMAIL PROTECTED]> wrote:
>
> Hello, I am beginning with Solr and I have a problem with delete-by-query.
> If I run the query against Solr it gives me the results I expect, but when the
> query is sent by XML as a
Yes, when the file is indexed, I send a commit
Noble Paul നോബിള് नोब्ळ् wrote:
>
> are you sure you committed after the 'delete' ?
>
> On Wed, Sep 10, 2008 at 2:26 PM, Athok <[EMAIL PROTECTED]> wrote:
>>
>> Hello, I am beginning with Solr and I have a problem with the delete by
>> query.
>> If I
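To make the commit question concrete: the delete and the commit are separate update messages, and deleted documents stay visible until a commit is issued after the delete. A minimal sketch of the two messages posted to the /update handler (the query value is illustrative):

```xml
<!-- first message: mark all documents matching the query as deleted -->
<delete><query>field:value</query></delete>
<!-- second message: make the deletes visible to searchers -->
<commit/>
```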
Hi, Is there a way to force Solr/Lucene to return a given number of
documents in multiple categories? Faceting doesn't seem to be what I want
because it only returns category names plus item counts. In my case I want to
specify the categories that I want and a maximum number of items to retrieve
in each
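One common workaround, rather than faceting, is to issue one filtered query per category and cap each with rows; a sketch of the request parameters (field names and values illustrative):

```
q=some+query&fq=category:books&rows=10
q=some+query&fq=category:music&rows=10
```

Each request returns up to `rows` full documents for its category, at the cost of one query per category.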
The only thing I can suggest is that each and every Query in Solr/
Lucene is an example of custom scoring. You might be better off
starting w/ TermQuery and working through PhraseQuery, BooleanQuery,
on up. Once you get to DisjunctionMaxQuery, then ask questions
about that specific one.
On Sep 5, 2008, at 6:27 PM, Ravindra Sharma wrote:
Hi Folks,
I have somewhat complex scoring/boosting requirement.
Say I have 3 text fields A, B, C and a Numeric field called D.
Say my query is "testrank".
Scoring should be based on following:
Query matches
1. text fields A, B and C, & High
Hi All,
We have a cluster of 4 servers for the application and just one
server for Solr. We have just about 2 million docs to index and we never
bothered to make the Solr environment clustered, as Solr was delivering
performance with the current setup itself. Of late we just discovered
We do both #2 and #4 from the Wiki page. If the schemas have a lot of
overlap and you don't foresee the need to scale to multiple machines (either
due to index size or amount of traffic), it may be best to put all the data
in a single index with different type fields (#4); this certainly minimizes
Have you tried performing an "optimize"? Solr doesn't seem to fully
integrate all updates into a single index until an optimize is performed.
Jason
On Wed, Sep 10, 2008 at 1:05 PM, sundar shankar <[EMAIL PROTECTED]>wrote:
> Hi All,
> We have a cluster of 4 servers for the application a
: > query = (A:testrank AND B:testrank AND C:testrank)^10 OR (A:testrank AND
: > B:testrank)^9 OR (A:testrank AND C:testrank)^8 OR (B:testrank AND
: > C:testrank)^7 OR (A:testrank)^6 OR (B:testrank)^5 OR (C:testrank)^4
: > sort = by Score (primary), Field D (Secondary)
: >
: > Also, I do need to
I would like to use (abuse?) the dataimporter.last_index_time variable in my
full-import query, but it looks like that variable is only set when running
a delta-import.
My use case:
I'd like to use a stored procedure to manage how data is given to the
DataImportHandler so I can gracefully handle
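For context, this is how the variable is normally consumed during a delta-import in a DataImportHandler data-config.xml (table and column names are illustrative):

```xml
<!-- dataimporter.last_index_time is substituted into the SQL at run time -->
<entity name="item"
        deltaQuery="select id from item
                    where last_modified > '${dataimporter.last_index_time}'">
</entity>
```

The question here is about getting the same substitution in the full-import `query` attribute, where the variable is not set.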
I had an optimize earlier, but removed it as it was too grueling and very time
consuming. Is there a way to configure auto-optimize in Solr? A setting that
optimizes the index after some time or after some number of records, similar to
what we have for commit?
> Date: Wed, 10 Sep 2008 14:37:11 -0400
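There is no direct auto-optimize setting; the closest knob is the merge policy in solrconfig.xml, where a lower mergeFactor keeps the segment count down at the cost of more merge work during indexing (the value shown is illustrative):

```xml
<mainIndex>
  <!-- lower values mean fewer segments (faster searches, fewer file
       handles) but more merging at index time -->
  <mergeFactor>10</mergeFactor>
</mainIndex>
```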
Optimize solved it. Thanks Jason. I am surprised at why Solr does this.
> Date: Wed, 10 Sep 2008 14:37:11 -0400
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Question on how index works - runs out of disk space!
>
> Have you tried performing an "optimize"? Solr do
: I need to implement a Query similar to DisjunctionMaxQuery, the only
: difference would
: be it should score based on sum of score of sub queries' scores instead of
: max.
BooleanQuery computes scores that are the sum of the subscores -- you just
need to disable the coord factor (there is a con
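In Lucene's scoring, a BooleanQuery's score is roughly coord(q, d) × the sum of the subquery scores, so a coord factor fixed at 1 reduces it to the plain sum. A sketch against the Lucene 2.x-era API (pseudocode, untested; class name is illustrative):

```
// return 1.0 so BooleanQuery scores become a plain sum of subscores
class SumSimilarity extends DefaultSimilarity {
    public float coord(int overlap, int maxOverlap) { return 1.0f; }
}
searcher.setSimilarity(new SumSimilarity());
```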
On 5-Sep-08, at 5:01 PM, Ravindra Sharma wrote:
I am looking for an example if anyone has done any custom scoring with
Solr/Lucene.
I need to implement a Query similar to DisjunctionMaxQuery, the only
difference would
be it should score based on sum of score of sub queries' scores
instead of
: Optimize solved it. Thanks Jason. I am surprised at why Solr does this.
this gets into some complicated discussions about the underlying Lucene
index format; this is discussed at a very low level in the Lucene docs...
http://lucene.apache.org/java/2_3_2/fileformats.html
...but at a
That's brilliant. I am just starting to wonder if there is anything at all
that you guys haven't thought about ;) Thanks, that setting should be
really useful.
> Date: Wed, 10 Sep 2008 15:26:57 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Question on how index works
I created a JIRA issue for this and attached a patch:
https://issues.apache.org/jira/browse/SOLR-768
wojtekpia wrote:
>
> I would like to use (abuse?) the dataimporter.last_index_time variable in
> my full-import query, but it looks like that variable is only set when
> running a delta-import.