Hello,
I tried to use SOLR-788 with Solr 1.4 so that distributed MLT works. While
working with this patch I got an error message like
1 out of 1 hunk FAILED -- saving rejects to file
src/java/org/apache/solr/handler/component/MoreLikeThisComponent.java.rej
Can anybody help me out?
T
Hi,
@Tommaso @Jan Høydahl Thanks for the response :)
I've done it almost the same way Tommaso suggested, and yes, it's about
70-80% accurate.
I understand the contradiction in search - customers find stuff without
the exact right wording (recall) at the same time as you want the query to
be
Greg,
You need to get stopword lists for your 6 languages. Then you need to create
new field types just like that 'text' type, one for each language. Point them
to the appropriate stopwords files and, instead of "English", specify each one
of your languages. You can either index each language
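A sketch of what one such per-language type might look like in schema.xml
(the type name and stopwords filename here are made up; repeat the pattern
with stopwords_es.txt, stopwords_de.txt, and so on):

```xml
<!-- Hypothetical French text type: same shape as the stock 'text' type,
     but pointing at a French stopword list. -->
<fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords_fr.txt"
            ignoreCase="true"/>
  </analyzer>
</fieldType>
```

Then declare one field per language using these types, or route documents to
the right field at index time.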
Hello,
I'm looking into Field Collapsing. According to the documentation one
limitation is that "distributed search support for result grouping has not yet
been implemented."
Just wondered if there's any plan to add distributed search support to field
collapsing. Or is there any technical obst
Checked out a version of 4.0 to test field collapsing. When I field
collapse, numFound always returns the number of documents BEFORE
collapsing. Is there a way to get the total number of documents after
collapsing?
Thanks
I downloaded Solr 4.0 from trunk today and tried using a custom
Evaluator during my full/delta import.
Within the evaluate method, though, the Context is always null. When
using this same class with Solr 1.4.1 the Context always exists. Is this
a bug, or is this behavior expected?
Thanks
Hello all,
I have gotten my DataImportHandler to index my data from my MySQL database. I
was looking at the schema tool and noticed that stopwords in different
languages are being indexed as terms. The 6 languages we have are English,
French, Spanish, Chinese, German and Italian.
Right now I
Excellent. Thanks, Robert!
-- Avi
On Mon, Feb 21, 2011 at 19:24, Robert Muir wrote:
> On Mon, Feb 21, 2011 at 12:16 PM, Avi Rosenschein
> wrote:
> > Is there any analyzer that can do full Unicode case folding (for example,
> as
> > described at
> >
> http://www.w3.org/International/wiki/Case_f
It looks like you are using MySQL.
The date field needs to be in yyyy-MM-dd'T'HH:mm:ss format.
I would probably convert the datetime in MySQL to a varchar() in this format.
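One way to do the conversion on the MySQL side is in the DIH query itself,
using DATE_FORMAT (the table and column names below are hypothetical):

```xml
<entity name="item"
        query="SELECT id,
                      DATE_FORMAT(updated_at, '%Y-%m-%dT%H:%i:%S') AS updated_at
               FROM items"/>
```

That way the string reaching Solr is already in the format its date field
expects, with no varchar column needed.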
On 2/21/11 8:40 AM, "MOuli" wrote:
>
>Hey guys.
>
>I want to evaluate Solr as search engine, but now I have got an
On Mon, Feb 21, 2011 at 12:16 PM, Avi Rosenschein
wrote:
> Is there any analyzer that can do full Unicode case folding (for example, as
> described at
> http://www.w3.org/International/wiki/Case_folding#Recommendations_for_Case_Folding
> )?
Hi, in branch_3x you can use the ICUNormalizer2FilterFac
Is there any analyzer that can do full Unicode case folding (for example, as
described at
http://www.w3.org/International/wiki/Case_folding#Recommendations_for_Case_Folding
)?
Specifically, in a German index, I would like the sharp s character (ß) to
be normalized into ss, which isn't done by any
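For comparison, here is a minimal Python illustration of what full case
folding does, which is roughly the behavior an ICU-based folding filter
provides inside the analysis chain (this is just a standalone sketch, not
Solr code):

```python
# Full Unicode case folding: str.casefold() applies the full case-folding
# mappings, so the German sharp s (ß) becomes "ss".
# Plain lower() only does simple case mapping and leaves ß alone.
print("Straße".casefold())  # -> strasse
print("Straße".lower())     # -> straße
```
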
Hey guys.
I want to evaluate Solr as a search engine, but now I get an "Invalid
Date String" exception.
Here is the error message:
WARNUNG: Error creating document :
SolrInputDocument[{machineId=machineId(1.0)={1151665},
priceBrutto=priceBrutto(1.0
Hi,
Adding and deleting fields is not something you do regularly in production, so
I assume you are in the development phase, in which case I'd suggest just
reindexing.
I'm not sure whether you get an error if you, say, request fl=MyOldFieldName
and the query returns documents without that fie
No Wease,
We got the performance improvement after doing the following:
--> Reduced the merge factor from 10 to 3.
--> Auto-warming queries as I mentioned in my initial thread.
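For reference, the two changes above map to solrconfig.xml settings roughly
like this (Solr 1.4-era config; the warming query below is a made-up
placeholder, not our actual query):

```xml
<!-- Lower merge factor: fewer segments, faster searches,
     at the cost of slower indexing. -->
<indexDefaults>
  <mergeFactor>3</mergeFactor>
</indexDefaults>

<!-- Auto-warm the caches whenever a new searcher is opened. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">popular query</str>
      <str name="start">0</str>
      <str name="rows">10</str>
    </lst>
  </arr>
</listener>
```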
Thanks,
Johnny
Hello
What about adding or deleting fields? I have been reindexing after doing that
but is it needed?
François
On Feb 21, 2011, at 7:16 AM, Otis Gospodnetic wrote:
> Hello,
>
> When you change types you typically want to reindex everything.
>
> Otis
>
> Sematext :: http://sematext.com/
Hi, I'm using solrpy to store PDF files; however, when running the script it
shows me this issue:
An invalid XML character (Unicode: 0xc) was found in the element content of
the document.
Could someone give me some help?
cheers
jayron
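The 0xc character (form feed) is common in extracted PDF text but illegal in
XML 1.0, so it has to be stripped before posting the document to Solr. A
minimal Python sketch (the function name is made up):

```python
import re

# Characters forbidden in XML 1.0 element content: C0 control characters
# other than tab (0x09), newline (0x0A), and carriage return (0x0D),
# plus DEL (0x7F). Form feed (0x0C) falls in this range.
_INVALID_XML = re.compile("[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def strip_invalid_xml(text: str) -> str:
    """Remove characters that are illegal in XML 1.0 element content."""
    return _INVALID_XML.sub("", text)

print(strip_invalid_xml("page\x0cbreak"))  # -> pagebreak
```

Run the extracted text through something like this before handing it to
solrpy.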
Hi,
I recommend browsing through the wiki at http://wiki.apache.org/solr/
You will learn a lot about Solr, much faster than asking every question on the
list.
http://wiki.apache.org/solr/HowToContribute#Working_With_Patches
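In short, applying a patch from JIRA usually boils down to something like
this, run from the root of a Solr source checkout (SOLR-788.patch is just an
example filename; the -p level depends on how the patch was created):

```shell
# Dry-run first to see whether any hunks would be rejected
patch -p0 --dry-run < SOLR-788.patch

# If that looks clean, apply for real, then rebuild
patch -p0 < SOLR-788.patch
ant dist
```

If hunks fail (as in the .rej error earlier in this thread), the patch was
most likely made against a different revision than the one checked out.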
--
Jan Høydahl, search solution architect
Cominvent AS - www.comin
Hi,
I am currently using Solr 1.4. I want to update it using patch files.
Can anyone tell me how to update it using these files?
Hi Marc,
Check hit#3: http://search-lucene.com/?q=free+synonyms&fc_project=Solr
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
> From: Marc Kalberer
> To: solr-user@lucene.apache.org
> Sent:
Hi Stijn,
Yes, there is a link to Login or create account at either the top or bottom
of every page.
Please do edit it directly if you see things that are incorrect or outdated.
If you share them in email, I'm 99.9% sure nobody will take the time to
transfer that to the Wiki.
If you are afraid
Hi Mark,
Check out the Wiki -
http://search-lucene.com/?q=nightly+builds&fc_project=Solr&fc_type=wiki
Now, these are nightly builds, not necessarily stable snapshots :) But after
you test them you can call them stable snapshots for your purposes/app.
Otis
Sematext :: http://sematext.com/
Hello,
When you change types you typically want to reindex everything.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
> From: Isha Garg
> To: solr-user@lucene.apache.org
> Sent: Mon, February
Hello Isha,
I strongly suggest:
1) a Hello :)
2) searching before asking, e.g.,
http://search-lucene.com/?q=%22more+like+this%22+mlt+%2Bdistributed&fc_project=Solr&fc_type=jira
3) Thanks/signature/something-human :)
I think people will be more inclined to help you that way.
Otis
Semate
Hi,
Facet counts in Solr are 100% accurate. That's not the problem.
The problem lies in how you configure search:
- In what fields do you search? If you search the product description and it
says "LED is much better than LCD", you get that product in your facet counts.
- What synonyms/stemming et
Hi Praveen,
as far as I understand, you have to set the type of the field(s) you are
searching over to be conservative.
So, for example, you won't include stemmer and lowercase filters and will use
only a whitespace tokenizer; moreover, you should search with the default
operator set to AND.
Then faceting
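A sketch of such a conservative type in schema.xml (the type name is made up):

```xml
<!-- Hypothetical "exact-ish" field type: whitespace tokenization only,
     no stemming and no lowercasing, so analysis can never conflate
     "lcd" with "led". -->
<fieldType name="text_exact" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```

Note the trade-off: without a lowercase filter, "Sony" and "sony" no longer
match either, which is part of the precision-vs-recall tension discussed
above.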
Hi,
Is it possible to have 100% accurate facet counts using Solr? Since
this is for a product price comparison site, I need the search to
return accurate results. For example, if I search "sony lcd tv" I do not want
"sony led tv" to be returned in the results. Please let me know if this
Can anyone tell me whether distributed indexing/sharding supports the
MoreLikeThis feature of Solr?