Hi buddy,
I am working on a music search project and I have a special requirement.
It's about the ranking when querying by artist name.
In my project I collect a lot of music records from different sources; some
have the same song name but differ in size or source.
I want to implement that effec
: I was wondering if there was an option to initialize Solr server with
: synonyms pulled from a database while indexing documents? At the moment, the
: only option seems to be to use a flat file.
loading from a text file was implemented because it was very generic and
universal (almost anyone c
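One workaround, sketched below purely as an illustration (the JDBC URL, credentials and the synonyms table with term/synonym columns are all hypothetical): export the database rows into the flat synonyms.txt that SynonymFilterFactory reads, then reload the core so the regenerated file is picked up.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SynonymFileExporter {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL, credentials and table layout -- adjust to your database.
        Connection con = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT term, synonym FROM synonyms");
        BufferedWriter out = new BufferedWriter(new FileWriter("solr/conf/synonyms.txt"));
        while (rs.next()) {
            // One mapping per line, in the flat-file format SynonymFilterFactory expects.
            out.write(rs.getString("term") + " => " + rs.getString("synonym"));
            out.newLine();
        }
        out.close();
        rs.close();
        st.close();
        con.close();
        // Afterwards, reload the core (or restart Solr) so the regenerated file is re-read.
    }
}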
This thread got buried last night, so... bump.
pof wrote:
>
> Hi, I was wondering if anyone has had luck deleting added documents from
> SolrQueryResponse? I am subclassing StandardRequestHandler and after I run
> the handle request body method (super.handleRequestBody(req, rsp);) I want
> to filte
: currently the terms component does not support filter queries. However,
: without them the returned count for the terms might differ from the actual
: results the user gets when conducting a search with a suggested word and
: (automatically) applied filter queries.
:
: So, are there any plans to
: Date: Fri, 12 Jun 2009 10:17:14 -0300
: From: hpn1975 nasc
: Subject: Identification of matching by field
This question doesn't seem to have anything to do with Solr. I would
suggest you ask it on the java-us...@lucene mailing list.
:
: Hi,
:
: Is it possible to identify the docId of a document wh
: Date: Tue, 16 Jun 2009 21:26:37 +
: From: siping liu
: Subject: DisMaxRequestHandler usage
: q=(field1:hello OR field2:hello) AND (field3:world)
:
: Can I use dismax handler for this (applying the same search term on
: field1 and field2, but keep field3 with something separate)? If it can
: Date: Fri, 12 Jun 2009 09:09:43 -0700 (PDT)
: From: Vincent Pér
: Subject: Stats for all documents and not current search
: I need to retrieve the stats of my index (using StatsComponent). It's not a
: problem when my query is empty, but the stats are updated according to the
: current search...
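One common workaround, offered here only as a sketch (SolrJ 1.4-era client class; "price" is a hypothetical stats field): fire a second request that matches all documents with rows=0, so the StatsComponent reports over the whole index regardless of the user's current search.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class IndexWideStats {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("*:*");   // match everything, independent of the user's search
        query.setRows(0);                         // we only want the stats, not the documents
        query.set("stats", true);
        query.set("stats.field", "price");        // hypothetical field name
        QueryResponse rsp = server.query(query);
        System.out.println(rsp.getResponse().get("stats"));
    }
}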
: Date: Thu, 11 Jun 2009 16:18:01 +1200
: From: Nick Jenkin
Nick: in the example you posted, there is really no advantage to using
something like dismax at all. The purpose of the dismax parser is to
search for the input words across multiple fields with different
weights using a Disjunc
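For illustration, a minimal SolrJ sketch of that setup (field names field1/field2/field3 come from the question above; the boosts and the dismax configuration in solrconfig.xml are assumptions): dismax spreads the user's words over field1 and field2 with per-field weights, while the field3 restriction stays in a filter query.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class DismaxExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("hello");          // the raw user input
        query.set("defType", "dismax");                    // or point qt at a dismax-configured handler
        query.set("qf", "field1^2.0 field2");              // DisjunctionMaxQuery across these fields, with weights
        query.addFilterQuery("field3:world");              // the separate restriction stays out of the dismax part
        System.out.println(server.query(query).getResults().getNumFound());
    }
}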
: Date: Thu, 11 Jun 2009 14:02:48 +0700
: From: chem leakhina
: Subject: How to copyField to reverse a string in another field
I'm not sure if you figured this out yet (I didn't see any answers so far)
but you can use the ReverseStringFilterFactory in the "index" analyzer for
the dest field of yo
Hi
I have the following fieldType that processes korean/chinese/japanese text
When I supply korean words/phrases in the query, I do get several expected
Korean URLs as search results, and my keywords are correctly highlighted
in the excerpt. B
It's simply the initial size given to a Map implementation so that its internal
array bucket is already sized up. See HashMap constructor args for more info.
Frankly, I think this is an example of option-itis - too many configuration
options.
~ David Smiley
On 6/30/09 4:03 PM, "Phillip Farber
The exception SOLR raises is :
org.apache.lucene.queryParser.ParseException: Cannot parse
'vector:_*[^_]*_[^_]*_[^_]*': Encountered "]" at line 1, column 12.
Was expecting one of:
"TO" ...
...
...
Ben wrote:
Passing in a RegularExpression like "[^_]*_[^_]*" (e.g. matching
anyth
Passing in a RegularExpression like "[^_]*_[^_]*" (e.g. matching
anything with an underscore in the string) using some code like :
...
parameters.add("fq", "vector:[^_]*_[^_]*");
...
seems to cause problems for SOLR, I assume because of the [ or ^ character.
Can somebody please advise how to h
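The brackets and caret are query-parser syntax (for example [a TO b] range queries), and the query parser does not treat fq values as regular expressions, so those characters must be backslash-escaped if they are meant literally. A hand-rolled escaping sketch follows; recent SolrJ releases also ship a similar ClientUtils.escapeQueryChars helper, but whether your version has it is an assumption to verify.

public class QueryEscaper {
    // Backslash-escape the characters the Lucene query parser treats as syntax.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if ("\\+-!():^[]\"{}~*?|&;/".indexOf(c) >= 0 || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. "[^_]*_[^_]*" becomes "\[\^_\]\*_\[\^_\]\*" and is parsed as a literal term
        System.out.println("fq=vector:" + escape("[^_]*_[^_]*"));
    }
}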
I'm trying to understand the purpose of the initialSize parameter for
the queryResultCache and documentCache. Is it correct that it controls
how much heap is allocated to each cache at startup?
I can see how it makes sense for queryResultCache since it is documented
as an "ordered lists of
Sorry, but on a closer look this allocation appears to make sense given
our solrconfig parameters. Thanks for taking the time to reply.
Regards,
Phil Farber
Otis Gospodnetic wrote:
Hello,
If Solr is your only webapp in that container, then this is probably a Solr issue. Note that
"Solr iss
: We've had a couple of people ask for scores as percentages. If you really
: want this, it should be just barely possible, though it would take some
: coding. You calculate the maximum possible score, then report scores
: normalized against that.
Right ... this idea has come up on the java-user
On Jun 30, 2009, at 2:02 PM, Walter Underwood wrote:
In Ultraseek, each collection (core) has a gathering method
associated with
it, for example, a web spider. There were two special collection
types, a
mirror collection, which copied an index from a different server,
and a
merge collectio
In Ultraseek, each collection (core) has a gathering method associated with
it, for example, a web spider. There were two special collection types, a
mirror collection, which copied an index from a different server, and a
merge collection, which combined other collections.
See the "Creating Collec
>
> On Jun 30, 2009, at 1:51 PM, dar...@ontrenet.com wrote:
>> It seems the merge index request (admin or lucene tool) expects the
>> indexes to already be local.
>>
>> Maybe in the future I can specify the URL to the remote indexes for
>> Solr
>> to merge. For now, I will find a way maybe using rs
On Jun 30, 2009, at 1:51 PM, dar...@ontrenet.com wrote:
It seems the merge index request (admin or lucene tool) expects the
indexes to already be local.
Maybe in the future I can specify the URL to the remote indexes for
Solr
to merge. For now, I will find a way maybe using rsync or scp or
s
Thank you. That's what I need I think, but the indexes are created on
other computers.
It seems the merge index request (admin or lucene tool) expects the
indexes to already be local.
Maybe in the future I can specify the URL to the remote indexes for Solr
to merge. For now, I will find a way may
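Once the shard directories have been copied locally (rsync/scp as mentioned above), a plain Lucene-level merge is one possible approach; the sketch below assumes the Lucene 2.9-era API bundled with Solr 1.4 and uses placeholder paths.

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeShards {
    public static void main(String[] args) throws Exception {
        // args[0] = destination index dir; args[1..n] = shard index dirs copied from the remote machines
        IndexWriter writer = new IndexWriter(FSDirectory.open(new File(args[0])),
                new StandardAnalyzer(Version.LUCENE_29), true, IndexWriter.MaxFieldLength.UNLIMITED);
        Directory[] shards = new Directory[args.length - 1];
        for (int i = 1; i < args.length; i++) {
            shards[i - 1] = FSDirectory.open(new File(args[i]));
        }
        writer.addIndexesNoOptimize(shards);   // merge the copied shards into the destination index
        writer.optimize();
        writer.close();
    }
}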
On Tue, Jun 30, 2009 at 10:39 PM, wrote:
> I read the distributed search/indexing wiki page but wondered if there
> is a best-practice for re-combining distributed shards back into a
> single local index. I have many computers generating indexes that cost
> money when running, but want to merge al
Ludwig,
Thank you for your prompt answer and I could solve the issue.
Then I have another question.
When I search "make for", solr returns words include both "make" and "for",
but when I type more than 3 words such as "in order to", the result becomes
0 though the index is sure to have several w
akinori schrieb:
I indexed English dictionary to solr.
When I search "apple juice" for example, solr understands the query is
"apple" & "juice" as what I want. Howerver, when I search "apple for",
solr thinks that the query is just "apple".
How can I solve this? I think I have to understand the a
: Date: Tue, 16 Jun 2009 20:38:24 +0530
: From: Avlesh Singh
: Subject: Re: Query parameter encode issue
: > qryString = "+text:test +site_id:(4 ) +publishDate:[2008-05-01T00\:00\:00Z
: > TO 2009-06-30T00\:00\:00Z]";
: > URLEncoder.encode(qryString, "UTF-8");
: >
:
: You don't have to encode the
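For what it's worth, when the request goes through SolrJ the client URL-encodes the parameters itself, so only the query-syntax escaping (the backslashes before the colons in the date range) is still needed. A sketch, reusing the query string quoted above:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class DateRangeQuery {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // Keep the backslashes that escape the colons for the query parser;
        // SolrJ URL-encodes the parameter itself, so no URLEncoder call is needed.
        String qryString = "+text:test +site_id:(4) "
                + "+publishDate:[2008-05-01T00\\:00\\:00Z TO 2009-06-30T00\\:00\\:00Z]";
        System.out.println(server.query(new SolrQuery(qryString)).getResults().getNumFound());
    }
}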
Hi,
I read the distributed search/indexing wiki page but wondered if there
is a best-practice for re-combining distributed shards back into a
single local index. I have many computers generating indexes that cost
money when running, but want to merge all the indexes so I can terminate
the servers.
I indexed English dictionary to solr.
When I search "apple juice" for example, solr understands the query is
"apple" & "juice" as what I want. Howerver, when I search "apple for", solr
thinks that the query is just "apple".
How can I solve this? I think I have to understand the analyzer. Could
an
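This is usually stop-word removal in the field's analyzer: words such as "for" are dropped at both index and query time, so nothing is left of them to match. A small Lucene-level sketch (assuming a 2.9-era Lucene and StandardAnalyzer's default English stop list, which may differ from your field's configuration):

import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.Version;

public class StopwordDemo {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_29);
        TokenStream ts = analyzer.tokenStream("text", new StringReader("apple for"));
        TermAttribute term = ts.addAttribute(TermAttribute.class);
        while (ts.incrementToken()) {
            System.out.println(term.term());   // prints only "apple"; "for" was removed as a stop word
        }
        ts.close();
    }
}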
: Date: Mon, 15 Jun 2009 16:30:38 -0500
: From: "Mukerjee, Neiloy (Neil)"
: Subject: RE: Using The Tomcat Container
:
: I followed the steps detailed in this tutorial:
:
http://justin-hayes.com/2009-04-08/installing-apache-tomcat-6-and-solr-nightly-on-ubuntu-804
:
: When I get to the point at
: Date: Tue, 9 Jun 2009 16:04:03 -0700 (PDT)
: From: JCodina
: Subject: facets and stopwords
: I have a text field from where I remove stop words, as a first approximation
: I use facets to see the most common words in the text, but... stopwords are
: there, and if I search for documents having the st
: Date: Thu, 11 Jun 2009 11:25:28 +0800
: From: James liu
: Subject: does solr support summary
:
: if a user uses a keyword to search and gets a summary (auto-generated from the
: keyword)... like this
...in the Lucene/Solr ecosystem, this is referred to as "highlighting" ... if
you search the docs, code, and
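A minimal SolrJ sketch that turns highlighting on (the field name "text" and the server URL are assumptions; the calls map to the hl, hl.fl and hl.snippets parameters):

import java.util.List;
import java.util.Map;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class HighlightExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("keyword");
        query.setHighlight(true);              // hl=true
        query.addHighlightField("text");       // hl.fl=text (hypothetical field)
        query.setHighlightSnippets(2);         // hl.snippets=2
        QueryResponse rsp = server.query(query);
        // Map of document id -> (field -> highlighted fragments)
        Map<String, Map<String, List<String>>> hl = rsp.getHighlighting();
        System.out.println(hl);
    }
}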
Kraus, Ralf | pixelhouse GmbH schrieb:
When I am searching for ONE word with a German umlaut like
"kräuterkeckse" (the right word is kräuterkekse) the spellchecker
gives me two corrections :
Spellcheck for kr = kren
Spellcheck for uterkeksse = butterkekse
WHY does SOLR break this ONE word apart
Hello,
I am trying to install a patch for Solr
(https://issues.apache.org/jira/browse/SOLR-284) but I'm not sure how to do
it in Windows.
I have a copy of the nightly build, but I don't know how to proceed. I
looked at the HowToContribute wiki for patch installation instructions, but
there are n
We've had a couple of people ask for scores as percentages. If you really
want this, it should be just barely possible, though it would take some
coding. You calculate the maximum possible score, then report scores
normalized against that.
1. Use a TF scoring formula that has a ceiling, like hyper
Hello,
I really need some help with the SOLR SpellChecker and German umlauts.
So far I am really satisfied with the JAROWINKLER algorithm.
Now my problem :-)
When I am searching for ONE word with a German umlaut like
"kräuterkeckse" (the right word is kräuterkekse) the spellchecker
gives me
Interesting, thanks for the link. I'm glad we didn't want to implement
this anyway.
On Tue, 2009-06-30 at 18:22 +0530, Shalin Shekhar Mangar wrote:
> On Tue, Jun 30, 2009 at 5:06 PM, Sushan Rungta wrote:
>
> > Please assist me in making a query in lucene whereby I shall see the
> > result with
I recently upgraded to a nightly build of 1.4. The build works fine, I
can deploy fine. But when I go to insert data into the index, I get the
following error:
26-Jun-2009 5:52:06 PM
org.apache.solr.update.processor.LogUpdateProcessor
finish
On Tue, Jun 30, 2009 at 5:06 PM, Sushan Rungta wrote:
> Please assist me in making a query in lucene whereby I shall see the
> result with:
>
> 1. 100% exact match
> 2. 80% match
> 3. 60% match.
>
> My query string will have a minimum of 100 characters and it may go
> up by 1 character.
Hi,
I am not sure if I understand completely, but I suppose you'd need to
retrieve the score field per document by adding &fl=score as a parameter.
This will return the score per document based on your query. If you then
use the maxScore result as base, you can then calculate what is 100%,
80% etc.
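Spelled out as a sketch (note that the result set's maxScore is only a per-query baseline, not a true "100% match"; field and URL values are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

public class ScorePercentages {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("your hundred character query string");
        query.setFields("id", "score");                 // fl=id,score
        SolrDocumentList results = server.query(query).getResults();
        float maxScore = results.getMaxScore();         // best score for this particular query
        for (SolrDocument doc : results) {
            float score = ((Number) doc.getFieldValue("score")).floatValue();
            System.out.println(doc.getFieldValue("id") + " -> " + (100f * score / maxScore) + "%");
        }
    }
}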
I'm having a hard time understanding what you're really after. What does 80%
exact match mean? Perhaps
a couple of examples would help us help you.
Best
Erick
On Tue, Jun 30, 2009 at 7:36 AM, Sushan Rungta wrote:
> Please assist me in making a query in lucene whereby I shall see the
> result wit
My -1 for a Maven-only build system. Building has not been a problem
for me with Solr.
+0 to add a pom.xml as a parallel setup
On Mon, Jun 29, 2009 at 6:46 PM, Erik Hatcher wrote:
> I'll weigh in and throw a -1 to a Maven-only build system for Solr. If
> there is still a functioning Ant build,
On Tue, Jun 30, 2009 at 4:50 AM, Smiley, David W. wrote:
> FWIW
> I strongly agree with your sentiments, Manual.
> One of the neat maven features that isn't well known is just being able to do
> "mvn jetty:run" and have Jetty load up right away (no creating of a web-app
> directory or packaging o
Please assist me in making a query in lucene whereby I shall see the
result with:
1. 100% exact match
2. 80% match
3. 60% match.
My query string will have a minimum of 100 characters and it may go
up by 1 character.
regards,
Sushan Rungta
Mob: +91-9312098968
Do you have to remove the documents which satisfy both conditions? If
so, use AND.
You have used OR, so if the first condition is satisfied, it will not
consider the second condition at all.
Try using fq for both conditions: fq=(spacegroupID:g*) fq=(!userID:g*).
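For reference, multiple fq parameters intersect (both must hold). If the goal really is the OR from the original question, a common workaround with the standard parser is to anchor the purely negative clause to all documents, since a clause containing only a negation matches nothing on its own. A sketch using the field names from the question (the *:* trick is a general workaround, not something stated in this thread):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class NegativeClauseOr {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // "(*:* -userID:g*)" gives the negation something to subtract from,
        // so it can take part in the OR instead of matching nothing.
        SolrQuery query = new SolrQuery("spacegroupID:g* OR (*:* -userID:g*)");
        System.out.println(server.query(query).getResults().getNumFound());
    }
}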
-Original Message-
Thanks, I did some quick testing with the collate parameter, and it works!!
_
From: Markus Jelsma - Buyways B.V. [mailto:mar...@buyways.nl]
Sent: Tuesday, June 30, 2009 2:49 PM
To: cra...@ceiindia.com
Cc: solr-user@lucene.apache.org
Subject: Re: spelling suggestion in solr.
Hello,
This is ava
Thank you for your reply.
-Original Message-
From: Michael Ludwig [mailto:m...@as-guides.com]
Sent: Tuesday, June 30, 2009 2:48 PM
To: solr-user@lucene.apache.org
Subject: Re: spelling suggestion in solr.
Radha C. schrieb:
>
> Is the feature "spelling suggestion" available in Solr? If
I want to execute the following query:
(spacegroupID:g*) OR (!userID:g*).
What I want to do here is select all docs where spacegroupID starts with
'g', or docs where userID does not start with 'g'.
In the above syntax, (!userID:g*) gives results correctly.
Also (spacegroupID:g*) gives results co
Hi Noble,
i am using solr 1.3
Regards,
Raakhi
2009/6/30 Noble Paul നോബിള് नोब्ळ्
> which version of solr are you using?
>
> 2009/6/30 Noble Paul നോബിള് नोब्ळ् :
> > use the XMLResponseParser
> >
> http://wiki.apache.org/solr/Solrj#head-12c26b2d7806432c88b26cf66e236e9bd6e91849
> >
> > I gues
Ok, that works...
Sorry for this simple question
which version of solr are you using?
2009/6/30 Noble Paul നോബിള് नोब्ळ् :
> use the XMLResponseParser
> http://wiki.apache.org/solr/Solrj#head-12c26b2d7806432c88b26cf66e236e9bd6e91849
>
> I guess there was some error during the update
>
> On Tue, Jun 30, 2009 at 3:33 PM, Rakhi Khatwani wrote:
>>
What happens when you search for test* ?
Wildcard terms are not analyzed, and thus not lowercased, yet when you
indexed you likely lowercased all terms by way of the analyzer
configuration for the field you're querying.
One solution/workaround is simply to lowercase the entire query string
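A tiny sketch of that workaround (it assumes the target field is lowercased at index time by its analyzer, as described above):

import java.util.Locale;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class WildcardLowercase {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        String userInput = "Test*";
        // Wildcard terms bypass the analyzer, so lowercase them client-side
        // to match the lowercased terms stored in the index.
        SolrQuery query = new SolrQuery(userInput.toLowerCase(Locale.ROOT));
        System.out.println(server.query(query).getResults().getNumFound());
    }
}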
use the XMLResponseParser
http://wiki.apache.org/solr/Solrj#head-12c26b2d7806432c88b26cf66e236e9bd6e91849
I guess there was some error during the update
On Tue, Jun 30, 2009 at 3:33 PM, Rakhi Khatwani wrote:
> Hi,
> I am integrating solr with hadoop. so i wrote a map reduce method which
> writes
Hi,
I am integrating Solr with Hadoop, so I wrote a map-reduce method which
writes the indexes to HDFS.
The map methods work fine, and in my reduce method I call solrServer to
update the indexes, but when I try accessing solrServer, I get the following
exception
java.lang.ClassCastException: j
Hello,
This is available indeed, check the
http://wiki.apache.org/solr/SpellCheckComponent wiki page for detailed
information. The interesting parameter here is &spellcheck.collate=true
which will return something like price:[80 TO 100]
dell ultrasharp for the erroneously spelled price:[80 TO 100]
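In SolrJ terms that looks roughly like the following sketch (the /spell handler name and the spellcheck setup in solrconfig.xml are assumptions):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CollateExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("price:[80 TO 100] dell ultrashap");
        query.set("qt", "/spell");                 // hypothetical spellcheck-enabled handler
        query.set("spellcheck", true);
        query.set("spellcheck.collate", true);     // return the whole query with corrections applied
        QueryResponse rsp = server.query(query);
        System.out.println(rsp.getResponse().get("spellcheck"));
    }
}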
Radha C. schrieb:
The feature "spelling suggestion" is available in solr? If yes, can
you tell me some documentations?
Have you tried googling for: solr spelling ? First hit:
http://wiki.apache.org/solr/SpellCheckComponent
Michael Ludwig
Hello List,
The feature "spelling suggestion" is available in solr? If yes, can you tell
me some documentations?
Thanks,
Radha.C
Hello users...
I have downloaded the new nightly build today for a testing system...
I have edited my "schema.xml"
and run the server...
Indexed my own text XML and searched for it...
I indexed files containing:
Test01 in file 1
Test02 in file 2
Test03 in file 3
Test04 in file 4
Test05 in file 5
Test06 in
On Tue, Jun 30, 2009 at 1:48 PM, Fergus McMenemie wrote:
>
> I have not used Maven much but we seemed to have tons of bother
> setting it up on nodes which had no internet access. In fact I
> can state it was the key reason we did not move to cocoon 2.x.
>
>
You can set up an internal repository m
>FWIW
>I strongly agree with your sentiments, Manual.
>One of the neat maven features that isn't well known is just being able to do
>"mvn jetty:run" and have Jetty load up right away (no creating of a web-app
>directory or packaging of a war or anything like that).
>What I hate about ant based p
Hi All,
1. There are 3 kinds of searches in my application: a) PhraseQuery search; b)
search for separate words; c) MLT search. The problem I encountered is in
the use of a stop-words list. If I don't take it into account, the MLT query
picks up common words as the most important words, which is not ri