Hi
I have indexed 8 fields with different boosts. Now I have given a
search string which consists of words and phrases. Now I want to do an AND
search of that search string on four fields and show the results based on
boost. For me the search string should occur completely in one of the fields and
then the b
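One sketch of that kind of query (the four field names and the boosts below are
only placeholders) is to OR together one required-terms clause per field, so the
whole string has to match inside a single field and the field boosts drive the
ranking:

  q=title_t:(+word1 +"a phrase")^10 OR subtitle_t:(+word1 +"a phrase")^5
    OR desc_t:(+word1 +"a phrase")^2 OR keywords_t:(+word1 +"a phrase")

The + inside each group requires every term within that one field; the per-clause
boost then orders the results.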
Hi,
I want to perform a search with the default AND operator on multiple fields.
The below-mentioned query works fine with the AND operator and one field:
?q={!lucene q.op=AND df=prdMainTitle_product_s}Ladybird Shrinkwrap
However, as soon as I add two fields, it starts giving me records which have
two terms
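One thing that might get the behaviour you want (the second field name here is
just an example, and this assumes the dismax parser) is to let dismax require
every term via mm:

  ?q=Ladybird Shrinkwrap&defType=dismax&qf=prdMainTitle_product_s prdLongDesc_product_s&mm=100%25

(mm=100%25 is just 100% URL-encoded.) Note that dismax only requires each term to
match somewhere in the qf fields, not all terms in the same field.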
Development Team wrote:
>
> To specify the
> solr-home I use a Java system property (instead of the JNDI way) since I
> already have other necessary system properties for my apps.
>
Could you please give me a concrete example of how you did this? There is no
example code or command-line example.
Chris Hostetter wrote:
: Yeah, I actually looked at the code and saw that later. I was forgetting the
: issue that bugged me (and confusing it with the trouble this guy was having) -
: which is that plugins in the solr/lib folder cannot load from other jars in
: that folder. I think that was the
Marc: I know it's been a while since you asked this question, but i didn't
see any reply ... in general the problem is that a "low" boost is still a
boost, it can only improve the score of documents that match.
one way to fake a "negative boost" is to give a high boost to everything
that does
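For the record, that trick usually looks something like this (field name and
value are invented for the example): keep the real query as a required clause and
add an optional clause that matches everything except the docs you want pushed
down, with a big boost on it:

  q=+ipod (*:* -category:refurbished)^10

Docs outside category:refurbished pick up the extra boost; the refurbished ones
still match, they just sort lower.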
: Yeah, I actually looked at the code and saw that later. I was forgetting the
: issue that bugged me (and confusing it with the trouble this guy was having) -
: which is that plugins in the solr/lib folder cannot load from other jars in
: that folder. I think that was the actual issue.
WTF?!? ..
Chris Hostetter wrote:
: Date: Fri, 08 May 2009 08:27:58 -0400
: From: Mark Miller
: Subject: Re: Solr spring application context error
:
: I've run into this in the past as well. It's fairly annoying. Anyone know why
: the limitation? Why aren't we passing the ClassLoader that's loading Solr
: c
: Date: Fri, 08 May 2009 08:27:58 -0400
: From: Mark Miller
: Subject: Re: Solr spring application context error
:
: I've run into this in the past as well. It's fairly annoying. Anyone know why
: the limitation? Why aren't we passing the ClassLoader that's loading Solr
: classes as the parent to th
On Thu, Jun 18, 2009 at 8:30 PM, Peter Wolanin wrote:
> So for now would it make sense to spread out the autocommit times for
> the different cores?
Sure.
You might also consider using commitWithin (Solr 1.4) when updating
the index - then you could either send the updates at slightly
different ti
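For example (a sketch of the XML update message, with made-up doc fields), an add
that asks Solr to commit within 10 seconds:

  <add commitWithin="10000">
    <doc>
      <field name="id">doc-1</field>
      <field name="name">some value</field>
    </doc>
  </add>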
So for now would it make sense to spread out the autocommit times for
the different cores?
Thanks.
-Peter
On Thu, Jun 18, 2009 at 7:07 PM, Yonik Seeley wrote:
> On Thu, Jun 18, 2009 at 4:27 PM, Peter Wolanin
> wrote:
>> I think I understand
>> that all the pending changes are on disk already,
On Thu, Jun 18, 2009 at 4:00 PM, Jonathan Vanasco wrote:
> can anyone give me a suggestion ? i haven't touched java / jetty / tomcat /
> whatever in at least a good 8 years and am lost.
I spent a lot of time trying to get this working too. My conclusion
was simply that the .deb packages for Solr a
My problem is that my project doesn't compile and I have no way of knowing
if I'm on the right track code-wise. There just isn't any comprehensive
guide out there for having a Solr/Jetty app.
Development Team wrote:
>
> Hey,
> So... I'm assuming your problem is that you're having trouble
On Thu, Jun 18, 2009 at 4:27 PM, Peter Wolanin wrote:
> I think I understand
> that all the pending changes are on disk already, so the "commit" that
> happens when the time is up is really just opening new searchers that
> include the added documents.
Only some of the pending changes may be on d
i'm a bit confused. hoping someone can help.
solr is awesome on my macbook for development.
i've been fighting with getting solr-jetty running on my ubuntu box
all day.
after countless searching, it seems that there is no .war file in the
distro
should this be the case?
the actual file
building a temporary index would certainly work, but it's a question of
how efficient it would be (ie: how many users do you have, how often do
they log in, how long does it take to build a typical index, how many
concurrent users will you have, etc...)
one solution i've seen to a problem like
Hello Daryl,
thank you very much for sharing your experience with me :-)
My software architect reported some exceptions thrown when accessing some
Admin JSPs using Solr 1.4, JBoss 4.0.1 SP3 and Tomcat, Java JDK 1.5.0_06.
I will forward the info you gave me.
Thank you very much.
Giovanni
On
Michael Ludwig-4 wrote:
>
> MilkDud schrieb:
> What do you expect the user to enter?
>
> * "dream theater innocence faded" - certainly wrong
> * dream theater "innocence faded" - much better
>
> Most likely they would just enter dream theater innocence faded, no
> quotes. Without any quotes a
Yes, that's exactly what I needed. I don't know how I missed that.
Thank you!
--
Steve
On Jun 18, 2009, at 4:49 PM, Brendan Grainger wrote:
Are you using Porter Stemming? If so I think you can just specify
your word in the protwords.txt file (or whatever you've called it).
Check out htt
Are you using Porter Stemming? If so I think you can just specify your
word in the protwords.txt file (or whatever you've called it).
Check out http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
and the example config for the Porter Stemmer:
<filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
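As a sketch, protwords.txt is just one term per line; anything listed there is
passed through the stemmer unchanged (if a lowercase filter runs before the
stemmer, the entry should be lowercased so it matches the token):

  # protwords.txt
  stylesightings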
Hi,
I've hit a bit of a problem with destemming and could use some advice.
Right now there is a word in the index called "Stylesight" and another
word "Stylesightings", which was just added. When users search for
"Stylesightings", the client really only wants them to get results
that matc
A question for anyone familiar with the details of the time-based
autocommit mechanism in Solr:
if I am running several cores on the same server and send updates to
each core at the same time, what happens? If all the cores have
their autocommit time run out at the same time, will every core try
Got that. Since I am still using Solr 1.3, the defaults should work fine: field
cache for single-valued and enum for multi-valued fields.
Thanks,
Kalyan Manepalli
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, June 18, 2009 3:01 PM
To: solr-user@lucene
It's the facet.method param:
http://wiki.apache.org/solr/SimpleFacetParameters#head-7574cb658563f6de3ad54cd99a793cd73d593caa
--
- Mark
http://www.lucidimagination.com
Manepalli, Kalyan wrote:
Mark,
Where do we specify the method? fieldCache or otherwise
Thanks,
Kalyan Manepalli
---
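For example (field name invented), it is just a request parameter, either global
or per field:

  &facet=true&facet.field=category&facet.method=enum
  &facet=true&facet.field=category&f.category.facet.method=fc

facet.method is a Solr 1.4 parameter; the valid values are enum and fc.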
And unfortunately, that isn't the best approach for highlighting to
take - a uniqueKey shouldn't be required for highlighting. I've yet
to see a real-world deployment of Solr that did not have a uniqueKey
field, but there's no reason Solr should make that assumption.
Erik
On Jun 1
Just figured out what happened... It's necessary for the schema to have a
uniqueKey set; otherwise, highlighting will have at most one entry, as the
map's key is the doc uniqueKey, so on debugging I figured out that the
QueryResponse tries to put all highlighted results in a map with null key...
a
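For anyone hitting the same thing, a small SolrJ sketch of how the highlighting
map is keyed (assuming server is your SolrServer, query the SolrQuery you already
built, and "id" stands in for whatever your uniqueKey field is):

  QueryResponse rsp = server.query(query);
  // outer key = uniqueKey of the matching doc, inner key = highlighted field name
  Map<String, Map<String, List<String>>> highlighting = rsp.getHighlighting();
  for (SolrDocument doc : rsp.getResults()) {
      String id = (String) doc.getFieldValue("id");
      Map<String, List<String>> snippets = highlighting.get(id); // with no uniqueKey this lookup collapses onto a null key
  }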
Mark,
Where do we specify the method? fieldCache or otherwise
Thanks,
Kalyan Manepalli
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, June 18, 2009 12:22 PM
To: solr-user@lucene.apache.org
Subject: Re: FilterCache issue
Maybe he is not using t
I've checked the NamedList you told me about, but it contains only one
highlighted doc, when I have more docs that should be highlighted.
On Thu, Jun 18, 2009 at 3:03 PM, Erik Hatcher wrote:
> Note that highlighting is NOT part of the document list returned. It's in
> an additional NamedLis
Here is the query, search for the term "ipod" on the "log" field
q=log%3Aipod+AND+requestid%3A1029+AND+logfilename%3Apayxdev-1245272062125-USS.log.zip&hl=true&hl.fl=log&hl.fl=message&hl.simple.post=%3Ci%3E&hl.simple.pre=%3C%2Fi%3E&hl.usePhraseHighlighter=true&hl.highlightMultiTerm=true&hl.snippets=
Note that highlighting is NOT part of the document list returned.
It's in an additional NamedList section of the response (with
name="highlighting")
Erik
On Jun 18, 2009, at 1:22 PM, Bruno wrote:
Hi guys.
I'm new at using highlighting, so probably I'm making some stupid
mistake,
Nothing off the top of my head ...
I can play around with some of the solrj unit tests a bit later and
perhaps see if I can dig anything up.
Note:
if you expect wildcard/prefix/etc queries to highlight, they will not
with Solr 1.3.
query.set("hl.highlightMultiTerm", *true*);
The above only
Couple of things I forgot to mention:
Solr Version: 1.3
Environment: WebSphere
On Thu, Jun 18, 2009 at 2:34 PM, Bruno wrote:
> I've tried with the default values and it didn't work either.
>
>
> On Thu, Jun 18, 2009 at 2:31 PM, Mark Miller wrote:
>
>> Why do you have:
>> query.set("hl.maxAnalyzedCha
On Thu, Jun 18, 2009 at 1:22 PM, Mark Miller wrote:
> Maybe he is not using the FieldCache method?
It occurs to me that this might be nice info to add to debugging info
(the exact method used + perhaps some other info).
-Yonik
http://www.lucidimagination.com
I've tried with the default values and it didn't work either.
On Thu, Jun 18, 2009 at 2:31 PM, Mark Miller wrote:
> Why do you have:
> query.set("hl.maxAnalyzedChars", -1);
>
> Have you tried using the default? Unless -1 is an undoc'd feature, this
> means you wouldn't get anything back! This should no
Why do you have:
query.set("hl.maxAnalyzedChars", -1);
Have you tried using the default? Unless -1 is an undoc'd feature, this
means you wouldn't get anything back! This should normally be a fairly
hefty value and defaults to 51200, according to the wiki.
And why:
query.set("hl.fragsize", 1);
Maybe he is not using the FieldCache method?
Yonik Seeley wrote:
On Thu, Jun 18, 2009 at 12:19 PM, Manepalli,
Kalyan wrote:
The fields are defined as single-valued and they are non-tokenized.
I am using Solr 1.3, waiting for the release of Solr 1.4.
Then the filterCache won't be used f
Can you just log it and see what is contained in the plainText field?
(using LogTransformer)
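Something along these lines (the url and field names are placeholders for
whatever is already in your data-config.xml):

  <entity name="f" processor="PlainTextEntityProcessor"
          url="http://localhost/some.txt"
          transformer="TemplateTransformer,LogTransformer"
          logTemplate="plainText is: ${f.plainText}" logLevel="info">
    <field column="plainText" name="plainText"/>
  </entity>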
On Thu, Jun 18, 2009 at 8:54 PM, Jay Hill wrote:
> I'm having some trouble getting the PlainTextEntityProcessor to populate a
> field in an index. I'm using the TemplateTransformer to fill 2 fields, and
>
On Thu, Jun 18, 2009 at 12:19 PM, Manepalli,
Kalyan wrote:
> The fields are defined as single-valued and they are non-tokenized.
> I am using Solr 1.3, waiting for the release of Solr 1.4.
Then the filterCache won't be used for faceting, just for filters.
You should be able to verify this by lookin
The fields are defined as single-valued and they are non-tokenized.
I am using Solr 1.3, waiting for the release of Solr 1.4.
Thanks,
Kalyan Manepalli
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Thursday, June 18, 2009 10:15 AM
To: s
Can I transport the index from Solr 1.2 to Solr 1.3 without
resubmitting/reloading again from the database?
Francis
Hi,
I'm currently using facet.query to do my numerical range faceting. I
basically use a fixed price range of €0 to €10,000 in steps of €500, which
means 20 facet.queries plus an extra facet.query for anything above
€10,000. I use the inclusive/exclusive query as per my question two days
ago so
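i.e. something along these lines, assuming a price field in whole euros and
whatever inclusive/exclusive bounds you settled on:

  &facet=true
  &facet.query=price:[0 TO 499]
  &facet.query=price:[500 TO 999]
  ...
  &facet.query=price:[9500 TO 9999]
  &facet.query=price:[10000 TO *]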
I'm having some trouble getting the PlainTextEntityProcessor to populate a
field in an index. I'm using the TemplateTransformer to fill 2 fields, and
have a timestamp field in schema.xml, and these fields make it into the
index. Only the plainText data is missing. Here is my configuration:
Hey,
So... I'm assuming your problem is that you're having trouble deploying
Solr in Jetty? Or is your problem that it's deploying just fine but your
code throws an exception when you try to run it?
I am running Solr in Jetty, and I just copied the war into the webapps
directory and it wo
On Thu, Jun 18, 2009 at 10:59 AM, Manepalli,
Kalyan wrote:
> I am faceting on the single values only.
You may have only added a single value to each field, but is the field
defined to be single valued or multi valued?
Also, what version of Solr are you using?
-Yonik
http://www.lucidimagination.c
On Jun 18, 2009, at 10:54 AM, Vicky_Dev wrote:
is there any way to boost fields in the standard query parser itself?
You can boost terms using field:term^2.0 syntax
See http://wiki.apache.org/solr/SolrQuerySyntax and down into http://lucene.apache.org/java/2_4_0/queryparsersyntax.html
for more
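For instance (the second field name is invented):

  q=prdTitle_s:ladybird^2.0 prdLongDesc_s:ladybird

gives title matches twice the weight of description matches.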
Hi Vicky,
Vicky_Dev schrieb:
We are also facing the same problem mentioned in the post (we are using
the DisMaxRequestHandler):
When we are searching for --q=prdTitle_s:"ladybird"&qt=dismax , we are
getting 2 results -- unique key ID =1000 and unique key ID =1001
(1) Append debugQuery=true to yo
I am faceting on the single values only. I ran a load test against the Solr app and
found that under increased load the faceting just gets slower and slower. That
is why I wanted to investigate the filterCache and any other features to tweak the
performance.
As suggested by Mark in the earlier email, I in
Hi Hossman,
We are also facing a similar issue:
is there any way to boost fields in the standard query parser itself?
~Vikrant
hossman wrote:
>
>
> : The reason I brought the question back up is that hossman said:
> ...
> : I tried it and it didn't work, so I was curious if I was still do
Hi Giovanni,
Solr 1.4 does work fine in JBoss (all of the features, including all of
the admin pages). For example, I am running it in JBoss 4.0.5.GA on JDK
1.5.0_18 without problems. I am also using Jetty instead of Tomcat, however
instructions for getting it to work in JBoss with Tomcat
Hi Michel,
We are also facing the same problem mentioned in the post (we are using
the DisMaxRequestHandler):
Ex: There is a product title field with these possible values:
1) in unique key ID =1000
prdTitle_s field contains value "ladybird classic"
2) in unique key ID =1001
prdTitle_s field contains val
Mark Miller wrote:
Yonik Seeley wrote:
On Thu, Jun 18, 2009 at 8:35 AM, Mark Miller
wrote:
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the fields
are not multi-valued, then it is using the FieldCache
Yonik Seeley wrote:
On Thu, Jun 18, 2009 at 8:35 AM, Mark Miller wrote:
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the fields
are not multi-valued, then it is using the FieldCache method. Which of
cour
On Thu, Jun 18, 2009 at 8:35 AM, Mark Miller wrote:
> That's why I asked about multi-valued terms. If he's not using the enum
> faceting method (which only makes sense with fewer uniques), and the fields
> are not multi-valued, then it is using the FieldCache method. Which of
> course does use the fi
On Jun 18, 2009, at 4:51 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
apparently the row returns a null 'board_id'
I replied "No" earlier, but of course you're right here. The
deletedPkQuery I originally used was not returning a board_id column.
And even if it did, that isn't the uniqueKey (i
On Jun 18, 2009, at 4:51 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
apparently the row returns a null 'board_id'
No. I'm working with a test database situation with a single record,
and I simply do a full-import, then change the deleted column to 'Y'
and try a delta-import. The deletedPkQuery
Hello all,
I have a simple question :-)
In my project it is mandatory to use Jboss 4.0.1 SP3 and Java 1.5.0_06/08.
The software relies on Solr 1.4.
Now, I am aware that some JSP Admin pages will not be displayed due to some
Java5/6 dependency but this is not a problem because rewriting some of t
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the
fields are not multi-valued, then it is using the FieldCache method.
Which of course does use the filterCache, and works best when the
filterCache size is t
On Jun 17, 2009, at 10:32 PM, Mark Miller wrote:
Right, so if you are on 1.3 or early 1.4 dev, with so many uniques,
you should be using the FieldCache method of faceting. The RAM
depends on the number of documents and number of uniques terms mostly.
With 1.4 you may be using an Uninverted
Hi Michael,
Sorry for the misinterpretation.
in that case, it's the same as querying multiple shards. :)
Thanks,
Raakhi
On Thu, Jun 18, 2009 at 4:09 PM, Michael Ludwig wrote:
> Rakhi Khatwani schrieb:
>
>> On Thu, Jun 18, 2009 at 3:51 PM, Michael Ludwig
>> wrote:
>>
>
> I do
Rakhi Khatwani schrieb:
On Thu, Jun 18, 2009 at 3:51 PM, Michael Ludwig
wrote:
I don't know how we're supposed to use it. I did the following:
http://flunder:8983/solr/xpg/select?q=bla&shards=flunder:8983/solr/xpg,flunder:8983/solr/kk
I am getting a page load error... "cannot find server"
On Thu, Jun 18, 2009 at 3:51 PM, Michael Ludwig wrote:
> Rakhi Khatwani schrieb:
>
>> [...] how do we do a distributed search across multicores?? is it
>> just like how we query using multiple shards?
>>
>
> I don't know how we're supposed to use it. I did the following:
>
>
> http://flunder:898
Rakhi Khatwani schrieb:
[...] how do we do a distributed search across multicores?? is it
just like how we query using multiple shards?
I don't know how we're supposed to use it. I did the following:
http://flunder:8983/solr/xpg/select?q=bla&shards=flunder:8983/solr/xpg,flunder:8983/solr/kk
Manepalli, Kalyan schrieb:
I am seeing an issue with the filterCache setting on my Solr app
which is causing slower faceting.
Here is the configuration.
hitratio : 0.00
inserts : 973531
evictions : 972978
size : 512
cumulative_hitratio : 0.00
cumulative_inserts : 61170111
cumulative_evict
Otis Gospodnetic schrieb:
[...] nothing prevents the indexing client from sending the same doc
to multiple shards. In some scenarios that's exactly what you want
to do.
What kind of scenario would that be?
One scenario is making use of small and large core to provide near
real-time search - y
MilkDud schrieb:
Ok, so let's suppose I did index across just the album. Using that
index, how would I be able to handle searches of the form "artist name
track name".
What does the user interface look like? Do you have separate fields for
artists and tracks? Or just one field?
If I do the se
I have raised an issue and fixed it:
https://issues.apache.org/jira/browse/SOLR-1228
2009/6/18 Noble Paul നോബിള് नोब्ळ् :
> apparently the row returns a null 'board_id'
>
> your stacktrace suggests this. Even if it is fixed I guess it may not
> work because you are storing the id as
>
>
> board-
apparently the row returns a null 'board_id'
your stacktrace suggests this. Even if it is fixed I guess it may not
work because you are storing the id as
board-${test.board_id}
and unless your query returns something like board- it may
not work for you.
Anyway I shall put in a fix in DIH to
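In the meantime, one workaround that might line up (table and column names are
only illustrative, and this is a sketch, not verified against the SOLR-1228 fix)
is to make the deletedPkQuery return the already-prefixed value under the pk
column name, so it matches what the template wrote into the uniqueKey:

  <entity name="test" pk="id" transformer="TemplateTransformer"
          query="select board_id, title from boards"
          deletedPkQuery="select concat('board-', board_id) as id from boards where deleted = 'Y'">
    <field column="id" template="board-${test.board_id}"/>
  </entity>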