Hi,
I want to boost results through the query.
I have 4 fields in our schema.
If I search *deepak* then the results should come in this order:
all *UPDBY* having deepak, then
all *To* having deepak, then
all *CC* having deepak, then
all *BCC* having deepak.
I am using the Standard request handler. Please
There's a ! missing in there, try {!key=label}.
Regards,
gwk
On 2/18/2010 5:01 AM, adeelmahmood wrote:
okay so if I don't want to do any excludes then I am assuming I should just
put in {key=label}field .. I tried that and it doesn't work .. it says
undefined field {key=label}field
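For context, the local-params syntax ties a key (and an optional exclusion) to a tagged filter. A sketch, with illustrative field and tag names:

```
fq={!tag=dt}doctype:pdf
facet.field={!ex=dt key=mylabel}doctype
```

With no excludes, `{!key=mylabel}doctype` alone works; note the leading bang.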
Lance Norskog
Just a word of caution: I've been bitten by this bug, which affects Tika 0.6:
https://issues.apache.org/jira/browse/PDFBOX-541
It causes the parser to go into an infinite loop, which isn't exactly great
for server stability. Tika 0.4 is not affected in the same way - as far as I
remember, the p
Try using the dismax handler
http://wiki.apache.org/solr/DisMaxRequestHandler
This would be a very good read for you.
You would use the bq (boost query) parameter, and it should look something
like this:
&bq=UPDBY:deepak^5.0+TO:deepak^4.0+CC:deepak^3.0+BCC:deepak^2.0
Paul
On Thu, Feb 18, 2010
solr-user wrote:
Hossman, what do you mean by including a "TestCase"?
Will create issue in Jira asap; I will include the URL, schema and some code
to generate sample data
I think they are good for a TestCase.
Koji
--
http://www.rondhuit.com/en/
Using release-1.4.0 or trunk Solr, indexing the
example data and searching a 0-boosted word:
http://localhost:8983/solr/select/?q=usb^0.0
I got the following exception:
java.io.IOException: read past EOF
at
org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:
Hi,
We did some tests with omitNorms=false. We have seen that on the last
results page we have some scores set to 0.0. These scores set to 0 are
problematic for our sorters.
Could it be some kind of bug?
Regards,
Raimon Bosch.
--
View this message in context:
http://old.nabble.com/some-sco
2010/2/18 Koji Sekiguchi :
> Using release-1.4.0 or trunk branch Solr and indexing
> example data and search 0 boosted word:
>
> http://localhost:8983/solr/select/?q=usb^0.0
Confirmed - looks like Solr is requesting an incorrect docid.
I'm looking into it.
-Yonik
http://www.lucidimagination.com
Hi,
When a query is made across multiple fields in the dismax handler using
parameter qf, I have observed with debug query enabled that the resultant
score is the max of the scores of the query across each field. But I want the
resultant score to be the sum of scores across fields (like the standard ha
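The dismax `tie` parameter controls exactly this: the final score is the max field score plus `tie` times the sum of the other field scores, so `tie=1.0` effectively sums across fields. A sketch, with illustrative field names:

```
&defType=dismax&qf=title^2.0+body&tie=1.0
```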
Hi Janne,
I *think* Ocean Realtime Search has been superseded by Lucene NRT search.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Hadoop ecosystem search :: http://search-hadoop.com/
- Original Message
> From: Janne Majaranta
> To: solr-user@lucene.apache.org
I gave it a rough shot, Lance; if there's a better way to explain it, please
edit.
On Wed, Feb 17, 2010 at 10:23 PM, Lance Norskog wrote:
> That would be great. After reading this and the PositionFilter class I
> still don't know how to use it.
>
> On Wed, Feb 17, 2010 at 12:38 PM, Robert Muir wr
Hi Tom,
32MB is very low, 320MB is medium, and I think you could go higher, just pick
whichever garbage collector is good for throughput. I know Java 1.6 update 18
also has some Hotspot and maybe also GC fixes, so I'd use that. Finally, this
sounds like a good use case for reindexing with Had
Hi,
You should not optimize the index after each insert of a document; instead you
should optimize it after inserting a good number of documents,
because optimize will merge all segments into one according to the
settings of the Lucene index.
thanks,
Jagdish
On Fri, Feb 12, 2010 at 4:01 PM, mklprasad wrote:
SOLR makes heavy use of JUnit for testing. The real advantage
of a JUnit testcase being attached is that it can then be
permanently incorporated into the SOLR builds. If you're
unfamiliar with JUnit, then providing the raw data that illustrates
the bug allows people who work on SOLR to save a bunch
I was gonna ask a question about this but you seem like you might have the
answer for me .. what exactly does the omitNorms field option do (or is
expected to do)? Also, could you please help me understand what the
termVectors and multiValued options do?
Thanks for your help
Raimon Bosch wrote:
>
>
> Hi
Hi Otis,
Ok, now I'm confused ;)
There seems to be a bit of activity though when looking at the "last updated"
timestamps in the google code project wiki:
http://code.google.com/p/oceansearch/w/list
The Tag Index feature sounds very interesting.
-Janne
2010/2/18 Otis Gospodnetic
> Hi Janne,
>
>
I am not an expert in the Lucene scoring formula, but omitNorms=false makes the
scoring formula a little more complex, taking into account boosting for
fields and documents. If I'm not wrong (if I am, please correct me) I think
that with omitNorms=false it takes into account the queryNorm(q) and nor
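For reference, these are per-field options in schema.xml; a sketch with illustrative field names and types:

```xml
<!-- omitNorms=true drops index-time length/boost normalization for the field;
     termVectors=true stores per-document term vectors (used by e.g. MoreLikeThis);
     multiValued=true allows several values for the field in one document. -->
<field name="title" type="text"   indexed="true" stored="true" omitNorms="false"/>
<field name="tags"  type="string" indexed="true" stored="true" multiValued="true"/>
<field name="body"  type="text"   indexed="true" stored="true" termVectors="true"/>
```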
I've noticed some peculiar behavior with the dismax searchhandler.
In my case I'm making the search "The British Open", and am getting 0 results.
When I change it to "British Open" I get many hits. I looked at the query
analyzer and it should be broken down to "british" and "open" tokens ('the'
Hi,
You can also make use of the autocommit feature of Solr.
You have two possibilities: either based on a max number of uncommitted docs,
or based on time.
See the autocommit section of your solrconfig.xml.
Once you're done with adding, run a final optimize/commit.
Regards,
P.N.Raju,
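For reference, an autocommit block in solrconfig.xml sits inside the update handler section and looks something like this (values are illustrative):

```xml
<autoCommit>
  <maxDocs>10000</maxDocs>  <!-- commit after this many uncommitted docs -->
  <maxTime>60000</maxTime>  <!-- or after this many milliseconds -->
</autoCommit>
```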
I found the error. The definition in schema.xml was not set to
the primary key field/column as returned by the deletedPkQuery.
Jorg
On Wed, Feb 17, 2010 at 11:38 AM, Jorg Heymans wrote:
> Looking closer at the documentation, it appears that expungeDeletes in fact
> has nothing to do with 'remov
Janne,
I don't think there's any activity happening there.
SOLR-1606 is the tracking issue for moving to per segment facets and
docsets. I haven't had an immediate commercial need to implement
those.
Jason
On Thu, Feb 18, 2010 at 7:04 AM, Janne Majaranta
wrote:
> Hi Otis,
>
> Ok, now I'm conf
Hi all,
I've set up Solr replication as described in the wiki.
When I start the replication, a directory called index.$numbers is created;
after a while it disappears and a new index.$othernumbers is created.
index/ remains untouched with an empty index.
Any clue?
thank you in advance,
Riccardo
I'm getting the following exception
SEVERE: org.apache.solr.common.SolrException: ERROR:unknown field 'desc'
I'm wondering what I need to do in order to add the "desc" field to
the Solr schema for indexing?
Hello All,
When I use Maven or Eclipse to try and compile my bean which has the
@Field annotation as specified in http://wiki.apache.org/solr/Solrj
page ... the compiler doesn't find any class to support the
annotation. What jar should we use to bring in this custom Solr
annotation?
Adding desc as a <field> in your schema.xml
file would be my first guess.
Providing some explanation of what you're trying to do
would help diagnose your issues.
HTH
Erick
On Thu, Feb 18, 2010 at 12:21 PM, Pulkit Singhal wrote:
> I'm getting the following exception
> SEVERE: org.apache.solr.common.So
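A minimal definition for such a field in schema.xml might look like this (the type is an illustrative assumption):

```xml
<field name="desc" type="text" indexed="true" stored="true"/>
```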
The PositionFilter worked great for my purpose along with another filter that I
built.
In my case, my indexed data may be something like "X150", so a query for
"Nokia X150" should match, but I don't want random matches on "x". However, if
my indexed data is "G7", I do want a query on "PowerSho
Have you used UIMA? I did a quick read of the docs and it seems to do what
I'm looking for.
2010/2/11 Otis Gospodnetic
> Note that UIMA doesn't do NER itself (as far as I know), but instead
> relies on GATE or OpenNLP or OpenCalais, AFAIK :)
>
> Those interested in UIMA and living close to New
Ok, thanks.
-Janne
2010/2/18 Jason Rutherglen
> Janne,
>
> I don't think there's any activity happening there.
>
> SOLR-1606 is the tracking issue for moving to per segment facets and
> docsets. I haven't had an immediate commercial need to implement
> those.
>
> Jason
>
> On Thu, Feb 18, 201
Use the common grams filter; it'll create tokens for stop words and
their adjacent terms.
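In schema.xml that means adding CommonGramsFilterFactory to the analyzer chain; a sketch (the type name and stopwords file are illustrative):

```xml
<fieldType name="text_cg" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- emits word+stopword pairs like "the_british" alongside the plain tokens -->
    <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```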
On Thu, Feb 18, 2010 at 7:16 AM, Nagelberg, Kallin
wrote:
> I've noticed some peculiar behavior with the dismax searchhandler.
>
> In my case I'm making the search "The British Open", and am getting 0
> resul
Thanks.
If this is really the case, I declared a new field called mySpellTextDup and
retired the original field.
Now I have a new field which powers my dictionary with no words in it, and
now I am free to index whichever term I want.
This is not the best of solutions but I can't think of a reasonab
I guess my n00b-ness is showing :)
I started off using the instructions directly from
http://wiki.apache.org/solr/Solrj and there was no mention of schema
there and even after getting this error and searching for schema.xml
in the wiki ... I found no meaningful hits so I thought it best to
ask.
W
NP. And I see why you'd be confused... What's actually happening
is that if you're using the tutorial to make things run, a lot
is happening under the covers. In particular, you're switching to
the solr/example directory where you're invoking the start.jar, which
is pre-configured to bring up the..
Thanks Otis,
I don't know enough about Hadoop to understand the advantage of using Hadoop
in this use case. How would using Hadoop differ from distributing the
indexing over 10 shards on 10 machines with Solr?
Tom
Otis Gospodnetic wrote:
>
> Hi Tom,
>
> 32MB is very low, 320MB is medium, a
On Thu, Feb 18, 2010 at 8:52 AM, Otis Gospodnetic
wrote:
> 32MB is very low, 320MB is medium, and I think you could go higher, just pick
> whichever garbage collector is good for throughput. I know Java 1.6 update
> 18 also has some Hotspot and maybe also GC fixes, so I'd use that.
I think you
On Feb 18, 2010, at 3:27 PM, Erick Erickson wrote:
> The Manning book for SOLR or LucidWorks are good resources
And of course the PACKT book ;-)
~ David Smiley
Author: http://www.packtpub.com/solr-1-4-enterprise-search-server/
Hello Everyone,
I do NOT want to host Solr separately. I want to run it within my war
with the Java Application which is using it. How easy/difficult is
that to setup? Can anyone with past experience on this topic, please
comment.
thanks,
- Pulkit
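For what it's worth, SolrJ ships an EmbeddedSolrServer that runs Solr inside the same JVM. A sketch only, using the Solr 1.4-era API: the solr home path is an illustrative assumption, and it requires the solr-core and solrj jars (plus their dependencies) on the classpath.

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.core.CoreContainer;

public class EmbeddedSolrSketch {
    public static void main(String[] args) throws Exception {
        // Point Solr at a home directory containing conf/solrconfig.xml and conf/schema.xml.
        System.setProperty("solr.solr.home", "/path/to/solr/home");
        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        CoreContainer container = initializer.initialize();
        // Empty core name selects the single default core.
        SolrServer server = new EmbeddedSolrServer(container, "");
        // ... server.add(...), server.query(...), server.commit() ...
        container.shutdown();
    }
}
```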
Why would you want to? Surely having it separate increases scalability?
On 18 Feb 2010, at 22:23, "Pulkit Singhal"
wrote:
> Hello Everyone,
>
> I do NOT want to host Solr separately. I want to run it within my war
> with the Java Application which is using it. How easy/difficult is
> that to se
Oops, got my Manning MEAP edition of LIA II mixed up with my PACKT SOLR 1.4
book.
But some author guy caught my gaffe ...
Erick
On Thu, Feb 18, 2010 at 5:13 PM, Smiley, David W. wrote:
> On Feb 18, 2010, at 3:27 PM, Erick Erickson wrote:
>
> > The Manning book for SOLR or LucidWorks are good r
Yeah I have been pitching that but I want all the functionality of
Solr in a small package because it is not a concern given the
specifically limited data set being searched upon. I understand that
the # of users is still another part of this equation but there just
aren't that many at this time an
Hello All.
After doing a lot of research I came to this conclusion; please correct me if
I am wrong.
I noticed that if you have buildOnCommit and buildOnOptimize as true in your
spell check component, then the spell check builds whenever a commit or
optimize happens, which is the desired behaviour a
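For reference, those flags sit on the spellchecker definition in solrconfig.xml; a sketch (the component and field names are illustrative):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">mySpellText</str>
    <str name="buildOnCommit">true</str>
    <str name="buildOnOptimize">true</str>
  </lst>
</searchComponent>
```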
Hi, I'm trying to do a search on a range of floats that are part of my solr
schema. Basically we have a collection of "fees" that are associated with
each document in our index.
The query I tried was:
q=fees:[3 TO 10]
This should return me documents with Fee values between 3 and 10
inclusively,
All sorting of facets works great at the field level (count/index)...all good
there...but how is sorting accomplished with range queries? The solrj
response doesn't seem to maintain the order the queries are sent in, and the
order is not in index or count order. What's the trick?
http://localhost
On 2/18/2010 4:22 PM, Pulkit Singhal wrote:
Hello Everyone,
I do NOT want to host Solr separately. I want to run it within my war
with the Java Application which is using it. How easy/difficult is
that to setup? Can anyone with past experience on this topic, please
comment.
thanks,
- Pulkit
Hm, yes, it sounds like your "fees" field has multiple values/tokens, one for
each fee. That's full-text search for you. :)
How about having multiple fee fields, each with just one fee value?
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Hadoop ecosystem search :: http://se
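One way to model one-fee-per-field in schema.xml is a dynamic field; a sketch with illustrative names:

```xml
<dynamicField name="fee_*" type="float" indexed="true" stored="true"/>
```

Then range queries stay per-value, e.g. q=fee_shipping:[3 TO 10].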
Hi Tom,
It wouldn't. I didn't see the mention of parallel indexing in the original
email. :)
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Hadoop ecosystem search :: http://search-hadoop.com/
- Original Message
> From: Tom Burton-West
> To: solr-user@lucene.ap
This sounds useful to me!
Here's a pointer: http://wiki.apache.org/solr/HowToContribute
Thanks!
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Hadoop ecosystem search :: http://search-hadoop.com/
From: Kevin Osborn
To: solr-user@lucene.ap
giskard,
Is this on the master or on the slave(s)?
Maybe you can paste your replication handler config for the master and your
replication handler config for the slave.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Hadoop ecosystem search :: http://search-hadoop.com/
__
solrj jar
On Thu, Feb 18, 2010 at 10:52 PM, Pulkit Singhal
wrote:
> Hello All,
>
> When I use Maven or Eclipse to try and compile my bean which has the
> @Field annotation as specified in http://wiki.apache.org/solr/Solrj
> page ... the compiler doesn't find any class to support the
> annotation.
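If you're resolving dependencies with Maven, the @Field annotation (org.apache.solr.client.solrj.beans.Field) ships in the solr-solrj artifact; the version below is illustrative:

```xml
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-solrj</artifactId>
  <version>1.4.0</version>
</dependency>
```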
Jagdish Vasani-2 wrote:
>
> Hi,
>
> you should not optimize index after each insert of document.insted you
> should optimize it after inserting some good no of documents.
> because in optimize it will merge all segments to one according to
> setting
> of lucene index.
>
> thanks,
> Jagdish
>