Hi,
I am trying to search the schema with the q query parameter. The query that gets formed is:
+(programJacketImage_program_s:test | courseCodeSeq_course_s:test |
authorLastName_product_s:test | Index_Type_s:test | prdMainTitle_s:test^10.0
| discCode_course_s:test | sourceGroupName_course_s:test |
Hi,
I am trying to override the getFieldQuery method of the QueryParser class, which
uses two classes, PositionIncrementAttribute and TokenAttribute, from the
org.apache.lucene.analysis.tokenattributes package.
I am not able to find this package.
Please help.
Thanks,
Amit Garg
It is possible, but with some work.
You may need to write a new RequestWriter implementation which extends
org.apache.solr.client.solrj.request.RequestWriter for that.
It will be a nice addition to SolrJ if it can be contributed back.
On Thu, Feb 26, 2009 at 9:04 AM, Erwin Lawardy wrote:
>
Hi All,
I have been uploading my rich documents (pdf/doc/xls) through a URL and it works
properly.
http://localhost:8983/solr/update/rich?stream.type=doc&stream.file=SOLR_HOME/test.pdf.doc&id=101&stream.fieldname=name&commit=true
Is there a way to do it through solrj as I am trying to build an appli
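Until a proper RequestWriter exists, one workaround is to rebuild the same URL from Java and issue it with any HTTP client. A minimal sketch; the endpoint and parameter names are taken from the URL above, and the helper name is hypothetical:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class RichUpdateUrl {
    // Hypothetical helper: rebuilds the /update/rich URL from the question
    // above so it can be issued from Java with any HTTP client.
    public static String build(String base, String file, String id, String fieldname) {
        try {
            String enc = "UTF-8";
            return base + "/update/rich"
                + "?stream.type=doc"
                + "&stream.file=" + URLEncoder.encode(file, enc)
                + "&id=" + URLEncoder.encode(id, enc)
                + "&stream.fieldname=" + URLEncoder.encode(fieldname, enc)
                + "&commit=true";
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }
}
```

This only builds the request string; the actual POST/GET and a contributed RequestWriter (as suggested in the reply above) would still be needed for a real SolrJ integration.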
Hi,
This might be a very silly question that's documented everywhere but I
just can't find an answer right now.
When we first implemented solr we used version 1.2 (of course, 1.3 was
released days afterwards). In the brief period we used 1.2, whenever
we wanted to delete a bunch of docum
Is it possible to have Solr remove duplicated query results?
For example, instead of returning:
Wireless
Wireless
Wireless
Video Games
Video Games
return:
Wireless
Video Games
Thanks a lot,
Kevin
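In the absence of server-side deduplication, a client-side sketch that keeps only the first occurrence of each value (preserving result order, as in the example above) might look like this; the class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class Dedupe {
    // Keeps the first occurrence of each value, preserving result order.
    // LinkedHashSet drops duplicates while remembering insertion order.
    public static List<String> firstOccurrences(List<String> values) {
        return new ArrayList<String>(new LinkedHashSet<String>(values));
    }
}
```

This trades result-count accuracy for simplicity: if you need a full page of unique values, you would over-fetch from Solr and dedupe down.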
Shalin, your patch worked perfectly for my use case.
Thanks to both of you for the information!
Amit Nithian wrote:
>
> I'm actually working on one for my company which parses our tomcat log
> files
> to obtain queries to feed as warming queries (since GET queries are the
> dominant source of queries) to
Unfortunately, I think the way this works is the container creates a
Classloader for each context and loads the contents of the .war into
that, regardless of whether each context references the same .war
file. All those classes are stored in permanent generation space, and
I'm fairly sure if you re
: Is there any debug settings to see where the time is taken during a
: distributed search?
I don't think so. The existing timing code will show you how much
time each search component took, but I don't think anything breaks it down
to isolate the remote requests.
: When I query all shards to
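For reference, the per-component timing mentioned above appears when you add debugQuery to a request, e.g.:

```
http://localhost:8983/solr/select?q=*:*&debugQuery=true
```

The response then includes a timing section listing prepare/process time per search component, though, as noted, not a per-shard breakdown.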
: I'm trying to understand the internal structure of the Lucene indexer.
: Well, according to the "Lucene in Action" book, the documents are first converted
: into the Lucene Document format, then analyzed with the StandardAnalyzer.
: I don't understand how the analyzed documents are added to the inverted index.
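As a rough illustration of that last step (a toy sketch, not Lucene's actual implementation): after analysis each document is a stream of terms, and the inverted index maps each term to the list of documents containing it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyInvertedIndex {
    // term -> sorted-by-insertion list of docIds containing that term
    private final Map<String, List<Integer>> postings = new HashMap<String, List<Integer>>();

    // Adds one analyzed document: its docId plus the tokens the analyzer produced.
    public void add(int docId, String... tokens) {
        for (String t : tokens) {
            List<Integer> docs = postings.get(t);
            if (docs == null) {
                docs = new ArrayList<Integer>();
                postings.put(t, docs);
            }
            if (!docs.contains(docId)) {
                docs.add(docId);
            }
        }
    }

    public List<Integer> docsFor(String term) {
        List<Integer> docs = postings.get(term);
        return docs == null ? new ArrayList<Integer>() : docs;
    }
}
```

Lucene's real postings also store term frequencies and positions (which is what the PositionIncrementAttribute elsewhere in this thread feeds into), but the term-to-documents mapping is the core idea.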
: > Yes, that's the standard trick. :)
: > > Ok, so it wouldn't be possible to have a smaller, faster authoritative
: > > shard for near-real-time updates while keeping the entire dataset in a
: > > second shard which is updates less frequently?
: Ok, now I'm confused, if the shard the document
: I see now that getBestTextFragments() takes in a token stream - and
: each token in this steam already has start/end positions set. So, the
: patch at LUCENE-1500 would mitigate the exception, but looks like the
: real bug is in Solr.
So what does the analysis screen tell you about each token?
Fair enough. We should update the wiki then? I think it currently
reads as if it's a supported feature rather than something you should avoid.
--
- Mark
http://www.lucidimagination.com
Yonik Seeley wrote:
On Wed, Feb 25, 2009 at 11:52 AM, Mark Miller wrote:
You are not supposed to h
On Wed, Feb 25, 2009 at 11:52 AM, Mark Miller wrote:
> "You are not supposed to have duplicates" is a bit strong - I was over-reading
> into something Yonik had mentioned in the past. It looks like it's supposed
> to become more useful:
Well, perhaps slightly more deterministic so that two queries r
"You are not supposed to have duplicates" is a bit strong - I was
over-reading into something Yonik had mentioned in the past. It looks like
it's supposed to become more useful:
I think Yonik might have to clear this up, but it looks like the current
implementation is not deterministic, and he ha
Thanks, I will try that. I also have the war file for each Solr instance in the
home directory of the instance; would that be the problem?
If I were to have a common war file for n instances, would there be any issue?
regards
revas
On 2/25/09, Michael Della Bitta wrote:
>
> It's possible I don't kn
I don't think you're supposed to have duplicate keys. I think it's supposed
to work more as a graceful failure than a feature you should count on.
IDs should be unique across the collection.
Ok, now I'm confused, if the shard the document comes from is
non-deterministic, how can you use this '
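One way to guarantee that uniqueness across shards is to make the indexer's shard choice deterministic, for example by hashing the unique key. A sketch of the idea (Solr itself did not route documents for you at the time; this would live in your own indexing code):

```java
public class ShardRouter {
    // Deterministic shard choice at index time: the same unique key always
    // maps to the same shard, so no document can exist on two shards and
    // the "which copy wins" question never arises at query time.
    public static int shardFor(String uniqueKey, int numShards) {
        // Mask off the sign bit so the result is always in [0, numShards).
        return (uniqueKey.hashCode() & 0x7fffffff) % numShards;
    }
}
```

Note this simple scheme breaks down if numShards changes, since existing keys would rehash to different shards; that is the trade-off behind the "smaller authoritative shard" design discussed earlier in the thread.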
I looked at that; elevate is a way to boost particular documents based on the query
terms used. I was thinking in a more general sense... For instance, when Google
displays search results, the 4th result (typically) is news results, then YouTube
results come in at another fixed position or better.
Use GET. That is the correct semantic for search results, you are
getting information. A POST is wrong, because the request does not
update that URL.
With a GET, you can use HTTP caching. Our HTTP cache for Solr has
a 75% hit rate.
wunder
On 2/25/09 1:51 AM, "Ajay Agrawal" wrote:
> Hi,
>
> I
It's possible I don't know enough about Solr's internals and there's a
better solution than this, and it's surprising me that you're running
out of PermGen space before you're running out of heap, but maybe
you've already increased the general heap size without tweaking
PermGen, and loading all the
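For reference, PermGen is sized separately from the general heap, so raising -Xmx alone does not help; with the example Jetty start the flags might look like this (values are illustrative):

```
java -Xmx1024m -XX:MaxPermSize=256m -jar start.jar
```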
Is there a way to exclude filters from a stats field, like it is possible to
exclude filters from a facet.field? It didn't work for me.
i.e: I have a field price, and although I filter on price, I would like to
be able to get the entire range (min,max) of prices as if I didn't specify
the filter.
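For comparison, the facet-side exclusion referred to above uses tagged filters via local params, roughly like this (the tag name `pr` is illustrative; whether stats.field honors `{!ex=...}` depends on your version or patch):

```
fq={!tag=pr}price:[10 TO 100]
facet=true
facet.field={!ex=pr}price
```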
Thanks for your Answer.
This is what I am trying to do:
I would like to find out how to customize the Lucene indexing process to
obtain a faster search,
either with Luke or with some other tool.
On Mon, Feb 23, 2009 at 6:53 PM, Erick Erickson wrote:
> please don't hijack topic threads, start a
One more question: when you run CheckIndex, are you enabling asserts?
(java -ea:org.apache.lucene)?
Mike
James Brady wrote:
Thanks for your answers Michael! I was using a pre-1.3 Solr build, but I've
now upgraded to the 1.3 release, run the new CheckIndex shipped as part of
the Lucene
James Brady wrote:
Thanks for your answers Michael! I was using a pre-1.3 Solr build, but I've
now upgraded to the 1.3 release, run the new CheckIndex shipped as part of
the Lucene 2.4 dev build, and I'm still getting the CorruptIndexException:
docs out of order exceptions, I'm afraid.
Otis Gospodnetic wrote:
Yes, that's the standard trick. :)
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: gwk
To: solr-user@lucene.apache.org
Sent: Wednesday, February 25, 2009 5:18:47 AM
Subject: Re: Distributed Search
Koji Sekiguchi wrote:
Does anyone know anything? I am going mad trying to make it work... it
compiles OK but at execution time I am getting these errors:
Feb 25, 2009 10:36:19 AM org.apache.catalina.core.StandardContext
filterStart
SEVERE: Exception starting filter SolrRequestFilter
java.lang.NoClassDefFoundError:
here are the details of the new replication
http://wiki.apache.org/solr/SolrReplication
On Wed, Feb 25, 2009 at 4:59 PM, Otis Gospodnetic
wrote:
>
> Hm, I know what you did is recommended, but I *think* I once set up a Solr
> instance that had multiple indices and only a single rsyncd. It's been a
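For reference, the master side of the replication described on that wiki page is configured in solrconfig.xml roughly like this (a sketch; see the wiki for the full set of options and the slave-side counterpart):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>
```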
Hi,
I am using the q.alt parameter for passing the query. But with this q.alt
parameter, it doesn't read the qf parameter from the solrconfig file. It reads the bq
parameter but not the qf. Hence the field boosting is not working with the q.alt
parameter.
Please help me understand how I can achieve that.
Thanks in advance.
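For context, a dismax handler with qf and bq defaults looks roughly like this (field names borrowed from the query earlier in this digest; note that q.alt is parsed by the standard query parser rather than dismax, which would explain qf applying only to q):

```xml
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- qf boosts apply to q, not to q.alt -->
    <str name="qf">prdMainTitle_s^10.0 authorLastName_product_s</str>
    <str name="bq">Index_Type_s:test^2.0</str>
    <!-- fallback when q is absent; parsed by the standard Lucene parser -->
    <str name="q.alt">*:*</str>
  </lst>
</requestHandler>
```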
Hm, I know what you did is recommended, but I *think* I once set up a Solr
instance that had multiple indices and only a single rsyncd. It's been a
while, so I don't recall the details. If you feel comfortable with 1.3-dev,
grab a nightly and use the new replication mechanism instead.
Otis
Sushan,
http://wiki.apache.org/solr/?action=fullsearch&context=180&value=cluster&titlesearch=Titles
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Sushan Rungta
> To: solr-user@lucene.apache.org
> Sent: Wednesday, February 25, 2009 4:18:
Yes, that's the standard trick. :)
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: gwk
> To: solr-user@lucene.apache.org
> Sent: Wednesday, February 25, 2009 5:18:47 AM
> Subject: Re: Distributed Search
>
> Koji Sekiguchi wrote:
> > gwk
Koji Sekiguchi wrote:
gwk wrote:
Hello,
The wiki states 'When duplicate doc IDs are received, Solr chooses
the first doc and discards subsequent ones', I was wondering whether
"the first doc" is the doc of the shard which responds first or the
doc in the first shard in the shards GET parameter
Hi,
I am a newbie with Solr; please help me to solve this issue.
I am facing a problem while sending a POST request to search for special
characters.
The GET request is working fine, but with the POST request we are not getting any
results.
Thanks,
Ajay
Great! This StatsComponent exactly meets my needs.
Any info about a stable Solr 1.4 release date? Any roadmap somewhere?
Thanks Erik
2009/2/23 Erik Hatcher
> Have a look at the StatsComponent, added after Solr 1.3 release though.
> You can grab a nightly build to have it built-in.
>
> More in
I am using Lucene on my website (clickindia.com), and it is giving me good
results.
Now I would like to implement "search result clustering" in my searches, and
I am unable to find any related documentation on it.
Please suggest some relevant documents which could help me in
dabboo wrote:
Hi,
I am trying to debug the code of the QueryParser class and other related files. I
have also taken the code of Lucene from its SVN, but it is not going to the
right place during debugging.
I wanted to know if I have taken the latest code and, if not, from where I
can take it.
Thanks
Hi
I am sure this question has been asked many times over and there have been
several generic answers, but I am looking for specific ones.
I have a single server whose configuration I give below; this being the only
server we have at present, the requirement is that every time we create a new
websi