If you have an immediate need and are waiting for a release, I must
advise you against waiting for the Solr 1.3 release. The best strategy
would be to take a nightly build and start using it. Test it thoroughly,
and if bugs are found, report them back. If everything is fine, go into
production with that.
Poor English, I hope you can follow it.
A similar problem I met before was with the optimize operation.
The first time I sent it to Solr, the optimize operation did run,
but the files were not merged. When I sent another optimize to Solr,
all the files were merged immediately.
This seems to happen only on Windows.
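For reference, an optimize is triggered by posting an `<optimize/>` command to Solr's update handler. A minimal sketch using only the Python standard library (the URL and core layout are assumptions, not taken from this thread):

```python
from urllib.request import Request, urlopen

def optimize_request(update_url):
    """Build a POST request carrying Solr's <optimize/> command."""
    return Request(
        update_url,
        data=b"<optimize/>",
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )

# Sending it needs a running Solr; the URL here is a hypothetical default:
# urlopen(optimize_request("http://localhost:8983/solr/update"))
```

On Windows, open file handles can prevent deletion of merged-away segment files, which may explain why a second optimize (after the searcher reopens) makes the old files disappear.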
2008/5/13, Yonik Seeley <[EMAIL PROTECTED]>:
Otis:
I will take a look at the DistributedSearch page on the Solr wiki.
Thanks,
Bill
--
From: "Otis Gospodnetic" <[EMAIL PROTECTED]>
Sent: Thursday, May 15, 2008 12:54 PM
To:
Subject: Re: Some advice on scalability
Bill,
Quick feedback:
1) use 1.3-dev or 1.3 when it comes out, not 1.2
2) you did not mention Solr's distributed search functionality explicitly, so I
get a feeling you are not aware of it. See the DistributedSearch page on the
Solr wiki.
3) you definitely don't want a single 500M docs index that
On 15-May-08, at 12:50 AM, Tim Mahy wrote:
Hi,
yep it is a very strange problem that we never encountered before.
We are uploading all the documents again to see if that solves the
problem (hoping that the delete will delete also the multiple
document instances)
If you are re-adding ever
I have a custom query object that extends ConstantScoreQuery. I give it a key
which pulls some documents out of a cache. Thinking to make it more efficient,
I used DocSet, backed by OpenBitSet or OpenHashSet. However, I need to set the
BitSet object for the Lucene filter. Any idea on how to bes
I am not a Solr expert, but is it possible to build indexes based on
search statistics?
What if you could have a monitoring service that would generate statistics
of search queries and document returns, and place a weight on each query based
on occurrence, impact on the index, time to respond and
> Are you writing your XML by hand, as in no XML writer? That can cause
> problems. In your exception it says "latitude 59&"; the & should have
> been converted to '&amp;' (I think). If you can use Java 6, there is an
> XMLStreamWriter in javax.xml.stream that does automatic special character
> escaping. This
Folks:
We are building a search capability into our web app and plan to use Solr. While
we have the initial prototype version up and running on Solr 1.2, we are now
turning our attention to sizing/scalability.
Our app in brief: we get merchant SKU files (in either XML or CSV) which we
process an
Jae,
It sounds like you are doing a distributed search across your 3 cores on a
single Solr instance? Why not run 3 individual queries (parallel or serial,
your choice) and pick however many hits you need from each result?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
I've worked with the Basis products. Solid, good support.
Last time I talked to them, they were working on hooking
them into Lucene.
For really good quality results from any of these, you need
to add terms to the user dictionary of the segmenter. These
may be local jargon, product names, personal
On Thu, May 15, 2008 at 6:56 PM, dharhsana <[EMAIL PROTECTED]>
wrote:
Hello Umar,
Thank you so much for replying to me.
I have a requirement that I have to display the first 10 values in a JSP page. In
that page itself I have a (NEXT) button; when it is clicked, the page should query
the next 10 records from Solr. How can I implement this? Can you give some example
for this?
As per what you said
Hi,
I'm actually quite interested in this feature. What is the ranking
strategy for the group? Is it based on the highest-ranking document
within the group? Is it configurable?
cheers,
Uri
oleg_gnatovskiy wrote:
Yes, that is the patch I am trying to get to work. It doesn't have a feature
f
Hi,
I am looking for the best (or any possible) way to add WEIGHT to each core in a
multi-core environment.
Core 1 has about 10 million articles from the same publisher, and cores 2 and 3
have fewer than 10k each.
I would like to have a BALANCED query result, e.g. 10 from core 1, 10 from
core 2 and 10 from core 3.
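One way to get the balanced result described above is the approach suggested elsewhere in this thread: query each core separately and take the top N from each. A sketch of just the merge step, with in-memory lists standing in for the three per-core responses (all names and sizes are hypothetical):

```python
def balanced_merge(per_core_hits, per_core_limit=10):
    """Keep at most per_core_limit hits per core, preserving each
    core's own ranking, so a huge core cannot drown out small ones."""
    merged = []
    for core, hits in per_core_hits.items():
        merged.extend((core, hit) for hit in hits[:per_core_limit])
    return merged

results = {
    "core1": [f"a{i}" for i in range(100)],  # large core: plenty of hits
    "core2": [f"b{i}" for i in range(7)],    # small core: fewer than 10
    "core3": [f"c{i}" for i in range(25)],
}
merged = balanced_merge(results)  # 10 + 7 + 10 = 27 entries
```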
On Thu, May 15, 2008 at 3:16 PM, dharhsana <[EMAIL PROTECTED]>
wrote:
Hi, this is Rekha. I am new to Solr.
I need to search Solr with limits; for example, if we are going to get 100
records, those records should be separated into 10 records each in separate
XML responses. Can anyone give me the sample code?
With regards,
T.Rekha.