Mike Klaas wrote:
On 30-Aug-07, at 4:01 PM, Chris Hostetter wrote:
You could accomplish the goal without any coding by using phrase
queries: "calico calico calico"~1 will match only documents
that have at least three occurrences of calico. If this is
performant enough, you are done. Otherwise, you'll have to do some custom coding.
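For reference, the same sloppy phrase can be built programmatically; a minimal sketch against the Lucene 2.x-era API (the field name "text" and the index path are assumptions, not from the thread):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.PhraseQuery;

    public class RepeatedTermDemo {
      public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("/path/to/index");
        PhraseQuery pq = new PhraseQuery();   // "calico calico calico"~1
        pq.add(new Term("text", "calico"));
        pq.add(new Term("text", "calico"));
        pq.add(new Term("text", "calico"));
        pq.setSlop(1);
        Hits hits = searcher.search(pq);
        System.out.println(hits.length() + " matching documents");
        searcher.close();
      }
    }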
Just to make sure: you mean we can create a directory containing the shared
jars, and each solr home's lib/ will symlink to the jar files in that
directory, right?
correct.
-Hoss
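In concrete terms, the shared-jar layout could look like this; a minimal shell sketch, with all paths and jar names being assumptions for illustration:

    mkdir -p /opt/solr/shared-lib
    cp custom-plugins.jar /opt/solr/shared-lib/
    # each Solr home picks up the same jar via a symlink in its lib/ dir
    ln -s /opt/solr/shared-lib/custom-plugins.jar /opt/solr/home1/lib/
    ln -s /opt/solr/shared-lib/custom-plugins.jar /opt/solr/home2/lib/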
Thanks. I didn't mean to send that to the list-serv :}
On 8/31/07, Bertrand Delacretaz <[EMAIL PROTECTED]> wrote:
>
> On 8/31/07, Tim Archambault <[EMAIL PROTECTED]> wrote:
> > ...I'm thinking of sending a similar
> > list-serv item out, but I noticed this is a solr-user list, not
> > necessarily
>
On 8/31/07, Tim Archambault <[EMAIL PROTECTED]> wrote:
> ...I'm thinking of sending a similar
> list-serv item out, but I noticed this is a solr-user list, not necessarily
> a developers list so I thought I'd ask
Note that there's also [EMAIL PROTECTED] for such purposes, see
http://www.apach
Mark,
Did you get any responses to your inquiry? I'm thinking of sending a similar
list-serv item out, but I noticed this is a solr-user list, not necessarily
a developers list so I thought I'd ask.
I'm looking for someone to integrate Solr with Drupal.
Tim Archambault
Online Manager
Bangordailynews.
Sorry dude, I'm pining for Python and coding in Java. --wunder
On 8/30/07 6:57 PM, "Erik Hatcher" <[EMAIL PROTECTED]> wrote:
>
> On Aug 30, 2007, at 6:31 PM, Mike Klaas wrote:
>> Another reason why people use stored procs is to prevent multiple
>> round-trips in a multi-stage query operation. This is exactly what
>> complex RequestHandlers do (and the equivalent to a custom stored
>> proc would be writing your own handler).
On Aug 30, 2007, at 6:31 PM, Mike Klaas wrote:
Another reason why people use stored procs is to prevent multiple
round-trips in a multi-stage query operation. This is exactly what
complex RequestHandlers do (and the equivalent to a custom stored
proc would be writing your own handler).
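To make the analogy concrete, here is a minimal sketch of a custom handler under the Solr 1.2-era plugin API; the class name and the echoed parameter are illustrative assumptions, not code from the thread:

    import org.apache.solr.handler.RequestHandlerBase;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.request.SolrQueryResponse;

    public class TwoStageHandler extends RequestHandlerBase {
      public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
          throws Exception {
        // Any multi-stage work done here runs server-side, inside a single
        // HTTP round-trip: the "stored procedure" analogy made above.
        String q = req.getParams().get("q");
        rsp.add("echoedQuery", q);  // placeholder for the real multi-stage work
      }
      public String getDescription() { return "two-stage demo handler"; }
      public String getSourceId() { return "$Id$"; }
      public String getSource() { return "$URL$"; }
      public String getVersion() { return "1.0"; }
    }

It would then be registered in solrconfig.xml with a <requestHandler name="/twostage" class="TwoStageHandler"/> entry.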
OK... I see. Thank you, Mike.
2007/8/31, Mike Klaas <[EMAIL PROTECTED]>:
>
>
> On 29-Aug-07, at 10:21 PM, James liu wrote:
>
> > Does it vary with doc size?
> >
> > For example, 2 billion docs at 10k each versus 2 billion docs where
> > each doc is 10m.
>
> There might be other places that have a 2G limit (see the Lucene index
> format docs), but many things are vints and can grow larger.
Thanks, Hoss,
>> you still use a separate lib directory for each solr home and
>> symlink each jar ...
Just to make sure: you mean we can create a directory containing the shared
jars, and each solr home's lib/ will symlink to the jar files in that
directory, right?
Thanks,
-Hui
On 8/30/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
On 30-Aug-07, at 4:01 PM, Chris Hostetter wrote:
You could accomplish the goal without any coding by using phrase
queries: "calico calico calico"~1 will match only documents
that have at least three occurrences of calico. If this is
performant enough, you are done. Otherwise, you'll have to do some
custom coding.
You could accomplish the goal without any coding by using phrase queries:
"calico calico calico"~1 will match only documents that have at least
three occurrences of calico. If this is performant enough, you are done.
Otherwise, you'll have to do some custom coding.
I'll be searching art
Mike Klaas wrote:
On 30-Aug-07, at 1:22 PM, Jed Reynolds wrote:
Jed Reynolds wrote:
Apologies if this is in the Lucene FAQ, but I was looking thru the
Lucene syntax and I just didn't see it.
Is there a way to search for documents that have a certain number of
occurrences of a term in the document? Like, I want to find all
documents that have the term Calico mentioned three times?
On 30-Aug-07, at 3:30 PM, Chris Hostetter wrote:
One way would be to create your own Query subclass (similar to
TermQuery) that returned a score of zero for docs below a certain
tf threshold. This is
minor clarification: a score of zero is still a match ... the key
to writing custom queries is to "skip" past a document that doesn't
meet the threshold.
On 30-Aug-07, at 3:18 PM, Chris Hostetter wrote:
2. Someone asked me if SOLR utilizes anything like a "stored
procedure" to make queries faster. Does SOLR support anything
such as this?
it's kind of an apples vs orange-juice comparison, but typically
when people talk about DB stored procedures being faster than raw SQL
they are referring to
One way would be to create your own Query subclass (similar to TermQuery)
that returned a score of zero for docs below a certain tf threshold. This is
minor clarification: a score of zero is still a match ... the key to
writing custom queries is to "skip" past a document that doesn't meet the
threshold.
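A minimal sketch of that skip-below-threshold idea, using the raw Lucene 2.x TermDocs API rather than a full Query subclass (the index path, field name, and threshold are assumptions):

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermDocs;

    public class TfThresholdDemo {
      public static void main(String[] args) throws IOException {
        IndexReader reader = IndexReader.open("/path/to/index");
        TermDocs td = reader.termDocs(new Term("text", "calico"));
        while (td.next()) {
          if (td.freq() >= 3) {          // within-document term frequency
            System.out.println("doc " + td.doc() + " has 3+ occurrences");
          }
        }
        td.close();
        reader.close();
      }
    }

A real Query subclass would do the same freq() check inside its Scorer's next()/skipTo() so that low-tf documents are never reported as hits at all.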
2. Someone asked me if SOLR utilizes anything like a "stored procedure"
to make queries faster. Does SOLR support anything such as this?
it's kind of an apples vs orange-juice comparison, but typically when
people talk about DB stored procedures being faster than raw SQL they are
referring to
Scenario 1: I want to search for "priya" in double quotes. The result
should be only priya, which is 1 record.
My program normally issues (searchtext*), which searches all records. So
I tried ("priya"*), but Solr was unable to parse it. So I gave
("priya"\*) which gives the result only
* can we set up multiple Solr home directories within the same Solr
instance? (I want to use the same Tomcat Solr instance to support indexing
and searching over multiple independent indexes.)
Yes: using JNDI you can configure multiple instances of Solr, each with a
separate solr home. the t
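As an illustration, with Tomcat this is typically done by deploying the Solr webapp under two context files, each setting solr/home via a JNDI environment entry; a minimal sketch in which all paths and context names are assumptions:

    <!-- conf/Catalina/localhost/solr1.xml -->
    <Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
      <Environment name="solr/home" type="java.lang.String"
                   value="/opt/solr/home1" override="true"/>
    </Context>

    <!-- conf/Catalina/localhost/solr2.xml -->
    <Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
      <Environment name="solr/home" type="java.lang.String"
                   value="/opt/solr/home2" override="true"/>
    </Context>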
On 28-Aug-07, at 3:04 AM, Stephanie Belton wrote:
Hi,
I need to programmatically put search terms through the query analyser
and retrieve the result. I thought the easiest way to do this would be
to call the existing /solr/admin/analysis.jsp, but it would be so much
nicer if there was
On 28-Aug-07, at 6:19 AM, michael ravits wrote:
Hello Solrs,
I have an index with 30M records, weighing ~50GB, on the latest trunk
version, with a heap size of 1024MB.
Queries work fine until I specify a field to sort results by. Even if
the result set consists of only 2 documents, the CPU jumps high and
On 30-Aug-07, at 9:43 AM, Panbodee Mekpaiboon wrote:
It seems like Solr uses only one index (which will be created under the
<dataDir> tag) but I need to create more than one index, and it would be
nice to be able to specify the location of each index etc. Is there any
way to manage Solr indexes on the fly (for the embedded version of Solr)?
On 30-Aug-07, at 12:09 PM, Lance Norskog wrote:
Is there an app that walks a Lucene index and checks for corruption?
How would we know if our index had become corrupted?
Try asking on [EMAIL PROTECTED]
-Mike
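For what it's worth, later Lucene releases added a checker for exactly this, org.apache.lucene.index.CheckIndex; a minimal sketch of running it, with the jar name and index path being assumptions:

    java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index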
On 30-Aug-07, at 1:41 PM, Giri wrote:
Tom,
Thank you very much for the help.
>> If I have multiple values, I add them as separate occurrences of the
>> field I am faceting on.
Does this mean that, for a single record, I can add multiple values for a
field? For example, for the field "sensor" I can send multiple values?
On 30-Aug-07, at 1:22 PM, Jed Reynolds wrote:
Jed Reynolds wrote:
Apologies if this is in the Lucene FAQ, but I was looking thru the
Lucene syntax and I just didn't see it.
Is there a way to search for documents that have a certain number of
occurrences of a term in the document? Like, I want to find all documents
that have the term Calico mentioned three times?
Tom,
Thank you very much for the help.
>> If I have multiple values, I add them as separate occurrences of the
>> field I am faceting on.
Does this mean that, for a single record, I can add multiple values for a
field? For example, for the field "sensor" I can send multiple values?
Let me try this and get back to you.
Jed Reynolds wrote:
Apologies if this is in the Lucene FAQ, but I was looking thru the
Lucene syntax and I just didn't see it.
Is there a way to search for documents that have a certain number of
occurrences of a term in the document? Like, I want to find all documents
that have the term Calico mentioned three times?
Hi -
I wouldn't facet on a "text" field; I tend to use "string" for the reasons
you describe: e.g. use a string-typed field declaration (or, in your
example, a multivalued one).
If I have multiple values, I add them as separate occurrences of the field I
am faceting on.
If you still need them all in one field for other reasons, use copyField
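A minimal schema.xml sketch of that setup (the field names are assumptions based on the question above; the exact declarations from the original mail were not preserved):

    <field name="keywords" type="string" indexed="true" stored="true"
           multiValued="true"/>
    <!-- optional: keep a tokenized copy for full-text search -->
    <field name="keywords_text" type="text" indexed="true" stored="false"
           multiValued="true"/>
    <copyField source="keywords" dest="keywords_text"/>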
Is there an app that walks a Lucene index and checks for corruption?
How would we know if our index had become corrupted?
Thanks,
Lance
On 30-Aug-07, at 10:57 AM, Nathaniel E. Powell wrote:
Is there functionality for partitioning Solr indexes onto multiple
machines? For this to work, I suppose that Solr would have to
combine the results from the various machines. I think Nutch does
this with the distributed searcher functionality.
Is there functionality for partitioning Solr indexes onto multiple machines?
For this to work, I suppose that Solr would have to combine the results from
the various machines. I think Nutch does this with the distributed searcher
functionality.
-Nathan
On 29-Aug-07, at 10:21 PM, James liu wrote:
Does it vary with doc size?
For example, 2 billion docs at 10k each versus 2 billion docs where each
doc is 10m.
There might be other places that have a 2G limit (see the Lucene index
format docs), but many things are vints and can grow larger.
Of course
On 30-Aug-07, at 9:51 AM, Andrew Nagy wrote:
Here are a few SOLR performance questions:
1. I have noticed with 500,000+ records that my facet queries over my
dataset run quite fast when there is a large number of matches, but on a
small result set (say 10 - 50) the facet queries become very slow. Any
suggestions as to how to improve this?
Apologies if this is in the Lucene FAQ, but I was looking thru the
Lucene syntax and I just didn't see it.
Is there a way to search for documents that have a certain number of
occurrences of a term in the document? Like, I want to find all documents
that have the term Calico mentioned three times?
Here are a few SOLR performance questions:
1. I have noticed with 500,000+ records that my facet queries over my dataset
run quite fast when there is a large number of matches, but on a small result
set (say 10 - 50) the facet queries become very slow. Any suggestions as to
how to improve this?
It seems like Solr uses only one index (which will be created under the
<dataDir> tag) but I need to create more than one index, and it would be
nice to be able to specify the location of each index etc. Is there any way
to manage Solr indexes on the fly (for the embedded version of Solr)?
Hi,
I am trying to get the facet values from a field that contains multiple
words, for example:
I have a field "keywords"
and values for this: Keywords= relative humidity, air temperature,
atmospheric moisture
Please note: I am combining multiple keywords into one single field,
separated by commas.
Take a look at the unit tests for examples of how to use the API.
Also, solrj is a client API (a library), not a standalone client for
running queries:
http://svn.apache.org/viewvc/lucene/solr/trunk/client/java/solrj/test/org/apache/solr/client/solrj/
- will
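As a starting point, here is a minimal sketch of querying Solr through solrj, assuming the API roughly as it stabilized for Solr 1.3 (the server URL, query string, and field name are assumptions):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class SolrjQueryDemo {
      public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("calico");
        query.setRows(10);
        QueryResponse rsp = server.query(query);
        for (SolrDocument doc : rsp.getResults()) {
          System.out.println(doc.getFieldValue("id"));
        }
      }
    }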
On Aug 30, 2007, at 5:56 AM, Thierry Collogne wrote:
I don't think the client can be run directly. We have developed a small
application that uses the client as an interface to solr.
On 30/08/2007, Teruhiko Kurosaka <[EMAIL PROTECTED]> wrote:
>
> Can anyone tell me how to use the Java client?
> I downloaded the complete source from SVN Solr trunk a