On Mon, Oct 12, 2009 at 10:27 AM, Pravin Karne <
pravin_ka...@persistent.co.in> wrote:
> How do I set up a master/slave configuration for Solr?
>
>
Index documents only on the master. Put the slaves behind a load balancer
and query only the slaves. Set up replication between the master and the slaves.
See http://wiki.
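For reference, a minimal sketch of that setup with the Java-based ReplicationHandler
(Solr 1.4); the host name, poll interval, and confFiles list below are placeholders:

  <!-- solrconfig.xml on the master -->
  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">commit</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
  </requestHandler>

  <!-- solrconfig.xml on each slave -->
  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://master-host:8983/solr/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>

On Solr 1.3 the equivalent is the script-based (rsync snapshooter/snappuller)
replication, which is configured quite differently.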
On Mon, Oct 12, 2009 at 6:07 AM, Tommy Chheng wrote:
> The dummy data set is composed of 6 docs.
>
> My query is set for 'tommy' with the facet query of Memory_s:1+GB
>
> http://lh:8983/solr/select/?facet=true&facet.field=CPU_s&facet.field=Memory_s&facet.field=Video+Card_s&wt=ruby&facet.query=Memo
Yonik Seeley wrote:
On Sun, Oct 11, 2009 at 6:04 PM, Lance Norskog wrote:
And the other important
thing to know about boost values is that the dynamic range is about
6-8 bits
That's an index-time boost - an 8 bit float with 3 bits of mantissa
and 5 bits of exponent.
Query time boosts are norm
Koji Sekiguchi wrote:
> Hello,
>
> I found that rollback resets adds and docsPending count,
> but doesn't reset cumulative_adds.
>
> $ cd example/exampledocs
> # comment out the commit line in post.sh to avoid committing
> $ ./post.sh *.xml
> => docsPending=19, adds=19, cumulative_adds=19
>
> # do rollback
On Mon, Oct 12, 2009 at 5:58 AM, Andrzej Bialecki wrote:
> BTW, standard Collectors collect only results
> with positive scores, so if you want to collect results with negative scores
> as well then you need to use a custom Collector.
Solr never discarded non-positive hits, and now Lucene 2.9 no longer does either.
Is it possible to have two different facet.prefix values on the same facet field in
a single query? I want to get facet counts for two prefixes, "xx" and "yy". I
tried using two facet.prefix parameters (i.e. &facet.prefix=xx&facet.prefix=yy) but the
second one seems to have no effect.
Bill
OK, so fq != facet.query. I thought it was an alias. I'm trying your
suggestion fq=Memory_s:"1 GB" and now it's returning zero documents even
though there is one document that has "tommy" and "Memory_s:1 GB", as
seen in the original pastie (http://pastie.org/650932). I tried the fq
query body wit
I did an experiment that worked. In Solr::Request::Standard, in the
to_hash() method, I changed the commented line below to the two lines
following it.
  sort = @params[:sort].collect do |sort|
    key = sort.keys[0]
    "#{key.to_s} #{sort[key] == :descending ? 'desc' : 'asc'}"
  end.join(',')
Paul-
Trunk solr-ruby has this instead:
  hash[:sort] = @params[:sort].collect do |sort|
    key = sort.keys[0]
    "#{key.to_s} #{sort[key] == :descending ? 'desc' : 'asc'}"
  end.join(',') if @params[:sort]
The ";sort..." stuff is now deprecated with Solr itself
I suppose the 0.8 gem
Hi there,
I have a 2-node cluster running Apache and Solr over a shared
partition on top of DRBD. Think of it like a SAN.
I'm curious as to how I should do load balancing / sharing with Solr in
this setup. I'm already using DNS round robin for Apache.
My Solr installation is on /cluster/Solr.
In my search docs, I have content such as 'powershot' and 'powerShot'.
I would expect 'powerShot' to be searched as 'power', 'shot' and
'powershot', so that results for all these are returned. Instead, only results
for 'power' and 'shot' are returned.
Any suggestions?
In the schema, index and query the field with an analyzer that splits on
intra-word case changes while also keeping the original token.
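A minimal sketch of such a field type, assuming WordDelimiterFilterFactory (the
type name here is illustrative):

  <fieldType name="text_split" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- splits powerShot into power/shot, catenates them back to powershot,
           and keeps the original token as well -->
      <filter class="solr.WordDelimiterFilterFactory"
              generateWordParts="1" splitOnCaseChange="1"
              catenateWords="1" preserveOriginal="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

Check the result on the analysis admin page (admin/analysis.jsp) and reindex after
changing the schema.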
Thanks for your input, Shalin.
On Sun, Oct 11, 2009 at 12:30 AM, Shalin Shekhar Mangar
wrote:
>> - I can't use a variable like ${shardsParam} in a single shared
>> solrconfig.xml, because the line
>> ${shardsParam}
>> has to be in there, and that forces a (possibly empty) &shards
>> parameter
Yonik Seeley wrote:
On Mon, Oct 12, 2009 at 5:58 AM, Andrzej Bialecki wrote:
BTW, standard Collectors collect only results
with positive scores, so if you want to collect results with negative scores
as well then you need to use a custom Collector.
Solr never discarded non-positive hits, and now Lucene 2.9 no longer does either.
Avlesh,
I finally got it by doing an OR between the two fields, one with an exact-match
keyword and the other grouped.
q=suggestion:"formula xxx" OR tokenized_suggestion:(formula )
Thanks for all your help!
Rih
On Fri, Oct 9, 2009 at 4:26 PM, R. Tan wrote:
> I ended up with the sam
Hi,
I'm querying with an accented keyword such as "café" but the debug info
shows that it is only searching for "caf". I'm using the ISOLatin1Accent
filter as well.
Query:
http://localhost:8983/solr/select?q=%E9&debugQuery=true
Params return shows this:
true
What am I missing here?
Rih
OK, a hacky but working solution to making one core shard to all
others: have the default parameter *name* vary, so that one core gets
"&shards=foo" and all other cores get "&dummy=foo".
# solr.xml
...
# solrconfig.xml
${shardsValue}
...
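A rough sketch of how that trick might look (core, property, and host names here
are made up):

  <!-- solr.xml: every core defines the same two properties, but only the
       aggregator core uses the real parameter name -->
  <solr persistent="false">
    <cores adminPath="/admin/cores">
      <core name="aggregator" instanceDir="core0">
        <property name="shardsParamName" value="shards"/>
        <property name="shardsValue" value="host1:8983/solr/core1,host2:8983/solr/core2"/>
      </core>
      <core name="core1" instanceDir="core1">
        <property name="shardsParamName" value="dummy"/>
        <property name="shardsValue" value="unused"/>
      </core>
    </cores>
  </solr>

  <!-- solrconfig.xml shared by all cores: the parameter *name* is substituted -->
  <requestHandler name="standard" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <str name="${shardsParamName}">${shardsValue}</str>
    </lst>
  </requestHandler>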
Michael
On Mon, O
What tokenizer and filters are you using in what order? See schema.xml.
Also, you may wish to use ASCIIFoldingFilter, which covers more cases
than ISOLatin1AccentFilter.
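For example, a sketch of an analyzer chain using it (assuming Solr 1.4's
solr.ASCIIFoldingFilterFactory; the field type name is made up):

  <fieldType name="text_folded" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- folds accented characters to their ASCII equivalents, e.g. café -> cafe -->
      <filter class="solr.ASCIIFoldingFilterFactory"/>
    </analyzer>
  </fieldType>

The same folding has to happen at both index and query time, otherwise "café" and
"cafe" end up as different terms.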
Michael
On Mon, Oct 12, 2009 at 12:42 PM, R. Tan wrote:
> Hi,
> I'm querying with an accented keyword such as "café" but the
Hi,
I have indexed my XML, which contains the following data.
http://www.yahoo.com
yahoomail
yahoo has various links and gives in detail about
the all the links in it
http://www.rediff.com
It is a good website
Rediff has a interesting homepage
http://www.ndtv.com
Ndtv has
Hi,
How should we set up master and slaves in Solr? Which configuration files and
parameters do we need to change, and how?
Thanks,
Chaitali
--- On Mon, 10/12/09, Shalin Shekhar Mangar wrote:
From: Shalin Shekhar Mangar
Subject: Re: does solr support distributed index storage?
To: solr
Hi,
I am pushing data to Solr from two different sources, Nutch and a CMS.
I have a data clash: in Nutch, a copyField is required to push
the url field to the id field, as it is used as the primary lookup in
the Nutch-Solr integration update. The CMS also uses the url
field but
I've just pushed a new 0.0.8 gem to Rubyforge that includes the fix I
described for the sort parameter.
Erik
On Oct 12, 2009, at 11:03 AM, Paul Rosen wrote:
I did an experiment that worked. In Solr::Request::Standard, in the
to_hash() method, I changed the commented line below to t
On 10/12/2009 10:49 AM, Chaitali Gupta wrote:
Hi,
How should we set up master and slaves in Solr? Which configuration files and
parameters do we need to change, and how?
Thanks,
Chaitali
Hi -
I think Shalin was pretty clear on that; it is documented very well at
http://wiki.apache.org/so
Sorry for the hijack, but is replication necessary when using a cluster
file system such as GFS2, where the files are the same for any
instance of Solr?
On Mon, Oct 12, 2009 at 8:36 PM, Dan Trainor wrote:
> On 10/12/2009 10:49 AM, Chaitali Gupta wrote:
>>
>> Hi,
>>
>> How should we setup master
You can reverse the sort order. In this case, you want score ascending:
sort=score+asc
If you just want documents without that keyword, then try using the minus
sign:
q=-good
http://wiki.apache.org/solr/CommonQueryParameters
-Nick
On Mon, Oct 12, 2009 at 1:19 PM, bhaskar chandrasekar
wrote:
The easiest way to boost your query is to modify your query string.
q=product:red color:red^10
In the above example, I have boosted the color field. If "red" is found in
that field, it will get a boost of 10. If it is only found in the product
field, then there will be no boost.
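If you'd rather not edit every query string by hand, one alternative (a sketch
assuming the dismax query parser; the handler name is made up) is to put the boosts
into a request handler in solrconfig.xml:

  <requestHandler name="/products" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="defType">dismax</str>
      <!-- a match in color counts ten times as much as a match in product -->
      <str name="qf">product^1.0 color^10.0</str>
    </lst>
  </requestHandler>

A plain q=red against that handler then gets the same field boosting.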
Here's more info
On Mon, Oct 12, 2009 at 12:03 PM, Andrzej Bialecki wrote:
>> Solr never discarded non-positive hits, and now Lucene 2.9 no longer
>> does either.
>
> Hmm ... The code that I pasted in my previous email uses
> Searcher.search(Query, int), which in turn uses search(Query, Filter, int),
> and it does
> Hi,
> I am pushing data to solr from two different sources nutch
> and a cms. I have a data clash in that in nutch a copyField
> is required to push the url field to the id field as it is
> used as the primary lookup in the nutch solr
> integration update. The CMS also uses the url field
Where does the quote come from :)
On Sat, Oct 10, 2009 at 6:38 AM, Israel Ekpo wrote:
> I can't wait...
>
> --
> "Good Enough" is not good enough.
> To give anything less than your best is to sacrifice the gift.
> Quality First. Measure Twice. Cut Once.
>
Is it possible to do searches from within an UpdateRequestProcessor? The
documents in my index reference each other. When a document is deleted, I
would like to update all documents containing a reference to the deleted
document. My initial idea is to use a custom UpdateRequestProcessor. Is
the
Hi,
I'm attempting to optimize a pretty large index, and even though the optimize
request timed out, I watched it using a profiler and saw that the optimize
thread continued executing. Eventually it completed, but in the background I
still see a thread performing a merge:
Lucene Merge Thread #0
Try this in solrconfig.xml:
1
Yes you can stop the process mid-merge. The partially merged files
will be deleted on restart.
We need to update the wiki?
On Mon, Oct 12, 2009 at 4:05 PM, Giovanni Fernandez-Kincade
wrote:
> Hi,
> I'm attempting to optimize a pretty large index, and even tho
Do you have to make a new call to optimize to make it start the merge again?
-Original Message-
From: Jason Rutherglen [mailto:jason.rutherg...@gmail.com]
Sent: Monday, October 12, 2009 7:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Lucene Merge Threads
Try this in solrconfig.xml:
It looks like there is a JIRA covering this:
https://issues.apache.org/jira/browse/SOLR-1387
On Mon, Oct 12, 2009 at 11:00 AM, Bill Au wrote:
> Is it possible to have two different facet.prefix values on the same facet field
> in a single query? I want to get facet counts for two prefixes, "xx" and
> "y
I am having trouble generating the XSL file for multivalue entries. I'm not
sure if I'm missing something, or if this is how it is supposed to function. I
have two authors and I'd like to have separate ByLine notes in my
translation.
Here is what solr returns normally
...
Crista Souza
Darrell Dunn
It is my email signature.
It is a sort of hybrid/mashup from different sources.
On Mon, Oct 12, 2009 at 6:49 PM, Michael Masters wrote:
> Where does the quote come from :)
>
> On Sat, Oct 10, 2009 at 6:38 AM, Israel Ekpo wrote:
> > I can't wait...
> >
> > --
> > "Good Enough" is not good enoug
Hi Nicholas,
Thanks for your input. Where exactly should the query
q=product:red color:red^10
be used and defined?
Help me.
Regards
Bhaskar
--- On Mon, 10/12/09, Nicholas Clark wrote:
From: Nicholas Clark
Subject: Re: Boosting of words
To: solr-user@lucene.apache.org
Date: Monday, Oct
This didn't end up working. I got the following error when I tried to commit:
Oct 12, 2009 8:36:42 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error loading class '5'
at org.apache.solr.core.SolrResourceLoader.findCla
Hi,
I am using Solr 1.3 for spell checking. I am facing a strange problem where the
spell check index is not being generated. When I have fewer
documents indexed (less than 1000), the spell check index builds, but
when there are more documents (around 40K), the index for spell checking
does not get built.
On Tue, Oct 13, 2009 at 8:36 AM, Varun Gupta wrote:
> Hi,
>
> I am using Solr 1.3 for spell checking. I am facing a strange problem where the
> spell check index is not being generated. When I have fewer
> documents indexed (less than 1000), the spell check index builds, but
> when the docum
No, there are no exceptions in the logs.
--
Thanks
Varun Gupta
On Tue, Oct 13, 2009 at 8:46 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Tue, Oct 13, 2009 at 8:36 AM, Varun Gupta
> wrote:
>
> > Hi,
> >
> > I am using Solr 1.3 for spell checking. I am facing a strange problem
A custom UpdateRequestProcessor is the solution. You can access the
searcher in an UpdateRequestProcessor.
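For what it's worth, a sketch of how such a processor is wired up in solrconfig.xml
(the factory class name here is hypothetical; inside the processor the index is
reachable through SolrQueryRequest.getSearcher()):

  <updateRequestProcessorChain name="reference-cleanup">
    <!-- custom factory: finds documents that reference the deleted one -->
    <processor class="com.example.ReferenceCleanupProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
    <processor class="solr.LogUpdateProcessorFactory"/>
  </updateRequestProcessorChain>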
On Tue, Oct 13, 2009 at 4:20 AM, Bill Au wrote:
> Is it possible to do searches from within an UpdateRequestProcessor? The
> documents in my index reference each other. When a document is d
: Maybe I'm missing something, but function queries aren't involved in
: determining whether a document matches or not, only its score. How is a
: custom function / value-source going to filter?
it's not ... i didn't realize that was the context of the question, i was
just answering the speci
: I had to be brief as my facets are in the order of 100K over 800K documents
: and also if I give the complete schema.xml I was afraid nobody would read my
: long message :-) ..Hence I showed only relevant pieces of the result showing
: different fields having same problem
relevant is good, but
: There is a Solr.PatternTokenizerFactory class which likely fits the bill in
: this case. The related question I have is this - is it possible to have
: multiple Tokenizers in your analysis chain?
No .. Tokenizers consume CharReaders and produce a TokenStream ... what's
needed here is a TokenFilter
: In the code I'm working with, I generate a cache of calculated values as a
: by-product within a Filter.getDocidSet implementation (and within a Query-ized
: version of the filter and its Scorer method) . These values are keyed off the
: IndexReader's docID values, since that's all that's accessi
Hey
Any reason why it may be happening ??
Regards
Rohan
On Sun, Oct 11, 2009 at 9:25 PM, rohan rai wrote:
>
> Small data set..
>
>
>
> 11
> 11
> 11
>
>
> 22
> 22
> 22
>
>
> 33
> 33
> 33
>
>
>
> data-config
>
>
>
> forE
which version of Solr are you using? the 1 syntax was added recently
On Tue, Oct 13, 2009 at 8:08 AM, Giovanni Fernandez-Kincade
wrote:
> This didn't end up working. I got the following error when I tried to commit:
>
> Oct 12, 2009 8:36:42 PM org.apache.solr.common.SolrException log
> SEVERE: or