Hi,
You can comment out all sections in solrconfig.xml that point to a cache.
However, there is a cache deep inside Lucene - the field cache - that can't
be commented out; that cache will always come into the picture.
If I need to do such things, I restart the whole tomcat6 server to flush ALL
caches.
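For reference, such a full restart might look like this (assuming a Debian/Ubuntu-style install where the service is registered as tomcat6; adjust the service name for your system):

```shell
# Stop and start Tomcat to drop every in-process cache,
# including the Lucene field cache that solrconfig.xml cannot disable
sudo service tomcat6 restart
```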
So,
the only way we currently have to integrate ZooKeeper is by passing
'-DzkHost=url:port_of_ZooKeeper' when we start up a Solr instance?
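A minimal sketch of such a startup, assuming the stock Jetty example distribution and a ZooKeeper ensemble reachable on localhost:2181 (host and port are placeholders):

```shell
# Point this Solr instance at an external ZooKeeper ensemble;
# zkHost accepts a comma-separated list of host:port pairs
java -DzkHost=localhost:2181 -jar start.jar
```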
+ I've noticed that when a solr instance goes down, the node becomes inactive
in ZooKeeper - but the node is maintained in the list of nodes. How can you
remove a solr
do you mean queryResultCache? you can comment out the related section in
solrconfig.xml
see http://wiki.apache.org/solr/SolrCaching
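For illustration, the stanza to comment out would look roughly like this (class and sizes are the stock defaults from the example solrconfig.xml; yours may differ):

```xml
<!-- Disabled for testing: with this section commented out,
     Solr no longer caches ordered result sets.
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>
-->
```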
2011/2/8 Isan Fulia :
> Hi,
> My solrConfig file looks like
>
>
>
>
>
> multipartUploadLimitInKB="2048" />
>
>
> default="true" />
>
> class="org.apache.so
I am not sure I understand your question.
But you can boost the result based on one value over another value.
Look at bf
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_change_the_score_of_a_document_based_on_the_.2Avalue.2A_of_a_field_.28say.2C_.22popularity.22.29
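As a sketch, passing a boost function on a hypothetical "popularity" field might look like this (field name, host, and core are assumptions; requires a running Solr):

```shell
# Boost matching documents by the value of their "popularity" field;
# the dismax/edismax parsers accept the bf (boost function) parameter
curl 'http://localhost:8983/solr/select?defType=dismax&q=ipod&bf=popularity'
```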
On Wed, Feb 9, 2011 a
It would be good if someone added the hits= output for group=true in the log.
We are using this parameter and have built a really cool SOLR log analyzer
(that I am pushing to release to open source).
But it is not as effective if we cannot get group=true to output hits= in
the log - since 90% of our queri
On Thu, Feb 10, 2011 at 4:08 PM, Stijn Vanhoorelbeke
wrote:
> Hi,
>
> I've done some stress testing onto my solr system ( running in the ec2 cloud
> ).
> From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
> on an index of ~2 million documents ).
>
> My first thought pointe
Optimize will do just what you suggest, although there's a
parameter whose name escapes me controlling how many
segments the index is reduced to, so this is configurable.
It's also possible, but kind of unlikely, that the original indexing
process would produce only one segment. You could tell whic
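If memory serves, the parameter in question is maxSegments on the optimize update command; a sketch of issuing it against a local index (URL and segment count are placeholders; requires a running Solr):

```shell
# Ask Solr to merge the index down to at most 2 segments
# instead of the default single segment
curl 'http://localhost:8983/solr/update' \
  -H 'Content-Type: text/xml' \
  --data-binary '<optimize maxSegments="2"/>'
```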
Does optimize merge all segments into 1 segment on the master after the build?
Or is there only 1 segment after the build?
thanks,
Tri
From: Erick Erickson
To: solr-user@lucene.apache.org
Sent: Thu, February 10, 2011 5:08:44 PM
Subject: Re: running optimize
Optimizing isn't necessary in your scenario, as you don't delete
documents and you rebuild the whole thing each time anyway.
As for faster searches, this has largely been made obsolete
by recent changes in how indexes are built in the first place. Especially
as you can build your index in an hour
Let's see what the queries are. If you're searching for single
terms that don't match many docs that's one thing. If you're looking
at many terms that match many documents, I'd expect larger numbers.
Unless you're hitting the document cache and not searching at all
Best
Erick
On Thu, Feb 10,
Hi,
I've read that running optimize is similar to running defrag on a hard disk.
Deleted
docs are removed and segments are reorganized for faster searching.
I have a couple of questions.
Is optimize necessary if I never delete documents? I build the index every
hour but we don't delete in between
On Thu, Feb 10, 2011 at 5:51 PM, Ryan McKinley wrote:
>>
>> foo_s:foo\-bar
>> is a valid lucene query (with only a dash between the foo and the
>> bar), and presumably it should be treated the same in edismax.
>> Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
>> bar) might
>
> foo_s:foo\-bar
> is a valid lucene query (with only a dash between the foo and the
> bar), and presumably it should be treated the same in edismax.
> Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
> bar) might cause more problems than it's worth?
>
I don't think we shou
On Thu, Feb 10, 2011 at 5:00 PM, Stijn Vanhoorelbeke
wrote:
> I've completed the quick&dirty tutorials of SolrCloud ( see
> http://wiki.apache.org/solr/SolrCloud ).
> The whole concept of SolrCloud and ZooKeeper looks very promising indeed.
>
> I found also some info about a 'ZooKeeperComponent' -
: "essentially that FOO:BAR and FOO\:BAR would be equivalent if FOO is
: not the name of a real field according to the IndexSchema"
:
: That part is true, but doesn't say anything about escaping. And for
: some unknown reason, this no longer works.
that's the only part i was referring to.
-Hos
Hi,
I've followed the guide & it worked perfectly for me.
( I had to execute ant compile - not ant example, but it's not likely that
was your problem ).
2011/1/2 siddharth
>
> I seemed to have figured out the problem. I think it was an issue with the
> JAVA_HOME being set. The build was failing while com
On Thu, Feb 10, 2011 at 3:05 PM, Ryan McKinley wrote:
> ah -- that makes sense.
>
> Yonik... looks like you were assigned to it last week -- should I take
> a look, or do you already have something in the works?
I got busy on other things, and don't have anything in the works.
I think edismax sho
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
wrote:
>
> : extending edismax. Perhaps when F: does not match a given field, it
> : could auto escape the rest of the word?
>
> that's actually what yonik initially said it was supposed to do
Hmmm, not really.
"essentially that FOO:BAR and FOO\:BA
Hi,
Is it possible to monitor the QTime of the queries?
I know I could enable logging - but then all of my requests are logged,
making big&nasty logs.
I just want to log the QTime periodically, let's say once every minute.
Is this possible using Solr, or can this be set up in Tomcat somehow?
Hi,
I've done some stress testing onto my solr system ( running in the ec2 cloud
).
From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
on an index of ~2 million documents ).
My first thought pointed me to the different Solr caches; so I've disabled
all of them. Yet QTime
Hi,
I've completed the quick&dirty tutorials of SolrCloud ( see
http://wiki.apache.org/solr/SolrCloud ).
The whole concept of SolrCloud and ZooKeeper looks very promising indeed.
I also found some info about a 'ZooKeeperComponent' - from this component it
should be possible to configure ZooKeeper
ah -- that makes sense.
Yonik... looks like you were assigned to it last week -- should I take
a look, or do you already have something in the works?
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
wrote:
>
> : extending edismax. Perhaps when F: does not match a given field, it
> : could auto
Hmmm, never noticed that link before, thanks!
Which shows you how much I can ignore that's perfectly
obvious ...
Works like a champ.
Erick
On Thu, Feb 10, 2011 at 2:05 PM, Shane Perry wrote:
> I tried posting from gmail this morning and had it rejected. When I
> resent as plaintext, it went t
Just for the first part: There's no problem here, the write lock
is to keep simultaneous *writes* from occurring, the slave reading
the index doesn't enter in to it. Note that in Solr, when segments
are created in an index, they are write-once. So basically what
happens when a slave replicates is t
: extending edismax. Perhaps when F: does not match a given field, it
: could auto escape the rest of the word?
that's actually what yonik initially said it was supposed to do, but when i
tried to add a param to let you control which fields would be supported
using the ":" syntax i discovered i
I am using the edismax query parser -- it's awesome! It works well for
standard dismax type queries, and allows explicit fields when
necessary.
I have hit a snag when people enter something that looks like a windows path:
F:\path\to\a\file
this gets parsed as:
F:\path\to\a\file
F:\path\to\a\file
+
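For reproduction, the problematic request would look roughly like this (host, core, and the file path are placeholders; with edismax, the text before the first colon can be taken as a field name):

```shell
# "F:" is not a real field in the schema, so how edismax handles the
# unescaped colon is exactly what this thread is discussing
curl 'http://localhost:8983/solr/select?defType=edismax&debugQuery=true' \
  --data-urlencode 'q=F:\path\to\a\file' --get
```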
I have had the same problem .. my facet pivots were returning results
something like
Cat-A (3)
Item X
Item Y
only 2 items instead of 3
or even
Cat-B (2)
no items
zero items instead of 2
so the parent-level count didn't match the returned child pivots ..
but once I set the facet.pivot.m
I tried posting from gmail this morning and had it rejected. When I
resent as plaintext, it went through.
On Thu, Feb 10, 2011 at 11:51 AM, Erick Erickson
wrote:
> Anyone else having problems with the Solr users list suddenly deciding
> everything you send is spam? For the last couple of days I'
Anyone else having problems with the Solr users list suddenly deciding
everything you send is spam? For the last couple of days I've had this
happening from gmail, and as far as I know I haven't changed anything that
would give my mails a different "spam score" which is being exceeded
according to
I don't know: either way works for me via cURL.
I can only say double check your typing (make sure you're passing the
user/password you think you are), and double check server.xml.
Oh, the tomcat roles were tightened up a bit in tomcat 7. If you're using
tomcat 7 (especially if you've upgrad
Exactly Jenny,
*you are not authorized*
means the request could not be authorized to execute - some calls failed
with a security error.
manager/html/reload -> for browsers, driven by humans
manager/reload -> for curl
(at least that's my experience)
paul
On 10 Feb 2011, at 17:32, Jenny Arduini wrote:
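A sketch of the curl-friendly form, with placeholder credentials (the manager username and password come from tomcat-users.xml on your install):

```shell
# Use the plain manager path for scripted access;
# manager/html/* is meant for the browser UI and will 401 from curl
curl -u admin:secret 'http://localhost:8080/manager/reload?path=/solr'
```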
Hi Mark, hi all,
I just got a customer request to conduct an analysis on the state of
SolrCloud.
He wants to see SolrCloud part of the next solr 1.5 release and is willing
to sponsor our dev time to close outstanding bugs and open issues that may
prevent the inclusion of SolrCloud in the next r
If I execute this command in shell:
curl -u :
http://localhost:8080/manager/html/reload?path=/solr
I get this result:
"http://www.w3.org/TR/html4/strict.dtd">
401 Unauthorized