On Fri, Apr 17, 2009 at 4:46 AM, ristretto.rb wrote:
> 1. I would need to get the source for 1.4 and build it, right? No
> release yet, eh?
Nope.
>
> 2. Any one using 1.4 in production without issue; is this wise? Or
> should I wait?
Running a nightly is always a risky business. Test com
If you are querying via an HTTP request, you can add these two parameters:
facet=true
facet.field=field_for_faceting
and optionally this one to cap the number of facet values returned:
facet.limit=facet_limit
I don't know if it's what you need...
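For completeness, the same parameters through SolrJ (a sketch; the URL and field name are placeholders, and the statements assume a surrounding method that may throw Exception):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrQuery q = new SolrQuery("*:*");
q.setFacet(true);                       // facet=true
q.addFacetField("field_for_faceting");  // facet.field=field_for_faceting
q.setFacetLimit(10);                    // facet.limit=10
QueryResponse rsp = new CommonsHttpSolrServer("http://localhost:8983/solr").query(q);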
On Fri, Apr 17, 2009 at 6:17 AM, Sajith Weerakoon wrote:
Thanks for the feedback. Will read up on upgrading. I actually went
with the trunk, not a nightly.
When you say Test ... Are you suggesting there is a test suite I
should run, or do I just do my own testing?
thanks
gene
On Fri, Apr 17, 2009 at 7:26 PM, Shalin Shekhar Mangar
wrote:
> On Fri,
It is fixed in the trunk.
On Thu, Apr 16, 2009 at 10:47 PM, Allahbaksh Asadullah
wrote:
> Thanks Noble. Regards,
> Allahbaksh
>
> 2009/4/16 Noble Paul നോബിള് नोब्ळ्
>
>> On Thu, Apr 16, 2009 at 10:34 PM, Allahbaksh Asadullah
>> wrote:
>> > Hi, I have followed the procedure given on this blog to s
Hello,
I am searching for a way to use the Lucene MultiFieldQueryParser in my
SOLR Installation.
Is there a chance to change the "solrQueryParser"?
In my old Lucene setup I used to combine many different types of
QueryParser in my query...
Or is there a chance to get MultiFieldQueryPars
Hi Christophe,
Did you find a way to fix your problem? Even with replication you will
have this problem: lots of updates mean clearing the cache and managing that.
I have the same issue; I am wondering whether I need to turn servers off
during updates.
How did you fix that?
Thanks,
sunny
christophe-2
I think there's no search handler that uses MultiFieldQueryParser in Solr. But
check DismaxRequestHandler; it will probably do the job. You can specify all the
fields you want to search in, and it will build the query using boolean
queries. It also includes many more features:
http://wiki.apache.or
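(As an illustration, the equivalent through SolrJ might look like the sketch below; the field names and boosts are made up, and I am assuming a handler registered as "dismax" in solrconfig.xml:)

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery q = new SolrQuery("some words");
q.set("qt", "dismax");          // route the request to the dismax handler
q.set("qf", "title^2.0 body");  // fields to search in, with optional boosts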
Hi,
Well, let me explain the scenario in detail.
I have my lucene.jar that I need to work with Solr. So I started by adding
the lucene.jar to the WEB-INF directory of solr.war, added my schema.xml in
the conf dir, and restarted the Solr server.
Now I run my program, add a doc into it. The doc is ad
Hi Noble.
Thank you very much. I will download the latest solr nightly build.
Please note that this is another problem which I think is a bug.
I am trying out the load balancing feature in Solr 1.4 using LBHttpSolrServer.
Below is the setup:
I have three Solr servers: A, B and C.
Now the problem is if I mak
Well, dismax has a q.alt parameter where you can specify a query in "lucene"
syntax. The q parameter must be empty to use q.alt:
http://.../select?q=&q.alt=phone_number:1234567
This would search in the field phone_number independently of what fields you
have configured in the dismax.
Another way would be t
Marc Sturlese schrieb:
The only problem I found with q.alt is that it doesn't allow highlighting (or
at least it didn't show it for me). If you find out how to do it, let me
know.
I use highlighting only with the normal query!
My q.alt is "*:*"
But it's really sad that the dismax doesn't suppor
The only problem I found with q.alt is that it doesn't allow highlighting (or
at least it didn't show it for me). If you find out how to do it, let me
know.
Thanks!
Kraus, Ralf | pixelhouse GmbH wrote:
>
> Marc Sturlese schrieb:
>> Well dismax has a q.alt parameter where you can specify a quer
Hey there,
I have seen the new feature of EventListeners of DIH in trunk.
These events are called at the beginning and end of the whole indexing
process, or at the beginning and end of indexing just a document.
My idea is to update a field of a row of a mysql table every time a doc is
index
These are for the beginning and end of the whole indexing process.
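If it helps, here is a bare-bones sketch of such a listener (the class name is made up; I am assuming it is registered via the onImportStart/onImportEnd attributes of the <document> element in data-config.xml):

package com.example;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.EventListener;

// Hypothetical listener; register its class name under onImportStart
// and/or onImportEnd on <document> in data-config.xml.
public class ImportAuditListener implements EventListener {
    // Called once per event the class is registered for.
    public void onEvent(Context ctx) {
        // e.g. open a JDBC connection here and stamp an audit row
        // in the mysql table mentioned above
    }
}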
On Fri, Apr 17, 2009 at 7:38 PM, Marc Sturlese wrote:
>
> Hey there,
> I have seen the new feature of EventListeners of DIH in trunk.
> These events are called at the beginning and end of the whole indexing
>
Hey Erik,
I also checked the index using Luke, and the index shows that
the terms are indexed as they should have been. So that implies that
something is wrong with the querying only, and the results are not getting
retrieved. (As I said earlier, even the parsed query is the way it should
I would also include the -XX:+HeapDumpOnOutOfMemoryError option to get
a heap dump when the JVM runs out of heap space.
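For example (heap size and dump path are illustrative):
java -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -jar start.jar
The resulting .hprof file can then be opened with jhat or a profiler.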
On Thu, Apr 16, 2009 at 9:43 PM, Bryan Talbot wrote:
> If you're using java 5 or 6 jmap is a useful tool in tracking down memory
> leaks.
>
> http://java.sun.com/javase/6/docs
We are currently trying to do the same thing. With the patch unaltered we
can use fq as long as collapsing is turned on. If we just send a normal
document level query with an fq parameter it blows up.
Additionally, it does not appear that the collapse.facet option works at
all.
--
Jeff Newburn
When we instantiate a CommonsHttpSolrServer, we use the following method:
CommonsHttpSolrServer server = new CommonsHttpSolrServer(this.endPoint);
How do we do a "kill all" of all the underlying HttpClient connections?
server.getHttpClient() returns an HttpClient reference, but I am tryi
In trying to understand the various options for WordDelimiterFilterFactory, I
tried setting all options to 0.
This seems to prevent a number of words from being output at all. In particular,
"can't" and "99dxl" don't get output, nor do any words containing hyphens. Is
this correct behavior?
Here
I have a Solr index where we removed a field from the schema, but it still
had some documents with that field in it.
Queries using the standard response handler had no problem but the
&wt=python handler would break on any query (with fl="*" or asking for that
field directly) with:
SolrHTTPException
Hi Jaco,
On 4/9/2009 at 2:58 PM, Jaco wrote:
> I'm struggling with some ideas, maybe somebody can help me with past
> experiences or tips. I have loaded a dictionary into a Solr index,
> using stemming and some stopwords in analysis part of the schema.
> Each record holds a term from the dictionar
Seems like we could handle this 2 ways... leave out the field if it's
not defined in the schema, or include it and write it out as a string.
I think either would probably be more useful than throwing an error
(which isn't really a request error but rather a schema/indexing
error).
Thoughts?
-Yon
: level one#
: level one#level two#
: level one#level two#level three#
:
: Trying to find the right combination of field type and query to get the
: desired results. Saw some previous posts about hierarchical facets which helped
: in generating the right query, but having an issue using the buil
: How would I set up SNMP monitoring of my Solr server? I've done some
: searching of the wiki and Google and have come up with a blank. Any
: pointers?
it depends on what you want to monitor. if you just want to know whether the
JVM is running, this should be fairly easy...
if you want to b
OK, we've got 3 people... that's enough for a party? :)
Surely there must be dozens more of you guys out there... c'mon,
accelerate your knowledge! Join us in Seattle!
On Thu, Apr 16, 2009 at 3:27 PM, Bradford Stephens
wrote:
> Greetings,
>
> Would anybody be willing to join a PNW Hadoop and/o
The only thing that comes to mind is running Solr under a profiler (e.g.
YourKit) and figuring out which objects are not getting cleaned up and who's
holding references to them.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: David Baker
httpClient.getHttpConnectionManager().closeIdleConnections(0);
--Noble
On Sat, Apr 18, 2009 at 1:31 AM, Rakesh Sinha wrote:
> When we instantiate a commonshttpsolrserver - we use the following method.
>
> CommonsHttpSolrServer server = new CommonsHttpSolrServer(this.endPoint);
>
> how do we do
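Putting Noble's one-liner together with the snippet from the question, a sketch (Commons HttpClient 3.x assumed; the URL is a placeholder and the statements assume a surrounding method that may throw Exception):

import org.apache.commons.httpclient.HttpClient;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
HttpClient client = server.getHttpClient();
// a timeout of 0 means: close every connection that is idle right now
client.getHttpConnectionManager().closeIdleConnections(0);

If the server was built with a MultiThreadedHttpConnectionManager, calling shutdown() on that manager tears the pool down for good.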
Hi,
We want to create snapshots incrementally.
What we want is that every time the snapshooter script runs, it should not
create a snapshot with the pre-existing indexes (from the last snapshot) plus
the delta (newly created indexes), but rather just a snapshot with the delta
(newly created indexes).
Any references
The snapshooter does not really copy any files. They are just hard links
(which do not consume extra disk space), so even a full copy is not very
expensive. (See the small sketch after this message.)
On Sat, Apr 18, 2009 at 12:06 PM, Koushik Mitra
wrote:
> Hi,
>
> We want to create snapshot incrementally.
>
> What we want is every time the snap shooter
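A tiny illustration of why hard links are cheap (not Solr code; the paths are made up and java.nio.file is used only for demonstration):

import java.nio.file.Files;
import java.nio.file.Paths;

// After this call, "snapshot/_i.tii" and "index/_i.tii" are two names
// for the same bytes on disk; no file content is duplicated.
Files.createLink(Paths.get("snapshot/_i.tii"), Paths.get("index/_i.tii"));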
When we run the snapshooter script, it creates a snapshot folder e.g.
snapshot.20090418064010 and this snapshot folder contains physical index files
which take space on the file system (as shown below). Are we missing anything
here?
-rw-r--r-- 46 test test 59 Apr 17 23:26 _i.tii
-rw-r