I think you may have already posted this same question, but please
check out VoyagerGIS. They have some excellent software geared
specifically toward the archiving and retrieval of geospatial data. I
suggest that you check it out!
w/r,
Adam
On Sat, Feb 19, 2011 at 2:33 AM, lucene lucene
I don't think there's any way to do this in Solr, although you could write your
own query parser in Java if you wanted to.
You can set "defaults" , "invariants" and "appends" values on your request
handler, but I don't think that's flexible enough to do what you want.
http://wiki.apache.org/s
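For reference, those three parameter sets live on the request handler in solrconfig.xml; a minimal sketch (the handler name and the particular parameter values here are illustrative, not from the original post):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <!-- "defaults": applied only when the request omits the parameter -->
  <lst name="defaults">
    <str name="rows">10</str>
  </lst>
  <!-- "appends": added on top of whatever the client sends -->
  <lst name="appends">
    <str name="fq">public:true</str>
  </lst>
  <!-- "invariants": always win; clients cannot override them -->
  <lst name="invariants">
    <str name="echoParams">none</str>
  </lst>
</requestHandler>
```

This is why it isn't flexible enough for per-request semantics: the values are fixed in the config, not computed from the incoming request.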
Hi,
I'm following your suggestions.
Extract of your last step:
>This would give you three different configurations - you would then edit
>the zookeeper info to point each collection (essentially a SolrCore at
>this point) to the right configuration files:
>
>collections/collection1
> config=con
Sure
http://wiki.apache.org/solr/UpdateXmlMessages#A.22delete.22_by_ID_and_by_Query
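As a minimal sketch of what that wiki page describes (the field name comes from your schema; `NOW-90DAYS` is Solr's date-math syntax), the message posted to the update handler would look like:

```xml
<delete>
  <!-- remove everything indexed more than 90 days ago -->
  <query>indexed_date:[* TO NOW-90DAYS]</query>
</delete>
```

Follow it with a `<commit/>` for the deletes to become visible to searchers.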
> Hi,
>
> I'm wondering if it's possible to delete documents in my index by date
> range?
>
> I've got a field in my schema, indexed_date, of date type, and I would like
> to remove docs older than 90 days.
>
> T
Hi,
I'm wondering if it's possible to delete documents in my index by date range?
I've got a field in my schema, indexed_date, of date type, and I would like to
remove docs older than 90 days.
Thanks for your help
Marc
Hello list,
I'm in the process of trying to implement Ajax within my Solr-backed webapp. I
have been reading the Solrj wiki, the tutorial provided via
the Google group, and various info from the wiki page
https://github.com/evolvingweb/ajax-solr/wiki
I have all the Solrj jar libraries
CouchDB is a good piece of software for some scenarios and easy to use. It
has update handlers to which you could attach a small program that takes the
input, transforms it to Solr XML, and sends it over.
CouchDB-Lucene is a bit different. It lacks the power of Solr but allows and
you need to wr
Hi Vignesh,
I believe that you would have to incorporate GDAL into Tika in order
to read the file and extract the proper metadata. This is entirely
doable but I don't know how to do it. There are companies out there
that specialize in this sort of thing so hopefully, one of them has
already conta
I am curious to see if anyone has messed around with Solr and the
Couch-Lucene incarnation that is out there...I was passed this article
this morning and it really opened my eyes about CouchDB
http://m.readwriteweb.com/hack/2011/02/hacker-chat-max-ogden.php
Thoughts,
Adam
We use it in production, but the # of docs is only 2.5M.
2011/2/19 François Schiettecatte :
> I use it in a production setting, but I don't have a very large data set or a
> very heavy query load; the reason I use it is edismax.
>
> François
>
> On Feb 19, 2011, at 9:50 AM, Mark wrote:
>
>>
I'm considering running an embedded instance of Solr in Tomcat (Amazon's
beanstalk). Has anyone done this before? I'd be very interested in how I can
instantiate Embedded Solr in Tomcat. Do I need a resource loader to
instantiate? If so, how?
Thanks,
Matt
Hello list,
as Hoss suggests, I'll try to be more detailed.
I wish to use HTTP parameters in my requests that define the precise semantics
of an advanced search.
For example, if I can see from sessions that a given user is requesting not
only public resources but resources private-to-him
I use it in a production setting, but I don't have a very large data set or a
very heavy query load; the reason I use it is edismax.
François
On Feb 19, 2011, at 9:50 AM, Mark wrote:
> Would I be crazy even to consider putting this in production? Thanks
Would I be crazy even to consider putting this in production? Thanks
Is there a seamless field collapsing patch for 1.4.1?
I see it has been merged into trunk, but when I tried downloading it to give
it a whirl it appeared that many things have changed, and our
application would need some considerable work to get it up and running.
Thanks
Hi List!
I was wondering, after downloading the subversion source with
svn co https://svn.apache.org/repos/asf/lucene/dev/trunk solr_svn
According to the book I am reading I should be able to run the tests
successfully, so I did a
/var/solr_svn# ant test
And it seems that junit-parallel gives _
> I know that I can use the
> SignatureUpdateProcessorFactory to remove duplicates but I
> would like the duplicates in the index but remove them
> conditionally at query time.
>
> Is there any easy way I could accomplish this?
The closest thing would be to group documents by the signature field.
http://wiki
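As a sketch of that approach, the result-grouping parameters would look something like this (assuming the signature is stored in a single-valued indexed field named `signature`; the field name is illustrative):

```
q=*:*&group=true&group.field=signature&group.limit=1
```

`group.limit=1` keeps one document per signature value, which effectively deduplicates at query time; drop the grouping parameters on requests where you want the duplicates back.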
There is an HTTP API where I can look at the latest replication and check
whether there is an "ERROR" keyword; if so, the latest replication failed.
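For reference, that status comes from the replication handler's details command on the slave (the host and port here are illustrative):

```
http://slave-host:8983/solr/replication?command=details
```

The response reports the status of recent replication runs, which you can scan for the error string.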
From: Otis Gospodnetic
To: solr-user@lucene.apache.org
Sent: Wed, February 16, 2011 11:31:26 AM
Subject: Re: slave o
Seems like one way is to write a servlet whose init method creates a TimerTask.
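A minimal sketch of that idea, using only java.util (class and method names here are hypothetical — in a real webapp the start/stop calls would live in a servlet's init() and destroy(), or in a ServletContextListener, so the container manages the timer's lifecycle):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: owns a daemon Timer so it never blocks JVM shutdown.
class IndexMaintenanceTimer {
    private final Timer timer = new Timer("index-maintenance", true);

    // Schedule 'job' to run after delayMs, then every periodMs.
    public void start(Runnable job, long delayMs, long periodMs) {
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                job.run(); // e.g. trigger a commit or a delete-by-query
            }
        }, delayMs, periodMs);
    }

    // Call from the servlet's destroy() so the thread stops on undeploy.
    public void stop() {
        timer.cancel();
    }

    public static void main(String[] args) throws InterruptedException {
        IndexMaintenanceTimer t = new IndexMaintenanceTimer();
        CountDownLatch ran = new CountDownLatch(1);
        t.start(ran::countDown, 0L, 1_000L); // fire immediately, then every second
        boolean fired = ran.await(5, TimeUnit.SECONDS);
        t.stop();
        System.out.println("task fired: " + fired);
    }
}
```

The daemon flag on the Timer matters in a container: without it, an undeployed webapp can leave a non-daemon timer thread pinning the old classloader.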
From: Tri Nguyen
To: solr user
Sent: Fri, February 18, 2011 6:02:44 PM
Subject: adding a TimerTask
Hi,
How can I add a TimerTask to Solr?
Tri