I'm trying to reduce memory usage when indexing, and I see that using
the binary format may be a good way to do this. Unfortunately I can't
see a way to do this using the EmbeddedSolrServer since only the
CommonsHttpSolrServer has a setRequestWriter method. If I'm running out
of memory constructing…
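(For anyone hitting the same wall: the HTTP-side setup in question looks like the sketch below, against the 1.4-era SolrJ API; the class name and URL are placeholders. As far as I can tell the embedded server has no equivalent hook because it hands documents to the core in-process rather than serializing them to a wire format.)

    import java.net.MalformedURLException;

    import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class BinaryWriterSetup {
        public static void main(String[] args) throws MalformedURLException {
            // Placeholder URL; point this at your own instance.
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            // Send updates as compact javabin instead of XML.
            server.setRequestWriter(new BinaryRequestWriter());
        }
    }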
I'm working on an application that will build indexes directly using the
Lucene API, but will expose them to clients using Solr. I'm seeing
plenty of documentation on how to support date range fields in Solr,
but they all assume that you are inserting documents through Solr rather
than merging already-built indexes…
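(A note for the raw-Lucene route: Solr's stock DateField stores dates as ISO-8601 UTC strings, so an index built outside Solr needs its date terms written in exactly that shape for Solr-side range queries to line up. A minimal sketch against the Lucene 2.x API; the field name is a placeholder, and this assumes the plain DateField type rather than a trie variant.)

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.TimeZone;

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class SolrCompatibleDate {
        public static void main(String[] args) {
            // The exact form Solr's DateField parses and emits.
            SimpleDateFormat fmt =
                new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.US);
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));

            Document doc = new Document();
            // Untokenized, so range queries see a single term per document.
            doc.add(new Field("timestamp", fmt.format(new Date()),
                              Field.Store.YES, Field.Index.NOT_ANALYZED));
        }
    }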
I'm planning out a system with large indexes and wondering what kind
of performance boost I'd see if I split documents out into many cores
rather than using a single core and splitting by a field. I've got about
500GB worth of indexes ranging from 100MB to 50GB each.
I'm assuming if we split…
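(The two layouts being compared look like this from the client side; a sketch with placeholder field, host, and core names, assuming the standard filter-query and distributed-search parameters.)

    import org.apache.solr.client.solrj.SolrQuery;

    public class QueryShapes {
        public static void main(String[] args) {
            // One big core: restrict by a field; the filter is cached
            // independently of the main query.
            SolrQuery single = new SolrQuery("some query");
            single.addFilterQuery("source:collectionA");

            // Many cores: fan the same query out via the shards parameter.
            SolrQuery sharded = new SolrQuery("some query");
            sharded.set("shards",
                "host1:8983/solr/coreA,host2:8983/solr/coreB");
        }
    }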
On Thu, Jun 18, 2009 at 4:00 PM, Jonathan Vanasco wrote:
> Can anyone give me a suggestion? I haven't touched Java / Jetty / Tomcat /
> whatever in at least a good 8 years and am lost.
I spent a lot of time trying to get this working too. My conclusion
was simply that the .deb packages for Solr…
Phil Hagelberg writes:
> Noble Paul നോബിള് नोब्ळ् writes:
>
>> if you remove the files while the slave is running, then the slave
>> will not know that you removed the files (assuming it is a *nix box)
>> and it will serve the search requests. But if you restart…
Noble Paul നോബിള് नोब्ळ् writes:
> if you remove the files while the slave is running, then the slave
> will not know that you removed the files (assuming it is a *nix box)
> and it will serve the search requests. But if you restart the slave,
> it should automatically pick up the current…
Shalin Shekhar Mangar writes:
> You are right. In Solr/Lucene, a commit exposes updates to searchers. So you
> need to call commit on the master for the slave to pick up the changes.
> Replicating changes from the master and then not exposing new documents to
> searchers does not make sense. However…
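(So the trigger point is an explicit commit on the master; a sketch, URL as a placeholder. With the 1.4 replication handler configured with replicateAfter=commit, the same call also marks the snapshot the slaves will pull.)

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class MasterCommit {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer master =
                new CommonsHttpSolrServer("http://master:8983/solr");
            // Makes buffered updates visible to searchers on the master
            // and, per the above, lets slaves pick them up after replicating.
            master.commit();
        }
    }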
Phil Hagelberg writes:
> My only guess as to what's going wrong here is that deleting the
> coreN/data directory is not a good way to "reset" a core back to its
> initial condition. Maybe there's a bit of state somewhere that's making
> the slave think…
…wrong?
It's also possible that this stuff is still in a heavy state of
development such that it shouldn't be expected to work by casual users;
if that is the case, I can go back to the external-script-based
replication features of 1.3.
thanks,
Phil Hagelberg
http://technomancy.us
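(One alternative to deleting coreN/data by hand: ask the slave's replication handler for a fresh copy. A sketch using SolrJ's generic request API; host and core names are placeholders.)

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.request.QueryRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class ForceFetch {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer slave =
                new CommonsHttpSolrServer("http://slave:8983/solr/core0");
            ModifiableSolrParams params = new ModifiableSolrParams();
            params.set("command", "fetchindex"); // pull the master's current index
            QueryRequest req = new QueryRequest(params);
            req.setPath("/replication");         // the replication handler's path
            slave.request(req);
        }
    }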
Is the use of a predefined schema primarily a "type safety" feature?
We're considering using Solr for a data set that is very free-form; will
we get much slower results if the majority of our data is in a dynamic
field such as: […]
I'm a little unclear on the trade-offs involved…
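(For what it's worth, "dynamic" only affects how a field name is matched to a type when a document or query is parsed; once resolved, the index sees ordinary Lucene fields, so raw search speed should be the same. The sketch below assumes the *_t text pattern from the example schema; all names are placeholders.)

    import org.apache.solr.common.SolrInputDocument;

    public class DynamicFieldDoc {
        public static void main(String[] args) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            // Any name matching <dynamicField name="*_t" .../> in schema.xml
            // is indexed with that type; no per-field declaration needed.
            doc.addField("notes_t", "free-form text goes here");
            doc.addField("tags_t", "anything at all");
        }
    }

The main thing an explicit schema buys, as far as I can tell, is validation (unknown field names get rejected) rather than query speed.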