DIH can read item by item. Did you use stream="true" in the
XPathEntityProcessor?
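For reference, a minimal data-config sketch with streaming enabled; the file path, forEach expression, and field xpaths below are placeholders, not taken from this thread:

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <!-- stream="true" makes XPathEntityProcessor parse the XML incrementally
         instead of loading the whole file into memory -->
    <entity name="items"
            processor="XPathEntityProcessor"
            url="/data/feed.xml"
            stream="true"
            forEach="/feed/item">
      <field column="id"    xpath="/feed/item/id"/>
      <field column="title" xpath="/feed/item/title"/>
    </entity>
  </document>
</dataConfig>
```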
On Sun, Jun 21, 2009 at 9:20 AM, Jianbin Dai wrote:
>
> Can DIH read item by item instead of the whole file before indexing? my
> biggest file size is 6GB, larger than the JVM max ram value.
Michael & Matt --
Thanks a ton! Those are exactly what I was looking for!
In an environment where there are developer machines, test, staging,
and production servers, there is a need to externalize DIH
configuration options like JDBC connection strings (at least the
database server name), username, password, and base paths for XML and
plain-text files.
How are
There is no straightforward way, but there is a way:
http://wiki.apache.org/solr/DataImportHandlerFaq#head-c4003ab5af86a200b35cf6846a58913839a5a096
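The approach described in that FAQ entry is to reference request parameters from data-config.xml via ${dataimporter.request.*} placeholders and pass the environment-specific values on the full-import request. A sketch, where the parameter names (jdbcurl, dbuser, dbpassword) and values are made up for illustration:

```xml
<!-- data-config.xml: values come from the request, not the file -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="${dataimporter.request.jdbcurl}"
            user="${dataimporter.request.dbuser}"
            password="${dataimporter.request.dbpassword}"/>
```

You would then supply jdbcurl, dbuser, and dbpassword as extra parameters on the /dataimport?command=full-import request for each environment.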
On Mon, Jun 22, 2009 at 6:23 AM, Erik Hatcher
wrote:
>
> In an environment where there are developer machines, test, staging, and
> production servers there
Hi, the problem was my XML configuration. I am now using a Java process
(separate from the webserver) with the EmbeddedSolrServer method, and posting a
commit to the webserver at the end to sync the two cores, which seems to work
well for me. Thanks for the help.
Development Team wrote:
>
> Hi Bre
Hi, I am doing a large batch (thousands) of insertions to my index using an
EmbeddedSolrServer. I was wondering how often I should call server.commit(),
as I am trying to avoid unnecessary bottlenecks.
Thanks, Brett.
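The usual advice is to commit every N documents (or once at the very end) rather than per document. SolrJ specifics aside, the cadence can be sketched with a self-contained stand-in; the Sink interface below is hypothetical and merely plays the role of an EmbeddedSolrServer-style client:

```java
// Sketch: batch insertions and commit every batchSize adds, plus one
// final commit on close, instead of committing per document.
class BatchingIndexer {
    interface Sink {            // stand-in for a Solr client (hypothetical)
        void add(String doc);   // analogous to server.add(...)
        void commit();          // analogous to server.commit()
    }

    private final Sink sink;
    private final int batchSize;
    private int pending = 0;

    BatchingIndexer(Sink sink, int batchSize) {
        this.sink = sink;
        this.batchSize = batchSize;
    }

    void add(String doc) {
        sink.add(doc);
        if (++pending >= batchSize) {  // commit only at batch boundaries
            sink.commit();
            pending = 0;
        }
    }

    void close() {
        if (pending > 0) sink.commit(); // final commit flushes the remainder
    }
}
```

With batchSize in the thousands, a run of 2500 adds triggers just two intermediate commits and one final one, which is the kind of cadence that avoids commit-induced bottlenecks.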