At this scale, an indexing job is prone to break in any number of ways.
If you want this to be reliable, it should be able to resume in the
middle of an upload rather than starting over from the beginning.
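One common way to get that resumability (a minimal sketch, not something this thread prescribes; it assumes records are fetched in a stable order by id, and the names `fetch_batch`, `index_batch`, and the checkpoint path are all illustrative) is to persist a checkpoint after each successfully indexed batch and skip past it on restart:

```python
import json
import os

CHECKPOINT_FILE = "import.checkpoint"  # hypothetical path

def load_checkpoint():
    """Return the last successfully indexed record id, or 0 if starting fresh."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_id"]
    return 0

def save_checkpoint(last_id):
    """Write progress atomically so a crash mid-write can't corrupt the file."""
    tmp = CHECKPOINT_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"last_id": last_id}, f)
    os.replace(tmp, CHECKPOINT_FILE)

def run_import(fetch_batch, index_batch, batch_size=1000):
    """fetch_batch(after_id, n) -> list of (id, doc) ordered by id;
    index_batch(docs) must not return until the docs are durably indexed."""
    last_id = load_checkpoint()
    while True:
        batch = fetch_batch(last_id, batch_size)
        if not batch:
            break
        index_batch([doc for _, doc in batch])
        last_id = batch[-1][0]
        save_checkpoint(last_id)  # only advance after a successful batch
```

If the job dies partway through, the next run re-reads the checkpoint and resumes from the last committed id instead of re-indexing everything.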
On 1/9/2013 9:41 AM, Shawn Heisey wrote:
With maxThreadCount at 1 and maxMergeCount at 6, I was able to complete
full-import with no problems. All MySQL (5.1.61) server-side timeouts
are at their defaults - they don't show up in my.cnf and I haven't
tweaked them anywhere else either.
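For reference, the merge settings Shawn mentions live in the indexConfig section of solrconfig.xml; a fragment with the values from this message (adjust to taste):

```xml
<indexConfig>
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxThreadCount">1</int>
    <int name="maxMergeCount">6</int>
  </mergeScheduler>
</indexConfig>
```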
A full import
On 1/8/2013 11:19 PM, vijeshnair wrote:
Yes Shawn, the batchSize is -1 only, and I also have the mergeScheduler
exactly the same as you mentioned. When I had this problem in Solr 3.4, I
did extensive googling, gathered many of the tweaks and tunings from
different blogs and forums, and configured
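For anyone following along: batchSize="-1" is set on the JdbcDataSource in the DIH data-config.xml, and with the MySQL driver it makes the driver stream rows one at a time instead of buffering the whole result set in memory. A fragment (the url, user, and password shown are placeholders):

```xml
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/catalog"
              user="solr"
              password="..."
              batchSize="-1"/>
  <!-- entity definitions omitted -->
</dataConfig>
```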
> A recent jira issue (LUCENE-4661) changed the maxThreadCount to 1 for
> better performance, so I'm not sure if both of my changes above are
> actually required or if just maxMergeCount will fix it. I commented on
> the issue to find out.
Discussion on the issue has suggested that a maxThreadCount
On 1/8/2013 2:10 AM, vijeshnair wrote:
Solr version : 4.0 (running with 9GB of RAM)
MySQL : 5.5
JDBC : mysql-connector-java-5.1.22-bin.jar
I am trying to run the full import for my catalog data, which is roughly
13 million products. The DIH ran smoothly for 18 hours and processed
roughly 10 mil
What you describe sounds right to me and is consistent with the error
stack trace. I would increase the MySQL wait_timeout to 3600 and,
depending on your server, you might also want to increase max_connections.
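Concretely, that suggestion maps onto a my.cnf change (the wait_timeout value is the one from this thread; the max_connections value is a placeholder, since the right number depends on the server):

```ini
# /etc/my.cnf
[mysqld]
wait_timeout    = 3600
# max_connections = 200   # raise only if the server has the headroom
```

The same can be applied without a restart via `SET GLOBAL wait_timeout = 3600;`, though that only affects connections opened after the change.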
cheers,
Travis