I will try increasing the MySQL wait_timeout
value and update you on the outcome.
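For reference, the relevant MySQL server-side timeouts can be inspected
and raised like this; the values below are only illustrative, not the
ones used in this thread:

    -- check the current server-side timeouts
    SHOW VARIABLES LIKE '%timeout%';

    -- raise them on the running server (example values, in seconds)
    SET GLOBAL wait_timeout = 86400;
    SET GLOBAL net_write_timeout = 3600;

To make the change survive a server restart, the same settings go in
the [mysqld] section of my.cnf.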
On 1/9/2013 9:41 AM, Shawn Heisey wrote:
With maxThreadCount at 1 and maxMergeCount at 6, I was able to complete
a full-import with no problems. All MySQL (5.1.61) server-side timeouts
are at their defaults: they don't show up in my.cnf, and I haven't
tweaked them anywhere else either.
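For anyone who wants to replicate this, both knobs live on the
mergeScheduler in the <indexConfig> section of solrconfig.xml; a
minimal sketch of the Solr 4.x syntax:

    <!-- solrconfig.xml, inside <indexConfig> -->
    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
      <int name="maxThreadCount">1</int>
      <int name="maxMergeCount">6</int>
    </mergeScheduler>

maxMergeCount controls how many merges may be pending before indexing
threads stall, and maxThreadCount how many of them run concurrently.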
On 1/8/2013 11:19 PM, vijeshnair wrote:
Yes Shawn, batchSize is set to -1, and I also have the mergeScheduler
configured exactly as you mentioned. When I had this problem on Solr 3.4,
I did some extensive googling and gathered most of these tweaks and
tunings from different blogs and forums.
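For completeness, batchSize is set on the DIH data source in
data-config.xml; a minimal sketch, with hypothetical connection details:

    <!-- data-config.xml: batchSize="-1" asks the MySQL driver to stream rows -->
    <dataSource type="JdbcDataSource"
                driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://dbhost:3306/catalog"
                user="solr" password="***"
                batchSize="-1"/>

With the MySQL connector, -1 makes DIH set the JDBC fetch size to
Integer.MIN_VALUE, which enables streaming resultsets so the full
resultset is never buffered in memory at once.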
> A recent jira issue (LUCENE-4661) changed the maxThreadCount to 1 for
> better performance, so I'm not sure if both of my changes above are
> actually required or if just maxMergeCount will fix it. I commented on
> the issue to find out.
Discussion on the issue has suggested that a maxThreadCount of 1 is
appropriate.
On 1/8/2013 2:10 AM, vijeshnair wrote:
Solr version: 4.0 (running with 9GB of RAM)
MySQL: 5.5
JDBC: mysql-connector-java-5.1.22-bin.jar

I am trying to run a full import of my catalog data, which is roughly
13 million products. The DIH ran smoothly for 18 hours, and processed
roughly 10 million records before failing with the exception below:
        at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
        at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3589)
        at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3478)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4019)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:3489)
        ... 22 more