I've seen similar errors when large background merges happen while
looping over a result set. See http://lucene.grantingersoll.com/2008/07/16/mysql-solr-and-communications-link-failure/
On Jan 9, 2009, at 12:50 PM, Mark Miller wrote:
> You're basically writing segments more often now, and somehow …
Hey Mark,
Sorry, I was not specific enough; I meant that I have, and always have
had, autoCommit=false.
I will do some more traces and tests. I will post if I have anything new and
important to mention.
Thanks.
Marc Sturlese wrote:
>
> Hey Shalin,
>
> In the beginning (when the error was appearing) …
You're basically writing segments more often now, and somehow avoiding a
longer merge, I think. Also, deduplication is likely adding
enough extra data to your index to hit a sweet spot where a merge is too
long. Or something to that effect - MySQL is especially sensitive to
timeouts when …
Hey Shalin,
In the beginning (when the error was appearing) I had
  <ramBufferSizeMB>32</ramBufferSizeMB>
and no maxBufferedDocs set.
Now I have:
  <ramBufferSizeMB>32</ramBufferSizeMB>
  <maxBufferedDocs>50</maxBufferedDocs>
I think that by setting maxBufferedDocs to 50 I am forcing more disk writing
than I would like... but at least it works fine (a bit slower, obviously).
I keep saying that the most …
On Fri, Jan 9, 2009 at 9:23 PM, Marc Sturlese wrote:
hey there,
I didn't have autoCommit set to true, but I have it sorted! The error stopped
appearing after setting the property maxBufferedDocs in solrconfig.xml. I
can't exactly understand why, but it just worked.
Anyway, maxBufferedDocs is deprecated; would ramBufferSizeMB do the same?
Thanks
Marc
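For anyone wondering how the two settings relate, here is a minimal sketch of
the flush triggers in a 1.3-era solrconfig.xml; the placement inside
<indexDefaults> is assumed from the stock example config, not quoted from
Marc's setup:

  <indexDefaults>
    <!-- the in-memory buffer is flushed to a new segment when EITHER
         limit is hit, whichever comes first -->
    <ramBufferSizeMB>32</ramBufferSizeMB>
    <maxBufferedDocs>50</maxBufferedDocs>
  </indexDefaults>

Dropping maxBufferedDocs and tuning ramBufferSizeMB alone gives the same kind
of flush control without a hard per-document cap, which is presumably why it
replaced the deprecated setting.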
I can't imagine why dedupe would have anything to do with this, other
than what was said: it perhaps takes a bit longer to get a document
from the db, and it times out (maybe a long signature calculation?). Have
you tried changing your MySQL settings to allow for a longer timeout?
(sorry, I'm …
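If you want to try the longer-timeout route, a sketch of the server-side
variables that usually matter for long-running streaming reads (values are in
seconds; 3600 is just an example, not a recommendation):

  -- on the MySQL server; net_write_timeout is the one that bites when the
  -- client (Solr) stops reading rows while a long merge runs
  SET GLOBAL net_write_timeout = 3600;
  SET GLOBAL net_read_timeout  = 3600;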
Hey there,
I have been stuck on this problem for three days and have no idea how to sort it.
I am using the nightly from a week ago, MySQL, and this driver and url:
driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost/my_db"
I can use the deduplication patch with indexes of 200,000 docs and no problem.
When …
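The dataSource line in data-config.xml that goes with that driver/url would
look roughly like this; batchSize="-1" is JdbcDataSource's convention for
enabling MySQL streaming mode and is an assumption here, not quoted from the
actual config:

  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/my_db" batchSize="-1"/>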
Thanks, I will have a look at my JdbcDataSource. Anyway, it's weird because
using the 1.3 release I don't have that problem...
Shalin Shekhar Mangar wrote:
Yes, initially I figured that we are accidentally re-using a closed data
source. But Noble has pinned it right. I guess you can try looking into your
JDBC driver's documentation for a setting which increases the connection
alive-ness.
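A hedged example of the kind of driver setting Shalin means, using standard
MySQL Connector/J URL parameters (whether they help in this particular case
is an open question):

  url="jdbc:mysql://localhost/my_db?autoReconnect=true&amp;socketTimeout=0"

socketTimeout=0 means no client-side read timeout (the &amp; is needed
because data-config.xml is XML); the server-side net_write_timeout still
applies.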
On Mon, Jan 5, 2009 at 5:29 PM, Noble Paul നോബിള് नोब्ळ् <
noble.p...@gmail.com> wrote:
I guess the indexing of a doc is taking too long (maybe because of
the de-dup patch) and the resultset gets closed automatically (timed
out).
--Noble
On Mon, Jan 5, 2009 at 5:14 PM, Marc Sturlese wrote:
Doing this fix I get the same error :(
I am going to try to set up the latest nightly build... let's see if I have
better luck.
The thing is, it stops indexing at doc number 150,000 approx... and gives me
that MySQL exception error... Without the DeDuplication patch I can index 2
million docs without prob …
Yes, I meant the 05/01/2009 build. The fix is a one-line change.
Add the following as the last line of DataConfig.Entity.clearCache():
dataSrc = null;
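In context, the patched method would look roughly like the sketch below;
everything except the last line is assumed, since only the added line is
quoted in the thread:

  // org.apache.solr.handler.dataimport.DataConfig.Entity
  void clearCache() {
      // ... existing logic that clears this entity's caches ...
      dataSrc = null; // the fix: drop the stale reference so a closed
                      // data source can't be re-used on the next run
  }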
On Mon, Jan 5, 2009 at 4:22 PM, Marc Sturlese wrote:
Shalin, you mean I should test the 05/01/2009 nightly? Maybe it works with
this one? If the fix you did is not really big, can you tell me where in the
source it is and what it is for? (I have been debugging and tracing the
dataimporthandler source a lot and I would like to know what the improvement
is ab …
Yeah, looks like it, but... if I don't use the DeDuplication patch everything
works perfectly. I can create my indexes using full-import and delta-import
without problems. The JdbcDataSource of the nightly is pretty similar to the
1.3 release...
The DeDuplication patch doesn't touch the dataimporthandler …
Marc, I've just committed a fix for what may have caused the bug. Can you use
svn trunk (or the next nightly build) and confirm?
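For building from trunk, something along these lines should work; the
repository path is the usual Solr trunk location of that period, given from
memory:

  svn co http://svn.apache.org/repos/asf/lucene/solr/trunk solr-trunk
  cd solr-trunk
  ant dist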
On Mon, Jan 5, 2009 at 3:10 PM, Noble Paul നോബിള് नोब्ळ् <
noble.p...@gmail.com> wrote:
looks like a bug w/ DIH with the recent fixes.
--Noble
On Mon, Jan 5, 2009 at 2:36 PM, Marc Sturlese wrote:
>
> Hey there,
> I was using the Deduplication patch with the Solr 1.3 release and everything was
> working perfectly. Now I upgraded to a nightly build (20th December) to be
> able to use new …