On 10/8/2012 9:37 AM, Katherine Marsden wrote:
On 10/4/2012 3:29 PM, markvarvil wrote:
Using Derby *10.5*, and running in *network server mode*, we have
encountered an issue where the lock_table contained locks that are
not being released; even after the system sat idle overnight, the
lock entries persisted in the lock_table the next morning and caused
the system to function improperly. We are using
the default values for lock and deadlock timeouts. Our application is
using JPA and Optimistic Locking. What would cause the locks to NOT
be cleared and/or released (even after a lengthy timeout period)?
Hi Mark,
Likely there are transactions that have not been committed or rolled
back that are holding these locks.
The lock timeout is a setting that determines how long a new
transaction will wait for existing locks to be released before the
new transaction itself times out. It does not affect how long the
original transaction holds its locks, which is until that
transaction is committed or rolled back.
One thing to do is to make sure that all transactions are committed
or rolled back, especially on exception paths. A frequent case I
have seen when transactions and connections are left open is that
transactions are not rolled back before attempting to close a
connection; the exception thrown on close is then caught and
ignored. A global approach to this particular issue might be to
replace Connection.close() calls with a method that always rolls
back before closing the connection.
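A minimal sketch of that "always roll back before close" approach might look like the following. The class and method names (ConnectionUtil, rollbackAndClose) are illustrative, not part of Derby or JPA:

```java
import java.sql.Connection;
import java.sql.SQLException;

public final class ConnectionUtil {

    private ConnectionUtil() {}

    /**
     * Rolls back any open transaction, then closes the connection.
     * Closing a connection with an active transaction raises
     * ERROR 25001 in Derby; rolling back first avoids that and
     * releases any locks the transaction still holds.
     */
    public static void rollbackAndClose(Connection conn) {
        if (conn == null) {
            return;
        }
        try {
            if (!conn.isClosed() && !conn.getAutoCommit()) {
                conn.rollback();
            }
        } catch (SQLException e) {
            // Log rather than swallow silently; a rollback failure here
            // may itself explain why locks are being left behind.
            System.err.println("Rollback before close failed: " + e.getMessage());
        } finally {
            try {
                conn.close();
            } catch (SQLException e) {
                System.err.println("Close failed: " + e.getMessage());
            }
        }
    }
}
```

Calling ConnectionUtil.rollbackAndClose(conn) everywhere the application currently calls conn.close() would guarantee that no transaction survives its connection.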
I neglected to mention that a good tool to see the current session info
is NetworkServerControl runtimeinfo. Also if you set
derby.stream.error.logSeverityLevel=0 and
derby.language.logStatementText=true and run your application to the
point of the orphaned transactions, you can get more context. Grep
the log for "ERROR 25001", which specifically is:
ERROR 25001: errorCode 20000: Cannot close a connection while a
transaction is still active.
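For reference, the two logging properties mentioned above would go in derby.properties; a minimal sketch, assuming the file sits in the Derby system directory (derby.system.home):

```
# derby.properties (in derby.system.home)
# Log all severities, not just errors
derby.stream.error.logSeverityLevel=0
# Echo each executed statement (with its transaction id) to derby.log
derby.language.logStatementText=true
```

With statement text logged, the entries surrounding an ERROR 25001 line in derby.log show which statements the orphaned transaction executed before the close was attempted.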