dimas-b commented on code in PR #1496:
URL: https://github.com/apache/polaris/pull/1496#discussion_r2067437261


##########
extension/persistence/relational-jdbc/src/main/java/org/apache/polaris/extension/persistence/relational/jdbc/JdbcBasePersistenceImpl.java:
##########
@@ -93,11 +93,7 @@ public void writeEntity(
       boolean nameOrParentChanged,
       PolarisBaseEntity originalEntity) {
     try {
-      datasourceOperations.runWithinTransaction(
-          statement -> {
-            persistEntity(callCtx, entity, originalEntity, statement);
-            return true;
-          });
+      persistEntity(callCtx, entity, originalEntity, datasourceOperations);

Review Comment:
   My point is that this change will not fix any latent concurrency issues, but 
may make them less likely to occur... which is not a good fix, IMHO.
   
   I believe we should make JDBC Persistence 100% correct in the face of `could 
not serialize access due to read/write dependencies among transactions` errors. 
After we achieve correctness, then optimizing the more frequent call paths 
would make sense.
   
   Now the question becomes: what is the right way to handle `could not 
serialize access` errors? 
   
   I tend to think we should retry the operation at the JDBC level a few times 
(with a timeout) before failing the Persistence API call. I believe retries 
should help because PostgreSQL may report a serializability conflict when it 
internally coarsens its conflict-detection data (e.g. promoting row-level 
predicate locks to page level), so these errors can be false positives and the 
actual changes may succeed on retry.
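   To make the suggestion concrete, here is a minimal sketch of such a retry wrapper. This is not code from the PR: the helper name, attempt count, and backoff are illustrative assumptions. It relies only on the documented fact that PostgreSQL reports serialization failures with SQLSTATE `40001`, so only those errors are retried and everything else propagates immediately:

   ```java
   import java.sql.SQLException;
   import java.util.concurrent.Callable;

   public class SerializationRetry {
     // PostgreSQL SQLSTATE for "could not serialize access ..." errors
     private static final String SERIALIZATION_FAILURE = "40001";

     /**
      * Hypothetical helper: runs a JDBC operation, retrying only on
      * serialization failures, with a fixed backoff between attempts.
      * After maxAttempts conflicts the last SQLException is rethrown,
      * at which point the Persistence API call would fail.
      */
     public static <T> T runWithRetries(Callable<T> op, int maxAttempts, long backoffMillis)
         throws Exception {
       SQLException last = null;
       for (int attempt = 1; attempt <= maxAttempts; attempt++) {
         try {
           return op.call();
         } catch (SQLException e) {
           if (!SERIALIZATION_FAILURE.equals(e.getSQLState())) {
             throw e; // not a serialization conflict; do not retry
           }
           last = e; // likely a false positive; back off and retry
           Thread.sleep(backoffMillis);
         }
       }
       throw last; // every attempt hit a serialization conflict
     }
   }
   ```

   A deadline-based variant (retry until a timeout elapses rather than a fixed attempt count) would work the same way; the key point is that only SQLSTATE `40001` is treated as retryable.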
   
   If the changes are truly in conflict with another transaction, then on retry 
we're likely to hit the optimistic version mismatch, which would be normal 
given our data model (so it should not lead to 500 errors at the REST API 
level).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
