slfan1989 commented on code in PR #5673:
URL: https://github.com/apache/hadoop/pull/5673#discussion_r1209399807
##########
hadoop-yarn-project/hadoop-yarn/bin/FederationStateStore/MySQL/FederationStateStoreStoredProcs.sql:
##########
@@ -92,12 +92,9 @@ CREATE PROCEDURE sp_addApplicationHomeSubCluster(
IN applicationContext_IN BLOB,
OUT storedHomeSubCluster_OUT varchar(256), OUT rowCount_OUT int)
BEGIN
- INSERT INTO applicationsHomeSubCluster
- (applicationId, homeSubCluster, createTime, applicationContext)
- (SELECT applicationId_IN, homeSubCluster_IN, NOW(), applicationContext_IN
- FROM applicationsHomeSubCluster
- WHERE applicationId = applicationId_IN
- HAVING COUNT(*) = 0 );
+ INSERT IGNORE INTO applicationsHomeSubCluster(
Review Comment:
Thank you very much for helping to review this PR!
`YARN-8337` describes a scenario with the MySQL database: when multiple
sessions call the `sp_addApplicationHomeSubCluster` stored procedure at the
same time, a deadlock may occur.
A MySQL gap lock is a lock on a range of the index between records, used to
protect that gap rather than specific data rows. We can look at the following
picture.
<img width="684" alt="image"
src="https://github.com/apache/hadoop/assets/55643692/bef31a2d-20b7-40b8-a007-0b3d07bffc65">
When we need to write data to MySQL, for example
`application_1526528662010_001201`:
- Step 1: we apply for a gap lock covering the range between the previous
record (`application_1526528662010_001199`) and infinity. This prevents other
transactions from inserting new records in that range.
- Step 2: after acquiring the gap lock, the insert of
`application_1526528662010_001201` applies for a row lock and completes the
write.
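Schematically, the two steps above correspond to the locks InnoDB takes for a plain insert (a sketch only, not this PR's code; the `homeSubCluster` and `applicationContext` values are made up, and the lock names follow InnoDB's documented behavior under the default REPEATABLE READ isolation):

```sql
-- Session A, MySQL/InnoDB, default REPEATABLE READ isolation.
BEGIN;
-- Step 1: the insert first requests an insert-intention (gap) lock on the
-- interval after 'application_1526528662010_001199', up to +infinity, so
-- that the gap is protected against conflicting concurrent inserts.
-- Step 2: it then takes an exclusive row lock on the new record and writes it.
INSERT INTO applicationsHomeSubCluster
  (applicationId, homeSubCluster, createTime, applicationContext)
VALUES ('application_1526528662010_001201', 'SubCluster1', NOW(), NULL);
COMMIT;
```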
If three different sessions try to insert different records at the same time
(SessionA writes `application_1526528662010_001201`, SessionB writes
`application_1526528662010_001202`, and SessionC writes
`application_1526528662010_001203`), each may hold a gap lock on the same
range while all of them wait to acquire row locks to continue execution,
resulting in a deadlock.
Fortunately, each session has a lock-wait timeout, so the deadlock condition
is eventually terminated and all three applications can still be written
successfully, but this hurts the performance of data writing.
`INSERT IGNORE INTO` ignores errors (for example, a duplicate-key violation)
encountered during the insert and continues with subsequent operations. This
means that `INSERT IGNORE INTO` handles duplicate data and insertion errors
more gracefully, and can improve the throughput of insert operations.
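The ignore-on-duplicate semantics can be illustrated outside MySQL; this is a minimal sketch using Python's built-in SQLite (whose spelling is `INSERT OR IGNORE`; MySQL uses `INSERT IGNORE`), with a cut-down two-column version of the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE applicationsHomeSubCluster (
    applicationId TEXT PRIMARY KEY,
    homeSubCluster TEXT NOT NULL)""")

# First insert succeeds and affects one row.
cur = conn.execute(
    "INSERT OR IGNORE INTO applicationsHomeSubCluster VALUES (?, ?)",
    ("application_1526528662010_001201", "SC-1"))
print(cur.rowcount)  # 1

# A duplicate key is silently skipped instead of raising an error, so a
# concurrent caller that lost the race simply affects zero rows.
cur = conn.execute(
    "INSERT OR IGNORE INTO applicationsHomeSubCluster VALUES (?, ?)",
    ("application_1526528662010_001201", "SC-2"))
print(cur.rowcount)  # 0

# The original home sub-cluster mapping is untouched.
print(conn.execute(
    "SELECT homeSubCluster FROM applicationsHomeSubCluster").fetchall())
# [('SC-1',)]
```

The stored procedure additionally reports the affected-row count (`rowCount_OUT`), which is exactly the `rowcount` distinction shown above: 1 for a fresh insert, 0 when the application already had a home sub-cluster.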
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]