yuqi1129 commented on code in PR #10908:
URL: https://github.com/apache/gravitino/pull/10908#discussion_r3165750012
##########
server-common/src/main/java/org/apache/gravitino/server/authorization/jcasbin/JcasbinAuthorizer.java:
##########
@@ -89,6 +94,16 @@ public class JcasbinAuthorizer implements GravitinoAuthorizer {
   private Cache<Long, Optional<Long>> ownerRel;
+  /**
+   * Per-role policy index that mirrors what we hand to the JCasbin enforcers. Lookups in the hot
+   * path go through this map so an authorize call costs O(roles_per_user) hash probes instead of
+   * scanning every policy line in the enforcer. Lifecycle is tied to {@link #loadedRoles}: entries
+   * are written before {@code loadedRoles.put} and removed from {@code loadedRoles}'s removal
+   * listener so the two caches can never disagree on what a role grants.
+   */
+  private final ConcurrentHashMap<Long, Map<PolicyKey, Effect>> roleIndex =
+      new ConcurrentHashMap<>();
Review Comment:
Good catch. I fixed this by removing the separate roleIndex state and
storing the per-role policy index directly as the loadedRoles cache value.
The previous implementation kept two independent pieces of state: loadedRoles
only recorded whether a role was loaded, while roleIndex held the actual
policy index. A concurrent load and invalidation could leave the two
inconsistent, or resurrect a stale roleIndex entry after the role had been
invalidated.
Now the loaded marker and the policy index are the same cache entry. Role
loading uses loadedRoles.get(roleId, mappingFunction), so concurrent loads for
the same role share one computed index, and handleRolePrivilegeChange
invalidates the same entry that authorization reads from.
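For concreteness, a minimal sketch of that shape, assuming Caffeine as the
cache implementation; PolicyKey, Effect, RolePolicyCache, policyIndexFor, and
buildPolicyIndex here are simplified stand-ins, not the PR's actual types or
method names:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.Map;

// Hypothetical stand-ins for the PR's PolicyKey and Effect types.
record PolicyKey(String metadataType, long metadataId, String privilege) {}

enum Effect { ALLOW, DENY }

class RolePolicyCache {
  // Single cache: the value *is* the per-role policy index, so "loaded"
  // and "what the role grants" can never disagree.
  private final Cache<Long, Map<PolicyKey, Effect>> loadedRoles =
      Caffeine.newBuilder().build();

  /** Returns the role's policy index, computing it at most once per load. */
  Map<PolicyKey, Effect> policyIndexFor(long roleId) {
    // Caffeine runs the mapping function atomically per key, so concurrent
    // loads of the same role share one computed index.
    return loadedRoles.get(roleId, this::buildPolicyIndex);
  }

  /** On privilege change, drops the same entry that reads go through. */
  void handleRolePrivilegeChange(long roleId) {
    loadedRoles.invalidate(roleId);
  }

  private Map<PolicyKey, Effect> buildPolicyIndex(long roleId) {
    // In the real authorizer this would translate the role's privileges
    // into JCasbin policy lines; a stub suffices for the sketch.
    return Map.of();
  }
}
```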
I also added tests covering role cache invalidation, reload after privilege
changes, the policy-index value stored for a loaded role, and concurrent cache
loading for the same role.
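For the concurrent-loading case, the test's shape is roughly the sketch below
(hypothetical class and variable names, not the PR's actual test, and a
placeholder value type in place of Map<PolicyKey, Effect>); it relies on
Caffeine running the mapping function at most once per key:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentRoleLoadSketch {
  public static void main(String[] args) throws Exception {
    AtomicInteger computations = new AtomicInteger();
    // Placeholder value type; the real cache stores Map<PolicyKey, Effect>.
    Cache<Long, Map<String, String>> loadedRoles = Caffeine.newBuilder().build();

    int callers = 8;
    ExecutorService pool = Executors.newFixedThreadPool(callers);
    CountDownLatch start = new CountDownLatch(1);
    for (int i = 0; i < callers; i++) {
      pool.submit(() -> {
        start.await(); // release all callers at once
        return loadedRoles.get(1L, id -> {
          computations.incrementAndGet(); // count actual index builds
          return Map.of();
        });
      });
    }
    start.countDown();
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);

    // Caffeine computes the mapping function at most once per key, so all
    // eight concurrent callers should observe a single computation.
    System.out.println("index computations = " + computations.get()); // expect 1
  }
}
```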