yuqi1129 opened a new issue, #10823:
URL: https://github.com/apache/gravitino/issues/10823

   ### Describe the subtask
   
   After the per-request dedup work for `authorize` / `isOwner` / 
`resolveMetadataId` / `resolveOwnerId` in `JcasbinAuthorizer`, flame-graph 
sampling on an auth-enabled server shows the next biggest CPU consumer in the 
auth path is `JcasbinAuthorizer.isMetalakeUser` — **~10.8% of total samples 
(1812 / 16812)**, comparable to the whole `authorize()` call (~11%).
   
   The current implementation calls 
`accessControlDispatcher().getUser(metalake, currentUserName)` on every 
invocation, which issues a DB query and is **not** deduplicated within a single 
HTTP request and **not** cached across requests:
   
   ```java
   // server-common/.../JcasbinAuthorizer.java
   public boolean isMetalakeUser(String metalake) {
     ...
     return GravitinoEnv.getInstance().accessControlDispatcher()
         .getUser(metalake, currentUserName) != null;
   }
   ```
   
   `isMetalakeUser` is invoked from `AuthorizationExpressionConverter` (on every 
evaluation of an expression containing `METALAKE_USER`) and from 
`AuthorizationUtils`, so a single API call can trigger multiple redundant lookups.
   
   **Proposal:**
   - Reuse the existing per-request `userInfoCache` in 
`AuthorizationRequestContext` (populated by `loadUserInfo`): existence of 
`UserAuthInfo` for `metalake::user` already implies the user is a metalake 
user, so `isMetalakeUser` can be derived without an extra DB hit.
   - For callers that don't already have a `requestContext`, extend the 
interface to accept one (same pattern as `authorize` / `isOwner`).
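   
   The proposal above can be sketched as follows. This is a minimal, 
self-contained model, not the actual Gravitino code: `UserAuthInfo`, 
`AuthorizationRequestContext`, `loadUserInfo`, and the `metalake::user` key 
format are names taken from this issue, and the real signatures in 
`JcasbinAuthorizer` may differ.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Hypothetical stand-ins for the real Gravitino types; names follow the
   // issue text but the actual shapes may differ.
   class UserAuthInfo {
     final String metalake;
     final String user;
     UserAuthInfo(String metalake, String user) {
       this.metalake = metalake;
       this.user = user;
     }
   }
   
   class AuthorizationRequestContext {
     // Per-request cache keyed by "metalake::user", populated by loadUserInfo().
     private final Map<String, UserAuthInfo> userInfoCache = new HashMap<>();
     int dbHits = 0; // instrumentation for this sketch only
   
     UserAuthInfo loadUserInfo(String metalake, String user) {
       // computeIfAbsent dedups lookups within this request; note it does not
       // cache nulls, so a "user not found" result would need separate handling.
       return userInfoCache.computeIfAbsent(
           metalake + "::" + user, key -> fetchFromDb(metalake, user));
     }
   
     // Stand-in for accessControlDispatcher().getUser(metalake, user).
     private UserAuthInfo fetchFromDb(String metalake, String user) {
       dbHits++;
       return new UserAuthInfo(metalake, user);
     }
   }
   
   public class IsMetalakeUserSketch {
     // Proposed shape: existence of a cached UserAuthInfo already implies the
     // user is a metalake user, so no extra DB query is issued here.
     static boolean isMetalakeUser(
         AuthorizationRequestContext ctx, String metalake, String user) {
       return ctx.loadUserInfo(metalake, user) != null;
     }
   
     public static void main(String[] args) {
       AuthorizationRequestContext ctx = new AuthorizationRequestContext();
       // Two calls within one "request" hit the backing store only once.
       isMetalakeUser(ctx, "demo_metalake", "alice");
       isMetalakeUser(ctx, "demo_metalake", "alice");
       System.out.println("dbHits = " + ctx.dbHits); // prints "dbHits = 1"
     }
   }
   ```
   
   The second invocation is served entirely from the per-request cache, which 
is the behavior the proposal expects for repeated `METALAKE_USER` checks 
within a single API call.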
   
   **Expected impact:** eliminate the ~10% CPU cost of `isMetalakeUser` on warm 
auth paths, and remove the extra DB round-trip it incurs per request.
   
   ### Parent issue
   
   #9898


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]