tomscut commented on code in PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#discussion_r847843114
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##########
@@ -1380,8 +1437,9 @@ private static boolean isExpectedValue(Object expectedValue, Object value) {
final CallerContext originContext = CallerContext.getCurrent();
for (final T location : locations) {
String nsId = location.getNameserviceId();
+ boolean isObserverRead = observerReadEnabled && isReadCall(m);
final List<? extends FederationNamenodeContext> namenodes =
- getNamenodesForNameservice(nsId);
+ msync(nsId, ugi, isObserverRead);
Review Comment:
> @tomscut for "Here's how we do it.", is there a link you meant to attach?

I meant that we ran into this problem in our own cluster, and this is how we solved it.
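To illustrate the change in the hunk above: the per-call flag `isObserverRead` gates whether the router syncs with and routes to observer NameNodes, and it is only true when the feature is enabled *and* the RPC is a read. The sketch below is hypothetical (the class, method names other than the `observerReadEnabled && isReadCall(m)` pattern, and the returned namenode lists are stand-ins, not the PR's actual code):

```java
import java.util.Arrays;
import java.util.List;

public class ObserverReadSketch {

  // Stand-in for the isReadCall(m) check in the diff: decide whether an
  // RPC method is read-only and therefore safe to serve from an observer.
  static boolean isReadCall(String methodName) {
    return methodName.startsWith("get") || methodName.equals("listStatus");
  }

  // Stand-in for namenode resolution: with observer reads on and a read
  // call, prefer observers (after an msync-style state sync in the PR);
  // otherwise fall back to the active NameNode.
  static List<String> resolveNamenodes(boolean observerReadEnabled,
                                       String methodName) {
    boolean isObserverRead = observerReadEnabled && isReadCall(methodName);
    if (isObserverRead) {
      return Arrays.asList("observer-nn", "active-nn");
    }
    return Arrays.asList("active-nn", "standby-nn");
  }

  public static void main(String[] args) {
    // Read call with the feature on: observer is preferred.
    System.out.println(resolveNamenodes(true, "getBlockLocations"));
    // prints [observer-nn, active-nn]
    // Write call: always goes to the active NameNode first.
    System.out.println(resolveNamenodes(true, "create"));
    // prints [active-nn, standby-nn]
  }
}
```

The key point the diff makes is that the write path is untouched: only calls that pass the read check are ever candidates for observer routing.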
##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java:
##########
@@ -79,6 +79,8 @@
String DFS_NAMENODE_HTTPS_ADDRESS_KEY = "dfs.namenode.https-address";
String DFS_HA_NAMENODES_KEY_PREFIX = "dfs.ha.namenodes";
int DFS_NAMENODE_RPC_PORT_DEFAULT = 8020;
+ String DFS_OBSERVER_READ_ENABLE = "dfs.observer.read.enable";
+ boolean DFS_OBSERVER_READ_ENABLE_DEFAULT = true;
Review Comment:
Thank you for your explanation. I understand.
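For context on the hunk above: `dfs.observer.read.enable` is a client-side key, so it would be set like any other HDFS client property. A hypothetical `hdfs-site.xml` fragment (the key name and `true` default come from the diff; everything else is illustrative):

```xml
<!-- Hypothetical hdfs-site.xml fragment, based on the keys in the diff. -->
<property>
  <name>dfs.observer.read.enable</name>
  <!-- Defaults to true (DFS_OBSERVER_READ_ENABLE_DEFAULT); set to false
       to force all reads to the active NameNode. -->
  <value>false</value>
</property>
```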
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]