[ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18028410#comment-18028410
 ] 

ASF GitHub Bot commented on HADOOP-18972:
-----------------------------------------

steveloughran commented on code in PR #6272:
URL: https://github.com/apache/hadoop/pull/6272#discussion_r2414436981


##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSaslPropertiesResolver.java:
##########
@@ -0,0 +1,75 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.security;
+
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.util.Map;
+import javax.security.sasl.Sasl;
+
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_RPC_PROTECTION;
+import static org.junit.Assert.assertEquals;
+
+public class TestSaslPropertiesResolver extends AbstractHadoopTestBase {
+
+  private static final InetAddress LOCALHOST = new InetSocketAddress("127.0.0.1", 1).getAddress();
+
+  private SaslPropertiesResolver resolver;
+
+  @Before
+  public void setup() {
+    Configuration conf = new Configuration();
+    conf.set(HADOOP_RPC_PROTECTION, "privacy");
+    resolver = new SaslPropertiesResolver();
+    resolver.setConf(conf);
+  }
+
+  @Test
+  public void testResolverDoesNotMutate() {
+    assertPrivacyQop(resolver.getDefaultProperties());
+    setAuthenticationQop(resolver.getDefaultProperties());
+    // After changing the map returned by SaslPropertiesResolver, future maps are not affected
+    assertPrivacyQop(resolver.getDefaultProperties());
+
+    assertPrivacyQop(resolver.getClientProperties(LOCALHOST));
+    setAuthenticationQop(resolver.getClientProperties(LOCALHOST));
+    // After changing the map returned by SaslPropertiesResolver, future maps are not affected
+    assertPrivacyQop(resolver.getClientProperties(LOCALHOST));
+
+    assertPrivacyQop(resolver.getServerProperties(LOCALHOST));
+    setAuthenticationQop(resolver.getServerProperties(LOCALHOST));
+    // After changing the map returned by SaslPropertiesResolver, future maps are not affected
+    assertPrivacyQop(resolver.getServerProperties(LOCALHOST));
+  }
+
+  private static void setAuthenticationQop(Map<String, String> saslProperties) {
+    saslProperties.put(Sasl.QOP, SaslRpcServer.QualityOfProtection.AUTHENTICATION.getSaslQop());
+  }
+
+  private static void assertPrivacyQop(Map<String, String> saslProperties) {
+    assertEquals(SaslRpcServer.QualityOfProtection.PRIVACY.getSaslQop(),

Review Comment:
   use AssertJ assertions here instead of JUnit's `assertEquals`



##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSaslPropertiesResolver.java:
##########
@@ -0,0 +1,75 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.security;
+
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.util.Map;
+import javax.security.sasl.Sasl;
+
+import org.junit.Before;

Review Comment:
   trunk tests are now on JUnit 5. Either migrate this to JUnit 5 or just pull all of setup() into `testResolverDoesNotMutate()`





> Bug in SaslPropertiesResolver allows mutation of internal state
> ---------------------------------------------------------------
>
>                 Key: HADOOP-18972
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18972
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Charles Connell
>            Priority: Minor
>              Labels: pull-request-available
>
> When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a 
> SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
> {{Map<String, String>}}. Every call gets the same {{Map}} object back, and 
> then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of {{SaslPropertiesResolver}} get back 
> the wrong information.
> I propose that {{SaslPropertiesResolver}} should return a copy of its internal 
> map, so that users can safely modify it.
> I discovered this problem in my company's testing environment as we began to 
> enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
> were using {{IngressPortBasedResolver}} to give out block tokens with 
> different QOPs depending on the port used. Then our HDFS client applications 
> became unable to read or write to HDFS because they could not find a QOP in 
> common with the DataNodes during SASL handshake. With multiple threads 
> executing SASL handshakes at the same time, the properties map used in 
> {{SaslDataTransferServer}} in a DataNode could be clobbered during usage, 
> since the same map was used by all threads. Also, future clients that do not 
> have a QOP embedded in their block tokens would connect to a server with the 
> wrong SASL properties map. I think one or both of these issues explains the 
> problem I saw. After eliminating this unsafe sharing, the problem went away.
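
The fix described above amounts to a defensive copy: the resolver hands each caller a fresh map, so a caller's put() cannot leak into the shared internal state or into later handshakes. A minimal standalone sketch of the idea (the class name, property key, and values below are illustrative, not Hadoop's actual SaslPropertiesResolver implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical resolver sketching the defensive-copy fix.
public class CopyingResolverSketch {
  // Internal state, shared by all callers and threads.
  private final Map<String, String> properties = new TreeMap<>();

  public CopyingResolverSketch() {
    // "auth-conf" is the SASL QOP token for privacy.
    properties.put("javax.security.sasl.qop", "auth-conf");
  }

  // The buggy version returned `properties` directly; this version
  // returns a fresh copy, so callers may mutate the result freely.
  public Map<String, String> getDefaultProperties() {
    return new HashMap<>(properties);
  }

  public static void main(String[] args) {
    CopyingResolverSketch resolver = new CopyingResolverSketch();
    Map<String, String> m = resolver.getDefaultProperties();
    m.put("javax.security.sasl.qop", "auth"); // caller mutates its copy
    // A later caller still sees the configured value.
    System.out.println(
        resolver.getDefaultProperties().get("javax.security.sasl.qop")); // prints auth-conf
  }
}
```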



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
