pranavsaxena-microsoft commented on code in PR #4039:
URL: https://github.com/apache/hadoop/pull/4039#discussion_r1057102194
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##########
@@ -314,13 +327,46 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOExceptio
if (this.isTraceEnabled) {
startTime = System.nanoTime();
}
- try (OutputStream outputStream = this.connection.getOutputStream()) {
- // update bytes sent before they are sent so we may observe
- // attempted sends as well as successful sends via the
- // accompanying statusCode
+ OutputStream outputStream = null;
+ // Updates the expected bytes to be sent based on length.
+ this.expectedBytesToBeSent = length;
+ try {
+ try {
+ /* Without expect header enabled, if getOutputStream() throws
+ an exception, it gets caught by the restOperation. But with
+ expect header enabled we return back without throwing an exception
+ for the correct response code processing.
+ */
+ outputStream = this.connection.getOutputStream();
+ } catch (IOException e) {
+ /* If getOutputStream fails with an exception and expect header
+ is enabled, we return back without throwing an exception to
+ the caller. The caller is responsible for setting the correct status code.
+ If expect header is not enabled, we throw back the exception.
+ */
+ String expectHeader = this.connection.getRequestProperty(EXPECT);
+ if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) {
+ LOG.debug("Getting output stream failed with expect header enabled, returning back "
+ + ExceptionUtils.getStackTrace(e));
+ return;
Review Comment:
Are you saying that the server in this case would send some status > 400, and
that would be caught in
https://github.com/anmolanmol1234/hadoop/blob/HADOOP-18146/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java#L412-L416?
If yes, what if getOutputStream fails due to an openSSL handshake error like:
```
java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at org.wildfly.openssl.OpenSSLSocket.runHandshake(OpenSSLSocket.java:306)
    at org.wildfly.openssl.OpenSSLSocket.startHandshake(OpenSSLSocket.java:210)
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:197)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1342)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1317)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:264)
    at org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.sendRequest(AbfsHttpOperation.java:300)
```
Also, it would be a stretch of logic across multiple places: sendRequest(),
processResponse(), AbfsClient.append(). It would be awesome if we flow an
exception from this point to AbfsClient. What do you think?
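For reference, the branch under discussion reduces to a predicate like the
stand-alone sketch below. The class and method names are mine, not from the
patch; I am only isolating the decision sendRequest() makes when
getOutputStream() throws. Note that it suppresses *every* IOException when the
Expect header is set, including a handshake failure like the one above:

```java
// Hypothetical sketch, not the actual AbfsHttpOperation code: the decision
// of whether an IOException from getOutputStream() should be swallowed so
// that response-code processing can continue.
public class ExpectHeaderSuppressionSketch {

  static final String HUNDRED_CONTINUE = "100-continue";

  /**
   * Returns true only when the request was sent with
   * "Expect: 100-continue"; in that case sendRequest() returns silently
   * and the caller is responsible for setting the status code.
   */
  static boolean suppressOutputStreamFailure(String expectHeaderValue) {
    return HUNDRED_CONTINUE.equals(expectHeaderValue);
  }

  public static void main(String[] args) {
    // Expect header enabled: failure is suppressed.
    System.out.println(suppressOutputStreamFailure("100-continue")); // true
    // No expect header: the IOException propagates to the caller.
    System.out.println(suppressOutputStreamFailure(null)); // false
  }
}
```

This is why the connection-reset case matters: the predicate cannot tell a
server-rejected 100-continue apart from a transport-level failure.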
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -681,35 +685,36 @@ public AbfsRestOperation append(final String path, final byte[] buffer,
abfsUriQueryBuilder, cachedSasToken);
final URL url = createRequestUrl(path, abfsUriQueryBuilder.toString());
- final AbfsRestOperation op = new AbfsRestOperation(
- AbfsRestOperationType.Append,
- this,
- HTTP_METHOD_PUT,
- url,
- requestHeaders,
- buffer,
- reqParams.getoffset(),
- reqParams.getLength(),
- sasTokenForReuse);
+ final AbfsRestOperation op = getAbfsRestOperationForAppend(AbfsRestOperationType.Append,
+ HTTP_METHOD_PUT, url, requestHeaders, buffer, reqParams.getoffset(),
+ reqParams.getLength(), sasTokenForReuse);
try {
op.execute(tracingContext);
} catch (AzureBlobFileSystemException e) {
+ // If the http response code indicates a user error we retry
+ // the same append request with expect header disabled.
+ // When "100-continue" header is enabled but a non Http 100 response comes,
+ // JDK fails to provide all response headers.
Review Comment:
From code I can comprehend:
```
// If the http response code indicates a user error we retry
// the same append request with expect header disabled.
```
but I am not able to understand:
```
// When "100-continue" header is enabled but a non Http 100 response comes,
// JDK fails to provide all response headers.
// This handling is to avoid breaking of backward compatibility
// if someone has taken dependency on the exception message,
// which is created using the error string present in the response header.
```
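To state my reading of the first (understandable) part as a stand-alone
sketch: on a user error for an append that was sent with
"Expect: 100-continue", the patch retries once with the header disabled.
Class and method names below are mine, not from the patch, and the 4xx range
is my assumption of what "user error" means here:

```java
// Hypothetical sketch of the retry decision described in the quoted
// comment: a 4xx response to a 100-continue append triggers exactly one
// retry without the Expect header; other failures take the normal path.
public class AppendRetrySketch {

  static boolean shouldRetryWithoutExpectHeader(int statusCode,
      boolean expectHeaderEnabled, boolean alreadyRetried) {
    return expectHeaderEnabled
        && !alreadyRetried
        && statusCode >= 400 && statusCode < 500;
  }

  public static void main(String[] args) {
    // 404 on a 100-continue append: retry once with the header disabled.
    System.out.println(shouldRetryWithoutExpectHeader(404, true, false)); // true
    // Server error: no special fallback.
    System.out.println(shouldRetryWithoutExpectHeader(503, true, false)); // false
    // Expect header already off: nothing to disable.
    System.out.println(shouldRetryWithoutExpectHeader(404, false, false)); // false
  }
}
```

What I cannot map to code is the second part: how the missing response
headers on a non-100 response relate to backward compatibility of the
exception message.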
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]