[ 
https://issues.apache.org/jira/browse/HADOOP-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Farshid updated HADOOP-15248:
-----------------------------
    Description: 
 

I'm trying to read a file through {{s3a}} from a bucket in us-east-2 (Ohio) and 
I'm getting a 400 Bad Request response:

_com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
Service: Amazon S3, AWS Request ID: [removed], AWS Error Code: null, AWS Error 
Message: Bad Request, S3 Extended Request ID: [removed]_

Since the same code works with another bucket in Sydney, it looks like a 
request-signing version issue (Ohio supports only Signature Version 4, while 
Sydney supports both 2 and 4). So I tried setting the endpoint by adding this to 
{{spark-submit}}, as suggested in other posts:

_--conf "spark.hadoop.fs.s3a.endpoint=s3.us-east-2.amazonaws.com"_ 

That didn't make any difference. I also tried adding the same property to a 
conf file and passing it with {{--properties-file [file_path]}}:

_spark.hadoop.fs.s3a.endpoint               s3.us-east-2.amazonaws.com_

Still no difference: I get the same error for Ohio (and the Sydney bucket no 
longer works, since the endpoint now points at us-east-2).
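
For completeness, this is roughly how the same property can be set 
programmatically on the Hadoop configuration before reading; a minimal sketch, 
and the bucket name and path below are placeholders rather than my real values:

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("s3a-endpoint-test")
  .getOrCreate()

// Equivalent to spark.hadoop.fs.s3a.endpoint on spark-submit, but applied
// directly to the Hadoop configuration that the s3a filesystem client uses.
spark.sparkContext.hadoopConfiguration
  .set("fs.s3a.endpoint", "s3.us-east-2.amazonaws.com")

// Placeholder bucket and key; this read still fails with the 400 above.
val lines = spark.read.textFile("s3a://my-bucket/path/to/file.txt")
lines.show(5)
{code}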



> 400 Bad Request while trying to access S3 through Spark
> -------------------------------------------------------
>
>                 Key: HADOOP-15248
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15248
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.7.3
>         Environment: macOS 10.13.3 (17D47)
> Spark 2.2.1
> Hadoop 2.7.3
>            Reporter: Farshid
>            Priority: Blocker
>
>  
> I'm trying to read a file through {{s3a}} from a bucket in us-east-2 (Ohio) 
> and I'm getting a 400 Bad Request response:
> _com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
> Service: Amazon S3, AWS Request ID: [removed], AWS Error Code: null, AWS 
> Error Message: Bad Request, S3 Extended Request ID: [removed]_
> Since the same code works with another bucket in Sydney, it looks like a 
> request-signing version issue (Ohio supports only Signature Version 4, while 
> Sydney supports both 2 and 4). So I tried
> setting the endpoint by adding this to {{spark-submit}} as suggested in other 
> posts:
> _--conf "spark.hadoop.fs.s3a.endpoint=s3.us-east-2.amazonaws.com"_ 
> But that didn't make any difference. I also tried adding the same to a conf 
> file and passing it using {{--properties-file [file_path]}}
> _spark.hadoop.fs.s3a.endpoint               s3.us-east-2.amazonaws.com_
> No difference. I still get the same error for Ohio (and it doesn't work with 
> Sydney any more, for obvious reasons).


