[
https://issues.apache.org/jira/browse/HADOOP-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835480#comment-13835480
]
Hadoop QA commented on HADOOP-10135:
------------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12616407/HADOOP-10135-1.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-tools/hadoop-openstack.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HADOOP-Build/3324//testReport/
Console output:
https://builds.apache.org/job/PreCommit-HADOOP-Build/3324//console
This message is automatically generated.
> writes to swift fs over partition size leave temp files and empty output file
> -----------------------------------------------------------------------------
>
> Key: HADOOP-10135
> URL: https://issues.apache.org/jira/browse/HADOOP-10135
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 3.0.0
> Reporter: David Dobbins
> Attachments: HADOOP-10135-1.patch, HADOOP-10135.patch
>
>
> The OpenStack/swift filesystem produces incorrect output when the written
> objects exceed the configured partition size. After job completion, the
> expected files in the swift container have length == 0 and a collection of
> temporary files remain with names that appear to be URLs.
> This can be reproduced with teragen against the minicluster using the
> following command line:
> bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar \
>     teragen 100000 swift://mycontainer.myservice/teradata
> Where core-site.xml contains:
> <property>
>   <name>fs.swift.impl</name>
>   <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
> </property>
> <property>
>   <name>fs.swift.partsize</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>fs.swift.service.myservice.auth.url</name>
>   <value>https://auth.api.rackspacecloud.com/v2.0/tokens</value>
> </property>
> <property>
>   <name>fs.swift.service.myservice.username</name>
>   <value>[[your-cloud-username]]</value>
> </property>
> <property>
>   <name>fs.swift.service.myservice.region</name>
>   <value>DFW</value>
> </property>
> <property>
>   <name>fs.swift.service.myservice.apikey</name>
>   <value>[[your-api-key]]</value>
> </property>
> <property>
>   <name>fs.swift.service.myservice.public</name>
>   <value>true</value>
> </property>
> Container "mycontainer" should have a collection of objects with names
> starting with "teradata/part-m-00000". Instead, that file is empty and there
> is a collection of objects with names like
> "swift://mycontainer.myservice/teradata/_temporary/0/_temporary/attempt_local415043862_0001_m_000000_0/part-m-00000/000010"
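The leftover object names in the symptom above (a zero-padded numeric suffix under the file's path, e.g. ".../part-m-00000/000010") suggest that writes over fs.swift.partsize are uploaded as a series of per-partition objects rather than one object. A minimal Python sketch of that naming scheme, assuming the 1024-byte partition size configured above; the function and exact numbering are hypothetical illustrations, not the Swift client's actual API:

```python
# Hypothetical sketch: a file written through a partitioned filesystem
# client is split into fixed-size parts, each stored as a separate object
# under the file's path. If the job-commit rename handles only the base
# path, the per-part objects under _temporary/ are left behind and the
# final output file appears empty -- the behavior reported in this issue.

PART_SIZE = 1024  # matches fs.swift.partsize in the config above

def partition_names(path, total_bytes, part_size=PART_SIZE):
    """Return the per-part object names for a write of total_bytes."""
    n_parts = max(1, -(-total_bytes // part_size))  # ceiling division
    return ["%s/%06d" % (path, i) for i in range(1, n_parts + 1)]

# A 5000-byte write at a 1024-byte partition size yields five part objects.
parts = partition_names(
    "teradata/_temporary/0/_temporary/attempt_local_0001_m_000000_0/part-m-00000",
    5000)
```

Under this (assumed) scheme, committing the output correctly requires moving every per-part object to the final location, not just renaming the base path.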
--
This message was sent by Atlassian JIRA
(v6.1#6144)