Hi Zhankun,
There is a "chuser" option, but it only changes the owner of the folder in HDFS:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot MySnapshot -copy-to hdfs://srv2:<hdfs_port>/hbase -chuser MyUser -chgroup MyGroup -chmod 700 -mappers 16

/data/hadoop/logs/userlogs is a local folder created by the NodeManager.
Since the service is configured to start as user magnews, it creates the
folder with owner magnews. I tried putting both magnews and hbase in the same
user group, but that didn't work either.
It would certainly be reasonable to create the magnews user locally and use that
user to run the command, but is there no way around this problem? The folder has
magnews as its owner, and in any case all users have write permission on it.
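For what it's worth, the check that fails here (SecureIOUtils.checkStat in the stack trace below) simply compares the on-disk owner of the log file with the user the NodeManager expects. A minimal sketch of that comparison, using stand-in paths rather than the real userlogs directory:

```shell
# Reproduce the ownership check that SecureIOUtils performs:
# compare the on-disk owner of a log path with the expected user.
logdir=$(mktemp -d)                     # stand-in for /data/hadoop/logs/userlogs/...
touch "$logdir/syslog"
owner=$(stat -c '%U' "$logdir/syslog")  # GNU stat; on macOS use: stat -f '%Su'
expected=$(id -un)                      # stand-in for the user the job ran as
if [ "$owner" != "$expected" ]; then
  echo "Owner '$owner' did not match expected owner '$expected'"
else
  echo "owner check passed"
fi
rm -rf "$logdir"
```

Since the check is on the file's owner (not its group or mode), this would explain why putting magnews and hbase in the same group, or granting world-write, does not help.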
On Tuesday, 25 January 2022 at 04:57:43 CET, Zhankun Tang <[email protected]>
wrote:
Hi Hamado,
Does `ExportSnapshot` have an option to use the same `magnews` user?
Maybe you can try creating a user with the same name locally and use that to run
this command again.
BR,
Zhankun
On Tue, 25 Jan 2022 at 07:14, Hamado Dene <[email protected]> wrote:
Hello hadoop community,
I'm trying to export an HBase snapshot from one Hadoop cluster to another
Hadoop cluster using the export command:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot MySnapshot -copy-to hdfs://srv2:<hdfs_port>/hbase -mappers 16

When I execute the command, the NodeManager side throws this exception:
2022-01-24 18:54:23,693 ERROR org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat: Error aggregating log file. Log file : /data/hadoop/logs/userlogs/application_1643046451755_0001/container_e34_1643046451755_0001_01_000022/syslog. Owner 'magnews' for path /data/hadoop/logs/userlogs/application_1643046451755_0001/container_e34_1643046451755_0001_01_000022/syslog did not match expected owner 'hbase'
java.io.IOException: Owner 'magnews' for path /data/hadoop/logs/userlogs/application_1643046451755_0001/container_e34_1643046451755_0001_01_000022/syslog did not match expected owner 'hbase'
    at org.apache.hadoop.io.SecureIOUtils.checkStat(SecureIOUtils.java:284)
    at org.apache.hadoop.io.SecureIOUtils.forceSecureOpenForRead(SecureIOUtils.java:218)
    at org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:203)
    at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogValue.secureOpenFile(AggregatedLogFormat.java:278)
    at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogValue.write(AggregatedLogFormat.java:230)
    at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.append(AggregatedLogFormat.java:470)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl$ContainerLogAggregator.doContainerLogAggregation(AppLogAggregatorImpl.java:659)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:347)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:548)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:504)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:404)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
My Hadoop cluster runs as the magnews user, but on the HBase side I of course
launched the export as the hbase user. Is there any way to handle this?
Thanks,
Hamado Dene