[
https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15570101#comment-15570101
]
Andrew Wang commented on HADOOP-13717:
--------------------------------------
For a bit more context: we have some code that starts the balancer in the
foreground, without a log4j.properties file and without setting
$HADOOP_LOG_DIR. hadoop_verify_logdir then checks the default location
($HADOOP_HOME/logs), which is not writable, and fails.
Other commands that do not support daemonization skip all of these checks. In
this case I'm not trying to run the balancer as a daemon; is it reasonable to
skip the checks in this situation as well? Looking more at the code, I think
what I want is to dispatch directly to hadoop_java_exec rather than going
through the daemon logic.
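To illustrate, here is a minimal sketch of that restructuring. This is not the
actual hadoop-functions.sh code; the function bodies are simplified stand-ins,
and only the dispatch shape matters: the log-directory check is pushed down
into the daemon path, so a foreground command goes straight to
hadoop_java_exec and never calls hadoop_verify_logdir.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the proposed dispatch; all functions below are
# simplified stand-ins for their hadoop-functions.sh namesakes.

# Stand-in for hadoop_verify_logdir: fail if HADOOP_LOG_DIR is not writable.
hadoop_verify_logdir() {
  if [[ ! -w "${HADOOP_LOG_DIR}" ]]; then
    echo "ERROR: no write access to ${HADOOP_LOG_DIR}" >&2
    return 1
  fi
}

# Stand-in for hadoop_java_exec: runs the command in the foreground.
# No outfile is involved, so no log-directory check is needed.
hadoop_java_exec() {
  echo "foreground exec: $*"
}

# Stand-in for hadoop_start_daemon_wrapper: it writes the daemon's output
# under HADOOP_LOG_DIR, so the check is performed here instead of earlier.
hadoop_start_daemon_wrapper() {
  hadoop_verify_logdir || return 1
  echo "daemonized exec: $* (output under ${HADOOP_LOG_DIR})"
}

# Simplified handler: only the daemon path ever reaches the logdir check.
run_command() {
  local mode=$1
  shift
  case ${mode} in
    daemon) hadoop_start_daemon_wrapper "$@" ;;
    *)      hadoop_java_exec "$@" ;;
  esac
}
```

With this shape, `HADOOP_LOG_DIR=/nonexistent run_command default balancer`
still succeeds, while the daemon path keeps its existing failure behavior.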
> Shell scripts call hadoop_verify_logdir even when command is not started as
> daemon
> ----------------------------------------------------------------------------------
>
> Key: HADOOP-13717
> URL: https://issues.apache.org/jira/browse/HADOOP-13717
> Project: Hadoop Common
> Issue Type: Bug
> Components: scripts
> Affects Versions: 3.0.0-alpha1
> Reporter: Andrew Wang
>
> Issue found when working with the HDFS balancer.
> In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the
> "default" case which calls {{hadoop_start_daemon}}. {{daemon_outfile}} which
> specifies the log location isn't even used here, since the command is being
> started in the foreground.
> I think we can push the {{hadoop_verify_logdir}} call down into
> {{hadoop_start_daemon_wrapper}} instead, which does use the outfile.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]