[
https://issues.apache.org/jira/browse/MAPREDUCE-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734040#comment-16734040
]
Bibin A Chundatt commented on MAPREDUCE-7169:
---------------------------------------------
[~uranus]
Looking at the code: currently, for a new taskAttempt container request, we
use all {{dataLocalHosts}}
TaskAttemptImpl#RequestContainerTransition
{code}
taskAttempt.eventHandler.handle(new ContainerRequestEvent(
taskAttempt.attemptId, taskAttempt.resourceCapability,
taskAttempt.dataLocalHosts.toArray(
new String[taskAttempt.dataLocalHosts.size()]),
taskAttempt.dataLocalRacks.toArray(
new String[taskAttempt.dataLocalRacks.size()])));
{code}
With async scheduling there is a high probability of containers getting
allocated to the same node.
We should skip the nodes on which the previous task attempt was launched when
the Avataar is *SPECULATIVE*
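The skip could be sketched as a small filter over {{dataLocalHosts}} before building the {{ContainerRequestEvent}}. This is only an illustration, not the actual Hadoop code; the method name {{filterPriorHosts}} and the fallback behavior (keep the full list rather than lose all locality hints when every host is excluded) are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class SpeculativeHostFilter {

  // Hypothetical helper: for a SPECULATIVE attempt, drop the hosts on
  // which earlier attempts of the same task already ran, so the new
  // container request prefers other nodes. If filtering would leave no
  // hosts at all, fall back to the original list rather than discard
  // every data-locality hint.
  static String[] filterPriorHosts(List<String> dataLocalHosts,
                                   Set<String> priorAttemptHosts) {
    List<String> filtered = new ArrayList<>();
    for (String host : dataLocalHosts) {
      if (!priorAttemptHosts.contains(host)) {
        filtered.add(host);
      }
    }
    if (filtered.isEmpty()) {
      return dataLocalHosts.toArray(new String[0]);
    }
    return filtered.toArray(new String[0]);
  }
}
```

The resulting array would then be passed in place of {{taskAttempt.dataLocalHosts.toArray(...)}} in {{RequestContainerTransition}}, with the previous attempt's node recorded on the task when the speculative attempt is created.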
> Speculative attempts should not run on the same node
> ----------------------------------------------------
>
> Key: MAPREDUCE-7169
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7169
> Project: Hadoop Map/Reduce
> Issue Type: New Feature
> Components: yarn
> Affects Versions: 2.7.2
> Reporter: Lee chen
> Assignee: Zhaohui Xin
> Priority: Major
> Attachments: image-2018-12-03-09-54-07-859.png
>
>
> I found that in all versions of YARN, Speculative Execution may place the
> speculative task on the same node as the original task. What I have read only
> says it will try to have one more task attempt; I haven't seen any place
> mentioning that it should not be on the same node. This is unreasonable: if
> the node has a problem that makes task execution very slow, placing the
> speculative task on the same node cannot help the problematic task.
> In our cluster (version 2.7.2, 2700 nodes), this phenomenon appears
> almost every day.
> !image-2018-12-03-09-54-07-859.png!
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)