[
https://issues.apache.org/jira/browse/MAPREDUCE-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848689#comment-17848689
]
Dhruv Kachhadia commented on MAPREDUCE-6925:
--------------------------------------------
Is the fix for this coming soon?
A simple MR application with more than 120 counters fails in odd ways when I set
mapreduce.job.counters.max to a value other than the default. Two different scenarios:
1. mapred-configs: mapreduce.job.counters.max set to 200
{code:java}
24/05/21 12:32:37 INFO mapreduce.Job: Task Id : attempt_1716282029465_0013_m_000000_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcServerException): IPC server unable to read call parameters: Too many counters: 121 max=120
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1612)
	at org.apache.hadoop.ipc.Client.call(Client.java:1558)
	at org.apache.hadoop.ipc.Client.call(Client.java:1455)
	at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:251)
	at com.sun.proxy.$Proxy8.statusUpdate(Unknown Source)
	at org.apache.hadoop.mapred.Task.statusUpdate(Task.java:1317)
	at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:1350)
	at org.apache.hadoop.mapred.Task.done(Task.java:1273)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:352)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:191)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:183)
{code}
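For reference, scenario 1 corresponds to a cluster-side override of the limit. The property name below matches the one implied by the error messages; the exact file placement and surrounding configuration are assumptions on my part:
{code:xml}
<!-- mapred-site.xml (assumed location) on the cluster for scenario 1 -->
<property>
  <name>mapreduce.job.counters.max</name>
  <value>200</value>
</property>
{code}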
2. mapred-configs: mapreduce.job.counters.max set to 2
{code:java}
24/05/21 12:43:44 INFO mapreduce.Job: Task Id : attempt_1716295266630_0002_m_000000_2, Status : FAILED
Error: org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many counters: 3 max=2
	at org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
	at org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
	at org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
	at org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
	at org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
	at org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
	at org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
	at org.apache.hadoop.mapred.Counters$Group.findCounter(Counters.java:369)
	at org.apache.hadoop.mapred.Counters$Group.getCounterForName(Counters.java:314)
	at org.apache.hadoop.mapred.Counters.findCounter(Counters.java:479)
	at org.apache.hadoop.mapred.Counters.findCounter(Counters.java:60)
	at org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:167)
	at org.apache.hadoop.mapred.Task$TaskReporter.getCounter(Task.java:711)
	at org.apache.hadoop.mapred.Task$TaskReporter.getCounter(Task.java:647)
	at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.getCounter(TaskAttemptContextImpl.java:71)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.getCounter(WrappedMapper.java:96)
	at org.apache.hadoop.examples.MyCounter$Map.map(MyCounter.java:51)
	at org.apache.hadoop.examples.MyCounter$Map.map(MyCounter.java:43)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:800){code}
The second scenario is the ideal failure mode: the LimitExceededException is raised directly in the task with a clear message, instead of surfacing as an opaque IPC error as in the first scenario.
So I feel the limits should be made the same everywhere.
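To make the two failure modes easier to compare, here is a minimal, self-contained sketch of the kind of check that Limits.checkCounters/incrCounters performs. This is not the actual Hadoop source; the class and method names are assumed for illustration, but the thrown message mirrors the one in scenario 2:
{code:java}
// Simplified sketch (assumed names, not the real Hadoop implementation)
// of the counter-limit check: once the configured max is reached, any
// further counter registration throws a LimitExceededException.
public class CounterLimitSketch {
    static class LimitExceededException extends RuntimeException {
        LimitExceededException(String msg) { super(msg); }
    }

    private final int maxCounters;
    private int totalCounters = 0;

    CounterLimitSketch(int maxCounters) { this.maxCounters = maxCounters; }

    // Mirrors the check-then-increment pattern seen in the stack trace
    // (Limits.checkCounters followed by Limits.incrCounters).
    void incrCounters() {
        if (totalCounters >= maxCounters) {
            throw new LimitExceededException(
                "Too many counters: " + (totalCounters + 1) + " max=" + maxCounters);
        }
        totalCounters++;
    }

    public static void main(String[] args) {
        CounterLimitSketch limits = new CounterLimitSketch(2);
        limits.incrCounters();
        limits.incrCounters();
        try {
            limits.incrCounters();
        } catch (LimitExceededException e) {
            // Same shape as the scenario 2 message: "Too many counters: 3 max=2"
            System.out.println(e.getMessage());
        }
    }
}
{code}
The point of the sketch: in scenario 2 this check fires in-process in the task, which gives the clean exception; in scenario 1 the limit check effectively fires on the IPC server side with the default max=120 still in effect, which is where the inconsistency shows.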
> CLONE - Make Counter limits consistent across JobClient, MRAppMaster, and
> YarnChild
> -----------------------------------------------------------------------------------
>
> Key: MAPREDUCE-6925
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6925
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: applicationmaster, client, task
> Affects Versions: 2.4.0
> Reporter: Gera Shegalov
> Assignee: Gera Shegalov
> Priority: Major
>
> Currently, counter limits "mapreduce.job.counters.*" handled by
> {{org.apache.hadoop.mapreduce.counters.Limits}} are initialized
> asymmetrically: job.xml is ignored on the client side and in the AM, whereas
> it is taken into account in YarnChild.
> It would be good to make the Limits job-configurable, such that the max
> counters/groups is only increased when needed. With the current Limits
> implementation relying on static constants, this is going to be challenging
> for tools that submit jobs concurrently, short of resorting to class-loader
> isolation.
> The patch that I am uploading is not perfect but demonstrates the issue.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)