We deploy HDFS in our company and have run into an abnormal situation: we set the JVM heap to 280G, but the process actually consumes about 450G of resident memory.

*We observed this with pmap:*

(values in kB; columns inferred from pmap -X output)

         Address Perm   Offset Device   Inode      Size       Rss       Pss Referenced Anonymous Swap Locked Mapping
    2ae91c000000 rw-p 00000000  00:00       0 294174720 293705824 293705824  293705824 293705824    0      0
        0198c000 rw-p 00000000  00:00       0 173517492 173513320 173513320  173513320 173513320    0      0 [heap]
    2b2fa8000000 rw-p 00000000  00:00       0  11026824  11007512  11007512   11007512  11007512    0      0
    2b2f62000000 rw-p 00000000  00:00       0   1146880   1014280   1014280    1014280   1014280    0      0
    2ae9072b0000 rwxp 00000000  00:00       0     71808     69280     69280      69280     69280    0      0
    2ae917d88000 rw-p 00000000  00:00       0     25696     24700     24700      24700     24700    0      0
    2b325732f000 rw-p 00000000  00:00       0     16384     16328     16328      16328     16328    0      0
    2ae91aeb9000 rw-p 00000000  00:00       0      9988      8972      8972       8972      8972    0      0
    2b3249063000 rw-p 00000000  00:00       0      9216      8204      8204       8204      8204    0      0
    2ae905332000 r-xp 00000000  08:02 8391747     13168      6808      2825       6804         0    0      0 libjvm.so
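The two largest mappings in the pmap output above already account for the gap: the ~280G anonymous region presumably is the G1 Java heap (matching -Xmx280G), while the ~165G [heap] segment is the native (malloc) heap, which lives outside the Java heap. A quick sketch of the arithmetic (Rss values in kB, copied from the pmap output):

```python
# Rss of the two biggest mappings, in kB, from the pmap output above.
g1_heap_rss_kb = 293705824    # anonymous region at 2ae91c000000, ~280G: likely the G1 Java heap
native_heap_rss_kb = 173513320  # [heap]: native malloc heap, ~165G, outside -Xmx

# Combined resident memory of just these two mappings, in GB.
total_gb = (g1_heap_rss_kb + native_heap_rss_kb) / 1024**2
print(round(total_gb, 1))  # → 445.6, close to the ~450G observed
```

So the "extra" ~170G is almost entirely native-heap growth, not the Java heap exceeding its limit.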


*Additional information:*
*Hadoop version: 2.7.2*
*-Xms280G*
*-Xmx280G*
*-XX:MaxDirectMemorySize=10G*
*-XX:MetaspaceSize=128M*
*-XX:+UseG1GC*
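For anyone wanting to break the native side down further, the JVM's Native Memory Tracking can attribute native allocations per category. A diagnostic sketch (not from the original post; `<pid>` is a placeholder for the NameNode/DataNode process id):

```shell
# Restart the daemon with NMT enabled (small overhead), e.g. appended to
# HADOOP_NAMENODE_OPTS / HADOOP_DATANODE_OPTS in hadoop-env.sh:
#   -XX:NativeMemoryTracking=summary
# Then ask the running JVM for a per-category breakdown:
jcmd <pid> VM.native_memory summary
```

Note that NMT only covers the JVM's own native allocations; memory malloc'd directly by native libraries (e.g. compression codecs) still shows up only in pmap.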

Any ideas would be appreciated.
