Possible fixes for Hadoop jobs failing with "Container is running beyond virtual memory limits. Killing container. Exit code is 143"

The error output looks like this:

18/01/01 21:11:40 INFO mapreduce.Job: Task Id : attempt_1514811676808_0001_m_000000_0, Status : FAILED
[2018-01-01 13:11:38.817]Container [pid=6319,containerID=container_1514811676808_0001_01_000002] is running beyond virtual memory limits. Current usage: 236.3 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1514811676808_0001_01_000002 :
...
[2018-01-01 13:11:38.872]Container killed on request. Exit code is 143[2018-01-01 13:11:38.879]
[2018-01-01 13:11:38.922]Container exited with a non-zero exit code 143. [2018-01-01 13:11:38.924]
...
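Exit code 143 itself is not the root cause: by Unix convention it is 128 + the signal number, here 128 + 15, meaning the container process was killed with SIGTERM (signal 15) by the NodeManager after it failed the memory check. A quick demonstration in a bash-like shell (the `sleep` here is just a stand-in victim process):

```shell
# Start a long-running process in the background.
sleep 60 &
pid=$!

# Kill it the same way the NodeManager kills a container: SIGTERM.
kill -TERM "$pid"

# The shell reports the exit status as 128 + signal number.
wait "$pid"
echo $?   # prints 143
```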

This wall of messages kept repeating, and most runs eventually failed... though a few did manage to succeed.

At first glance, "beyond virtual memory limits" seemed to say there wasn't enough memory. Fine, add memory. Since this was running in a VM, I first gave it a few hundred more MB; the problem persisted. Then another 1 GB, and so on, until the host machine's memory was completely eaten up, and the error still appeared...
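The "2.1 GB of 2.1 GB virtual memory" figure in the log is the real clue: YARN computes a container's virtual-memory cap as its physical allocation times `yarn.nodemanager.vmem-pmem-ratio`, whose stock default is 2.1 (so 1 GB physical → 2.1 GB virtual). Adding RAM to the VM never raises this per-container cap. As a hedged sketch, not necessarily this post's eventual fix, the two settings commonly adjusted in `yarn-site.xml` are:

```xml
<!-- yarn-site.xml: two common ways to deal with the vmem check -->

<!-- Option 1: disable the virtual-memory check entirely -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- Option 2: keep the check but raise the vmem/pmem ratio (default 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4.0</value>
</property>
```

Both properties exist in stock Hadoop; which one is appropriate depends on the cluster, and either requires restarting the NodeManagers to take effect.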

That left me puzzled. I was only running the wordcount example; how could it possibly eat several GB of memory? Is Hadoop really that hungry, some kind of memory terminator??? A wordcount written in plain Java would top out at a dozen or so MB...
