
Datanode + "Error occurred during initialization of VM, Too small initial heap"

  • Judy  · asked 6 years ago

    We restarted the datanodes on our cluster.

    The Ambari cluster has 15 datanode machines, and each datanode machine has 128 GB of RAM.

    Versions: HDP 2.6.4 and Ambari 2.6.1.

    But the datanodes failed to start, with the following error:

    Error occurred during initialization of VM
    Too small initial heap
    

    This is strange, because dtnode_heapsize is 8G (DataNode maximum Java heap size = 8G), and we can also see the same value in the logs:

    -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192
    

    So we don't understand what is going on here.

    Is the initial heap size related to the DataNode maximum Java heap size?

    Log from a datanode machine:

    Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
    Memory: 4k page, physical 197804180k(12923340k free), swap 16777212k(16613164k free)
    CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:GCLogFileSize=1024000 -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:NumberOfGCLogFiles=5 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation -XX:+UseParNewGC 
    ==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker01.sys242.com.out <==
    Error occurred during initialization of VM
    Too small initial heap
    ulimit -a for user hdfs
    core file size          (blocks, -c) unlimited
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 772550
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 128000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 65536
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    

    Another log example:

    resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start datanode'' returned 1. starting datanode, logging to 
    Error occurred during initialization of VM
    Too small initial heap
    
    1 Answer  |  6 years ago
  •  2
  •   Dmitriusan    6 years ago

    The values you supplied are interpreted as bytes. They should be -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m

    See https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html
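    To make the failure mode concrete, here is a small sketch (a hypothetical helper, not the JVM's actual parser) of how `-XX` memory-size values are interpreted according to the rules documented on that page: a bare number means bytes, while `k`/`m`/`g` suffixes scale by powers of 1024. It shows why `8192` is rejected while `8192m` gives the intended 8 GB heap:

    ```python
    def parse_jvm_size(value: str) -> int:
        """Return the byte count for a JVM -XX memory flag value.

        A bare number is plain bytes; k/m/g/t suffixes multiply by
        successive factors of 1024, as documented for java(1).
        """
        multipliers = {"k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}
        suffix = value[-1].lower()
        if suffix in multipliers:
            return int(value[:-1]) * multipliers[suffix]
        return int(value)  # no suffix: interpreted as bytes

    # Without a suffix, 8192 is only 8192 bytes -- far below the JVM's
    # minimum heap, hence "Too small initial heap" at startup.
    print(parse_jvm_size("8192"))    # 8192 (bytes)

    # With the "m" suffix the value is 8192 MiB, i.e. the intended 8 GiB.
    print(parse_jvm_size("8192m"))   # 8589934592 (bytes)
    ```

    So the fix is to make sure Ambari renders the heap settings with a unit suffix (e.g. `-Xmx8192m` or `-XX:MaxHeapSize=8192m`) rather than a bare number.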
