
Hadoop HA High-Availability Cluster Deployment

[Date: 2016-08-11]  Source: Linux社区  Author: i2seo

Format ZooKeeper on m1; line 33 of the log below shows that the znode was created successfully.

  1. root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs zkfc -formatZK
  2. 14/07/27 00:31:59 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at m1/192.168.1.50:9000
  3. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
  4. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:host.name=m1
  5. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_65
  6. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
  7. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
  8. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hadoop-2.2.0/etc/hadoop:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:[... full Hadoop classpath elided ...]:/home/hadoop/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
  9. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop-2.2.0/lib/native
  10. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
  11. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
  12. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
  13. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
  14. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.version=3.11.0-15-generic
  15. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.name=root
  16. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
  17. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
  18. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=m1:2181,m2:2181,s1:2181,s2:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5990054a
  19. 14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Opening socket connection to server m1/192.168.1.50:2181. Will not attempt to authenticate using SASL (unknown error)
  20. 14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Socket connection established to m1/192.168.1.50:2181, initiating session
  21. 14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Session establishment complete on server m1/192.168.1.50:2181, sessionid = 0x147737cd5d30001, negotiated timeout = 5000
  22. ===============================================
  23. The configured parent znode /hadoop-ha/mycluster already exists.
  24. Are you sure you want to clear all failover information from
  25. ZooKeeper?
  26. WARNING: Before proceeding, ensure that all HDFS services and
  27. failover controllers are stopped!
  28. ===============================================
  29. Proceed formatting /hadoop-ha/mycluster? (Y or N) 14/07/27 00:32:00 INFO ha.ActiveStandbyElector: Session connected.
  30. y
  31. 14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
  32. 14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
  33. 14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
  34. 14/07/27 00:32:13 INFO zookeeper.ClientCnxn: EventThread shut down
  35. 14/07/27 00:32:13 INFO zookeeper.ZooKeeper: Session: 0x147737cd5d30001 closed
  36. root@m1:/home/hadoop#
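Because /hadoop-ha/mycluster already existed from an earlier attempt, the tool stopped to ask for confirmation. When re-formatting from a script, the prompt can be skipped; a sketch, using the -force flag from the zkfc usage in Hadoop 2.x (this wipes any existing failover state, so only run it with all HDFS daemons and failover controllers stopped):

  # Non-interactive re-format; destroys existing failover state in ZooKeeper
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs zkfc -formatZK -force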

5) Verify that zkfc was formatted successfully: if a hadoop-ha znode has appeared, it worked.
  root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
  [zk: localhost:2181(CONNECTED) 0] ls /
  [hadoop-ha, zookeeper]
  [zk: localhost:2181(CONNECTED) 1]
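The same check can be scripted without an interactive session. A minimal sketch, assuming the ensemble answers on the default client port 2181:

  # One-shot check without an interactive shell; expect hadoop-ha in the listing
  root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh -server m1:2181 ls /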

Start the JournalNode cluster

1) Execute the following on m1, s1, and s2 in turn (a loop that does this over ssh is sketched after the jps output):

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start journalnode
  starting journalnode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-journalnode-m1.out
  root@m1:/home/hadoop# jps
  2884 JournalNode
  2553 QuorumPeerMain
  2922 Jps
  root@m1:/home/hadoop#
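To avoid logging in to each machine, the three JournalNodes can also be started from m1 in one loop. A minimal sketch, assuming passwordless ssh from m1 to m1, s1, and s2 (which the Hadoop start scripts require anyway) and that jps is on the remote PATH:

  # Start a JournalNode on each node, then confirm the JVM came up
  for h in m1 s1 s2; do
      ssh $h "/home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start journalnode"
  done
  for h in m1 s1 s2; do
      echo "== $h =="; ssh $h "jps | grep JournalNode"
  done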

    2) Format one of the cluster's NameNodes (m1). There are two methods; I used the first.
    Method 1:

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format

    Method 2:

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format -clusterId m1

    3) Start the namenode you just formatted on m1:

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode

    After the command runs, browse to http://m1:50070/dfshealth.jsp to see m1's status.
  4) On s1, pull m1's metadata over to s1 by executing on s1:

  root@s1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -bootstrapStandby

    5) Start the namenode on s1:

  root@s1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode

    Browse to http://s1:50070/dfshealth.jsp to see s1's status. At this point both m1 and s1 report standby state; you can also confirm this from the command line, as sketched below.
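A sketch of the command-line check, assuming the two NameNodes are registered in hdfs-site.xml under the IDs nn1 (m1) and nn2 (s1); substitute your own dfs.ha.namenodes.mycluster values:

  # Query each NameNode's HA state; both report standby until zkfc runs
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn1
  standby
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn2
  standby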

Start all the datanodes by executing on m1 (a quick per-node check follows below):

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemons.sh start datanode
  s2: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-s2.out
  s1: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-s1.out
  root@m1:/home/hadoop#
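A quick sanity check that the DataNode JVMs actually came up on both slaves (a sketch, again assuming passwordless ssh):

  # Confirm a DataNode process is running on each slave
  for h in s1 s2; do
      echo "== $h =="; ssh $h "jps | grep DataNode"
  done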

Start YARN by running the following command on m1:

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/start-yarn.sh
  starting yarn daemons
  starting resourcemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-resourcemanager-m1.out
  s1: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-s1.out
  s2: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-s2.out
  root@m1:/home/hadoop#
Then browse to http://m1:8088/cluster to see the result.
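The same information is available on the command line; asking the ResourceManager for its registered NodeManagers should list s1 and s2 in RUNNING state:

  # List the NodeManagers registered with the ResourceManager
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/yarn node -list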

Start the ZooKeeper FailoverController (zkfc) on m1 and s1 in turn. After that, browse port 50070 again: m1 has become active while s1 is still standby.

  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start zkfc
  starting zkfc, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-zkfc-m1.out
  root@m1:/home/hadoop#
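With an active NameNode elected, cluster-wide HDFS commands now succeed; a report should show both datanodes as live:

  # Summarize HDFS capacity and list live datanodes
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs dfsadmin -report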
Test whether HDFS works
  root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -ls /
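The wordcount test below assumes the /input directory created earlier in this tutorial (that part is not shown in this section). If you are following along from here, a minimal preparation might be:

  # Create the input directory and upload a sample text file to count
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -mkdir /input
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -put /home/hadoop/hadoop-2.2.0/bin/hadoop.cmd /input/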

To test whether YARN works, let's run a classic example: count the word frequencies of the hadoop.cmd file placed under /input earlier.

  root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hadoop jar /home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
  14/07/27 01:22:41 INFO client.RMProxy: Connecting to ResourceManager at m1/192.168.1.50:8032
  14/07/27 01:22:43 INFO input.FileInputFormat: Total input paths to process : 1
  14/07/27 01:22:44 INFO mapreduce.JobSubmitter: number of splits:1
  14/07/27 01:22:44 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
  14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
  14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
  14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
  14/07/27 01:22:44 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
  14/07/27 01:22:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406394452186_0001
  14/07/27 01:22:46 INFO impl.YarnClientImpl: Submitted application application_1406394452186_0001 to ResourceManager at m1/192.168.1.50:8032
  14/07/27 01:22:46 INFO mapreduce.Job: The url to track the job: http://m1:8088/proxy/application_1406394452186_0001/
  14/07/27 01:22:46 INFO mapreduce.Job: Running job: job_1406394452186_0001
  14/07/27 01:23:10 INFO mapreduce.Job: Job job_1406394452186_0001 running in uber mode : false
  14/07/27 01:23:10 INFO mapreduce.Job:  map 0% reduce 0%
  14/07/27 01:23:31 INFO mapreduce.Job:  map 100% reduce 0%
  14/07/27 01:23:48 INFO mapreduce.Job:  map 100% reduce 100%
  14/07/27 01:23:48 INFO mapreduce.Job: Job job_1406394452186_0001 completed successfully
  14/07/27 01:23:49 INFO mapreduce.Job: Counters: 43
    File System Counters
      FILE: Number of bytes read=6574
      FILE: Number of bytes written=175057
      FILE: Number of read operations=0
      FILE: Number of large read operations=0
      FILE: Number of write operations=0
      HDFS: Number of bytes read=7628
      HDFS: Number of bytes written=5088
      HDFS: Number of read operations=6
      HDFS: Number of large read operations=0
      HDFS: Number of write operations=2
    Job Counters
      Launched map tasks=1
      Launched reduce tasks=1
      Data-local map tasks=1
      Total time spent by all maps in occupied slots (ms)=18062
      Total time spent by all reduces in occupied slots (ms)=14807
    Map-Reduce Framework
      Map input records=240
      Map output records=827
      Map output bytes=9965
      Map output materialized bytes=6574
      Input split bytes=98
      Combine input records=827
      Combine output records=373
      Reduce input groups=373
      Reduce shuffle bytes=6574
      Reduce input records=373
      Reduce output records=373
      Spilled Records=746
      Shuffled Maps=1
      Failed Shuffles=0
      Merged Map outputs=1
      GC time elapsed (ms)=335
      CPU time spent (ms)=2960
      Physical memory (bytes) snapshot=270057472
      Virtual memory (bytes) snapshot=1990762496
      Total committed heap usage (bytes)=136450048
    Shuffle Errors
      BAD_ID=0
      CONNECTION=0
      IO_ERROR=0
      WRONG_LENGTH=0
      WRONG_MAP=0
      WRONG_REDUCE=0
    File Input Format Counters
      Bytes Read=7530
    File Output Format Counters
      Bytes Written=5088
  root@m1:/home/hadoop/hadoop-2.2.0/bin#
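The counters only prove the job ran; the actual word counts land in /output. The job used a single reducer, so the results are in one part file:

  # List the job output and print the word counts
  root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -ls /output
  root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -cat /output/part-r-00000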

Verify HA failover. We just opened port 50070 on m1 and s1 in a browser and saw that m1 is active while s1 is standby.

a) Kill the namenode process on the active node, m1:
  root@m1:/home/hadoop/hadoop-2.2.0/bin# jps
  5492 Jps
  2884 JournalNode
  4375 DFSZKFailoverController
  2553 QuorumPeerMain
  3898 NameNode
  4075 ResourceManager
  root@m1:/home/hadoop/hadoop-2.2.0/bin# kill -9 3898
  root@m1:/home/hadoop/hadoop-2.2.0/bin# jps
  2884 JournalNode
  4375 DFSZKFailoverController
  2553 QuorumPeerMain
  4075 ResourceManager
  5627 Jps
  root@m1:/home/hadoop/hadoop-2.2.0/bin#
Browse port 50070 on m1 and s1 again: m1 no longer responds, while s1 has switched to active. HDFS and MapReduce still work against s1; even though the namenode process on m1 was killed, the cluster remains usable. That is exactly what automatic failover buys you!
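To restore redundancy after the test, restart the killed NameNode on m1; it rejoins as standby because s1 now holds the active lock. A sketch, with nn1 assumed to be m1's NameNode ID as before:

  # Bring the killed NameNode back; it should come up as standby
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode
  root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn1
  standby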

More Hadoop-related information can be found on the Hadoop topic page: http://www.linuxidc.com/topicnews.aspx?tid=13

Permanent link to this article: http://www.linuxidc.com/Linux/2016-08/134187.htm
