4. Starting HDFS
(1) Start the JournalNode:
[Hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/logs/hadoop-puppet-journalnode-BigData-03.out
Verify the JournalNode:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps
5652 QuorumPeerMain
9076 Jps
9029 JournalNode
Stop the JournalNode:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh stop journalnode
stopping journalnode
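In an HA setup the JournalNode must be running on every JournalNode host before the NameNode is formatted. A minimal remote-start sketch, assuming passwordless SSH on port 6000 (the port the scp command in step (3) uses) and the JournalNode hosts listed in step (5):
for host in Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03; do
    # hadoop-daemon.sh only affects the local machine, so run it on each host
    ssh -p 6000 hadoopuser@$host "/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/sbin/hadoop-daemon.sh start journalnode"
done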
(2) Format the NameNode:
On node Hadoop-NN-01: hdfs namenode -format
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs namenode -format
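Formatting writes a fresh fsimage under the directory configured by dfs.namenode.name.dir. A quick sanity check, assuming that directory sits under data/ as the scp command in the next step suggests (the exact subpath below is hypothetical; substitute your configured value):
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ cat data/dfs/name/current/VERSION    # shows namespaceID, clusterID, etc.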
(3) Synchronize the NameNode metadata:
Copy the metadata from Hadoop-NN-01 to Hadoop-NN-02.
This mainly covers the directories configured by dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared storage directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata.
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ scp -P 6000 -r data/ hadoopuser@Hadoop-NN-02:/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
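An alternative to copying the directories by hand: on Hadoop 2.x the standby NameNode can pull the metadata from the active one itself, which avoids missing a directory:
# Run on Hadoop-NN-02 while the NameNode on Hadoop-NN-01 is running
hdfs namenode -bootstrapStandby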
(4) Initialize the ZKFC:
This creates a ZNode in ZooKeeper to record HA state information.
On node Hadoop-NN-01: hdfs zkfc -formatZK
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs zkfc -formatZK
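To confirm that the ZNode was created, query ZooKeeper directly; zkCli.sh is on the PATH via $ZOOKEEPER_HOME/bin (see the environment variables in the appendix), and the child node's name is whatever nameservice ID your hdfs-site.xml defines:
[hadoopuser@Linux01 ~]$ zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha    # should list your nameservice ID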
(5) Start HDFS
Cluster-wide start: on Hadoop-NN-01 run start-dfs.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-dfs.sh
Per-daemon start (a remote-start sketch follows this list):
<1> NameNode (Hadoop-NN-01, Hadoop-NN-02): hadoop-daemon.sh start namenode
<2> DataNode (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): hadoop-daemon.sh start datanode
<3> JournalNode (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): hadoop-daemon.sh start journalnode
<4> ZKFC (Hadoop-NN-01, Hadoop-NN-02): hadoop-daemon.sh start zkfc
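Each hadoop-daemon.sh command above must run on the host that owns the role. A minimal sketch that drives all of them from Hadoop-NN-01 over SSH, again assuming passwordless SSH on port 6000; the JournalNodes from step (1) should already be up before the NameNodes start:
# Hypothetical helper: start one daemon type on a list of hosts
start_daemon() {
    local daemon=$1; shift
    for host in "$@"; do
        ssh -p 6000 hadoopuser@$host \
            "/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/sbin/hadoop-daemon.sh start $daemon"
    done
}
start_daemon namenode    Hadoop-NN-01 Hadoop-NN-02
start_daemon datanode    Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03
start_daemon journalnode Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03
start_daemon zkfc        Hadoop-NN-01 Hadoop-NN-02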
(6) Verify
<1> Processes
On the NameNodes: jps
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps
9329 JournalNode
9875 NameNode
10155 DFSZKFailoverController
10223 Jps
On the DataNodes: jps
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ jps
9498 Jps
9019 JournalNode
9389 DataNode
5613 QuorumPeerMain
<2> Web UI:
Active node: http://192.168.254.151:50070
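Besides the web UI, the HA state can be queried with hdfs haadmin; nn1 and nn2 below are placeholders for the NameNode IDs configured in dfs.ha.namenodes.<nameservice> in your hdfs-site.xml:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs haadmin -getServiceState nn1    # expect: active
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs haadmin -getServiceState nn2    # expect: standby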
(7) Stop: stop-dfs.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-dfs.sh
5. Starting YARN
(1) Start
<1> Cluster-wide start
Start YARN on Hadoop-NN-01; the script lives in $HADOOP_HOME/sbin:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-yarn.sh
On the standby Hadoop-NN-02, start the ResourceManager separately:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh start resourcemanager
<2> Per-daemon start
ResourceManager (Hadoop-NN-01, Hadoop-NN-02): yarn-daemon.sh start resourcemanager
NodeManager (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): yarn-daemon.sh start nodemanager
(2) Verify
<1> Processes:
ResourceManager: Hadoop-NN-01, Hadoop-NN-02
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps
9329 JournalNode
9875 NameNode
10355 ResourceManager
10646 Jps
10155 DFSZKFailoverController
NodeManager: Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ jps
9552 NodeManager
9680 Jps
9019 JournalNode
9389 DataNode
5613 QuorumPeerMain
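NodeManager registration can also be confirmed from the ResourceManager side:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn node -list    # should list Hadoop-DN-01..03 in RUNNING state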
<2> Web UI
ResourceManager (Active): http://192.168.254.151:23188
ResourceManager (Standby): http://192.168.254.152:23188
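As with HDFS, the RM HA state can be queried from the command line; rm1 and rm2 are placeholders for the IDs configured in yarn.resourcemanager.ha.rm-ids in your yarn-site.xml:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn rmadmin -getServiceState rm1
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn rmadmin -getServiceState rm2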
(3) Stop
On Hadoop-NN-01: stop-yarn.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-yarn.sh
On Hadoop-NN-02: yarn-daemon.sh stop resourcemanager
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh stop resourcemanager
Appendix: Summary of Common Hadoop Commands
# Step 1: Start ZooKeeper
[hadoopuser@Linux01 ~]$ zkServer.sh start
[hadoopuser@Linux01 ~]$ zkServer.sh stop    # stop

# Step 2: Start the JournalNode (on the two NameNodes):
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoopuser/hadoop-dir/hadoop-2.6.0-cdh5.6.0/logs/hadoop-puppet-journalnode-BigData-03.out
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh stop journalnode    # stop
stopping journalnode

# Step 3: Start DFS:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-dfs.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-dfs.sh    # stop

# Step 4: Start YARN:
# Start YARN on Hadoop-NN-01
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-yarn.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-yarn.sh    # stop
# Start the standby RM on Hadoop-NN-02
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh start resourcemanager
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh stop resourcemanager    # stop

# If HBase is installed
# Start HBase's Thrift server on Hadoop-NN-01:
[hadoopuser@Linux01 bin]$ hbase-daemon.sh start thrift
[hadoopuser@Linux01 bin]$ hbase-daemon.sh stop thrift    # stop
# Start HBase on Hadoop-NN-01:
[hadoopuser@Linux01 bin]$ hbase/bin/start-hbase.sh
[hadoopuser@Linux01 bin]$ hbase/bin/stop-hbase.sh    # stop

# If RHive is installed
# Start Rserve on Hadoop-NN-01:
[hadoopuser@Linux01 ~]$ Rserve --RS-conf /usr/local/lib64/R/Rserv.conf
# To stop it, kill the process directly
# Start the Hive remote service on Hadoop-NN-01 (RHive connects to HiveServer over Thrift, so the background Thrift service must be running):
[hadoopuser@Linux01 ~]$ nohup hive --service hiveserver2 &    # note: hiveserver2 here
Appendix: Common Hadoop Environment Variable Settings
# JAVA
export JAVA_HOME=/usr/java/jdk1.8.0_73
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# MYSQL
export PATH=/usr/local/mysql/bin:/usr/local/mysql/lib:$PATH

# Hive
export HIVE_HOME=/home/hadoopuser/hive
export PATH=$PATH:$HIVE_HOME/bin

# Hadoop
export HADOOP_HOME=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
export HADOOP_CONF_DIR=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop
export HADOOP_CMD=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/bin/hadoop
export HADOOP_STREAMING=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.6.0.jar
export JAVA_LIBRARY_PATH=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/lib/native/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# R
export R_HOME=/usr/local/lib64/R
export PATH=$PATH:$R_HOME/bin
export RHIVE_DATA=/usr/local/lib64/R/rhive/data
export CLASSPATH=.:/usr/local/lib64/R/library/rJava/jri
export LD_LIBRARY_PATH=/usr/local/lib64/R/library/rJava/jri
export RServe_HOME=/usr/local/lib64/R/library/Rserve

# thrift
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig/

# HBase
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Zookeeper
export ZOOKEEPER_HOME=/home/hadoopuser/zookeeper-3.4.5-cdh5.6.0
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Sqoop2
export SQOOP2_HOME=/home/hadoopuser/sqoop2-1.99.5-cdh5.6.0
export CATALINA_BASE=$SQOOP2_HOME/server
export PATH=$PATH:$SQOOP2_HOME/bin

# Scala
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:${SCALA_HOME}/bin

# Spark
export SPARK_HOME=/home/hadoopuser/spark-1.5.0-cdh5.6.0
export PATH=$PATH:${SPARK_HOME}/bin

# Storm
export STORM_HOME=/home/hadoopuser/apache-storm-0.9.6
export PATH=$PATH:$STORM_HOME/bin

# Kafka
export KAFKA_HOME=/home/hadoopuser/kafka_2.10-0.9.0.1
export PATH=$PATH:$KAFKA_HOME/bin
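These exports only take effect in new shells; assuming they live in ~/.bashrc, reload them and run a quick sanity check:
[hadoopuser@Linux01 ~]$ source ~/.bashrc
[hadoopuser@Linux01 ~]$ hadoop version    # should report Hadoop 2.6.0-cdh5.6.0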