Apache Hadoop 2.2.0 HDFS HA + YARN多机部署

[Date: 2014-09-07]  Source: Linux Community  Author: Gandalf_lee

Deployment logical architecture:

[Figure: deployment logical architecture]

HDFS HA physical deployment architecture:

[Figure: HDFS HA physical deployment architecture]

Note:

JournalNode uses very few resources; even in real production environments, the JournalNodes are typically deployed on the same machines as the DataNodes.

In production, it is recommended that the active and standby NameNodes each get a dedicated machine.

YARN deployment architecture:

[Figure: YARN deployment architecture]

Deployment diagram of my personal lab environment:

[Figure: personal lab environment deployment]

Ubuntu 12, 32-bit

Apache Hadoop 2.2.0

JDK 1.7


Preparation
1. Configure /etc/hosts on all 4 machines.
2. Set up passwordless SSH from the NameNode to all of the other nodes; one-way passwordless login is enough, two-way is not needed. Passwordless login is only used when starting and stopping the cluster (see the sketch after this list).
3. Install the JDK.
4. Create a dedicated account; do not deploy or manage Hadoop with the root account.
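A minimal sketch of steps 1 and 2, using the four hostnames that appear later in this article (the IP addresses are placeholders; substitute your own). Run the SSH part as the dedicated Hadoop account on the NameNode host:

# Step 1: identical host entries on all 4 machines (IPs are placeholders)
sudo tee -a /etc/hosts <<'EOF'
192.168.1.11  SY-0217
192.168.1.12  SY-0355
192.168.1.13  SY-0225
192.168.1.14  SY-0226
EOF

# Step 2: one-way passwordless SSH from the NameNode to every node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in SY-0217 SY-0355 SY-0225 SY-0226; do
  ssh-copy-id "$host"   # type the password once per host
done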



Deploying Hadoop
Step 1: Unpack the Hadoop tarball into the same fixed directory on every node (the directory must be identical across nodes), e.g. /home/yarn/Hadoop/hadoop-2.2.0. Alternatively, unpack on one node, finish the configuration in step 2, and then copy the tree to the other nodes with scp.
Step 2: Edit the configuration files (configure on one node only, then distribute to the other nodes with scp, as sketched below).
Configuration file path: etc/hadoop/
hadoop-env.sh
Set the JDK path: search the file for the following lines and point JAVA_HOME at your JDK installation (JDK 1.7 in this setup):
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
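Once all the configuration files below have been edited on one node, a minimal sketch of distributing the configured tree to the remaining nodes with scp (hostnames and install path follow this article; adjust to your layout):

for host in SY-0355 SY-0225 SY-0226; do
  scp -r /home/yarn/Hadoop/hadoop-2.2.0 "$host":/home/yarn/Hadoop/
done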

core-site.xml
Specify the hostname/IP and port of the Active NameNode; the port can be changed as needed:
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://SY-0217:8020</value>
</property>
</configuration>
Note: SY-0217 above is a fixed host, which only suits manual switching of the active and standby NameNodes. If you want automatic failover via ZooKeeper, you must configure a logical name instead; this is described in detail later.
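To sanity-check that the value is picked up, Hadoop 2.x ships a getconf helper; from the Hadoop install directory:

bin/hdfs getconf -confKey fs.defaultFS
# expected output: hdfs://SY-0217:8020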

mapred-site.xml
 
<configuration>
<!-- MR YARN Application properties -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>The runtime framework for executing MapReduce jobs.
  Can be one of local, classic or yarn.
  </description>
</property>
 
<!--
jobhistory properties
The JobHistory Server lets you inspect information about applications that have finished running.
-->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>SY-0355:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>
 
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>SY-0355:19888</value>
  <description>MapReduce JobHistory Server Web UI host:port</description>
</property>
</configuration>
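The JobHistory Server is not started by the usual start-yarn.sh; it is launched separately on the node configured above (SY-0355 here) with the daemon script shipped in Hadoop 2.2's sbin directory:

sbin/mr-jobhistory-daemon.sh start historyserver
# the web UI is then available at http://SY-0355:19888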

hdfs-site.xml
This is the critical configuration file!
<configuration>
 
<property>
  <name>dfs.nameservices</name>
  <value>hadoop-test</value>
  <description>
Specify the nameservice name; any name will do.
    Comma-separated list of nameservices.
  </description>
</property>
 
<property>
  <name>dfs.ha.namenodes.hadoop-test</name>
  <value>nn1,nn2</value>
  <description>
Specify the logical names of the NameNodes under this nameservice.
    The prefix for a given nameservice, contains a comma-separated
    list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).
  </description>
</property>
 
<property>
  <name>dfs.namenode.rpc-address.hadoop-test.nn1</name>
  <value>SY-0217:8020</value>
  <description>
Configure the RPC address for "nameservice name.NameNode logical name".
    RPC address for namenode1 of hadoop-test
  </description>
</property>
 
<property>
  <name>dfs.namenode.rpc-address.hadoop-test.nn2</name>
  <value>SY-0355:8020</value>
  <description>
Configure the RPC address for "nameservice name.NameNode logical name".
    RPC address for namenode2 of hadoop-test
  </description>
</property>
 
<property>
  <name>dfs.namenode.http-address.hadoop-test.nn1</name>
  <value>SY-0217:50070</value>
  <description>
Configure the HTTP address for "nameservice name.NameNode logical name".
    The address and the base port where the dfs namenode1 web ui will listen on.
  </description>
</property>
 
<property>
  <name>dfs.namenode.http-address.hadoop-test.nn2</name>
  <value>SY-0355:50070</value>
  <description>
Configure the HTTP address for "nameservice name.NameNode logical name".
    The address and the base port where the dfs namenode2 web ui will listen on.
  </description>
</property>
 
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/dongxicheng/hadoop/hdfs/name</value>
  <description>
Path where the NameNode stores its metadata;
    if the machine has multiple disks, configuring multiple comma-separated paths is recommended.
Determines where on the local filesystem the DFS name node
      should store the name table(fsimage).  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
</property>
 
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/dongxicheng/hadoop/hdfs/data</value>
  <description>
Path where the DataNode stores its data;
    if the machine has multiple disks, configuring multiple comma-separated paths is recommended.
Determines where on the local filesystem an DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.
  </description>
</property>
 
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://SY-0355:8485;SY-0225:8485;SY-0226:8485/hadoop-journal</value>
  <description>
JournalNode configuration, in three parts:
(1) qjournal is the protocol and needs no change;
(2) next come the host/IP:port of the three machines running JournalNodes, separated by semicolons;
(3) the trailing hadoop-journal is the journal's namespace, which can be any name.
A directory on shared storage between the multiple namenodes
  in an HA cluster. This directory will be written by the active and read
  by the standby in order to keep the namespaces synchronized. This directory
  does not need to be listed in dfs.namenode.edits.dir above. It should be
  left empty in a non-HA cluster.
  </description>
</property>
 
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/dongxicheng/hadoop/hdfs/journal/</value>
  <description>
Local directory where the JournalNode stores its data; a single path is enough.
  </description>
</property>
 
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
  <description>
Whether to fail over automatically. ZooKeeper is not configured here, so automatic failover is not possible, and this is therefore set to false.
    Whether automatic failover is enabled. See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>
 
</configuration>
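Since automatic failover is disabled, the active/standby roles are switched by hand with hdfs haadmin, addressing the NameNodes by the logical names nn1 and nn2 defined above. A minimal sketch (run from the Hadoop install directory once both NameNodes are up):

# make nn1 the active NameNode
bin/hdfs haadmin -transitionToActive nn1

# check the current state of each NameNode
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2

# fail over from nn1 to nn2
bin/hdfs haadmin -failover nn1 nn2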

yarn-site.xml
<configuration>
 
  <!-- Resource Manager Configs -->
  <property>
    <description>
Specify the ResourceManager host;
    The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>    
 
  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
 
  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
 
  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>
 
  <property>
    <description>The https address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>
 
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>
 
  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
 
  <property>
    <description>
Specify the FairScheduler as the scheduler;
The class to use as the resource scheduler.
</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
 
  <property>
    <description>
Path to the FairScheduler configuration file;
fair-scheduler conf location
</description>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
  </property>
 
  <property>
    <description>
Local working directories for the NodeManager; multiple comma-separated paths are recommended.
List of directories to store localized files in. An 
      application's localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid}, will
      be subdirectories of this.
   </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/yarn/Hadoop/yarn/local</value>
  </property>
 
  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
 
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/home/yarn/Hadoop/yarn/tmp/logs</value>
  </property>
 
  <property>
    <description>
Amount of memory available on each NodeManager.
Amount of physical memory, in MB, that can be allocated for containers.
Note: my NM VMs have 1 GB of RAM and 1 CPU core. If this value is set below 1024, the NM cannot start (the scheduler's minimum container allocation, yarn.scheduler.minimum-allocation-mb, defaults to 1024 MB, so an NM advertising less is rejected) and reports:
NodeManager from slavenode2 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
    </description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1024</value>
  </property>
 
  <property>
    <description>
Number of CPU cores available on each NodeManager;
Number of CPU cores that can be allocated 
    for containers.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
 
  <property>
    <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
 
</configuration>
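The allocation file referenced by yarn.scheduler.fair.allocation.file is not shown on this page. A minimal sketch of what a Hadoop 2.2-era fairscheduler.xml might contain; the queue name and resource limits here are illustrative placeholders, not values from this article:

<?xml version="1.0"?>
<allocations>
  <!-- illustrative queue; name and limits are placeholders -->
  <queue name="default">
    <minResources>512 mb, 1 vcores</minResources>
    <maxResources>1024 mb, 1 vcores</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <weight>1.0</weight>
  </queue>
</allocations>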

For more details, continue reading on the next page: http://www.linuxidc.com/Linux/2014-09/106289p2.htm
