Setting Up a Hadoop 2.5.0 Pseudo-Distributed Environment

[Date: 2019-04-30]  Source: Linux社区  Author: tangxc8282

This article walks through the steps for setting up a Hadoop 2.5.0 pseudo-distributed environment on Linux. Before installing Hadoop itself, some prerequisite work is required, including creating a dedicated user, installing the JDK, and disabling the firewall.

I. Create the hadoop user

As root, create a hadoop user. To make things easier to work with in a lab environment, grant the hadoop user sudo privileges. The commands are as follows:

useradd hadoop # add the hadoop user
passwd hadoop # set its password
visudo # then add the following line to the sudoers file it opens:
hadoop ALL=(root) NOPASSWD: ALL
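
To confirm the passwordless sudo rule works, a quick check:

su - hadoop # switch to the new user
sudo whoami # should print "root" without asking for a password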

II. Set up the Hadoop pseudo-distributed environment

1. Disable the firewall and SELinux

Disable SELinux:

sudo vi /etc/sysconfig/selinux # open the SELinux config file
SELINUX=disabled # change the SELINUX value to disabled
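
This file change only takes effect after a reboot. To also turn enforcement off in the running session (same CentOS 6-style tooling as the firewall commands below):

getenforce # show the current mode: Enforcing / Permissive / Disabled
sudo setenforce 0 # switch to Permissive until the next reboot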

Disable the firewall:

sudo service iptables status # check the firewall status
sudo service iptables stop # stop the firewall
sudo chkconfig iptables off # disable the firewall at boot

2. Install the JDK

First, check whether the system ships with a bundled JDK; if it does, uninstall it first:

rpm -qa | grep java # check whether a JDK is installed
sudo rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.50.1.11.5.el6_3.x86_64 tzdata-java-2012j-1.el6.noarch java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64 # remove the bundled JDK packages

Next, install the JDK:

step1. Unpack the archive:

tar -zxf jdk-7u67-linux-x64.tar.gz -C /usr/local/

step2. Configure the environment variables and verify the installation:

sudo vi /etc/profile # open the profile file
##JAVA_HOME
export JAVA_HOME=/usr/local/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin

# reload the file
source /etc/profile # run as the root user

# verify the configuration
java -version
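
If java -version still reports an old version, check that the variables took effect:

echo $JAVA_HOME # should print /usr/local/jdk1.7.0_67
which java # should resolve to a path inside $JAVA_HOME/bin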

3. Install Hadoop

step1: Unpack the Hadoop archive

tar -zxvf /opt/software/hadoop-2.5.0.tar.gz -C /opt/software/

Recommendation: delete the doc directory under /opt/software/hadoop-2.5.0/share; it contains only documentation and takes up noticeable space.
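
For example:

rm -rf /opt/software/hadoop-2.5.0/share/doc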

step2: Set JAVA_HOME in the hadoop-env.sh, mapred-env.sh, and yarn-env.sh files under etc/hadoop

export JAVA_HOME=/usr/local/jdk1.7.0_67
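
One quick way to apply this to all three files (a sketch; appending works because these scripts are sourced, so the last assignment wins):

cd /opt/software/hadoop-2.5.0
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
    echo 'export JAVA_HOME=/usr/local/jdk1.7.0_67' >> etc/hadoop/$f
done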

step3: Edit core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>name</name>
        <value>my-study-cluster</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata01:8020</value>
    </property>
    <!-- Base directory for temporary files generated by Hadoop -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/software/hadoop-2.5.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>bigdata01</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
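
The configuration above refers to the machine by the hostname bigdata01, so that name must resolve locally. On a single-node setup an /etc/hosts entry is enough (the IP below is a placeholder for the machine's real address):

# append to /etc/hosts -- 192.168.1.101 is a placeholder
192.168.1.101   bigdata01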

step4: Edit hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/software/hadoop-2.5.0/data/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/software/hadoop-2.5.0/data/data</value>
    </property>
</configuration>
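
The three local directories referenced in core-site.xml and hdfs-site.xml do not exist yet; creating them up front as the hadoop user avoids permission surprises later:

mkdir -p /opt/software/hadoop-2.5.0/data/{tmp,name,data}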

step5: Edit mapred-site.xml
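
In the Hadoop 2.5.0 distribution this file ships only as a template, so create it first (run from the Hadoop root directory):

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml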

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>bigdata01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>bigdata01:19888</value>
    </property>
</configuration>

step6: Edit yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bigdata01</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>106800</value>
    </property>
    <property>
        <name>yarn.log.server.url</name>
        <value>http://bigdata01:19888/jobhistory/job/</value>
    </property>
</configuration>

step7: Edit the slaves file

bigdata01
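
A quick way to write it, from the Hadoop root directory:

echo bigdata01 > etc/hadoop/slaves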

step8: Format the NameNode

bin/hdfs namenode -format
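
Note: format the NameNode only once; reformatting wipes the HDFS metadata under dfs.namenode.name.dir. On success, the output should include a line ending in "has been successfully formatted."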

step9: Start the daemons

## Option 1: start each daemon individually
# start the namenode
sbin/hadoop-daemon.sh start namenode
# start the datanode
sbin/hadoop-daemon.sh start datanode
# start the resourcemanager
sbin/yarn-daemon.sh start resourcemanager
# start the nodemanager
sbin/yarn-daemon.sh start nodemanager
# start the secondarynamenode
sbin/hadoop-daemon.sh start secondarynamenode
# start the job history server
sbin/mr-jobhistory-daemon.sh start historyserver

## Option 2: use the batch scripts
sbin/start-dfs.sh # starts the namenode, datanode, and secondarynamenode
sbin/start-yarn.sh # starts the resourcemanager and nodemanager
sbin/mr-jobhistory-daemon.sh start historyserver # starts the job history server
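
Note that the Option 2 scripts log in to each host (here just bigdata01) over SSH, so the hadoop user needs passwordless SSH set up first; afterwards, jps is a quick way to confirm all the daemons are up. A sketch:

# one-time passwordless SSH setup for the hadoop user
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id hadoop@bigdata01

# verify the running daemons
jps
# expected: NameNode, DataNode, SecondaryNameNode,
#           ResourceManager, NodeManager, JobHistoryServer (plus Jps itself)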

step10: Verify

1. Open the HDFS web UI in a browser, using its external port, 50070:

  http://bigdata01:50070

2. Open the YARN web UI in a browser, using its external port, 8088:

  http://bigdata01:8088

3. Run the WordCount example program:

  bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

  Note: the input and output directories are up to you.
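
A minimal end-to-end run, using the Hadoop config files as sample input (the input/output paths here are arbitrary examples):

bin/hdfs dfs -mkdir -p input # create the input directory in HDFS
bin/hdfs dfs -put etc/hadoop/*.xml input # upload some text files to count
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output
bin/hdfs dfs -cat output/part-r-00000 # inspect the result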

Done!

These are the steps for setting up a Hadoop 2.5.0 pseudo-distributed environment. If you run into any problems, please point them out. Thanks!

