5. Hadoop Pseudo-Distributed Configuration
5.1 Edit etc/hadoop/hadoop-env.sh (note: if JAVA_HOME is already set, substitute your own JAVA_HOME)
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
5.2 Add the Hadoop environment variable
export HADOOP_HOME=/usr/local/cdh/hadoop
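To make the variable persist across sessions, it is typically appended to ~/.bashrc (or /etc/profile). A minimal sketch, assuming the CDH path used in this guide; adding the bin and sbin directories to PATH is an optional convenience so the commands in section 6 can be run without the bin/ and sbin/ prefixes:

```shell
# Sketch: persistent Hadoop environment variables (path assumes the CDH layout in this guide)
export HADOOP_HOME=/usr/local/cdh/hadoop
# Optional: put the hadoop/hdfs client and start/stop scripts on the PATH
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
```

After editing ~/.bashrc, run `source ~/.bashrc` so the current shell picks up the change.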
5.3 Edit etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Edit etc/hadoop/hdfs-site.xml (the /usr/local/cdh/hadoop/data/dfs/name directory must be created manually before formatting, otherwise formatting fails):
<configuration>
<property>
<!-- enable WebHDFS -->
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/cdh/hadoop/data/dfs/name</value>
<description>Local directory where the NameNode stores the name table (fsimage); change to match your installation</description>
</property>
<property>
<name>dfs.namenode.edits.dir</name>
<value>${dfs.namenode.name.dir}</value>
<description>Local directory where the NameNode stores the transaction file (edits); change to match your installation</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/cdh/hadoop/data/dfs/data</value>
<description>Local directory where the DataNode stores blocks; change to match your installation</description>
</property>
</configuration>
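The note above warns that formatting fails if the name directory does not already exist, so both DFS directories should be created by hand first. A sketch, with DFS_BASE defaulting to this guide's path (override it if your layout differs):

```shell
# Sketch: create the NameNode and DataNode directories before formatting HDFS.
# DFS_BASE defaults to the path configured in hdfs-site.xml above.
DFS_BASE="${DFS_BASE:-/usr/local/cdh/hadoop/data/dfs}"
mkdir -p "$DFS_BASE/name" "$DFS_BASE/data"
```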
Edit etc/hadoop/mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Edit etc/hadoop/yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
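A stray or unclosed tag in any of the four *-site.xml files above is a common cause of startup failures, so it can be worth confirming each file is well-formed before moving on. A small hypothetical helper, assuming python3 is on the PATH:

```shell
# Sketch: check that an edited *-site.xml file is well-formed XML.
# check_xml is a hypothetical helper, not part of Hadoop; assumes python3 is installed.
check_xml() {
  python3 -c "import xml.dom.minidom, sys; xml.dom.minidom.parse(sys.argv[1])" "$1" \
    && echo "$1: OK"
}
# Example: check_xml etc/hadoop/core-site.xml
```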
6. Start Hadoop and Verify the Installation
Format HDFS first:
bin/hdfs namenode -format
Start the daemons:
sbin/start-dfs.sh
sbin/start-yarn.sh
Check the running processes with jps:
7448 ResourceManager
8277 SecondaryNameNode
7547 NodeManager
8079 DataNode
7975 NameNode
8401 Jps
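A quick way to confirm that all five daemons came up is to scan the jps output for each expected name. The sketch below uses the captured sample from above; on a live node, replace the here-string with `jps_output=$(jps)`:

```shell
# Sketch: verify the five expected Hadoop daemons appear in jps output.
# jps_output here is the sample captured above; on a real node use: jps_output=$(jps)
jps_output='7448 ResourceManager
8277 SecondaryNameNode
7547 NodeManager
8079 DataNode
7975 NameNode
8401 Jps'
missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # the leading space keeps "NameNode" from also matching "SecondaryNameNode"
  if echo "$jps_output" | grep -q " $d\$"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
    missing=1
  fi
done
```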
1. Open a browser and visit the NameNode web UI:
NameNode - http://localhost:50070/
2. Create the HDFS user directories:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
3. Copy the input files into HDFS:
$ bin/hdfs dfs -put etc/hadoop input
4. Run the example job:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.1.0.jar grep input output 'dfs[a-z.]+'
5. View the output:
$ bin/hdfs dfs -get output output
$ cat output/*
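Since dfs.webhdfs.enabled was set to true in hdfs-site.xml, the same HDFS paths can also be inspected over plain HTTP via the WebHDFS REST API. A sketch of building such a request URL (the curl call at the end assumes the cluster from this guide is running):

```shell
# Sketch: WebHDFS REST URL for listing an HDFS directory over HTTP
# (dfs.webhdfs.enabled=true was set in hdfs-site.xml above)
NAMENODE=localhost:50070   # NameNode web address used in this guide
HDFS_PATH=/user
URL="http://$NAMENODE/webhdfs/v1$HDFS_PATH?op=LISTSTATUS"
echo "$URL"
# On a running cluster:  curl -s "$URL"
```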
More Hadoop-related information is available on the Hadoop topic page: http://www.linuxidc.com/topicnews.aspx?tid=13
Permanent link to this article: http://www.linuxidc.com/Linux/2014-09/106372.htm