Setting Up a Hadoop Client: Accessing Hadoop from a Host Outside the Cluster

[Date: 2018-03-01]  Source: Linux社区  Author: Linux


1. Add a host mapping (the same mapping used for the namenode):

Append the last line shown below:

[root@localhost ~]# su - root

[root@localhost ~]# vi /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.48.129    hadoop-master
[root@localhost ~]#
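To confirm the mapping works, the name can be resolved on the client (for example with getent hosts hadoop-master). The snippet below is a minimal local sketch of what the added line should look like, using the IP and hostname from this article:

```shell
# Sanity-check the /etc/hosts line added above: the second field should be
# the cluster hostname and the first field its IP (values from this article).
entry='192.168.48.129    hadoop-master'
echo "$entry" | awk '$2 == "hadoop-master" {print $1}'
# On the real client, `getent hosts hadoop-master` should print the same IP.
```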


2. Create the hadoop user

Create the hadoop group.

Create the user with useradd -d /usr/hadoop -g hadoop -m hadoop (this creates user hadoop with home directory /usr/hadoop and primary group hadoop).

passwd hadoop sets the password for hadoop (set to hadoop here).

[root@localhost ~]# groupadd hadoop 
[root@localhost ~]# useradd -d /usr/hadoop -g hadoop -m hadoop
[root@localhost ~]# passwd hadoop

3. Configure the JDK environment

This article installs hadoop-2.7.5, which requires JDK 7 or later. Skip this step if a suitable JDK is already installed.

For JDK installation see: http://www.linuxidc.com/Linux/2017-01/139874.htm or "Installing JDK 1.7 on CentOS 7.2": http://www.linuxidc.com/Linux/2016-11/137398.htm

Alternatively, copying the JDK files directly from the master makes it easier to keep the versions consistent.

[root@localhost java]# su - root
[root@localhost java]# mkdir -p /usr/java
[root@localhost java]# scp -r hadoop@hadoop-master:/usr/java/jdk1.7.0_79 /usr/java
[root@localhost java]# ll
total 12
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 default
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 jdk1.7.0_79
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 latest

Set the Java and Hadoop environment variables.

Make sure /usr/java/jdk1.7.0_79 exists, then edit /etc/profile:

su - root

vi /etc/profile

unset i
unset -f pathmunge
JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
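The two unset lines above are the existing end of the stock /etc/profile; the three assignments are what gets appended after them. If the variables are not visible in child shells, exporting them explicitly may be needed. A sketch using the paths from this article:

```shell
# Appended to the end of /etc/profile (paths as used in this article)
JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
```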

Apply the changes (important):

[root@localhost ~]# source /etc/profile
[root@localhost ~]#

Confirm the JDK after installation:

[hadoop@localhost ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[hadoop@localhost ~]$
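hadoop-2.7.5 requires JDK 7 or later, so the major version in the output above is what matters. A minimal local sketch of extracting it from a java -version style string (a sample string is used here, so no JDK is needed to run it):

```shell
# Extract the major Java version from a `java -version` style string.
# The sample string below matches the output shown above.
ver='java version "1.7.0_79"'
echo "$ver" | sed 's/.*"\(1\.[0-9]*\)\..*/\1/'
```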

4. Set up the Hadoop environment

Copy the already-configured Hadoop directory from the namenode to this host:

[root@localhost ~]# su - hadoop
Last login: Sat Feb 24 14:04:55 CST 2018 on pts/1
[hadoop@localhost ~]$ pwd
/usr/hadoop
[hadoop@localhost ~]$ scp -r hadoop@hadoop-master:/usr/hadoop/hadoop-2.7.5 .
The authenticity of host 'hadoop-master (192.168.48.129)' can't be established.
ECDSA key fingerprint is 1e:cd:d1:3d:b0:5b:62:45:a3:63:df:c7:7a:0f:b8:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-master,192.168.48.129' (ECDSA) to the list of known hosts.
hadoop@hadoop-master's password:

[hadoop@localhost ~]$ ll
total 0
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Desktop
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Documents
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Downloads
drwxr-xr-x 10 hadoop hadoop 150 Feb 24 14:30 hadoop-2.7.5
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Music
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Pictures
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Public
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Templates
drwxr-xr-x  2 hadoop hadoop  6 Feb 24 11:32 Videos
[hadoop@localhost ~]$

At this point the Hadoop client installation is complete and ready to use.

Running the hadoop command produces the following output:

[hadoop@localhost ~]$ hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                  run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                      note: please use "yarn jar" to launch
                            YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
[hadoop@localhost ~]$

5. Using Hadoop

Create a local file:

[hadoop@localhost ~]$ hdfs dfs -ls
Found 1 items
drwxr-xr-x  - hadoop supergroup          0 2018-02-22 23:41 output
[hadoop@localhost ~]$ vi my-local.txt
hello boy!
yehyeh

Upload the local file to the cluster:

[hadoop@localhost ~]$ hdfs dfs -mkdir upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -ls
Found 2 items
drwxr-xr-x  - hadoop supergroup          0 2018-02-22 23:41 output
drwxr-xr-x  - hadoop supergroup          0 2018-02-23 22:38 upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -put my-local.txt upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
Found 1 items
-rw-r--r--  3 hadoop supergroup        18 2018-02-23 22:45 upload/my-local.txt
[hadoop@localhost ~]$ hdfs dfs -cat upload/my-local.txt
hello boy!
yehyeh
[hadoop@localhost ~]$
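The 18 bytes reported by -ls above are exactly the size of the local file. A quick local check, with no cluster needed (the content is the two lines typed into my-local.txt above):

```shell
# Recreate the local file and confirm its size matches the 18 bytes
# reported by `hdfs dfs -ls upload` above ("hello boy!\n" + "yehyeh\n").
printf 'hello boy!\nyehyeh\n' > my-local.txt
wc -c < my-local.txt
```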

PS: whether the local Java version must match the JAVA_HOME configured in etc/hadoop/hadoop-env.sh of the directory copied from the master has not been verified; in this article the two are kept consistent.
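One way to check the consistency mentioned above is to compare the local JAVA_HOME with the value configured in the copied hadoop-env.sh. A sketch (the sample line below is an assumption based on this article's setup; on the real client you would read it from etc/hadoop/hadoop-env.sh under the copied hadoop-2.7.5 directory):

```shell
# Sample line as it would appear in the copied hadoop-env.sh
# (the JDK path is assumed from this article's setup):
line='export JAVA_HOME=/usr/java/jdk1.7.0_79'
# Strip the prefix to get the configured path:
conf_java="${line#export JAVA_HOME=}"
echo "$conf_java"
# On the client, compare it with the local setting:
#   [ "$conf_java" = "$JAVA_HOME" ] && echo "JAVA_HOME matches"
```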


Permanent link to this article: https://www.linuxidc.com/Linux/2018-03/151129.htm
