Hadoop in Action (Part 2)
Author | WenasWei
Preface
The previous installment of Hadoop in Action covered the local and pseudo-distributed installation modes of Hadoop, the offline batch-processing technology. This article continues with the cluster (fully distributed) installation and covers the following topics:
- Linux host deployment plan
- ZooKeeper registry installation
- Cluster-mode installation
- Hadoop directory structure and command help
- Dynamically adding and removing cluster nodes
1 Configuring the Linux Environment and Installing Hadoop
Hadoop cluster deployment plan: seven hosts, hadoop1 through hadoop7, are used in this article; the role of each host (NameNode, ResourceManager, DataNode, JournalNode, ZooKeeper, and so on) follows from the configuration files shown in the sections below.
Hadoop relies on some basic configuration of the Linux environment: adding the Hadoop group and user, setting up passwordless SSH login, and installing the JDK.
1.1 Configuring the Ubuntu Network in VMWare
If system-configuration problems come up while installing the Ubuntu 18.04 Linux operating system under VMWare, you can work through the shared blog post (CSDN link): VMWare中Ubuntu网络配置
It covers the following important steps:
- Ubuntu system information and changing the hostname
- Setting up VMWare NAT networking on Windows
- Configuring the Linux gateway and a static IP
- Editing the Linux hosts file
- Passwordless SSH login on Linux (a short sketch follows this list)
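For the passwordless-login step, a minimal sketch (run as the user that will start the cluster; the hostnames follow this article's deployment plan):
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa       # generate a key pair without a passphrase
$ for host in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5 hadoop6 hadoop7; do
>   ssh-copy-id ${host}                           # append the public key to each host's authorized_keys
> done
$ ssh hadoop2 hostname                            # should print "hadoop2" without asking for a password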
1.2 Adding the Hadoop Group and User
1.2.1 Add the Hadoop group and user
Log in to the Ubuntu 18.04 virtual machine as root and run:
$ groupadd hadoop
$ useradd -r -g hadoop hadoop
1.2.2 Grant directory permissions to the Hadoop user
Grant ownership of the /usr/local directory (plus /tmp and /home) to the hadoop user:
$ chown -R hadoop.hadoop /usr/local/
$ chown -R hadoop.hadoop /tmp/
$ chown -R hadoop.hadoop /home/
1.2.3 Grant sudo permission to the Hadoop user
Edit the /etc/sudoers file and add hadoop ALL=(ALL:ALL) ALL below the line root ALL=(ALL:ALL) ALL:
$ vi /etc/sudoers
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root    ALL=(ALL:ALL) ALL
hadoop  ALL=(ALL:ALL) ALL
%admin  ALL=(ALL) ALL
%sudo   ALL=(ALL:ALL) ALL
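A safer way to make the same change is visudo, which validates the syntax before saving; a small sketch equivalent to the manual edit above:
$ visudo
# add the following line below "root ALL=(ALL:ALL) ALL":
hadoop ALL=(ALL:ALL) ALL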
1.2.4 Set a login password for the Hadoop user
$ passwd hadoop
Enter new UNIX password: <enter the new password>
Retype new UNIX password: <confirm the new password>
passwd: password updated successfully
1.3 Installing the JDK
To install the JDK on Linux, follow section 3.2 (Installing the JDK on Linux) of the shared blog post《Logstash-数据流引擎》 and install and configure the JDK on every host. CSDN link: Logstash-数据流引擎
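If you do not want to follow the linked post, here is a minimal sketch of the layout assumed by the rest of this article: a JDK 8u152 tarball (file name assumed) unpacked under /usr/local/java, matching the JAVA_HOME used in /etc/profile later on:
$ mkdir -p /usr/local/java
$ tar -zxvf jdk-8u152-linux-x64.tar.gz -C /usr/local/java   # assumes the tarball was already downloaded
$ ls /usr/local/java
jdk1.8.0_152
# then add the JAVA_HOME/PATH entries to /etc/profile as shown in section 1.5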
1.4 Downloading Hadoop from the Official Site
Official download: https://hadoop.apache.org/rel... (Binary download)
- Download with wget (into the current directory), for example version 3.3.0: https://mirrors.bfsu.edu.cn/a...
$ wget https://mirrors.bfsu.edu.cn/apache/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
- Move the archive to the directory where you want to place it, /usr/local, and unpack it:
$ mv ./hadoop-3.3.0.tar.gz /usr/local
$ cd /usr/local
$ tar -zvxf hadoop-3.3.0.tar.gz
1.5 Configuring the Hadoop Environment
- Edit the configuration file /etc/profile:
$ vi /etc/profile
# add the following entries, analogous to the JDK configuration
export JAVA_HOME=/usr/local/java/jdk1.8.0_152
export JRE_HOME=/usr/local/java/jdk1.8.0_152/jre
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/usr/local/hadoop-3.3.0
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
- Apply the configuration:
$ source /etc/profile
- Verify the Hadoop configuration:
$ hadoop version
Hadoop 3.3.0
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r aa96f1871bfd858f9bac59cf2a81ec470da649af
Compiled by brahma on 2020-07-06T18:44Z
Compiled with protoc 3.7.1
From source with checksum 5dc29b802d6ccd77b262ef9d04d19c4
This command was run using /usr/local/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar
The output shows Hadoop 3.3.0, which means the Hadoop environment has been installed and configured successfully.
2 ZooKeeper Registry
This chapter covers:
- Introduction to ZooKeeper
- Downloading and installing ZooKeeper
- ZooKeeper configuration files
- Starting and verifying the ZooKeeper cluster
ZooKeeper cluster host plan:
- hadoop1
- hadoop2
- hadoop3
2.1 Introduction to ZooKeeper
ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and an important component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
The goal of ZooKeeper is to encapsulate complex and error-prone key services and expose a simple, easy-to-use interface on top of an efficient and stable system. It offers a small set of primitives with Java and C interfaces; the ZooKeeper code base also ships interfaces for distributed exclusive locks, leader election, and queues (locks and queues come in both Java and C versions, election only in Java).
ZooKeeper handles service coordination and scheduling: when a client sends a request, it returns the address of the correct server.
2.2 Downloading and Installing ZooKeeper
On Linux (executed on host hadoop1), download the apache-zookeeper-3.5.9-bin release package, move it to the installation directory /usr/local/, unpack it, and rename the directory to zookeeper-3.5.9:
$ wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9-bin.tar.gz
$ mv apache-zookeeper-3.5.9-bin.tar.gz /usr/local/
$ cd /usr/local/
$ tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz
$ mv apache-zookeeper-3.5.9-bin zookeeper-3.5.9
For an offline installation, download the release package from the official site and upload it to the server. Official address: http://zookeeper.apache.org/r...
2.3 ZooKeeper Configuration Files
2.3.1 Configure the ZooKeeper environment variables
Add the ZooKeeper environment variables to the /etc/profile configuration file:
export JAVA_HOME=/usr/local/java/jdk1.8.0_152
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.9
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin:$ZOOKEEPER_HOME/bin
After the change, reload the environment configuration file:
$ source /etc/profile
The file already contains the JDK configuration; if the JDK is not installed yet, see the installation steps in Hadoop in Action (Part 1).
2.3.2 ZooKeeper configuration files
- In the ZooKeeper root directory /usr/local/zookeeper-3.5.9, create the folders data and dataLog:
$ cd /usr/local/zookeeper-3.5.9
$ mkdir data
$ mkdir dataLog
Switch to the newly created data directory and create a myid file whose content is the number 1:
$ cd /usr/local/zookeeper-3.5.9/data
$ vi myid
# file content: the number 1
1
$ cat myid
1
- Modify the configuration file in the conf directory
Copy the sample configuration file zoo_sample.cfg and rename the copy to zoo.cfg:
$ cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg as follows:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.9/data
dataLogDir=/usr/local/zookeeper-3.5.9/dataLog
clientPort=2181
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
2.3.3 Copy ZooKeeper and the system environment variables to the other servers
- Following the server plan, copy ZooKeeper and the /etc/profile configuration file to the hadoop2 and hadoop3 hosts:
$ scp -r zookeeper-3.5.9/ root@hadoop2:/usr/local/
$ scp -r zookeeper-3.5.9/ root@hadoop3:/usr/local/
$ scp /etc/profile root@hadoop2:/etc/
$ scp /etc/profile root@hadoop3:/etc/
- Log in to hadoop2 and hadoop3
On the hadoop2 and hadoop3 hosts, change the content of /usr/local/zookeeper-3.5.9/data/myid to 2 and 3 respectively (a remote one-liner is sketched after this step), then reload the environment configuration file:
$ source /etc/profile
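The myid change on hadoop2 and hadoop3 can also be made remotely from hadoop1; a small sketch, assuming the same root SSH access used for the scp commands above:
$ ssh root@hadoop2 'echo 2 > /usr/local/zookeeper-3.5.9/data/myid'
$ ssh root@hadoop3 'echo 3 > /usr/local/zookeeper-3.5.9/data/myid'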
2.4 Starting the ZooKeeper Cluster
Start the ZooKeeper server on hadoop1, hadoop2 and hadoop3 and check its running state on each of the three servers.
- hadoop1 host:
$ cd /usr/local/zookeeper-3.5.9/bin/
$ ./zkServer.sh start
$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.9/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
This shows that ZooKeeper on the hadoop1 host is running as a follower.
- hadoop2 host:
$ cd /usr/local/zookeeper-3.5.9/bin/
$ ./zkServer.sh start
$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.9/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
This shows that ZooKeeper on the hadoop2 host is running as the leader.
- hadoop3 host:
$ cd /usr/local/zookeeper-3.5.9/bin/
$ ./zkServer.sh start
$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.9/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
This shows that ZooKeeper on the hadoop3 host is running as a follower.
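To double-check the quorum from a single host, you can also query each server over SSH or connect with the bundled CLI client; a quick sketch using the paths from this section (the follower/leader split may differ on your cluster):
$ for host in hadoop1 hadoop2 hadoop3; do
>   ssh ${host} '/usr/local/zookeeper-3.5.9/bin/zkServer.sh status 2>/dev/null | grep Mode'
> done
Mode: follower
Mode: leader
Mode: follower
$ /usr/local/zookeeper-3.5.9/bin/zkCli.sh -server hadoop1:2181,hadoop2:2181,hadoop3:2181 ls /
[zookeeper]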
3 Cluster-Mode Installation
3.1 Modifying the Hadoop Configuration Files
3.1.1 Modify hadoop-env.sh
On host hadoop1, edit hadoop-env.sh (located in /usr/local/hadoop-3.3.0/etc/hadoop) and specify the JAVA_HOME installation directory:
export JAVA_HOME=/usr/local/java/jdk1.8.0_152
# allow the daemons to run as the root user
export HDFS_DATANODE_USER=root
export HADOOP_SECURE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_SHELL_EXECNAME=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
3.1.2 Modify core-site.xml
On host hadoop1, edit core-site.xml (located in /usr/local/hadoop-3.3.0/etc/hadoop) and specify the ZooKeeper cluster nodes:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-3.3.0/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
</configuration>
3.1.3 Modify hdfs-site.xml
On host hadoop1, edit hdfs-site.xml (located in /usr/local/hadoop-3.3.0/etc/hadoop) and specify the NameNodes:
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop1:9000</value>
    </property>
    <!-- HTTP address of nn1, used for external access -->
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop1:9870</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>hadoop2:9000</value>
    </property>
    <!-- HTTP address of nn2, used for external access -->
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>hadoop2:9870</value>
    </property>
    <!-- Location of the NameNode metadata on the JournalNodes (often co-located with ZooKeeper) -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop5:8485;hadoop6:8485;hadoop7:8485/ns</value>
    </property>
    <!-- Local directory where the JournalNodes store their data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop-3.3.0/data/journal</value>
    </property>
    <!-- Java class that HDFS clients use to reach the NameNode through a proxy and determine which NameNode is active -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing method used during automatic failover; sshfence logs in to the failed node and kills the process.
         Other methods are possible, e.g. shell(/bin/true), which does nothing and returns 0 -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <!-- Passwordless SSH key required by the sshfence mechanism -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- Timeout for the sshfence mechanism; can be omitted when a shell-based fencing method is used -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <!-- Enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
3.1.4 Modify mapred-site.xml
On host hadoop1, edit mapred-site.xml (located in /usr/local/hadoop-3.3.0/etc/hadoop) and specify the MapReduce settings:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
</configuration>
3.1.5 Modify yarn-site.xml
On host hadoop1, edit yarn-site.xml (located in /usr/local/hadoop-3.3.0/etc/hadoop) and specify the ResourceManager nodes:
<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop3</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop4</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>hadoop3:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>hadoop3:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop3:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>hadoop3:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>hadoop3:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.admin.address.rm1</name>
        <value>hadoop3:23142</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>hadoop4:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>hadoop4:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop4:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>hadoop4:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>hadoop4:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.admin.address.rm2</name>
        <value>hadoop4:23142</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
    <!-- Shuffle service used by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Compress intermediate map output -->
    <property>
        <name>mapreduce.map.output.compress</name>
        <value>true</value>
    </property>
</configuration>
3.1.6 Modify the workers file
On host hadoop1, edit the workers file (located in /usr/local/hadoop-3.3.0/etc/hadoop) and list the DataNode nodes:
hadoop5
hadoop6
hadoop7
3.2 Copying the Hadoop Configuration to the Other Nodes
3.2.1 Copy the configured Hadoop to the other nodes
Copy the Hadoop installation configured on hadoop1 to the other servers:
scp -r /usr/local/hadoop-3.3.0/ hadoop2:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ hadoop3:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ hadoop4:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ hadoop5:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ hadoop6:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ hadoop7:/usr/local/
3.2.2 Copy the system environment variables from hadoop1
Copy the system environment variables configured on hadoop1 to the other servers:
sudo scp /etc/profile hadoop2:/etc/
sudo scp /etc/profile hadoop3:/etc/
sudo scp /etc/profile hadoop4:/etc/
sudo scp /etc/profile hadoop5:/etc/
sudo scp /etc/profile hadoop6:/etc/
sudo scp /etc/profile hadoop7:/etc/
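The two copy steps above can also be written as a short loop; a convenience sketch over the same hosts and paths:
$ for host in hadoop2 hadoop3 hadoop4 hadoop5 hadoop6 hadoop7; do
>   scp -r /usr/local/hadoop-3.3.0/ ${host}:/usr/local/
>   sudo scp /etc/profile ${host}:/etc/
> done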
Make the system environment variables take effect on each server:
source /etc/profile
hadoop version
3.3 Starting the Hadoop Cluster (Method 1)
Starting the Hadoop cluster involves the following steps:
- Start and verify the JournalNode processes
- Format HDFS
- Format ZKFC
- Start and verify the NameNode process
- Synchronize the metadata
- Start and verify the standby NameNode process
- Start and verify the DataNode processes
- Start and verify YARN
- Start and verify ZKFC
- Check the processes running on every server
3.3.1 Start and verify the JournalNode processes
(1) Start the JournalNode processes. On the hadoop1 server, run the following command to start the JournalNode process on the worker hosts:
hdfs --workers --daemon start journalnode
(2) Verify that the JournalNode processes started by running the jps command on the hadoop5, hadoop6 and hadoop7 servers; the results look like this:
- hadoop5 server:
root@hadoop5:~# jps
17322 Jps
14939 JournalNode
- hadoop6 server:
root@hadoop6:~# jps
13577 JournalNode
15407 Jps
- hadoop7 server:
root@hadoop7:~# jps
13412 JournalNode
15212 Jps
3.3.2 Format HDFS
- On the hadoop1 server, format HDFS:
hdfs namenode -format
- On success, the command prints:
common.Storage: Storage directory /usr/local/hadoop-3.3.0/tmp/dfs/name has been successfully formatted.
3.3.3 Format ZKFC
- On the hadoop1 server, format ZKFC:
hdfs zkfc -formatZK
- On success, the command prints:
ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns in ZK.
3.3.4 Start and verify the NameNode process
- Start the NameNode process on the hadoop1 server:
hdfs --daemon start namenode
- Verify that the NameNode process started:
# jps
26721 NameNode
50317 Jps
3.3.5 Synchronize the metadata
- On the hadoop2 server, run the following command to synchronize the metadata:
hdfs namenode -bootstrapStandby
- On success, the command prints:
common.Storage: Storage directory /usr/local/hadoop-3.3.0/tmp/dfs/name has been successfully formatted.
3.3.6 Start and verify the standby NameNode process
- On the hadoop2 server, start the standby NameNode process:
hdfs --daemon start namenode
- Verify the standby NameNode process:
# jps
21482 NameNode
50317 Jps
3.3.7 Start and verify the DataNode processes
- On the hadoop1 server, run the following command to start the DataNode processes on the worker hosts:
hdfs --workers --daemon start datanode
- Verify the DataNode processes by running jps on the hadoop5, hadoop6 and hadoop7 servers:
hadoop5 server:
# jps
31713 Jps
16435 DataNode
14939 JournalNode
15406 NodeManager
hadoop6 server:
# jps
13744 NodeManager
13577 JournalNode
29806 Jps
14526 DataNode
hadoop7 server:
# jps
29188 Jps
14324 DataNode
13412 JournalNode
13580 NodeManager
3.3.8 Start and verify YARN
- On the hadoop1 server, start YARN:
start-yarn.sh
- Run the jps command on the hadoop3 and hadoop4 servers to verify that YARN started:
hadoop3 server:
# jps
21937 Jps
8070 ResourceManager
7430 QuorumPeerMain
hadoop4 server:
# jps
6000 ResourceManager
20183 Jps
3.3.9 Start and verify ZKFC
- Start the ZKFC process on each NameNode host, i.e. on the hadoop1 and hadoop2 servers:
hdfs --daemon start zkfc
- Run the jps command on hadoop1 and hadoop2 to verify that the DFSZKFailoverController process started:
hadoop1 server:
# jps
26721 NameNode
14851 QuorumPeerMain
50563 Jps
27336 DFSZKFailoverController
hadoop2 server:
# jps
21825 DFSZKFailoverController
39399 Jps
15832 QuorumPeerMain
21482 NameNode
3.3.10 Check the processes running on every server
- hadoop1 server:
# jps
26721 NameNode
14851 QuorumPeerMain
50563 Jps
27336 DFSZKFailoverController
- hadoop2 server:
# jps
21825 DFSZKFailoverController
39399 Jps
15832 QuorumPeerMain
21482 NameNode
- hadoop3 server:
# jps
8070 ResourceManager
7430 QuorumPeerMain
21950 Jps
- hadoop4 server:
# jps
6000 ResourceManager
20197 Jps
- hadoop5 server:
# jps
16435 DataNode
31735 Jps
14939 JournalNode
15406 NodeManager
- hadoop6 server:
# jps
13744 NodeManager
13577 JournalNode
29833 Jps
14526 DataNode
- hadoop7 server:
# jps
14324 DataNode
13412 JournalNode
29211 Jps
13580 NodeManager
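With either startup method, the HA state can also be verified explicitly once everything is up; a short sketch using the standard admin commands and the service ids defined in hdfs-site.xml and yarn-site.xml above (which node reports active may differ on your cluster):
# hdfs haadmin -getServiceState nn1
active
# hdfs haadmin -getServiceState nn2
standby
# yarn rmadmin -getServiceState rm1
active
# yarn rmadmin -getServiceState rm2
standby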
3.4 Starting the Hadoop Cluster (Method 2)
- Format HDFS
- Copy the metadata
- Format ZKFC
- Start HDFS
- Start YARN
- Check the processes running on every server
3.4.1 Format HDFS
Format HDFS on the hadoop1 server:
hdfs namenode -format
3.4.2 Copy the metadata
Copy the /usr/local/hadoop-3.3.0/tmp directory from the hadoop1 server into the /usr/local/hadoop-3.3.0 directory on the hadoop2 server; on hadoop1, run:
scp -r /usr/local/hadoop-3.3.0/tmp hadoop2:/usr/local/hadoop-3.3.0/
3.4.3 Format ZKFC
Format ZKFC on the hadoop1 server:
hdfs zkfc -formatZK
3.4.4 Start HDFS
Start HDFS on the hadoop1 server with the startup script:
start-dfs.sh
3.4.5 Start YARN
Start YARN on the hadoop1 server with the startup script:
start-yarn.sh
3.4.6 Check the processes running on every server
- hadoop1 server:
# jps
26721 NameNode
14851 QuorumPeerMain
50563 Jps
27336 DFSZKFailoverController
- hadoop2 server:
# jps
21825 DFSZKFailoverController
39399 Jps
15832 QuorumPeerMain
21482 NameNode
- hadoop3 server:
# jps
8070 ResourceManager
7430 QuorumPeerMain
21950 Jps
- hadoop4 server:
# jps
6000 ResourceManager
20197 Jps
- hadoop5 server:
# jps
16435 DataNode
31735 Jps
14939 JournalNode
15406 NodeManager
- hadoop6 server:
# jps
13744 NodeManager
13577 JournalNode
29833 Jps
14526 DataNode
- hadoop7 server:
# jps
14324 DataNode
13412 JournalNode
29211 Jps
13580 NodeManager
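As a quick functional check after startup, you can push a small file into HDFS and run one of the example jobs that ship with the Hadoop 3.3.0 distribution; a smoke-test sketch:
# hdfs dfs -mkdir -p /tmp/smoke
# hdfs dfs -put /etc/hosts /tmp/smoke/
# hdfs dfs -ls /tmp/smoke
# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 2 10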
4 Hadoop Directory Structure and Command Help
4.1 Hadoop Directory Structure
Use the ls command to list the top-level directories of Hadoop 3.3.0:
-bash-4.1$ ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
A brief description of each directory:
- bin: Hadoop's most basic management and user scripts. They are the underlying implementation of the management scripts in sbin, and users can call them directly to manage and use Hadoop.
- etc: the Hadoop configuration files, including core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml and so on.
- include: header files for the externally exposed programming libraries (the corresponding dynamic and static libraries live in the lib directory). They are defined in C++ and are typically used by C++ programs that access HDFS or write MapReduce programs.
- lib: the dynamic and static programming libraries exposed by Hadoop, used together with the header files in include.
- libexec: the shell configuration files of the individual services, used to set basic options such as log directories and startup parameters (for example, JVM options).
- sbin: the Hadoop management scripts, mainly the scripts that start and stop the HDFS and YARN services.
- share: the compiled jar packages of the Hadoop modules; this directory also contains the Hadoop documentation.
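For reference, the start/stop scripts mentioned for sbin look roughly like this (listing abbreviated):
$ ls /usr/local/hadoop-3.3.0/sbin
start-all.sh  start-dfs.sh  start-yarn.sh  stop-all.sh  stop-dfs.sh  stop-yarn.sh  ...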
4.2 Hadoop Command Help
- 1. List the contents of a directory
hdfs dfs -ls [directory]
hdfs dfs -ls -R /    // list the directory tree recursively
e.g.: hdfs dfs -ls /user/wangkai.pt
- 2. Print an existing file
hdfs dfs -cat [file_path]
e.g.: hdfs dfs -cat /user/wangkai.pt/data.txt
- 3. Upload a local file to Hadoop
hdfs dfs -put [local path] [hadoop directory]
hdfs dfs -put /home/t/file.txt /user/t
- 4. Upload a local directory to Hadoop
hdfs dfs -put [local directory] [hadoop directory]
hdfs dfs -put /home/t/dir_name /user/t    (dir_name is the folder name)
- 5. Download a file from Hadoop into an existing local directory
hdfs dfs -get [hadoop path] [local directory]
hdfs dfs -get /user/t/ok.txt /home/t
- 6. Delete a file on Hadoop
hdfs dfs -rm [file path]
hdfs dfs -rm /user/t/ok.txt
- 7. Delete a directory on Hadoop (including its subdirectories)
hdfs dfs -rm -r [directory path]
hdfs dfs -rm -r /user/t
- 8. Create a new directory on Hadoop
hdfs dfs -mkdir /user/t
hdfs dfs -mkdir -p /user/centos/hadoop
- 9. Create an empty file in a Hadoop directory
Use the touchz command:
hdfs dfs -touchz /user/new.txt
- 10. Rename a file on Hadoop
Use the mv command:
hdfs dfs -mv /user/test.txt /user/ok.txt    (renames test.txt to ok.txt)
- 11. Merge everything under a Hadoop directory into a single file and download it locally
hdfs dfs -getmerge /user /home/t
- 12. Kill a running Hadoop job
hadoop job -kill [job-id]
- 13. Show the help
hdfs dfs -help
5 Dynamically Adding and Removing Cluster Nodes
5.1 Dynamically Adding a DataNode and NodeManager
5.1.1 Check the cluster state
- On the hadoop1 server, check the state of the HDFS nodes:
# hdfs dfsadmin -report
Configured Capacity: 60028796928 (55.91 GB)
Present Capacity: 45182173184 (42.08 GB)
DFS Remaining: 45178265600 (42.08 GB)
DFS Used: 3907584 (3.73 MB)
DFS Used%: 0.01%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (3):

Name: 192.168.254.134:9866 (hadoop5)
Hostname: hadoop5
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 1302528 (1.24 MB)
Non DFS Used: 4072615936 (3.79 GB)
DFS Remaining: 15060099072 (14.03 GB)
DFS Used%: 0.01%
DFS Remaining%: 75.26%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 14:23:05 CST 2021
Last Block Report: Thu Nov 18 13:42:32 CST 2021
Num of Blocks: 16

Name: 192.168.254.135:9866 (hadoop6)
Hostname: hadoop6
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 1302528 (1.24 MB)
Non DFS Used: 4082216960 (3.80 GB)
DFS Remaining: 15050498048 (14.02 GB)
DFS Used%: 0.01%
DFS Remaining%: 75.22%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 14:23:06 CST 2021
Last Block Report: Thu Nov 18 08:58:22 CST 2021
Num of Blocks: 16

Name: 192.168.254.136:9866 (hadoop7)
Hostname: hadoop7
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 1302528 (1.24 MB)
Non DFS Used: 4065046528 (3.79 GB)
DFS Remaining: 15067668480 (14.03 GB)
DFS Used%: 0.01%
DFS Remaining%: 75.30%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 14:23:05 CST 2021
Last Block Report: Thu Nov 18 14:09:59 CST 2021
Num of Blocks: 16
As the report shows, before the new node is added there are 3 DataNodes in total, on the hadoop5, hadoop6 and hadoop7 servers.
- Check the state of the YARN nodes:
# yarn node -list
Total Nodes:3
         Node-Id     Node-State  Node-Http-Address  Number-of-Running-Containers
   hadoop5:34211        RUNNING       hadoop5:8042                             0
   hadoop7:43419        RUNNING       hadoop7:8042                             0
   hadoop6:36501        RUNNING       hadoop6:8042                             0
As the listing shows, before the new node is added the NodeManager process runs on the hadoop5, hadoop6 and hadoop7 servers.
5.1.2 Dynamically add a DataNode and NodeManager
- Add the hadoop4 node to the workers file on every node of the Hadoop cluster, starting with the hadoop1 host:
# vi /usr/local/hadoop-3.3.0/etc/hadoop/workers
hadoop4
hadoop5
hadoop6
hadoop7
- Copy the modified file to the other nodes:
# scp /usr/local/hadoop-3.3.0/etc/hadoop/workers hadoop2:/usr/local/hadoop-3.3.0/etc/hadoop/
# scp /usr/local/hadoop-3.3.0/etc/hadoop/workers hadoop3:/usr/local/hadoop-3.3.0/etc/hadoop/
# scp /usr/local/hadoop-3.3.0/etc/hadoop/workers hadoop4:/usr/local/hadoop-3.3.0/etc/hadoop/
# scp /usr/local/hadoop-3.3.0/etc/hadoop/workers hadoop5:/usr/local/hadoop-3.3.0/etc/hadoop/
# scp /usr/local/hadoop-3.3.0/etc/hadoop/workers hadoop6:/usr/local/hadoop-3.3.0/etc/hadoop/
# scp /usr/local/hadoop-3.3.0/etc/hadoop/workers hadoop7:/usr/local/hadoop-3.3.0/etc/hadoop/
- Start the DataNode and NodeManager on the hadoop4 server:
# hdfs --daemon start datanode
# yarn --daemon start nodemanager
- Refresh the nodes. On the hadoop1 server, run the following commands to refresh the Hadoop cluster nodes:
# hdfs dfsadmin -refreshNodes
# start-balancer.sh
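start-balancer.sh also accepts a -threshold option (the allowed deviation, in percent, of each DataNode's disk usage from the cluster average), which controls how much data is rebalanced onto the new node; a small sketch:
# start-balancer.sh -threshold 5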
- Check the processes running on the hadoop4 node:
# jps
20768 NodeManager
6000 ResourceManager
20465 DataNode
20910 Jps
5.1.3 Check the cluster state again
- On the hadoop1 server, check the state of the HDFS nodes:
# hdfs dfsadmin -report
Configured Capacity: 80038395904 (74.54 GB)
Present Capacity: 60257288192 (56.12 GB)
DFS Remaining: 60253356032 (56.12 GB)
DFS Used: 3932160 (3.75 MB)
DFS Used%: 0.01%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
-------------------------------------------------
Live datanodes (4):

Name: 192.168.254.133:9866 (hadoop4)
Hostname: hadoop4
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 4058525696 (3.78 GB)
DFS Remaining: 15075467264 (14.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 75.34%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 15:12:30 CST 2021
Last Block Report: Thu Nov 18 15:10:49 CST 2021
Num of Blocks: 0

Name: 192.168.254.134:9866 (hadoop5)
Hostname: hadoop5
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 1302528 (1.24 MB)
Non DFS Used: 4072738816 (3.79 GB)
DFS Remaining: 15059976192 (14.03 GB)
DFS Used%: 0.01%
DFS Remaining%: 75.26%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 15:12:33 CST 2021
Last Block Report: Thu Nov 18 13:42:32 CST 2021
Num of Blocks: 16

Name: 192.168.254.135:9866 (hadoop6)
Hostname: hadoop6
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 1302528 (1.24 MB)
Non DFS Used: 4082335744 (3.80 GB)
DFS Remaining: 15050379264 (14.02 GB)
DFS Used%: 0.01%
DFS Remaining%: 75.22%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 15:12:31 CST 2021
Last Block Report: Thu Nov 18 14:58:22 CST 2021
Num of Blocks: 16

Name: 192.168.254.136:9866 (hadoop7)
Hostname: hadoop7
Decommission Status : Normal
Configured Capacity: 20009598976 (18.64 GB)
DFS Used: 1302528 (1.24 MB)
Non DFS Used: 4065181696 (3.79 GB)
DFS Remaining: 15067533312 (14.03 GB)
DFS Used%: 0.01%
DFS Remaining%: 75.30%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Nov 18 15:12:33 CST 2021
Last Block Report: Thu Nov 18 14:09:59 CST 2021
Num of Blocks: 16
As the report shows, after adding the new node there are now 4 DataNodes in total, on the hadoop4, hadoop5, hadoop6 and hadoop7 servers.
- Check the state of the YARN nodes:
# yarn node -list
Total Nodes:4
         Node-Id     Node-State  Node-Http-Address  Number-of-Running-Containers
   hadoop5:34211        RUNNING       hadoop5:8042                             0
   hadoop4:36431        RUNNING       hadoop4:8042                             0
   hadoop7:43419        RUNNING       hadoop7:8042                             0
   hadoop6:36501        RUNNING       hadoop6:8042                             0
5.2 Dynamically Removing a DataNode and NodeManager
5.2.1 Remove the DataNode and NodeManager
- Stop the DataNode and NodeManager processes on hadoop4; on the hadoop4 server, run:
# hdfs --daemon stop datanode
# yarn --daemon stop nodemanager
- Remove the hadoop4 entry from the workers file on every host of the Hadoop cluster:
# vi /usr/local/hadoop-3.3.0/etc/hadoop/workers
hadoop5
hadoop6
hadoop7
- Refresh the nodes. On the hadoop1 server, run the following commands to refresh the Hadoop cluster nodes:
# hdfs dfsadmin -refreshNodes
# start-balancer.sh
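For production clusters, a gentler removal is to decommission the DataNode first so its blocks are re-replicated before the process is stopped. A hedged sketch: it assumes dfs.hosts.exclude has been pointed at an excludes file in hdfs-site.xml, which is not part of the configuration shown earlier in this article:
# echo hadoop4 >> /usr/local/hadoop-3.3.0/etc/hadoop/excludes
# hdfs dfsadmin -refreshNodes
# hdfs dfsadmin -report   # wait until hadoop4 changes from "Decommission in progress" to "Decommissioned"
# then stop the DataNode/NodeManager and remove hadoop4 from the workers file as described above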
References:
- [1] 逸非羽. CSDN: https://www.cnblogs.com/yifei..., 2019-06-18.
- [2] Hadoop official site: https://hadoop.apache.org/
- [3] 冰河. 海量数据处理与大数据技术实战 [M]. 1st ed. Beijing: Peking University Press, 2020-09.