1. Background

I have been learning Hadoop recently. This article records how to set up a 3-node Hadoop cluster on CentOS 7.

2. Cluster layout

A Hadoop cluster is really composed of 2 clusters: an HDFS cluster and a YARN cluster. Both use a master/worker architecture.

2.1 HDFS cluster layout

| IP address      | Hostname | Services deployed                    |
|-----------------|----------|--------------------------------------|
| 192.168.121.140 | hadoop01 | NameNode, DataNode, JobHistoryServer |
| 192.168.121.141 | hadoop02 | DataNode                             |
| 192.168.121.142 | hadoop03 | DataNode, SecondaryNameNode          |

2.2 YARN cluster layout

| IP address      | Hostname | Services deployed            |
|-----------------|----------|------------------------------|
| 192.168.121.140 | hadoop01 | NodeManager                  |
| 192.168.121.141 | hadoop02 | ResourceManager, NodeManager |
| 192.168.121.142 | hadoop03 | NodeManager                  |

3. Cluster setup steps

3.1 Install the JDK

Installing the JDK is straightforward, so the steps are omitted here. Do take care to use a JDK version that your Hadoop release supports: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions
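Per that compatibility matrix, Hadoop 3.3.x supports Java 8 and Java 11 (runtime only). A quick sanity check after installing; the /usr/local/jdk8 path is an assumption here, matching the JAVA_HOME used later in hadoop-env.sh:

# Verify the JDK on each of the 3 machines (path is an assumption; adjust to your install)
[root@hadoop01 ~]# /usr/local/jdk8/bin/java -version
# If JAVA_HOME/PATH is already exported in /etc/profile, this should work too
[root@hadoop01 ~]# java -version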

3.2 Set the hostnames and host mappings

| IP address      | Hostname |
|-----------------|----------|
| 192.168.121.140 | hadoop01 |
| 192.168.121.141 | hadoop02 |
| 192.168.121.142 | hadoop03 |

Run the following commands on all 3 machines.

# Set the hostname; each of the 3 machines must have a different one
[root@hadoop01 ~]# vim /etc/hostname
[root@hadoop01 ~]# cat /etc/hostname
hadoop01
[root@hadoop01 ~]# vim /etc/hosts
[root@hadoop01 ~]# cat /etc/hosts | grep hadoop
192.168.121.140 hadoop01
192.168.121.141 hadoop02
192.168.121.142 hadoop03
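As a side note, CentOS 7 can also set the hostname through systemd, and it is worth confirming that every machine can resolve the other two by name. A small check along these lines:

# Alternative to editing /etc/hostname (run on each machine with its own name)
[root@hadoop01 ~]# hostnamectl set-hostname hadoop01
# Verify the /etc/hosts mappings work from every machine
[root@hadoop01 ~]# ping -c 1 hadoop02
[root@hadoop01 ~]# ping -c 1 hadoop03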

3.3 Configure time synchronization

The clocks across the cluster should be kept consistent, otherwise problems may occur. Since this is a local setup and my VMs can reach the internet, I simply sync time against public NTP servers. If the machines cannot reach the internet, pick one of the 3 servers as the time source and keep the other 2 in sync with it.

Run the following commands on all 3 machines.

# Set the CentOS 7 time zone to Shanghai
[root@hadoop01 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Install ntp
[root@hadoop01 ~]# yum install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base                                           | 3.6 kB     00:00
extras                                         | 2.9 kB     00:00
updates                                        | 2.9 kB     00:00
Package ntp-4.2.6p5-29.el7.centos.2.aarch64 already installed and latest version
Nothing to do
# Enable ntpd at boot
[root@hadoop01 ~]# systemctl enable ntpd
# Restart the ntp service
[root@hadoop01 ~]# service ntpd restart
Redirecting to /bin/systemctl restart ntpd.service
# Sync the time once
[root@hadoop01 ~]# ntpdate asia.pool.ntp.org
19 Feb 12:36:22 ntpdate[1904]: the NTP socket is in use, exiting
# Sync the hardware clock to the system clock
[root@hadoop01 ~]# /sbin/hwclock --systohc
# Check the time
[root@hadoop01 ~]# timedatectl
      Local time: Sun 2023-02-19 12:36:35 CST
  Universal time: Sun 2023-02-19 04:36:35 UTC
        RTC time: Sun 2023-02-19 04:36:35
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a
# Let systemd keep the clock synchronized with remote NTP servers
[root@hadoop01 ~]# timedatectl set-ntp true
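To confirm ntpd actually picked a time source, you can list its peers; in the ntpq output, an asterisk marks the currently selected server:

# List the NTP peers ntpd is talking to
[root@hadoop01 ~]# ntpq -p
# Re-check the synchronization flags
[root@hadoop01 ~]# timedatectl | grep -i ntp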

3.4 Disable the firewall

Disable the firewall on all 3 machines. If you leave it on, you would instead have to open every port Hadoop might use.

# Stop the firewall
[root@hadoop01 ~]# systemctl stop firewalld
# Disable the firewall at boot
[root@hadoop01 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@hadoop01 ~]#
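To double-check that the firewall is really off on every node:

# "inactive" / "not running" confirm firewalld is stopped
[root@hadoop01 ~]# systemctl is-active firewalld
[root@hadoop01 ~]# firewall-cmd --state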

3.5 Configure passwordless SSH login

3.5.1 Create the hadoopdeploy deployment user
[root@hadoop01 ~]# useradd hadoopdeploy
[root@hadoop01 ~]# passwd hadoopdeploy
Changing password for user hadoopdeploy.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@hadoop01 ~]# vim /etc/sudoers
[root@hadoop01 ~]# cat /etc/sudoers | grep hadoopdeploy
hadoopdeploy    ALL=(ALL)       NOPASSWD: ALL
[root@hadoop01 ~]#

3.5.2 Configure passwordless login for the hadoopdeploy user to every machine

Configure all 3 machines so that each one can log in without a password to itself and to the other 2.

| Current machine | Current user | Passwordless login to        | Login user   |
|-----------------|--------------|------------------------------|--------------|
| hadoop01        | hadoopdeploy | hadoop01, hadoop02, hadoop03 | hadoopdeploy |
| hadoop02        | hadoopdeploy | hadoop01, hadoop02, hadoop03 | hadoopdeploy |
| hadoop03        | hadoopdeploy | hadoop01, hadoop02, hadoop03 | hadoopdeploy |

Below is the shell session for setting up passwordless login from hadoop01 to hadoop01, hadoop02 and hadoop03.

# Switch to the hadoopdeploy user
[root@hadoop01 ~]# su - hadoopdeploy
Last login: Sun Feb 19 13:05:43 CST 2023 on pts/0
# Generate a key pair; just press Enter 3 times at the prompts below
[hadoopdeploy@hadoop01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoopdeploy/.ssh/id_rsa):
Created directory '/home/hadoopdeploy/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoopdeploy/.ssh/id_rsa.
Your public key has been saved in /home/hadoopdeploy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:PFvgTUirtNLwzDIDs+SD0RIzMPt0y1km5B7rY16h1/E hadoopdeploy@hadoop01
The key's randomart image is:
+---[RSA 2048]----+
|B   .   .        |
| B o   . o       |
|+ * * + + .      |
| O B / = +       |
|. = @ O S o      |
|   o * o *       |
|    = o o E      |
|   o +           |
|    .            |
+----[SHA256]-----+
[hadoopdeploy@hadoop01 ~]$ ssh-copy-id hadoop01
...
[hadoopdeploy@hadoop01 ~]$ ssh-copy-id hadoop02
...
[hadoopdeploy@hadoop01 ~]$ ssh-copy-id hadoop03
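After repeating the same steps on hadoop02 and hadoop03, a one-liner verifies the whole mesh; each ssh should print the remote hostname without asking for a password (run it as hadoopdeploy on each of the 3 machines):

# Every hop must be passwordless, otherwise start-dfs.sh/start-yarn.sh will prompt or hang later
[hadoopdeploy@hadoop01 ~]$ for h in hadoop01 hadoop02 hadoop03; do ssh "$h" hostname; done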

3.6 Configure Hadoop

Unless otherwise noted, everything below is done as the hadoopdeploy user.

3.6.1 Create the directories (on all 3 machines)
# Create the /opt/bigdata directory
[hadoopdeploy@hadoop01 ~]$ sudo mkdir /opt/bigdata
# Give ownership of /opt/bigdata/ and everything under it to hadoopdeploy
[hadoopdeploy@hadoop01 ~]$ sudo chown -R hadoopdeploy:hadoopdeploy /opt/bigdata/
[hadoopdeploy@hadoop01 ~]$ ll /opt
total 0
drwxr-xr-x. 2 hadoopdeploy hadoopdeploy  6 Feb 19 13:15 bigdata
3.6.2 Download and extract Hadoop (on hadoop01)
# Enter the directory
[hadoopdeploy@hadoop01 ~]$ cd /opt/bigdata/
# Download
[hadoopdeploy@hadoop01 bigdata]$ wget https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
# Extract, then remove the archive
[hadoopdeploy@hadoop01 bigdata]$ tar -zxvf hadoop-3.3.4.tar.gz && rm -rvf hadoop-3.3.4.tar.gz
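Apache publishes a SHA-512 checksum alongside every release, so it is worth verifying the tarball before extracting. A sketch; the checksum URL is an assumption, and older releases move from downloads.apache.org to archive.apache.org:

# Fetch the official checksum and compare it with the local file
[hadoopdeploy@hadoop01 bigdata]$ wget https://downloads.apache.org/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz.sha512
[hadoopdeploy@hadoop01 bigdata]$ sha512sum hadoop-3.3.4.tar.gz
# The printed hash must match the one inside hadoop-3.3.4.tar.gz.sha512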
3.6.3 Configure the Hadoop environment variables (on hadoop01)

# Enter the hadoop directory
[hadoopdeploy@hadoop01 hadoop-3.3.4]$ cd /opt/bigdata/hadoop-3.3.4/
# Switch to the root user
[hadoopdeploy@hadoop01 hadoop-3.3.4]$ su - root
Password:
Last login: Sun Feb 19 13:06:41 CST 2023 on pts/0
[root@hadoop01 ~]# vim /etc/profile
# Check the hadoop environment variable settings
[root@hadoop01 ~]# tail -n 3 /etc/profile
# Hadoop configuration
export HADOOP_HOME=/opt/bigdata/hadoop-3.3.4/
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
# Reload the environment variables
[root@hadoop01 ~]# source /etc/profile
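If the variables took effect, the hadoop command should now resolve from any directory:

# Prints the Hadoop release (3.3.4) plus build information
[root@hadoop01 ~]# hadoop version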
3.6.4 Categories of Hadoop configuration files (on hadoop01)

Hadoop's configuration files fall roughly into 3 categories.

  • Read-only defaults: core-default.xml, hdfs-default.xml, yarn-default.xml and mapred-default.xml.
  • Site-specific configuration: etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml, etc/hadoop/yarn-site.xml and etc/hadoop/mapred-site.xml, which override the defaults.
  • Environment scripts: etc/hadoop/hadoop-env.sh and optionally etc/hadoop/mapred-env.sh and etc/hadoop/yarn-env.sh, used for things like the NameNode startup options in HDFS_NAMENODE_OPTS.

3.6.5 Configure hadoop-env.sh (on hadoop01)
# Switch to the hadoopdeploy user
[root@hadoop01 ~]# su - hadoopdeploy
Last login: Sun Feb 19 14:22:50 CST 2023 on pts/0
# Enter the hadoop configuration directory
[hadoopdeploy@hadoop01 ~]$ cd /opt/bigdata/hadoop-3.3.4/etc/hadoop/
[hadoopdeploy@hadoop01 hadoop]$ vim hadoop-env.sh
# Add the following
export JAVA_HOME=/usr/local/jdk8
export HDFS_NAMENODE_USER=hadoopdeploy
export HDFS_DATANODE_USER=hadoopdeploy
export HDFS_SECONDARYNAMENODE_USER=hadoopdeploy
export YARN_RESOURCEMANAGER_USER=hadoopdeploy
export YARN_NODEMANAGER_USER=hadoopdeploy
3.6.6 Configure core-site.xml (on hadoop01) (core configuration)

Default configuration reference: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml

vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/core-site.xml

<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:8020</value>
    </property>
    <!-- Base directory for hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/bigdata/hadoop-3.3.4/data</value>
    </property>
    <!-- Static user for the HDFS web UI; without this, deleting files from the web UI fails with a permission error -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>hadoopdeploy</value>
    </property>
    <!-- How long deleted files stay in the trash, in minutes (1440 = 1 day) -->
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>
3.6.7 Configure hdfs-site.xml (on hadoop01) (HDFS configuration)

Default configuration reference: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/hdfs-site.xml

<configuration>
    <!-- Keep 2 replicas of each block -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop01:9870</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop03:9868</value>
    </property>
</configuration>
3.6.8 Configure yarn-site.xml (on hadoop01) (YARN configuration)

Default configuration reference: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Host of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop02</value>
    </property>
    <!-- Auxiliary shuffle service so MR jobs can shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Whether to enforce physical memory limits on containers -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <!-- Whether to enforce virtual memory limits on containers -->
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <!-- URL of the YARN log server, i.e. the JobHistoryServer (deployed on hadoop01 per the plan in section 2) -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://hadoop01:19888/jobhistory/logs</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Keep aggregated logs for 7 days (in seconds) -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
3.6.9 Configure mapred-site.xml (on hadoop01) (MapReduce configuration)

Default configuration reference: https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml

vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/mapred-site.xml

<configuration>
    <!-- Default runtime framework for MR jobs: yarn (cluster mode) or local -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- MR JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop01:10020</value>
    </property>
    <!-- MR JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop01:19888</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
</configuration>
3.6.10 Configure the workers file (on hadoop01)

vim /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers

hadoop01
hadoop02
hadoop03

The workers file must not contain extra spaces or blank lines.
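Invisible trailing spaces or blank lines in workers are a classic cause of startup failures, and cat -A makes them visible; each line should be exactly one hostname followed by $:

# -A shows line endings as $ and tabs as ^I
[hadoopdeploy@hadoop01 hadoop]$ cat -A /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers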

3.6.11 Sync the Hadoop installation to all 3 machines (on hadoop01)
1. Sync the hadoop files
# Sync the hadoop directory
[hadoopdeploy@hadoop01 hadoop]$ scp -r /opt/bigdata/hadoop-3.3.4/ hadoopdeploy@hadoop02:/opt/bigdata/hadoop-3.3.4
[hadoopdeploy@hadoop01 hadoop]$ scp -r /opt/bigdata/hadoop-3.3.4/ hadoopdeploy@hadoop03:/opt/bigdata/hadoop-3.3.4
2. Set the Hadoop environment variables on hadoop02 and hadoop03
[hadoopdeploy@hadoop03 bigdata]$ su - root
Password:
Last login: Sun Feb 19 13:07:40 CST 2023 on pts/0
[root@hadoop03 ~]# vim /etc/profile
[root@hadoop03 ~]# tail -n 4 /etc/profile
# Hadoop configuration
export HADOOP_HOME=/opt/bigdata/hadoop-3.3.4/
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
[root@hadoop03 ~]# source /etc/profile
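Optionally, confirm from hadoop01 that the configuration really landed on the other two nodes:

# The workers file should print identically for both remote hosts
[hadoopdeploy@hadoop01 hadoop]$ for h in hadoop02 hadoop03; do ssh "$h" cat /opt/bigdata/hadoop-3.3.4/etc/hadoop/workers; done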

4. Starting the cluster

4.1 Format HDFS

The first time the cluster is started, HDFS must be formatted. Run this on the NameNode machine.

[hadoopdeploy@hadoop01 hadoop]$ hdfs namenode -format
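Formatting stamps a fresh clusterID into the NameNode metadata directory, which lives under the hadoop.tmp.dir configured earlier. One way to inspect it (the dfs/name/current path follows from the HDFS defaults):

# The VERSION file records the clusterID produced by the format
[hadoopdeploy@hadoop01 hadoop]$ cat /opt/bigdata/hadoop-3.3.4/data/dfs/name/current/VERSION

Note that if you ever re-format, you must first delete the data directories on all 3 nodes, otherwise the DataNodes will refuse to start because of a clusterID mismatch.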

4.2 Ways to start the cluster

There are 2 ways to start the cluster:

  • Option 1: start each daemon on each machine individually, e.g. the NameNode, then each DataNode; this gives precise control over every daemon.
  • Option 2: with passwordless login between the machines and the workers file configured, start everything with a single script.

4.2.1 Starting the daemons one by one

# HDFS cluster (pick one of the daemon names per invocation)
[hadoopdeploy@hadoop01 hadoop]$ hdfs --daemon start namenode | datanode | secondarynamenode
# YARN cluster (pick one of the daemon names per invocation)
[hadoopdeploy@hadoop01 hadoop]$ yarn --daemon start resourcemanager | nodemanager | proxyserver

4.2.2 One-command startup scripts

  • start-dfs.sh starts all daemons of the HDFS cluster
  • start-yarn.sh starts all daemons of the YARN cluster
  • start-all.sh starts all daemons of both the HDFS and YARN clusters

4.3 Start the cluster

4.3.1 Start the HDFS cluster

This must be run on the NameNode machine.

# This script starts the cluster's NameNode, DataNodes and SecondaryNameNode
[hadoopdeploy@hadoop01 hadoop]$ start-dfs.sh

4.3.2 Start the YARN cluster

This must be run on the ResourceManager machine.

# This script starts the cluster's ResourceManager and NodeManager daemons
[hadoopdeploy@hadoop02 hadoop]$ start-yarn.sh

4.3.3 Start the JobHistoryServer

[hadoopdeploy@hadoop01 hadoop]$ mapred --daemon start historyserver

4.4 Check that the services on each machine match the plan

Run jps on every node and compare the running daemons against the layout tables in section 2; they should match.
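jps lists the running Java daemons; run it as hadoopdeploy on each node:

# Expected per the plan: NameNode, DataNode, NodeManager, JobHistoryServer
[hadoopdeploy@hadoop01 ~]$ jps
# Expected per the plan: DataNode, ResourceManager, NodeManager
[hadoopdeploy@hadoop02 ~]$ jps
# Expected per the plan: DataNode, SecondaryNameNode, NodeManager
[hadoopdeploy@hadoop03 ~]$ jps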

4.5 Access the web UIs

4.5.1 The NameNode UI (HDFS cluster)

The NameNode web UI is reachable at http://hadoop01:9870, per the dfs.namenode.http-address configured above.


If at this point you can upload files with the hadoop fs command, and the web UI lets you create directories but uploading a file fails, then you need to add the IP/hostname mappings of the Hadoop machines to the hosts file of the computer you are using to access the UI (the upload is redirected to a DataNode by hostname, which your browser must be able to resolve).
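For example, on the computer running the browser (on Windows the file is C:\Windows\System32\drivers\etc\hosts, on macOS/Linux it is /etc/hosts), append:

192.168.121.140 hadoop01
192.168.121.141 hadoop02
192.168.121.142 hadoop03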

4.5.2 The SecondaryNameNode UI

The SecondaryNameNode web UI is at http://hadoop03:9868, per dfs.namenode.secondary.http-address.

4.5.3 The ResourceManager UI (YARN cluster)

The ResourceManager web UI is at http://hadoop02:8088 (8088 is YARN's default web port).

4.5.4 The JobHistory UI

The JobHistory web UI is at http://hadoop01:19888, per mapreduce.jobhistory.webapp.address.
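As a final end-to-end check, the examples jar bundled with the distribution can run a small MapReduce job through YARN, after which the finished job should show up in the JobHistory UI. A sketch; the paths follow the install locations used above:

# Estimate pi with 2 map tasks and 10 samples each; the job runs on YARN
[hadoopdeploy@hadoop01 ~]$ hadoop jar /opt/bigdata/hadoop-3.3.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar pi 2 10
# Exercise HDFS as well: upload a file and read it back
[hadoopdeploy@hadoop01 ~]$ hadoop fs -mkdir -p /tmp
[hadoopdeploy@hadoop01 ~]$ hadoop fs -put /etc/hosts /tmp/
[hadoopdeploy@hadoop01 ~]$ hadoop fs -cat /tmp/hosts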

5. References

1. https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions
2. https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html