Hadoop HA Cluster Setup

Hadoop HA Cluster Overview

This tutorial walks through setting up a Hadoop HA cluster. A few notes on the HA architecture:

  • In Hadoop 2.x, HDFS HA normally runs two NameNodes, one active and one standby. The active NameNode serves client requests; the standby does not, and only keeps itself in sync with the active NameNode's state so that it can take over quickly if the active fails. Hadoop 2.x ships two HDFS HA solutions, NFS and QJM; this guide uses the simpler QJM. With QJM, the active and standby NameNodes share edit-log metadata through a group of JournalNodes, and a write counts as successful once a majority of JournalNodes have accepted it, so an odd number of JournalNodes is normally configured.
  • hadoop-2.2.0 still had only a single ResourceManager and therefore a single point of failure. This was fixed starting with hadoop-2.4.1: two ResourceManagers run side by side, one active and one standby, with their state coordinated through ZooKeeper.
  • A ZooKeeper cluster is also deployed for ZKFC (DFSZKFailoverController) failover: when the active NameNode/ResourceManager dies, the standby NameNode/ResourceManager is automatically promoted to active (a quick way to check which node currently holds the active role is sketched just below).
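Once the cluster is up, the HA state of each role can be checked from the command line. A minimal sketch, assuming the nn1/nn2 and rm1/rm2 IDs configured later in hdfs-site.xml and yarn-site.xml, run from the Hadoop install directory:

bin/hdfs haadmin -getServiceState nn1    # prints active or standby
bin/hdfs haadmin -getServiceState nn2
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2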

Versions

Software     Version
OS           CentOS-7-x86_64-DVD-1810.iso
Hadoop       hadoop-2.8.4
ZooKeeper    zookeeper-3.4.10

System Setup

Cluster role assignment
Node      Roles
master1   NameNode, DFSZKFailoverController (zkfc), ResourceManager
master2   NameNode, DFSZKFailoverController (zkfc), ResourceManager
node1     DataNode, NodeManager, JournalNode, QuorumPeerMain
node2     DataNode, NodeManager, JournalNode, QuorumPeerMain
node3     DataNode, NodeManager, JournalNode, QuorumPeerMain

Configure hosts [all]
192.168.56.101  node1
192.168.56.102  node2
192.168.56.103  node3
192.168.56.201  master1
192.168.56.202  master2
Add the hadoop user [all]
useradd hadoop
passwd hadoop
chmod -v u+w /etc/sudoers
vi /etc/sudoers

hadoop  ALL=(ALL)       ALL
chmod -v u-w /etc/sudoers

Set the hostname [all]
hostnamectl set-hostname $hostname [master1|master2|node1|node2|node3]
systemctl reboot -i

Passwordless SSH [all]
ssh-keygen -t rsa

cat .ssh/id_rsa.pub >> .ssh/authorized_keys 

scp .ssh/authorized_keys $next_node:~/.ssh/
sudo vi /etc/ssh/sshd_config

RSAAuthentication yes
StrictModes no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
systemctl restart sshd.service 
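
The cat/scp chain above works, but an alternative sketch pushes the local public key to every host directly with ssh-copy-id (standard OpenSSH; it prompts for the hadoop password once per host):

# Run as the hadoop user on each machine that needs to log in everywhere
for h in master1 master2 node1 node2 node3; do
    ssh-copy-id hadoop@$h       # appends ~/.ssh/id_rsa.pub to the remote authorized_keys
done
ssh master2 hostname            # verify: should print master2 without a password prompt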

Stop and disable the firewall [all]
sudo systemctl stop firewalld 

sudo firewall-cmd --state 

sudo systemctl disable firewalld.service 
SELinux policy [all]
vi /etc/selinux/config 

SELINUX=disabled
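
Editing /etc/selinux/config only takes effect after a reboot; to also switch off enforcement for the current session (standard SELinux commands, not specific to this setup):

sudo setenforce 0     # permissive until the next reboot
getenforce            # should now report Permissive (Disabled after a reboot)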

Software Installation

Install the JDK [all]
sudo mkdir -p /opt/env

sudo chown -R hadoop:hadoop /opt/env

tar -xvf jdk-8u121-linux-i586.tar.gz -C /opt/env
sudo vi /etc/profile

export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile
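
A quick check that the environment is picked up. Note that jdk-8u121-linux-i586 is a 32-bit JVM; on this x86_64 OS that combination is a likely cause of the "Client VM" and native-library warnings seen in the startup logs later, so the linux-x64 package may be the better choice:

java -version      # should report java version "1.8.0_121"
echo $JAVA_HOME    # /opt/env/jdk1.8.0_121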

Install ZooKeeper [node1 node2 node3]
sudo mkdir -p /opt/zookeeper

sudo chown -R hadoop:hadoop /opt/zookeeper

tar -zxvf /tmp/zookeeper-3.4.10.tar.gz -C /opt/zookeeper/

sudo chown -R hadoop:hadoop /opt/zookeeper
cd /opt/zookeeper/zookeeper-3.4.10
cp conf/zoo_sample.cfg conf/zoo.cfg
vi conf/zoo.cfg 

dataDir=/opt/zookeeper/zookeeper-3.4.10/data
......
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
mkdir data 

echo $zk_id [1|2|3] > data/myid
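
The myid value must match the server.N entry for that host in zoo.cfg. A minimal per-node sketch, assuming the node1/node2/node3 hostnames used in this guide and run from /opt/zookeeper/zookeeper-3.4.10:

case $(hostname) in
  node1) echo 1 > data/myid ;;
  node2) echo 2 > data/myid ;;
  node3) echo 3 > data/myid ;;
esac
cat data/myid   # verify the id on each node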

Install Hadoop [all]
sudo mkdir -p /opt/hadoop/data

sudo chown -R hadoop:hadoop /opt/hadoop/

tar -zxvf hadoop-2.8.4.tar.gz -C /opt/hadoop/

mkdir -p /opt/hadoop/journaldata
Configure Hadoop [all]
cd /opt/hadoop/hadoop-2.8.4/etc/hadoop
vi core-site.xml 

<configuration>
    <!-- Set the HDFS nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1/</value>
    </property>

    <!-- Hadoop temporary/working directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/data</value>
    </property>

    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>3000</value>
    </property>
</configuration>
vi hdfs-site.xml 

<configuration>
    <!-- HDFS nameservice ns1; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>master1:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>master1:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>master2:9000</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>master2:50070</value>
    </property>
    <!-- Where the NameNodes' shared edit log is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/ns1</value>
    </property>
    <!-- Local directory where each JournalNode stores its data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/hadoop/journaldata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Failover proxy provider: how clients locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; separate multiple methods with newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- The sshfence method requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
        <!-- Path to the installing user's ssh private key (the hadoop user here) -->
    </property>
    <!-- Timeout for the sshfence method -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <!-- Local storage directory for the NameNode namespace (fsimage) -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/hadoop/hdfs/name</value>
    </property>
    <!-- Local storage directory for DataNode blocks -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/hadoop/hdfs/data</value>
    </property>
    <!-- Replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
vi mapred-site.xml

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- MapReduce JobHistory Server address (default port 10020) -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>0.0.0.0:10020</value>
    </property>
    <!-- MapReduce JobHistory Server web UI address (default port 19888) -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
    </property>
</configuration>
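
The JobHistory Server configured above is not launched by start-dfs.sh or start-yarn.sh; if job history is wanted, it has to be started separately with the standard Hadoop 2.x daemon script (for example on master1):

sbin/mr-jobhistory-daemon.sh start historyserver
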
vi yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Enable RM state recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster ID of the RM HA pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical IDs of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Hostname of each ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>master2</value>
    </property>
    <!-- Each master sets this to the ID of its local ResourceManager -->
    <property>
        <name>yarn.resourcemanager.ha.id</name>
        <value>$ResourceManager_Id [rm1|rm2]</value>
        <description>Must be set per node: rm1 on master1, rm2 on master2.</description>
    </property>
    <!-- ZooKeeper quorum for RM HA -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <!-- ZooKeeper address for the RM state store -->
    <property>
        <name>yarn.resourcemanager.zk-state-store.address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
        <value>/yarn-leader-election</value>
        <description>Optional setting. The default value is /yarn-leader-election.</description>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
vi hadoop-env.sh 
vi mapred-env.sh
vi yarn-env.sh

export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASS_PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib 
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.4
export HADOOP_PID_DIR=/opt/hadoop/hadoop-2.8.4/pids 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native" 
export HADOOP_PREFIX=$HADOOP_HOME 
export HADOOP_MAPRED_HOME=$HADOOP_HOME 
export HADOOP_COMMON_HOME=$HADOOP_HOME 
export HADOOP_HDFS_HOME=$HADOOP_HOME 
export YARN_HOME=$HADOOP_HOME 
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native 
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
vi masters

master2
vi slaves

node1
node2
node3
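
Every node needs the same configuration, so it is convenient to edit the files once and copy them out. A minimal sketch, assuming the passwordless SSH set up earlier (yarn.resourcemanager.ha.id still has to be adjusted per master afterwards):

# Run on master1 after editing the configuration
for h in master2 node1 node2 node3; do
    scp -r /opt/hadoop/hadoop-2.8.4/etc/hadoop/* $h:/opt/hadoop/hadoop-2.8.4/etc/hadoop/
done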

Start the Cluster

Start ZooKeeper [node1 node2 node3]
 ./zkServer.sh start

[hadoop@node1 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader

[hadoop@node2 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower

[hadoop@node3 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
Format the NameNode [master1]
bin/hdfs namenode -format
[hadoop@master2 ~]$ ll /opt/hadoop/data/dfs/name/current/
total 16
-rw-rw-r--. 1 hadoop hadoop 323 Jul 11 01:17 fsimage_0000000000000000000
-rw-rw-r--. 1 hadoop hadoop  62 Jul 11 01:17 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 hadoop hadoop   2 Jul 11 01:17 seen_txid
-rw-rw-r--. 1 hadoop hadoop 219 Jul 11 01:17 VERSION
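
Note: with QJM, the NameNode format writes to the shared edits directory on the JournalNodes, so they must be reachable at this point. If the format fails with connection errors to port 8485, start the JournalNodes first (start-dfs.sh below also starts them):

# On node1, node2 and node3, run from /opt/hadoop/hadoop-2.8.4
sbin/hadoop-daemon.sh start journalnode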
Format ZKFC [master1 master2]
bin/hdfs zkfc -formatZK
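
formatZK only needs to succeed once; it creates a znode for the nameservice in ZooKeeper. It can be checked from any ZooKeeper node, run from /opt/zookeeper/zookeeper-3.4.10 (the second line is typed at the zkCli prompt and should list ns1):

bin/zkCli.sh -server node1:2181
ls /hadoop-ha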
Start the NameNode/ResourceManager [master1]
sbin/start-dfs.sh 
[hadoop@master1 sbin]$ sh start-dfs.sh 
which: no start-dfs.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:00:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1 master2]
master2: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master2.out
master1: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master1.out
node2: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node2.out
node1: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node1.out
node3: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node3.out
Starting journal nodes [node1 node2 node3]
node2: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node2.out
node3: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node3.out
node1: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node1.out
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:01:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [master1 master2]
master2: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master2.out
master1: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5940 Jps
5869 DFSZKFailoverController
sbin/start-yarn.sh 
[hadoop@master1 sbin]$ sh start-yarn.sh 
starting yarn daemons
which: no start-yarn.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-resourcemanager-master1.out
node2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node2.out
node3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node3.out
node1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5994 ResourceManager
6092 Jps
5869 DFSZKFailoverController

At this point, on a DataNode:

[hadoop@node1 hadoop-2.8.4]$ jps
3808 QuorumPeerMain
5062 Jps
4506 DataNode
4620 JournalNode
4732 NodeManager

At this point, on master2:

[hadoop@master2 sbin]$ jps
6092 Jps
5869 DFSZKFailoverController
Bootstrap the standby NameNode [master2]
bin/hdfs namenode -bootstrapStandby
Start the standby NameNode [master2]
sbin/hadoop-daemon.sh start namenode
Start the standby ResourceManager [master2]
sbin/yarn-daemon.sh start resourcemanager
[hadoop@master2 hadoop-2.8.4]$ jps
4233 Jps
3885 DFSZKFailoverController
4189 ResourceManager
4030 NameNode

ResourceManager state

[hadoop@master2 hadoop-2.8.4]$  bin/yarn rmadmin -getServiceState rm2
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:48:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
[hadoop@master2 hadoop-2.8.4]$  bin/yarn rmadmin -getServiceState rm1
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:48:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
standby
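
The NameNode side can be checked the same way, and the automatic failover described in the overview can be exercised by killing the active NameNode. A rough sketch, run from /opt/hadoop/hadoop-2.8.4 (the process ID comes from jps; this is only a test, not part of the setup):

bin/hdfs haadmin -getServiceState nn1    # e.g. active
bin/hdfs haadmin -getServiceState nn2    # e.g. standby
# On whichever master is active: kill the NameNode process and re-check
kill -9 <namenode_pid_from_jps>
bin/hdfs haadmin -getServiceState nn2    # should become active within a few seconds
# Bring the killed NameNode back; it rejoins as the standby
sbin/hadoop-daemon.sh start namenode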

enjoy
