[TOC]

In my previous job I was mainly responsible for building our big data platform, and along the way I accumulated some notes on setting up and using components of the Hadoop ecosystem. Due to time constraints I am not going to fix the typos and formatting issues in them, and am releasing the original notes directly.

Prerequisites

The cluster I work with has three servers, whose hosts are master, slave1, and slave2. The Hadoop services are distributed across them as follows:

| Host | Components |
| :-------- | --------:|
| master | namenode, datanode, journalnode, resourcemanager, nodemanager, jobhistoryserver |
| slave1 | namenode, datanode, journalnode, resourcemanager, nodemanager |
| slave2 | datanode, journalnode, nodemanager |

Kerberos setup

First, Kerberos itself must be installed; for installation and setup see:
https://www.cnblogs.com/nices...

Create Kerberos principals for the Hadoop components

After entering kadmin.local, run the following commands in order:

    # principals for the components' web services (SPNEGO)
    addprinc -randkey HTTP/master@TEST.COM
    addprinc -randkey HTTP/slave1@TEST.COM
    addprinc -randkey HTTP/slave2@TEST.COM
    # namenode principals
    addprinc -randkey nn/master@TEST.COM
    addprinc -randkey nn/slave1@TEST.COM
    # datanode principals
    addprinc -randkey dn/master@TEST.COM
    addprinc -randkey dn/slave1@TEST.COM
    addprinc -randkey dn/slave2@TEST.COM
    # journalnode principals
    addprinc -randkey jn/master@TEST.COM
    addprinc -randkey jn/slave1@TEST.COM
    addprinc -randkey jn/slave2@TEST.COM
    # resourcemanager principals
    addprinc -randkey rm/master@TEST.COM
    addprinc -randkey rm/slave1@TEST.COM
    # nodemanager principals
    addprinc -randkey nm/master@TEST.COM
    addprinc -randkey nm/slave1@TEST.COM
    addprinc -randkey nm/slave2@TEST.COM
    # job history server principal
    addprinc -randkey jhs/master@TEST.COM

Export these principals into keytabs

Still inside kadmin.local, export the credentials of the principals above into keytab files:

    ktadd -k /opt/keytab_store/http.service.keytab HTTP/master@TEST.COM
    ktadd -k /opt/keytab_store/http.service.keytab HTTP/slave1@TEST.COM
    ktadd -k /opt/keytab_store/http.service.keytab HTTP/slave2@TEST.COM
    ktadd -k /opt/keytab_store/nn.service.keytab nn/master@TEST.COM
    ktadd -k /opt/keytab_store/nn.service.keytab nn/slave1@TEST.COM
    ktadd -k /opt/keytab_store/dn.service.keytab dn/master@TEST.COM
    ktadd -k /opt/keytab_store/dn.service.keytab dn/slave1@TEST.COM
    ktadd -k /opt/keytab_store/dn.service.keytab dn/slave2@TEST.COM
    ktadd -k /opt/keytab_store/jn.service.keytab jn/master@TEST.COM
    ktadd -k /opt/keytab_store/jn.service.keytab jn/slave1@TEST.COM
    ktadd -k /opt/keytab_store/jn.service.keytab jn/slave2@TEST.COM
    ktadd -k /opt/keytab_store/rm.service.keytab rm/master@TEST.COM
    ktadd -k /opt/keytab_store/rm.service.keytab rm/slave1@TEST.COM
    ktadd -k /opt/keytab_store/nm.service.keytab nm/master@TEST.COM
    ktadd -k /opt/keytab_store/nm.service.keytab nm/slave1@TEST.COM
    ktadd -k /opt/keytab_store/nm.service.keytab nm/slave2@TEST.COM
    ktadd -k /opt/keytab_store/jhs.service.keytab jhs/master@TEST.COM

Multiple principals can be placed into a single keytab. The commands above produce several files, with each component role kept in its own keytab. On a purely internal network you could in fact put all Hadoop-related principals into one big keytab file to reduce configuration complexity.
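As a rough sketch of that idea (the combined file name hadoop.service.keytab is hypothetical and not used elsewhere in this post), merging everything would simply mean pointing every ktadd at the same file inside kadmin.local:

    # hypothetical single keytab collecting every Hadoop service principal
    ktadd -k /opt/keytab_store/hadoop.service.keytab HTTP/master@TEST.COM
    ktadd -k /opt/keytab_store/hadoop.service.keytab nn/master@TEST.COM
    ktadd -k /opt/keytab_store/hadoop.service.keytab dn/master@TEST.COM
    # ...and so on for the remaining principals of each host

The rest of this post sticks with the per-role keytab files.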

Distribute the keytab files above to every machine in the cluster.

core-site.xml

Key settings

    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
    </property>
    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
    </property>
    <property>
        <name>hadoop.security.auth_to_local</name>
        <value>
            RULE:[2:$1/$2@$0]([ndj]n/.*@TEST.COM)s/.*/hdfs/
            RULE:[2:$1/$2@$0]([rn]m/.*@TEST.COM)s/.*/yarn/
            RULE:[2:$1/$2@$0](jhs/.*@TEST.COM)s/.*/mapred/
            DEFAULT
        </value>
    </property>

The configuration above means that Kerberos is used for authentication and authorization across the whole cluster.
hadoop.security.auth_to_local tells a service, when it is called by another component, how to extract the actual local user from the caller's principal. Taking the first rule as an example, it maps the principals of the namenode, datanode, and journalnode to the hdfs user.
The final DEFAULT is the fallback when none of the rules above match: it simply takes the part of the principal before the first slash as the user. For example, test/xxhost@DOMAIN.COM is recognized as the user test.
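A quick way to check how these rules actually map a principal is Hadoop's built-in HadoopKerberosName utility (a sketch; the sample principals are only illustrations):

    # prints the local user each principal maps to under hadoop.security.auth_to_local
    hadoop org.apache.hadoop.security.HadoopKerberosName nn/master@TEST.COM jhs/master@TEST.COM test/xxhost@TEST.COM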

HDFS

    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>nn/_HOST@TEST.COM</value>
    </property>
    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/opt/keytab_store/nn.service.keytab</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.internal.spnego.principal</name>
        <value>${dfs.web.authentication.kerberos.principal}</value>
    </property>
    <property>
        <name>dfs.journalnode.kerberos.principal</name>
        <value>jn/_HOST@TEST.COM</value>
    </property>
    <property>
        <name>dfs.journalnode.keytab.file</name>
        <value>/opt/keytab_store/jn.service.keytab</value>
    </property>
    <property>
        <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
        <value>${dfs.web.authentication.kerberos.principal}</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>dn/_HOST@TEST.COM</value>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/opt/keytab_store/dn.service.keytab</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>HTTP/_HOST@TEST.COM</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/opt/keytab_store/http.service.keytab</value>
    </property>
    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
    </property>
    <property>
        <name>dfs.data.transfer.protection</name>
        <value>authentication</value>
    </property>

Broadly speaking, this configures which principal and keytab each component uses. The _HOST token is effectively syntactic sugar: Hadoop replaces it with the local hostname at runtime, so the same configuration file can be used unchanged on every machine.
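Before starting anything, it is worth sanity-checking that a keytab and principal work on a given host (a sketch; the paths and principal follow the ones created above):

    # list the principals contained in the namenode keytab
    klist -kt /opt/keytab_store/nn.service.keytab
    # obtain a ticket as the namenode principal on host master, then show it
    kinit -kt /opt/keytab_store/nn.service.keytab nn/master@TEST.COM
    klist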

Securing the datanode

Because datanode data transfer does not go through the Hadoop RPC framework, the datanode cannot rely on Kerberos alone for authentication there. To address this, there are two kinds of configuration that secure datanode data transfer:

  • JSVC
  • TLS/SSL

The JSVC approach works roughly as follows: the JSVC tool lets the datanode start on a privileged port (a port below 1024). This model assumes an attacker cannot obtain root privileges and therefore cannot stand up a rogue datanode. Before Hadoop 2.6.0 this was the only option; its configuration is not covered here. Hadoop 2.6.0 introduced the SASL approach, which secures data transfer via TLS/SSL, and that is the approach described below.

Certificate generation and installation

For the principles behind TLS/SSL see the relevant documentation (link not included here).

First make sure openssl is already installed on the machines. The concrete configuration follows. The core idea is to create a private CA, use that CA to sign all other certificates, and install the CA certificate into every machine's truststore, so that the machines can talk to each other over TLS/SSL.

Then pick any machine in the cluster and generate the CA certificate there first; here we do it on master:

    openssl req -new -x509 -keyout ca_private.key -out ca_cert -days 9999 -subj '/C=CN/ST=chengdu/L=chengdu/O=bigdata/OU=bigdata/CN=master'

Copy the CA private key and CA certificate above to every machine, then perform the following steps on each machine. Of course, if the same password is used when generating all the certificates, everything can also be done on a single machine and the resulting keystores and truststores distributed to all machines afterwards.

    # generate this machine's key pair and keystore
    keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=slave2, OU=bigdata, O=bigdata, L=chengdu, ST=chengdu, C=CN"
    # import the CA certificate into this machine's truststore
    keytool -keystore truststore -alias CARoot -import -file ca_cert
    # import the CA certificate into this machine's keystore
    keytool -keystore keystore -alias CARoot -import -file ca_cert
    # export this machine's certificate signing request
    keytool -certreq -alias localhost -keystore keystore -file local_cert
    # sign this machine's certificate with the CA private key
    openssl x509 -req -CA ca_cert -CAkey ca_private.key -in local_cert -out local_cert_signed -days 9999 -CAcreateserial
    # import the signed certificate back into this machine's keystore
    keytool -keystore keystore -alias localhost -import -file local_cert_signed
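To confirm the CA certificate and the signed certificate both ended up in the stores (a sketch; keytool will prompt for the store password chosen during -genkey):

    # both the CARoot entry and the signed localhost entry should appear
    keytool -list -v -keystore keystore
    keytool -list -v -keystore truststore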

Key hdfs-site.xml settings

Set dfs.http.policy to HTTPS_ONLY.
Set dfs.data.transfer.protection to any one of authentication, integrity, or privacy; for a typical internal cluster authentication is enough:

  • authentication: authentication only
  • integrity: in addition to authentication, verifies that the data has not been tampered with
  • privacy: in addition to authentication and integrity checking, the data is also encrypted in transit

ssl-client.xml and ssl-server.xml

In the same directory as core-site.xml, Hadoop usually ships two template files, ssl-client.xml.example and ssl-server.xml.example. We can simply drop the .example suffix and use them as the configuration files. They configure the keystore the component presents when it acts as an SSL server, and the truststore of certificates it trusts when it acts as an SSL client.
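As a small sketch (assuming the usual $HADOOP_HOME/etc/hadoop layout), enabling the templates is just a copy:

    cd $HADOOP_HOME/etc/hadoop
    cp ssl-client.xml.example ssl-client.xml
    cp ssl-server.xml.example ssl-server.xml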

ssl-client.xml is configured as follows:

    <configuration>
        <property>
            <name>ssl.client.truststore.location</name>
            <value>/opt/ssl_store/truststore</value>
            <description>Truststore to be used by clients like distcp. Must be specified.</description>
        </property>
        <property>
            <name>ssl.client.truststore.password</name>
            <value>123456</value>
            <description>Optional. Default value is "".</description>
        </property>
        <property>
            <name>ssl.client.truststore.type</name>
            <value>jks</value>
            <description>Optional. The keystore file format, default value is "jks".</description>
        </property>
        <property>
            <name>ssl.client.truststore.reload.interval</name>
            <value>10000</value>
            <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
        </property>
        <property>
            <name>ssl.client.keystore.location</name>
            <value>/opt/ssl_store/keystore</value>
            <description>Keystore to be used by clients like distcp. Must be specified.</description>
        </property>
        <property>
            <name>ssl.client.keystore.password</name>
            <value>123456</value>
            <description>Optional. Default value is "".</description>
        </property>
        <property>
            <name>ssl.client.keystore.keypassword</name>
            <value>123456</value>
            <description>Optional. Default value is "".</description>
        </property>
        <property>
            <name>ssl.client.keystore.type</name>
            <value>jks</value>
            <description>Optional. The keystore file format, default value is "jks".</description>
        </property>
    </configuration>

ssl-server.xml

    <configuration>
        <property>
            <name>ssl.server.keystore.password</name>
            <value>123456</value>
            <description>Must be specified.</description>
        </property>
        <property>
            <name>ssl.server.keystore.keypassword</name>
            <value>123456</value>
            <description>Must be specified.</description>
        </property>
        <property>
            <name>ssl.server.keystore.type</name>
            <value>jks</value>
            <description>Optional. The keystore file format, default value is "jks".</description>
        </property>
        <property>
            <name>ssl.server.exclude.cipher.list</name>
            <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
            SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
            SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
            SSL_RSA_WITH_RC4_128_MD5</value>
            <description>Optional. The weak security cipher suites that you want excluded
            from SSL communication.</description>
        </property>
    </configuration>
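ssl-server.xml also needs to point at the keystore and truststore themselves; assuming the same /opt/ssl_store stores and the same password as on the client side (these values are assumptions that mirror ssl-client.xml above), the entries would look roughly like this:

    <property>
        <name>ssl.server.keystore.location</name>
        <value>/opt/ssl_store/keystore</value>
        <description>Keystore to be used. Must be specified.</description>
    </property>
    <property>
        <name>ssl.server.truststore.location</name>
        <value>/opt/ssl_store/truststore</value>
        <description>Truststore to be used. Must be specified.</description>
    </property>
    <property>
        <name>ssl.server.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".</description>
    </property>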

The 123456 in the configuration above is the password we used when creating the certificates.

yarn

Overall configuration

    <property>
        <name>yarn.resourcemanager.principal</name>
        <value>rm/_HOST@TEST.COM</value>
    </property>
    <property>
        <name>yarn.resourcemanager.keytab</name>
        <value>/opt/keytab_store/rm.service.keytab</value>
    </property>
    <property>
        <name>yarn.nodemanager.principal</name>
        <value>nm/_HOST@TEST.COM</value>
    </property>
    <property>
        <name>yarn.nodemanager.keytab</name>
        <value>/opt/keytab_store/nm.service.keytab</value>
    </property>
    <property>
        <!-- a secure cluster must use the LinuxContainerExecutor below -->
        <name>yarn.nodemanager.container-executor.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>
    <property>
        <name>yarn.nodemanager.linux-container-executor.group</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>yarn.nodemanager.linux-container-executor.path</name>
        <value>/opt/hadoop-3.1.3/bin/container-executor</value>
    </property>

container-executor

build LinuxContainerExecutor

yarn.nodemanager.linux-container-executor.path above specifies the path of container-executor, the executable used by the LinuxContainerExecutor.
A Hadoop release package usually already ships this file under the bin directory.
The executable needs a configuration file, container-executor.cfg; by default it loads it from $HADOOP_HOME/etc/hadoop/container-executor.cfg.

However, that directory also contains Hadoop's other configuration files, and container-executor requires every level of the path containing container-executor.cfg to be owned by root and not writable by others. This tends to cause all sorts of strange problems when starting the other components.

So we need to point container-executor at a different location for container-executor.cfg. The problem is that the configuration file path is compiled into the container-executor binary, so re-specifying it means rebuilding container-executor. The build steps are as follows (a consolidated shell sketch follows the list):

  • First download the Hadoop source code for the same version
  • Go into hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager within the source tree
  • Build with mvn package -DskipTests=true -Dcontainer-executor.conf.dir=/etc/hadoop/ ; the container-executor.conf.dir parameter specifies the new location of container-executor.cfg
  • When the build finishes, the new container-executor is under target/native/target/usr/local/bin in the build directory; copy it to $HADOOP_HOME/bin, replacing the original binary
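Put together, the steps above look roughly like this in the shell (a sketch; the source directory name is an assumption matching the 3.1.3 release used in this cluster):

    # assumes the hadoop-3.1.3 source tree has already been downloaded and unpacked
    cd hadoop-3.1.3-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
    # bake /etc/hadoop/ in as the directory container-executor.cfg is loaded from
    mvn package -DskipTests=true -Dcontainer-executor.conf.dir=/etc/hadoop/
    # replace the stock binary with the freshly built one
    cp target/native/target/usr/local/bin/container-executor /opt/hadoop-3.1.3/bin/container-executor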

Configure container-executor.cfg

In /etc/hadoop/, create container-executor.cfg with the following content:

    yarn.nodemanager.linux-container-executor.group=hadoop
    banned.users=hdfs,yarn,mapred,bin
    min.user.id=1000
    allowed.system.users=
    feature.tc.enabled=false

Note that no line in this file may contain stray spaces, and the value of yarn.nodemanager.linux-container-executor.group must match the one in yarn-site.xml.

Summary of required permission settings

File permission changes

    chown root:hadoop /opt/hadoop-3.1.3/bin/container-executor
    chmod 6050 /opt/hadoop-3.1.3/bin/container-executor
    chown root:hadoop /etc/hadoop/container-executor.cfg
    chmod 400 /etc/hadoop/container-executor.cfg

Assuming yarn.nodemanager.local-dirs in yarn-site.xml is set to /home/var/data/hadoop/nodemanager/data and yarn.nodemanager.log-dirs is set to /home/var/data/hadoop/nodemanager/log, the following permissions also need to be set:

    chown yarn:hadoop /home/var/data/hadoop/nodemanager/data
    chown yarn:hadoop /home/var/data/hadoop/nodemanager/log
    chmod 755 /home/var/data/hadoop/nodemanager/data
    chmod 755 /home/var/data/hadoop/nodemanager/log
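With the binary, the cfg file, and the directories in place, container-executor can check its own setup (a sketch; to my knowledge the --checksetup flag validates the cfg file and the binary's ownership and permissions, exiting non-zero with an error message if something is off):

    /opt/hadoop-3.1.3/bin/container-executor --checksetup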

mapreduce

    <property>
        <name>mapreduce.jobhistory.keytab</name>
        <value>/opt/keytab_store/jhs.service.keytab</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.principal</name>
        <value>jhs/_HOST@TEST.COM</value>
    </property>

Startup

Once everything is configured, start the cluster the same way as before. The only difference is that with SSL/TLS enabled on HDFS, the original port 9870 becomes 9871 and must be accessed over https; in our case the address is https://master:9871.
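A quick way to confirm the secured cluster is actually usable (a sketch; the principal and keytab are the ones created earlier, and any valid user principal works just as well) is to authenticate and run a simple HDFS command; without a valid ticket the same command should fail with a GSS/Kerberos error:

    # obtain a ticket, then list the HDFS root over the kerberized RPC
    kinit -kt /opt/keytab_store/nn.service.keytab nn/master@TEST.COM
    hdfs dfs -ls /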


Feel free to follow my personal WeChat official account "东南偏北UP", where I write about the coding life, thoughts on the industry, and tech commentary.