

Big Data Troubleshooting Series – In a Kerberos-secured big data environment, YARN Container launch failures cause spark/hive jobs to fail

Preface

Hello everyone, I'm 明哥!

Recently, at several different customer sites, I ran into the same situation: after Kerberos was enabled in a big data cluster, spark/hive jobs submitted to YARN failed because the YARN Container could not be launched. This post summarizes the knowledge behind the issue and shares it with you; I hope you find it useful.

1 Problem 1: Symptoms

At one customer site, kerberos authentication was enabled in the big data cluster. hive on mr / hive on spark jobs submitted to yarn failed, and the yarn web ui showed the following error:

Application xxx failed 2 times due to AM container for xxx exited with exitCode -1000
......
main : run as user is hs_cic
main : requested yarn user is hs_cic
User hs_cic not found 
Failing the application.

2 Problem 2: Symptoms

At another customer site, kerberos authentication was enabled in the big data cluster. spark on hive jobs submitted to yarn failed, and the yarn web ui showed the following error:

main : run as user is app-user
main : requested yarn user is app-user
User app-user not found 
Failing the application.

3 Problem Analysis

After the above problems occurred, during the analysis I noticed that querying the job's detailed logs with the command yarn logs -applicationId xxx returned nothing (it had already been confirmed that yarn log aggregation, yarn.log-aggregation-enable, was turned on), and that the job's aggregated-log directory had been created on the hdfs file system but contained no files;
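
For reference, the diagnosis above roughly corresponds to the following commands (a minimal sketch; the config path, the application id, and the /tmp/logs aggregation directory are assumed CDH-style defaults, and the user hs_cic is taken from the error message):

# Confirm that log aggregation is enabled (config path may differ per distribution)
grep -A1 'yarn.log-aggregation-enable' /etc/hadoop/conf/yarn-site.xml

# Try to fetch the aggregated logs of the failed application (placeholder application id)
yarn logs -applicationId application_1234567890123_0001

# List the job's aggregated-log directory on HDFS; /tmp/logs is the default
# yarn.nodemanager.remote-app-log-dir and may be configured differently
hdfs dfs -ls /tmp/logs/hs_cic/logs/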

In addition, in the cluster where the hive on mr/spark jobs failed, I noticed that hive impersonation was enabled: hive.server2.enable.doAs=true.

Combining this with the key error message in the yarn web ui, “Application xxx failed 2 times due to AM container for xxx exited with exitCode -1000…
User hs_cic not found
Failing the application.”, it can be confirmed that the business user did not exist on the YARN NodeManager nodes, so launching the yarn container failed and the job could not run.
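
To double-check this conclusion, one can verify on a NodeManager node whether the requested user can be resolved at all, and whether hive impersonation is in effect (a sketch; the config paths are assumed CDH-style defaults and hs_cic is the user from this case):

# On a NodeManager node: can the OS resolve the business user?
id hs_cic
getent passwd hs_cic   # empty output means neither /etc/passwd nor LDAP/SSSD knows this user

# Is hive impersonation (doAs) enabled?
grep -A1 'hive.server2.enable.doAs' /etc/hive/conf/hive-site.xml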

4 Root Cause

  • In a cluster without Kerberos security enabled, yarn.nodemanager.container-executor.class can be either DefaultContainerExecutor or LinuxContainerExecutor when launching yarn container processes;
  • In a cluster with Kerberos security enabled, yarn.nodemanager.container-executor.class must be LinuxContainerExecutor, which uses setuid under the hood to switch to the business user before launching the container process, so the business user must exist on every nodemanager node;
  • When the business user has only been added to the KDC but has not been created as an OS user on the yarn nodemanager nodes, the nodemanager cannot launch the container process because the user does not exist, and the job fails because it cannot obtain container resources (how to check the executor configuration is sketched after this list).
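
The executor configuration itself can be verified with something like the following (a sketch; the config path is an assumed CDH-style default):

# Which container executor is the NodeManager using?
grep -A1 'yarn.nodemanager.container-executor.class' /etc/hadoop/conf/yarn-site.xml
# In a Kerberos-secured cluster this should be
# org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor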

5 Solution

  • The fix is simple: create the corresponding business user with the useradd command on every node in the cluster (at least on the yarn nodemanager nodes); under the hood this creates the user and its group and writes them to the file /etc/passwd (see the sketch after this list);
  • If there are too many nodes and doing this by hand is too tedious, you can also configure ldap and create the business users centrally in ldap. Note that this means configuring the NodeManager hosts to look up users from LDAP, not authenticating users against ldap (--enableldap vs --enableldapauth); the details are not covered here;
  • For the hive on mr/spark scenario, you can also disable hive impersonation (hive.server2.enable.doAs=false). In that case hiveserver2 compiles and submits the sql jobs to yarn as the system user hive, and since the cdh installation has already created system users such as hdfs/yarn/hive on every node of the cluster, execution will succeed;
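
A minimal sketch of the first two options follows; the host names, uid/gid, LDAP server and base DN are all placeholders and must be adapted to your environment:

# Option 1: create the business user and its group locally on every NodeManager node,
# keeping the uid/gid consistent across nodes
for host in nm-node-01 nm-node-02 nm-node-03; do
  ssh root@"$host" 'groupadd -g 1500 hs_cic 2>/dev/null; useradd -u 1500 -g hs_cic hs_cic'
done

# Option 2 (RHEL/CentOS): let the OS look up users from LDAP instead of local accounts.
# Note --enableldap (user/group lookup) as opposed to --enableldapauth (authentication).
authconfig --enableldap --ldapserver=ldap://ldap.example.com --ldapbasedn='dc=example,dc=com' --update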

6 Technical Background

  • DefaultContainerExecutor: When using the default value for yarn.nodemanager.container-executor.class, which is org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor, the launched container process has the same Unix user as the NodeManager, which normally is yarn;
  • LinuxContainerExecutor: The secure container executor on Linux environments is org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor. This executor runs the containers either as the YARN user who submitted the application (when full security is enabled) or as a dedicated user (defaults to nobody) when full security is not enabled.
  • When full security is enabled, the LinuxContainerExecutor requires all user accounts to be created on the cluster nodes where the containers are launched. It uses a setuid executable that is included in the Hadoop distribution. The NodeManager uses this executable to launch and kill containers. The setuid executable switches to the user who has submitted the application and launches or kills the containers.
  • The LinuxContainerExecutor does have some requirements: if running in non-secure mode, by default the LCE runs all jobs as user “nobody”. This user can be changed by setting “yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user” to the desired user. However, it can also be configured to run jobs as the user submitting the job; in that case “yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users” should be set to false (see the check sketched after this list).
  • After integrating hadoop with openldap, hdfs/hive/sentry can find users in openldap, but yarn cannot (yarn is the only exception).
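
The non-secure-mode properties mentioned above can be checked with something like the following (a sketch; the config path is an assumed default):

# Which user does the LCE use in non-secure mode, and is it limited to that single user?
grep -A1 'yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users' /etc/hadoop/conf/yarn-site.xml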