Big Data: Setting Up Hadoop 2.7.2, Hive 2.3.3, and Spark 3.1.2


Introduction to Hadoop

Hadoop is an open-source Apache framework written in Java that allows large datasets to be processed in a distributed fashion across clusters of computers using simple programming models. Applications built on Hadoop run in an environment that provides distributed storage and computation across the cluster. Hadoop is designed to scale from a single server up to thousands of machines, each offering local computation and storage.

Introduction to Hive

Apache Hive is a data warehouse built on top of Hadoop. It maps structured data files to database tables and provides simple SQL-style query capabilities, translating SQL statements into MapReduce jobs for execution. Note that Hive itself is not a database.

Hive depends on HDFS and MapReduce. Its operations on HDFS resemble SQL; the dialect is called HQL, and it offers a rich set of SQL-style queries for analyzing data stored in HDFS. HQL statements compile into MapReduce jobs that query, summarize, and analyze data. As a result, even users unfamiliar with MapReduce can conveniently query, summarize, and analyze data with a SQL-like language, while MapReduce developers can plug their own mappers and reducers into Hive for more complex analysis.
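
As a minimal illustration of how HQL maps onto MapReduce (the table and columns here are hypothetical, not part of the setup below), a simple aggregation like the following is compiled by Hive into a map phase that emits user_id keys and a reduce phase that counts them:

# Hypothetical example: assumes a table page_views(user_id STRING, url STRING) exists.
hive -e "SELECT user_id, COUNT(*) AS views FROM page_views GROUP BY user_id;"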

Introduction to Apache Spark

A distributed, open-source processing system for big data workloads

Apache Spark is a distributed, open-source processing system for big data workloads. It uses in-memory caching and optimized query execution to run fast analytic queries against data of any size. It provides development APIs in Java, Scala, Python, and R, and supports reusing code across multiple workloads: batch processing, interactive queries, real-time analytics, machine learning, graph processing, and more.

This article first sets up the base environment: JDK 1.8 + MySQL 5.7.

It then installs Hadoop 2.7.2, Hive 2.3.3, and Spark 3.1.2.

Everything is deployed on a single node.

1. Create a directory and extract the JDK archive

[root@localhost ~]# mkdir  jdk
[root@localhost ~]# cd jdk/
[root@localhost jdk]# ls
jdk-8u202-linux-x64.tar.gz
[root@localhost jdk]# ll
total 189496
-rw-r--r--. 1 root root 194042837 Oct 18 12:05 jdk-8u202-linux-x64.tar.gz
[root@localhost jdk]# 
[root@localhost jdk]# tar xvf jdk-8u202-linux-x64.tar.gz

2. Configure the environment variables

[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 3 /etc/profile
export JAVA_HOME=/root/jdk/jdk1.8.0_202/
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@localhost ~]# 
[root@localhost ~]# source /etc/profile
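
To confirm the JDK is picked up correctly, a quick check like the following should report version 1.8.0_202:

echo $JAVA_HOME   # expected: /root/jdk/jdk1.8.0_202/
java -version     # expected: java version "1.8.0_202"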

3. Download and install MySQL and enable it at boot

[root@localhost ~]# mkdir mysql
[root@localhost ~]# cd mysql
[root@localhost mysql]# wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# tar xvf mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar 
[root@localhost mysql]# yum install ./*.rpm
[root@localhost mysql]# systemctl start mysqld.service 
[root@localhost mysql]# 
[root@localhost mysql]# systemctl enable mysqld.service 
[root@localhost mysql]# 
[root@localhost mysql]#

4. Look up the default MySQL password, change it, create a new user, and enable remote login

[root@localhost mysql]# sudo grep 'temporary password' /var/log/mysqld.log
2021-10-18T06:12:35.519726Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: eNHu<sXHt3rq
[root@localhost mysql]# 
[root@localhost mysql]# 
[root@localhost mysql]# mysql -u root -p 
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.25


Copyright (c) 2000, 2021, Oracle and/or its affiliates.


Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> 
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Cby123..';
Query OK, 0 rows affected (0.02 sec)


mysql> 
mysql> 
mysql> use mysql;
Database changed
mysql> 
mysql> update user set host='%' where user ='root';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0


mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.01 sec)


mysql> set global validate_password_mixed_case_count=0;
Query OK, 0 rows affected (0.00 sec)


mysql>  set global validate_password_number_count=3;
Query OK, 0 rows affected (0.00 sec)


mysql> set global validate_password_special_char_count=0;
Query OK, 0 rows affected (0.00 sec)


mysql> set global validate_password_length=3;
Query OK, 0 rows affected (0.00 sec)


mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| validate_password_check_user_name    | OFF   |
| validate_password_dictionary_file    |       |
| validate_password_length             | 3     |
| validate_password_mixed_case_count   | 0     |
| validate_password_number_count       | 3     |
| validate_password_policy             | LOW   |
| validate_password_special_char_count | 0     |
+--------------------------------------+-------+
7 rows in set (0.00 sec)


mysql> create user 'cby'@'%' identified by 'cby';
Query OK, 0 rows affected (0.00 sec)


mysql> grant all on *.* to 'cby'@'%';
Query OK, 0 rows affected (0.00 sec)


mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)


mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)


mysql> CREATE DATABASE dss_dev;
Query OK, 1 row affected (0.00 sec)


mysql> 
mysql> select host,user,plugin from user;
+-----------+---------------+-----------------------+
| host      | user          | plugin                |
+-----------+---------------+-----------------------+
| %         | root          | mysql_native_password |
| localhost | mysql.session | mysql_native_password |
| localhost | mysql.sys     | mysql_native_password |
+-----------+---------------+-----------------------+
3 rows in set (0.01 sec)




mysql>

Note: if the plugin shown above for root is not mysql_native_password, change it with the following command:

update user set plugin='mysql_native_password' where user='root';
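
As a quick sanity check (the <server-ip> placeholder below stands for this machine's address and is not from the original text), the new account should now be able to log in from another host:

# Run from another machine; replace <server-ip> with this server's address.
mysql -h <server-ip> -P 3306 -u cby -p
# Entering the password 'cby' should open a mysql> prompt.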

5. Add a hosts entry and set up passwordless SSH login

[root@localhost ~]# mkdir Hadoop
[root@localhost ~]# 
[root@localhost ~]# vim /etc/hosts
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 namenode
[root@localhost ~]# ssh-keygen
[root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@127.0.0.1
[root@localhost ~]#
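
If the key was copied correctly, SSH should no longer prompt for a password:

ssh root@127.0.0.1 hostname   # should print the hostname without a password prompt
ssh root@namenode hostname    # also exercises the /etc/hosts entry added above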

6. Download Hadoop, extract it, and create the required directories

[root@localhost ~]# cd Hadoop/
[root@localhost Hadoop]# ls
[root@localhost Hadoop]# wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.2/hadoop-2.7.2.tar.gz
[root@localhost Hadoop]# tar xvf hadoop-2.7.2.tar.gz 
[root@localhost Hadoop]# 
[root@localhost Hadoop]# mkdir  -p /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/namenode
[root@localhost Hadoop]# 
[root@localhost Hadoop]# 
[root@localhost Hadoop]# mkdir  -p /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode

7. Add the Hadoop environment variables

[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 8 /etc/profile
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
[root@localhost ~]# 
[root@localhost ~]# source /etc/profile
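
A quick way to confirm the variables took effect:

hadoop version   # should report Hadoop 2.7.2
which hdfs       # should resolve to /root/Hadoop/hadoop-2.7.2/bin/hdfs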

8. Modify the Hadoop core configuration

[root@localhost ~]# cd Hadoop/hadoop-2.7.2/
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
[root@localhost hadoop]# 
[root@localhost hadoop]# 
[root@localhost hadoop]# tail /root/Hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
<!-- Put site-specific property overrides in this file. -->


<configuration>
    <!-- Address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>

    <!-- Directory where Hadoop stores files generated at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/Hadoop/hadoop-2.7.2/data/tmp</value>
    </property>
</configuration>

9. Modify the Hadoop HDFS directory configuration

[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml 
[root@localhost hadoop]# 
[root@localhost hadoop]# 
[root@localhost hadoop]# 
[root@localhost hadoop]# tail -n 15  /root/Hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml 
<configuration>
   <property> 
      <name>dfs.replication</name> 
      <value>1</value> 
   </property> 
   <property> 
      <name>dfs.name.dir</name> 
      <value>/root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/namenode</value> 
   </property> 
   <property> 
      <name>dfs.data.dir</name>
      <value>/root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode</value> 
   </property>
</configuration>

10. Modify the Hadoop YARN configuration

[root@localhost hadoop]# 
[root@localhost hadoop]# vim  /root/Hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
[root@localhost hadoop]# 
[root@localhost hadoop]# 
[root@localhost hadoop]# tail -n 6 /root/Hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
<configuration>


   <property> 
      <name>yarn.nodemanager.aux-services</name> 
      <value>mapreduce_shuffle</value> 
   </property>


</configuration>
[root@localhost hadoop]# 
[root@localhost hadoop]# cp /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml.template /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
[root@localhost hadoop]# 
[root@localhost hadoop]# 
[root@localhost hadoop]# tail /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
<configuration>
   <property> 
      <name>mapreduce.framework.name</name> 
      <value>yarn</value> 
   </property>
</configuration>
[root@localhost hadoop]#

11. Modify the Hadoop environment script

[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/hadoop-env.sh


Change JAVA_HOME:
export JAVA_HOME=/root/jdk/jdk1.8.0_202/


[root@localhost ~]# hdfs namenode -format
[root@localhost ~]# start-dfs.sh 
[root@localhost ~]# start-yarn.sh

If the NameNode is formatted too many times, the clusterID stored by the DataNode no longer matches and the DataNode fails to start. Delete the DataNode's VERSION file, then format and start again:

[root@localhost ~]# rm -rf /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode/current/VERSION 
[root@localhost ~]# hdfs namenode -format
[root@localhost ~]# start-dfs.sh 
[root@localhost ~]# start-yarn.sh
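
If both scripts start cleanly, jps should show all five Hadoop daemons on this single node (an expected listing, not output captured from the original environment):

jps
# Expected daemons:
#   NameNode, DataNode, SecondaryNameNode   (HDFS)
#   ResourceManager, NodeManager            (YARN)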

Access Hadoop in the browser

The default port for the Hadoop (NameNode) web UI is 50070. Use the following URL to open it in a browser:

http://localhost:50070/

Verify all applications on the cluster

The default port for viewing all applications on the cluster (the YARN ResourceManager UI) is 8088. Use the following URL to access it:

http://localhost:8088/

12. Create the hive directory and extract the archive

[root@localhost ~]# mkdir hive
[root@localhost ~]# cd hive
[root@localhost hive]# wget https://archive.apache.org/dist/hive/hive-2.3.3/apache-hive-2.3.3-bin.tar.gz
[root@localhost hive]# tar xvf apache-hive-2.3.3-bin.tar.gz 
[root@localhost hive]#

13. Create the Hive configuration files from the templates

[root@localhost hive]# cd /root/hive/apache-hive-2.3.3-bin/conf/
[root@localhost conf]# cp hive-env.sh.template hive-env.sh
[root@localhost conf]# cp hive-default.xml.template hive-site.xml
[root@localhost conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@localhost conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

14. Create directories in HDFS and set their permissions

[root@localhost conf]# hadoop fs -mkdir -p /data/hive/warehouse
21/10/18 14:27:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]# 
[root@localhost conf]# hadoop fs -mkdir /data/hive/tmp
21/10/18 14:27:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]# 
[root@localhost conf]# hadoop fs -mkdir /data/hive/log
21/10/18 14:27:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]# 
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/warehouse
21/10/18 14:27:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/tmp
21/10/18 14:27:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/log
21/10/18 14:27:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]#
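
The directories and the 777 permissions can be confirmed with a recursive listing:

hadoop fs -ls -R /data/hive
# warehouse, tmp, and log should all be listed as drwxrwxrwx.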

15. Modify the Hive configuration file

[root@localhost conf]# vim hive-site.xml


The Hive configuration is as follows:

<property>
  <name>hive.exec.scratchdir</name>
  <value>hdfs://127.0.0.1:9000/data/hive/tmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://127.0.0.1:9000/data/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>hdfs://127.0.0.1:9000/data/hive/log</value>
</property>


<!-- Disable Hive metastore schema version verification; otherwise Spark programs will fail at startup -->
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>




Configure the MySQL host and port, and the database that will hold the metastore:


<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<!-- MySQL JDBC driver class name -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
   <value>com.mysql.jdbc.Driver</value>
</property>
<!-- User name for connecting to MySQL -->
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<!-- Password for the MySQL user -->
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>Cby123..</value>
</property>

Replace the ${system:java.io.tmpdir} and ${system:user.name} references in the file with real directories and the actual user name, or add the following properties:

  <property>
    <name>system:java.io.tmpdir</name>
    <value>/tmp/hive/java</value>
  </property>
  <property>
    <name>system:user.name</name>
    <value>${user.name}</value>
  </property>

Also change the local scratch directories:

 <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>


<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/root/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>

16. Configure the MySQL JDBC driver for Hive

[root@localhost lib]# cd /root/hive/apache-hive-2.3.3-bin/lib/
[root@localhost lib]# wget https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.49.tar.gz
[root@localhost lib]# tar xvf mysql-connector-java-5.1.49.tar.gz 
[root@localhost lib]# cp mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar .
[root@localhost bin]# 
[root@localhost bin]# vim /root/hive/apache-hive-2.3.3-bin/conf/hive-env.sh
[root@localhost bin]# tail -n 3 /root/hive/apache-hive-2.3.3-bin/conf/hive-env.sh
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HIVE_CONF_DIR=/root/hive/apache-hive-2.3.3-bin/conf
export HIVE_AUX_JARS_PATH=/root/hive/apache-hive-2.3.3-bin/lib
[root@localhost bin]#

17. Configure the Hive environment variables

[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 6 /etc/profile
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HIVE_CONF_DIR=/root/hive/apache-hive-2.3.3-bin/conf
export HIVE_AUX_JARS_PATH=/root/hive/apache-hive-2.3.3-bin/lib
export HIVE_PATH=/root/hive/apache-hive-2.3.3-bin/
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HIVE_PATH/bin


[root@localhost bin]# ./schematool -dbType mysql -initSchema
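
If schematool finishes with a "schemaTool completed" message, the metastore tables should now exist in MySQL; one way to confirm (using the root password set earlier):

mysql -u root -p -e "USE hive; SHOW TABLES;"
# Expect metastore tables such as DBS, TBLS, COLUMNS_V2, VERSION.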

After initialization completes, update the MySQL connection URL (the host, port, and metastore database name):

[root@localhost conf]# vim hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://127.0.0.1:3306/hive?characterEncoding=utf8&amp;useSSL=false</value>
</property>

[root@localhost bin]# nohup hive --service metastore &
[root@localhost bin]# nohup hive --service hiveserver2 &
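
HiveServer2 listens on port 10000 by default and may take a few seconds to start accepting connections; a quick connectivity check with beeline (user root here mirrors this root-only setup):

beeline -u jdbc:hive2://127.0.0.1:10000 -n root -e "show databases;"
# Should list at least the `default` database.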

18. Create the spark directory and download the required files

[root@localhost ~]# mkdir  spark
[root@localhost ~]# 
[root@localhost ~]# cd spark
[root@localhost spark]# 
[root@localhost spark]# wget https://dlcdn.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-without-hadoop.tgz --no-check-certificate
[root@localhost spark]# tar xvf  spark-3.1.2-bin-without-hadoop.tgz

19. Configure the Spark environment variables and create the config files from the templates

[root@localhost ~]# vim /etc/profile
[root@localhost ~]# 
[root@localhost ~]# tail -n 3 /etc/profile
export SPARK_HOME=/root/spark/spark-3.1.2-bin-without-hadoop/
export PATH=$PATH:$SPARK_HOME/bin


[root@localhost spark]# cd /root/spark/spark-3.1.2-bin-without-hadoop/conf/
[root@localhost conf]# cp spark-env.sh.template spark-env.sh
[root@localhost conf]# cp spark-defaults.conf.template spark-defaults.conf
[root@localhost conf]# cp metrics.properties.template metrics.properties

20. Configure the Spark runtime environment (spark-env.sh)

[root@localhost conf]# cp workers.template workers
[root@localhost conf]# vim spark-env.sh


export JAVA_HOME=/root/jdk/jdk1.8.0_202
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2
export HADOOP_CONF_DIR=/root/Hadoop/hadoop-2.7.2/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/root/Hadoop/hadoop-2.7.2/bin/hadoop classpath)
export SPARK_MASTER_HOST=127.0.0.1
export SPARK_MASTER_PORT=7077
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=50 -Dspark.history.fs.logDirectory=hdfs://127.0.0.1:9000/spark-eventlog"

21. Modify the default configuration file

[root@localhost conf]# vim spark-defaults.conf


spark.master                     spark://127.0.0.1:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://127.0.0.1:9000/spark-eventlog
spark.eventLog.compress          true
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.memory              3g
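
One caveat: spark.eventLog.dir must point at a directory that already exists in HDFS, otherwise applications fail at submission with a missing-directory error. Creating it ahead of time is a single command:

hadoop fs -mkdir -p /spark-eventlog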

22. Configure the worker nodes

[root@localhost conf]# vim workers


[root@localhost conf]# cat workers
127.0.0.1


[root@localhost conf]# 


[root@localhost sbin]# /root/spark/spark-3.1.2-bin-without-hadoop/sbin/start-all.sh
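
To verify the standalone cluster, jps should now also list a Master and a Worker process, and the bundled SparkPi example can be submitted (the examples jar path below assumes the stock layout of the 3.1.2 distribution):

jps   # should now also show Master and Worker

# Submit the bundled SparkPi example to the standalone master.
spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://127.0.0.1:7077 \
  /root/spark/spark-3.1.2-bin-without-hadoop/examples/jars/spark-examples_2.12-3.1.2.jar 10
# The driver output should contain a line like: Pi is roughly 3.14...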

Verify the application

The Spark master web UI uses port 8080 by default. Use the following URL to access it:

http://localhost:8080/
