Welcome to my GitHub
https://github.com/zq2599/blog_demos
Contents: a categorized index of all my original articles with companion source code, covering Java, Docker, Kubernetes, DevOps, and more.
Navigation for the *hive study notes* series
- Basic data types
- Complex data types
- External and internal tables
- Partitioned tables
- Bucketing
- HiveQL basics
- Built-in functions
- Sqoop
- Basic UDFs
- User-defined aggregate functions (UDAF)
- UDTF
Overview
- This is the second installment of the *hive study notes* series. Having covered the basic types earlier, this article looks at the complex data types.
- There are four complex data types in total:
- ARRAY: an array
- MAP: key-value pairs
- STRUCT: a named collection of fields
- UNION: a value chosen from several declared data types; the value must exactly match one of those types
- Let's go through them one by one.
Preparing the environment
- Make sure Hadoop is running.
- Enter the Hive CLI in interactive mode.
- Run the following command so that query results include column headers:
set hive.cli.print.header=true;
ARRAY
- Create a table named <font color="blue">t2</font> with just two columns, person and friends: <font color="blue">person</font> is a string and <font color="blue">friends</font> is an array. When importing data from a text file, the separator between person and friends is a <font color="red">pipe</font>, and the elements inside friends are separated by <font color="blue">commas</font>. Note the syntax for declaring the separators:
create table if not exists t2(
  person string,
  friends array<string>
)
row format delimited
fields terminated by '|'
collection items terminated by ',';
- Create a text file <font color="blue">002.txt</font> with the content below. There are two records: the first has a person value of tom, and its friends field holds three comma-separated elements:
tom|tom_friend_0,tom_friend_1,tom_friend_2
jerry|jerry_friend_0,jerry_friend_1,jerry_friend_2,jerry_friend_3,jerry_friend_4,jerry_friend_5
- Run the following statement to load the data from the local 002.txt file into table t2:
load data local inpath '/home/hadoop/temp/202010/25/002.txt' into table t2;
- View all the data:
hive> select * from t2;
OK
t2.person	t2.friends
tom	["tom_friend_0","tom_friend_1","tom_friend_2"]
jerry	["jerry_friend_0","jerry_friend_1","jerry_friend_2","jerry_friend_3","jerry_friend_4","jerry_friend_5"]
Time taken: 0.052 seconds, Fetched: 2 row(s)
- SQL to query individual elements of friends:
select person, friends[0], friends[3] from t2;
The result is below; the first record has no friends[3], so it shows NULL:
hive> select person, friends[0], friends[3] from t2;
OK
person	_c1	_c2
tom	tom_friend_0	NULL
jerry	jerry_friend_0	jerry_friend_3
Time taken: 0.052 seconds, Fetched: 2 row(s)
- SQL to check whether the array contains a given value:
select person, array_contains(friends, 'tom_friend_0') from t2;
The result is below: the first record's friends array contains <font color="red">tom_friend_0</font>, so it shows true; the second record does not, so it shows false:
hive> select person, array_contains(friends, 'tom_friend_0') from t2;
OK
person	_c1
tom	true
jerry	false
Time taken: 0.061 seconds, Fetched: 2 row(s)
- The first record's friends array has three elements. With the <font color="blue">LATERAL VIEW</font> syntax we can split those three elements into three rows. The SQL is:
select t.person, single_friend
from (
  select person, friends from t2 where person='tom'
) t LATERAL VIEW explode(t.friends) v as single_friend;
The result is below; each element of the array becomes its own row:
OK
t.person	single_friend
tom	tom_friend_0
tom	tom_friend_1
tom	tom_friend_2
Time taken: 0.058 seconds, Fetched: 3 row(s)
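Besides indexing and array_contains, Hive's built-in <font color="blue">size</font> and <font color="blue">sort_array</font> functions also work on array columns. A quick sketch against the same t2 table:

```sql
-- size() returns the number of elements in the array;
-- sort_array() returns a copy of the array sorted in ascending order
select person, size(friends), sort_array(friends) from t2;
```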
- That covers the basic array operations; next up, key-value pairs.
MAP: creating the table and importing data
- Next we create a table named <font color="blue">t3</font> with just two columns, person and address: <font color="blue">person</font> is a string and <font color="blue">address</font> is a MAP. When importing data from a text file, the separators are defined as follows:
- the separator between person and address is a <font color="red">pipe</font>;
- address holds multiple key-value pairs, separated from each other by <font color="red">commas</font>;
- and within each key-value pair, the key and value are separated by a <font color="red">colon</font>.
- A table-creation statement meeting these requirements looks like this:
create table if not exists t3(
  person string,
  address map<string, string>
)
row format delimited
fields terminated by '|'
collection items terminated by ','
map keys terminated by ':';
- Create a text file <font color="blue">003.txt</font>; note the three separators used between the columns, between the multiple MAP entries, and between each entry's key and value:
tom|province:guangdong,city:shenzhen
jerry|province:jiangsu,city:nanjing
- Load the data from 003.txt into table t3:
load data local inpath '/home/hadoop/temp/202010/25/003.txt' into table t3;
MAP: queries
- View all the data:
hive> select * from t3;
OK
t3.person	t3.address
tom	{"province":"guangdong","city":"shenzhen"}
jerry	{"province":"jiangsu","city":"nanjing"}
Time taken: 0.075 seconds, Fetched: 2 row(s)
- Look up a single key in the MAP; the syntax is <font color="blue">field["xxx"]</font>:
hive> select person, address["province"] from t3;
OK
person	_c1
tom	guangdong
jerry	jiangsu
Time taken: 0.075 seconds, Fetched: 2 row(s)
- Using the <font color="blue">if</font> function: the SQL below checks whether the address map has a "street" key; if it does, the corresponding value is shown, otherwise <font color="blue">field street not exists</font> is shown:
select person, if(address['street'] is null, "field street not exists", address['street']) from t3;
The output is below; since the address column only has the <font color="blue">province</font> and <font color="blue">city</font> keys, it shows <font color="blue">field street not exists</font>:
OK
tom	field street not exists
jerry	field street not exists
Time taken: 0.087 seconds, Fetched: 2 row(s)
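As a side note, since looking up a missing key returns NULL, Hive's built-in <font color="blue">coalesce</font> function gives a slightly shorter equivalent of the if() expression above:

```sql
-- coalesce returns its first non-NULL argument, so the fallback
-- string is used whenever the 'street' key is absent
select person, coalesce(address['street'], 'field street not exists') from t3;
```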
- Use <font color="blue">explode</font> to display each key-value pair of the address column on its own row:
hive> select explode(address) from t3;
OK
province	guangdong
city	shenzhen
province	jiangsu
city	nanjing
Time taken: 0.081 seconds, Fetched: 4 row(s)
- The <font color="blue">explode</font> call above can only output the address column; to output other columns as well, we again need the <font color="blue">LATERAL VIEW</font> syntax, as below. Note that while the array earlier exploded into a single column, the MAP explodes into two columns, key and value:
select t.person, address_key, address_value
from (
  select person, address from t3 where person='tom'
) t LATERAL VIEW explode(t.address) v as address_key, address_value;
The result:
OK
tom	province	guangdong
tom	city	shenzhen
Time taken: 0.118 seconds, Fetched: 2 row(s)
- The <font color="blue">size</font> function returns the number of key-value pairs in a MAP:
hive> select person, size(address) from t3;
OK
tom	2
jerry	2
Time taken: 0.082 seconds, Fetched: 2 row(s)
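Hive also provides built-in <font color="blue">map_keys</font> and <font color="blue">map_values</font> functions that return a MAP's keys and values as arrays. A quick sketch against the same t3 table:

```sql
-- map_keys/map_values return array<string> here, so the array
-- operations shown earlier (indexing, array_contains, ...) apply to the results
select person, map_keys(address), map_values(address) from t3;
```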
STRUCT
- STRUCT is a record type that encapsulates a named collection of fields. Create a table named <font color="blue">t4</font> whose info column is a <font color="blue">STRUCT</font> with two attributes, age and city. The separator between person and info is a <font color="red">pipe</font>, and the elements inside info are separated by <font color="red">commas</font>; note the syntax for declaring the separators:
create table if not exists t4(
  person string,
  info struct<age:int, city:string>
)
row format delimited
fields terminated by '|'
collection items terminated by ',';
- Prepare a text file named 004.txt with the following content:
tom|11,shenzhen
jerry|12,nanjing
- Load the data from 004.txt into table t4:
load data local inpath '/home/hadoop/temp/202010/25/004.txt' into table t4;
- View all the data in t4:
hive> select * from t4;
OK
tom	{"age":11,"city":"shenzhen"}
jerry	{"age":12,"city":"nanjing"}
Time taken: 0.063 seconds, Fetched: 2 row(s)
- Query a specific attribute with the fieldname.xxx syntax:
hive> select person, info.city from t4;
OK
tom	shenzhen
jerry	nanjing
Time taken: 0.141 seconds, Fetched: 2 row(s)
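The same dotted syntax also works outside the select list. A quick sketch, filtering and sorting on the struct's age attribute:

```sql
-- struct attributes can appear in WHERE and ORDER BY just like plain columns
select person, info.age from t4 where info.age > 11 order by info.age;
```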
UNION
- The last one is UNIONTYPE: a value chosen from several declared data types. Since creating UNIONTYPE data involves a UDF (create_union), we won't expand on that here; for now, just look at the table-creation statement:
CREATE TABLE union_test(foo UNIONTYPE<int, double, array<string>, struct<a:int,b:string>>);
- Query result:
SELECT foo FROM union_test;
{0:1}
{1:2.0}
{2:["three","four"]}
{3:{"a":5,"b":"five"}}
{2:["six","seven"]}
{3:{"a":8,"b":"eight"}}
{0:9}
{1:10.0}
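For reference, a minimal sketch of the <font color="blue">create_union</font> UDF mentioned above: its first argument is the tag selecting which of the remaining values becomes the active member of the union (this assumes the union_test table above already holds at least one row):

```sql
-- tag 0 selects the int member; the other arguments only contribute type information
SELECT create_union(0, 1, 2.0) FROM union_test LIMIT 1;
```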
- With that, we have now tried out both Hive's basic data types and its complex data types hands-on. Upcoming articles will dig into more Hive topics; I look forward to making progress together with you.
You are not alone; Xinchen's original articles keep you company all the way
- Java series
- Spring series
- Docker series
- kubernetes series
- Database + middleware series
- DevOps series
Welcome to follow my WeChat official account: 程序员欣宸
Search WeChat for 「程序员欣宸」. I'm Xinchen, looking forward to exploring the Java world with you...