I. Static partitions
1. Creating a static-partition table:
create table employees (
    name string,
    salary float,
    subordinated array<string>,
    deductions map<string,float>,
    address struct<street:string,city:string,state:string,zip:int>
)
partitioned by (country string, state string)
row format delimited
fields terminated by "\t"
collection items terminated by ","
map keys terminated by ":"
lines terminated by "\n"
stored as textfile;
After creating the table you will find that its storage path is the same as an ordinary managed table's, just with the partition columns added as extra fields; the newly created partitioned table contains no data yet. In fact, unless they need to optimize query performance, users of the table do not need to care which columns are partition columns.
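You can confirm this by inspecting the table metadata; Hive lists the partition columns in a separate section of the output rather than mixing them in with the regular columns (a quick check against the employees table above):

```sql
describe formatted employees;
```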
2. Adding a partition:
alter table employees add partition (country="china",state="Asia");
View the table's partitions: show partitions employees;
Partitions are stored as nested directories under the table's directory; on HDFS the path is /user/hive/warehouse/zxz.db/employees/country=china/state=Asia.
3. Inserting data:
Syntax:
INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)] VALUES values_row [, values_row ...];
Syntax 2 (recommended):
load data local inpath '/home/had/data1.txt' into table employees partition (country="china", state="Asia");
4. Querying a partitioned table: partitioned tables are usually queried with a where clause on the partition columns.
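For example, a sketch against the employees table above; filtering on the partition columns lets Hive read only the matching partition directories instead of scanning the whole table:

```sql
select name, salary
from employees
where country = "china" and state = "Asia";
```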
5. CTAS and LIKE
Create a new table together with its data:
create table employees1 as select * from employees;
Create a new table copying only the schema:
create table employees2 like employees;
6. External partitioned tables:
External tables can be partitioned too; in fact, this combination is the most common way to manage large production datasets. It gives users a way to share data with other tools while still allowing query optimization.
create external table employees_ex (
    name string,
    salary float,
    subordinated array<string>,
    deductions map<string,float>,
    address struct<street:string,city:string,state:string,zip:int>
)
partitioned by (country string, state string)
row format delimited
fields terminated by "\t"
collection items terminated by ","
map keys terminated by ":"
lines terminated by "\n"
stored as textfile
location "/user/had/data/";
This is the same as an ordinary static-partition table except for the external keyword (and the location clause). It lets us move the data to a different path without losing it, which a managed partitioned table cannot do.
(Because the table is external) the data could live anywhere on HDFS; here load data places it under /user/had/data/, the location we specified when creating the table:
load data local inpath '/home/had/data.txt' into table employees_ex partition (country="china",state="Asia");
When we want to move some old data aside, we can use hadoop's distcp command to copy it to another path:
hadoop distcp /user/had/data/country=china/state=Asia /user/had/data_old/country=china/state=Asia
Then update the partition's location in Hive to point at the moved data:
alter table employees_ex partition(country="china",state="Asia") set location '/user/had/data_old/country=china/state=Asia';
Finally delete the data at the old path with the HDFS rm command:
hdfs dfs -rm -r /user/had/data/country=china/state=Asia
If you forget where a partition's data lives, you can look it up with:
describe extended employees_ex partition (country="china",state="Asia");
7. Dropping a partition
alter table employees drop partition(country="china",state="Asia");
8. Other useful ALTER statements
Archive a partition into a HAR file:
alter table employees archive partition (country="china",state="Asia");
Unarchive it back into a normal partition:
alter table employees unarchive partition (country="china",state="Asia");
Protect a partition from being dropped:
alter table employees partition (country="china",state="Asia") enable no_drop;
Protect a partition from being queried (take it offline):
alter table employees partition (country="china",state="Asia") enable offline;
Allow dropping and querying again:
alter table employees partition (country="china",state="Asia") disable no_drop;
alter table employees partition (country="china",state="Asia") disable offline;
9. Inserting data into a table from a query
insert overwrite/into table copy_employees partition (country="china",state="Asia") select name, salary, subordinated, deductions, address from employees es where es.country="china" and es.state="Asia";
(The partition columns are supplied by the static partition clause, so they are not listed in the select.)
II. Dynamic partitions:
Why use dynamic partitions? Consider an example: if China had 50 provinces, each with 50 cities, and each city had 100 districts, how long would it take to create every partition statically? That is why we use dynamic partitions.
Dynamic partitioning is disabled by default. When enabled, it runs in strict mode by default, which requires at least one partition column to be static. This helps prevent a badly designed query from generating a huge number of partitions; for example, a user might mistakenly partition by a timestamp column and end up with one partition per second! The relevant settings:
Turn off strict mode (in strict mode at least one partition column must be given a static value):
set hive.exec.dynamic.partition.mode=nonstrict; // partition mode, default strict
set hive.exec.dynamic.partition=true; // enable dynamic partitioning (default true in recent Hive versions)
set hive.exec.max.dynamic.partitions=1000; // maximum number of dynamic partitions, default 1000
1. Create a dynamic-partition table:
create table if not exists zxz_5 (
    name string,
    nid int,
    phone string,
    ntime date
)
partitioned by (year int, month int)
row format delimited
fields terminated by "|"
lines terminated by "\n"
stored as textfile;
2. Loading data into the dynamic-partition table. Note: the dynamic partition values are taken from the last columns of the select, matched to the partition columns in declaration order (the year/month aliases below are for readability; Hive matches by position, not by name):
insert overwrite table zxz_5 partition (year,month) select name,nid,phone,ntime,year(ntime) as year ,month(ntime) as month from zxz_dy;
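After the insert you can check which partitions Hive created dynamically; each distinct (year, month) pair derived from ntime becomes its own year=.../month=... directory (a quick sanity check against the zxz_5 table above):

```sql
show partitions zxz_5;
```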