Hive Window Functions / Analytic Functions

In SQL there is a class of functions called aggregate functions, such as sum(), avg(), and max(). These functions collapse multiple rows into a single row according to some rule, so the number of rows after aggregation is generally smaller than before. Sometimes, however, we want to show both the pre-aggregation detail rows and the aggregated values at the same time; this is where window functions come in. Window functions, also called OLAP functions or analytic functions, combine grouping and ordering capabilities.

The two most important keywords for window functions are partition by and order by.

The basic syntax is: over (partition by xxx order by xxx)
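To make the contrast above concrete, here is a minimal sketch (the table and column names are illustrative placeholders, not from the examples below): a plain GROUP BY collapses rows, while the same aggregate used with OVER keeps every detail row:

  -- collapses to one row per user
  select userid, sum(amount) from orders group by userid;

  -- keeps every order row and attaches the per-user total to each
  select userid, amount,
         sum(amount) over (partition by userid) as user_total
  from orders;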

sum, avg, min, max functions

Preparing the data

Create table:

  create table bigdata_t1(
    cookieid string,
    createtime string,   -- day
    pv int
  ) row format delimited fields terminated by ',';

Load data:

  load data local inpath '/root/hivedata/bigdata_t1.dat' into table bigdata_t1;

Contents of bigdata_t1.dat:

  cookie1,2018-04-10,1
  cookie1,2018-04-11,5
  cookie1,2018-04-12,7
  cookie1,2018-04-13,3
  cookie1,2018-04-14,2
  cookie1,2018-04-15,4
  cookie1,2018-04-16,4

Enable smart local mode:

  SET hive.exec.mode.local.auto=true;

Using SUM together with a window: the result depends on ORDER BY, and the default sort order is ascending.

  -- pv1
  select cookieid, createtime, pv,
         sum(pv) over (partition by cookieid order by createtime) as pv1
  from bigdata_t1;

  -- pv2
  select cookieid, createtime, pv,
         sum(pv) over (partition by cookieid order by createtime
                       rows between unbounded preceding and current row) as pv2
  from bigdata_t1;

  -- pv3
  select cookieid, createtime, pv,
         sum(pv) over (partition by cookieid) as pv3
  from bigdata_t1;

  -- pv4
  select cookieid, createtime, pv,
         sum(pv) over (partition by cookieid order by createtime
                       rows between 3 preceding and current row) as pv4
  from bigdata_t1;

  -- pv5
  select cookieid, createtime, pv,
         sum(pv) over (partition by cookieid order by createtime
                       rows between 3 preceding and 1 following) as pv5
  from bigdata_t1;

  -- pv6
  select cookieid, createtime, pv,
         sum(pv) over (partition by cookieid order by createtime
                       rows between current row and unbounded following) as pv6
  from bigdata_t1;

pv1: cumulative pv within the partition from the first row to the current row; e.g. pv1 on the 11th = pv of the 10th + pv of the 11th; the 12th = the 10th + the 11th + the 12th
pv2: same as pv1
pv3: the sum of all pv within the partition (cookie1)
pv4: the current row plus the previous 3 rows; e.g. the 11th = the 10th + the 11th; the 12th = the 10th + the 11th + the 12th; the 13th = the 10th + the 11th + the 12th + the 13th; the 14th = the 11th + the 12th + the 13th + the 14th
pv5: the current row plus the previous 3 rows plus the following 1 row; e.g. the 14th = the 11th + the 12th + the 13th + the 14th + the 15th = 5+7+3+2+4 = 21
pv6: the current row plus all following rows; e.g. the 13th = the 13th + the 14th + the 15th + the 16th = 3+2+4+4 = 13; the 14th = the 14th + the 15th + the 16th = 2+4+4 = 10
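Working the pv1 query by hand against the seven sample rows above (not output from the source, just arithmetic on the sample data) gives this running total:

  createtime    pv   pv1
  2018-04-10    1    1
  2018-04-11    5    6
  2018-04-12    7    13
  2018-04-13    3    16
  2018-04-14    2    18
  2018-04-15    4    22
  2018-04-16    4    26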

If rows between is not specified, the window defaults to running from the start of the partition to the current row.

If order by is not specified, all values within the partition are aggregated together.

The key is to understand rows between, also known as the window clause:

preceding: rows before the current row

following: rows after the current row

current row: the current row

unbounded: the boundary of the partition

unbounded preceding: from the start of the partition

unbounded following: to the end of the partition

AVG, MIN, and MAX are used exactly the same way as SUM, as sketched below.
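For instance (a sketch against the bigdata_t1 table defined above), swapping SUM for AVG, MAX, or MIN leaves the window clause itself unchanged:

  select cookieid, createtime, pv,
         avg(pv) over (partition by cookieid order by createtime
                       rows between 3 preceding and current row) as avg_pv,
         max(pv) over (partition by cookieid order by createtime
                       rows between 3 preceding and current row) as max_pv,
         min(pv) over (partition by cookieid order by createtime
                       rows between 3 preceding and current row) as min_pv
  from bigdata_t1;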

row_number, rank, dense_rank, ntile functions

Preparing the data

Contents of bigdata_t2.dat:

  cookie1,2018-04-10,1
  cookie1,2018-04-11,5
  cookie1,2018-04-12,7
  cookie1,2018-04-13,3
  cookie1,2018-04-14,2
  cookie1,2018-04-15,4
  cookie1,2018-04-16,4
  cookie2,2018-04-10,2
  cookie2,2018-04-11,3
  cookie2,2018-04-12,5
  cookie2,2018-04-13,6
  cookie2,2018-04-14,3
  cookie2,2018-04-15,9
  cookie2,2018-04-16,7

Create table:

  CREATE TABLE bigdata_t2 (
    cookieid string,
    createtime string,   -- day
    pv INT
  ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  stored as textfile;

Load data:

  load data local inpath '/root/hivedata/bigdata_t2.dat' into table bigdata_t2;
  • Using ROW_NUMBER()

    ROW_NUMBER() numbers the rows within each partition sequentially, starting from 1, in the given order.

  SELECT
    cookieid,
    createtime,
    pv,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn
  FROM bigdata_t2;
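A common follow-up use (a sketch, not from the original): wrap the query in a subquery and filter on rn to take the top N rows per group, for example each cookie's top 3 days by pv:

  SELECT cookieid, createtime, pv
  FROM (
    SELECT cookieid, createtime, pv,
           ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn
    FROM bigdata_t2
  ) t
  WHERE rn <= 3;   -- keep only the 3 highest-pv days per cookieid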
  • Using RANK and DENSE_RANK

    RANK() assigns each row its rank within the partition; tied values share a rank and leave gaps in the sequence.

    DENSE_RANK() also assigns ranks with ties sharing a rank, but leaves no gaps in the sequence.

  SELECT
    cookieid,
    createtime,
    pv,
    RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn1,
    DENSE_RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn2,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn3
  FROM bigdata_t2
  WHERE cookieid = 'cookie1';
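Working this out by hand for cookie1's pv values (7, 5, 4, 4, 3, 2, 1 in descending order) shows the difference between the three functions; note the order of the two tied rows with pv = 4 is not deterministic for ROW_NUMBER:

  pv   rn1 (RANK)   rn2 (DENSE_RANK)   rn3 (ROW_NUMBER)
  7    1            1                  1
  5    2            2                  2
  4    3            3                  3
  4    3            3                  4
  3    5            4                  5
  2    6            5                  6
  1    7            6                  7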
  • NTILE

    Sometimes there is a requirement like this: the data, once sorted, splits into three parts, and the business only cares about one of them. How do we pull out, say, the middle third? The NTILE function does exactly that.

    ntile can be thought of as evenly distributing an ordered set of rows into the specified number (num) of buckets and assigning each row its bucket number. If the rows cannot be distributed evenly, lower-numbered buckets are filled first, and bucket sizes differ by at most 1.

    You can then select the first or last n-th fraction of the data by bucket number. All rows are still returned; they are merely labeled. To actually extract a given fraction, you need to wrap the query in another layer that filters on the label, as in the sketch after this list.

  SELECT
    cookieid,
    createtime,
    pv,
    NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn1,
    NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn2,
    NTILE(4) OVER(ORDER BY createtime) AS rn3
  FROM bigdata_t2
  ORDER BY cookieid,createtime;
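For example (a sketch, not from the original): to pull out the middle third described above, filter on the NTILE(3) bucket number in an outer query:

  SELECT cookieid, createtime, pv
  FROM (
    SELECT cookieid, createtime, pv,
           NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS bucket
    FROM bigdata_t2
  ) t
  WHERE bucket = 2;   -- bucket 2 of 3 is the middle third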

Some other window functions

lag, lead, first_value, last_value functions

  • LAG
    LAG(col,n,DEFAULT) returns the value of col from the n-th row above the current row within the window. The first argument is the column name; the second is the offset n (optional, default 1); the third is the default value (returned when the row n above is NULL; if not specified, NULL is returned).

  SELECT
    cookieid,
    createtime,
    url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LAG(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS last_1_time,
    LAG(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS last_2_time
  FROM bigdata_t4;

  last_1_time: the value 1 row above, with default '1970-01-01 00:00:00'
               cookie1 row 1: the row above is NULL, so the default 1970-01-01 00:00:00 is used
               cookie1 row 3: the row above is row 2, 2015-04-10 10:00:02
               cookie1 row 6: the row above is row 5, 2015-04-10 10:50:01
  last_2_time: the value 2 rows above, with no default specified
               cookie1 row 1: 2 rows above is NULL
               cookie1 row 2: 2 rows above is NULL
               cookie1 row 4: 2 rows above is row 2, 2015-04-10 10:00:02
               cookie1 row 7: 2 rows above is row 5, 2015-04-10 10:50:01
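A typical use of LAG (a sketch; it assumes bigdata_t4.createtime is a 'yyyy-MM-dd HH:mm:ss' string, as in the values above): compute the seconds elapsed since the same cookie's previous visit:

  SELECT cookieid, createtime, url,
         unix_timestamp(createtime) - unix_timestamp(prev_time) AS seconds_since_prev
  FROM (
    SELECT cookieid, createtime, url,
           LAG(createtime,1) OVER(PARTITION BY cookieid ORDER BY createtime) AS prev_time
    FROM bigdata_t4
  ) t;   -- seconds_since_prev is NULL on each cookie's first row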
  • LEAD

    The opposite of LAG.
    LEAD(col,n,DEFAULT) returns the value of col from the n-th row below the current row within the window.
    The first argument is the column name; the second is the offset n (optional, default 1); the third is the default value (returned when the row n below is NULL; if not specified, NULL is returned).

  SELECT
    cookieid,
    createtime,
    url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LEAD(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS next_1_time,
    LEAD(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS next_2_time
  FROM bigdata_t4;
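A classic use of LEAD (again a sketch, under the same assumption about createtime's format): the time a user stays on a page is the next visit time minus the current one:

  SELECT cookieid, createtime, url,
         unix_timestamp(next_time) - unix_timestamp(createtime) AS page_stay_seconds
  FROM (
    SELECT cookieid, createtime, url,
           LEAD(createtime,1) OVER(PARTITION BY cookieid ORDER BY createtime) AS next_time
    FROM bigdata_t4
  ) t;   -- page_stay_seconds is NULL on each cookie's last row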
  • FIRST_VALUE

    Returns the first value in the partition, after sorting, up to the current row.

  SELECT
    cookieid,
    createtime,
    url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS first1
  FROM bigdata_t4;
  • LAST_VALUE

    Returns the last value in the partition, after sorting, up to the current row.

  SELECT
    cookieid,
    createtime,
    url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1
  FROM bigdata_t4;

If you want the last value of the entire partition after sorting, a workaround is needed:

  SELECT
    cookieid,
    createtime,
    url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1,
    FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime DESC) AS last2
  FROM bigdata_t4
  ORDER BY cookieid,createtime;
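An equivalent workaround (a sketch, not from the original) is to keep the ascending sort but widen the frame explicitly, since LAST_VALUE's default frame ends at the current row:

  SELECT
    cookieid,
    createtime,
    url,
    LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime
                         ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS last_in_group
  FROM bigdata_t4;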

Pay special attention to ORDER BY.

If ORDER BY is not specified, the row order is indeterminate and you will get wrong results:

  SELECT
    cookieid,
    createtime,
    url,
    FIRST_VALUE(url) OVER(PARTITION BY cookieid) AS first2
  FROM bigdata_t4;

cume_dist, percent_rank functions

These two sequence analytic functions are not very commonly used. Note: sequence functions do not support the WINDOW clause.

  • Preparing the data

  Contents of bigdata_t3.dat:

    d1,user1,1000
    d1,user2,2000
    d1,user3,3000
    d2,user4,4000
    d2,user5,5000

  Create table:

    CREATE EXTERNAL TABLE bigdata_t3 (
      dept STRING,
      userid string,
      sal INT
    ) ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    stored as textfile;

  Load data:

    load data local inpath '/root/hivedata/bigdata_t3.dat' into table bigdata_t3;
  • CUME_DIST depends on the order given by order by

    CUME_DIST = (number of rows with values less than or equal to the current value) / (total rows in the partition). The default order is ascending.
    For example: what proportion of all people earn a salary less than or equal to the current one?

  SELECT
    dept,
    userid,
    sal,
    CUME_DIST() OVER(ORDER BY sal) AS rn1,
    CUME_DIST() OVER(PARTITION BY dept ORDER BY sal) AS rn2
  FROM bigdata_t3;

  rn1: no partition, so all rows form one group with 5 rows in total;
       row 1: 1 row has sal <= 1000, so 1/5 = 0.2
       row 3: 3 rows have sal <= 3000, so 3/5 = 0.6
  rn2: grouped by department; dept=d1 has 3 rows;
       row 2: 2 rows have sal <= 2000, so 2/3 = 0.6666666666666666
  • PERCENT_RANK

    PERCENT_RANK = (RANK of the current row within the partition - 1) / (total rows in the partition - 1)

  SELECT
    dept,
    userid,
    sal,
    PERCENT_RANK() OVER(ORDER BY sal) AS rn1,   -- within the group
    RANK() OVER(ORDER BY sal) AS rn11,          -- RANK within the group
    SUM(1) OVER(PARTITION BY NULL) AS rn12,     -- total rows in the group
    PERCENT_RANK() OVER(PARTITION BY dept ORDER BY sal) AS rn2
  FROM bigdata_t3;

  rn1: rn1 = (rn11-1) / (rn12-1)
       row 1: (1-1)/(5-1) = 0/4 = 0
       row 2: (2-1)/(5-1) = 1/4 = 0.25
       row 4: (4-1)/(5-1) = 3/4 = 0.75
  rn2: grouped by dept;
       dept=d1 has 3 rows in total
       row 1: (1-1)/(3-1) = 0
       row 3: (3-1)/(3-1) = 1

grouping sets, grouping__id, cube, rollup functions

These analytic functions are typically used in OLAP, for metrics that cannot simply be accumulated and that must be rolled up and drilled down along different dimensions, for example UV counts by hour, by day, and by month.

  • Preparing the data

  Contents of bigdata_t5.dat:

    2018-03,2018-03-10,cookie1
    2018-03,2018-03-10,cookie5
    2018-03,2018-03-12,cookie7
    2018-04,2018-04-12,cookie3
    2018-04,2018-04-13,cookie2
    2018-04,2018-04-13,cookie4
    2018-04,2018-04-16,cookie4
    2018-03,2018-03-10,cookie2
    2018-03,2018-03-10,cookie3
    2018-04,2018-04-12,cookie5
    2018-04,2018-04-13,cookie6
    2018-04,2018-04-15,cookie3
    2018-04,2018-04-15,cookie2
    2018-04,2018-04-16,cookie1

  Create table:

    CREATE TABLE bigdata_t5 (
      month STRING,
      day STRING,
      cookieid STRING
    ) ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    stored as textfile;

  Load data:

    load data local inpath '/root/hivedata/bigdata_t5.dat' into table bigdata_t5;
  • GROUPING SETS

    grouping sets is a convenient way to write several group by clauses in a single SQL statement.

    It is equivalent to a UNION ALL over the GROUP BY result sets of the individual dimensions.

    GROUPING__ID indicates which grouping set a result row belongs to.

  SELECT
    month,
    day,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID
  FROM bigdata_t5
  GROUP BY month,day
  GROUPING SETS (month,day)
  ORDER BY GROUPING__ID;

  grouping_id indicates which grouping set a result row belongs to;
  given the grouping conditions month and day in the grouping sets, 1 stands for month and 2 stands for day.

  Equivalent to:

  SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM bigdata_t5 GROUP BY month
  UNION ALL
  SELECT NULL as month,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM bigdata_t5 GROUP BY day;

Another example:

  SELECT
    month,
    day,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID
  FROM bigdata_t5
  GROUP BY month,day
  GROUPING SETS (month,day,(month,day))
  ORDER BY GROUPING__ID;

  Equivalent to:

  SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM bigdata_t5 GROUP BY month
  UNION ALL
  SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM bigdata_t5 GROUP BY day
  UNION ALL
  SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM bigdata_t5 GROUP BY month,day;
  • CUBE

    Aggregates over every combination of the GROUP BY dimensions.

  SELECT
    month,
    day,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID
  FROM bigdata_t5
  GROUP BY month,day
  WITH CUBE
  ORDER BY GROUPING__ID;

  Equivalent to:

  SELECT NULL,NULL,COUNT(DISTINCT cookieid) AS uv,0 AS GROUPING__ID FROM bigdata_t5
  UNION ALL
  SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM bigdata_t5 GROUP BY month
  UNION ALL
  SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM bigdata_t5 GROUP BY day
  UNION ALL
  SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM bigdata_t5 GROUP BY month,day;
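The same result can also be written with GROUPING SETS by spelling out all four combinations explicitly, including the empty grouping set () for the grand total (a sketch, not from the original):

  SELECT
    month,
    day,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID
  FROM bigdata_t5
  GROUP BY month,day
  GROUPING SETS ((month,day), month, day, ())
  ORDER BY GROUPING__ID;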
  • ROLLUP

    A subset of CUBE, anchored on the leftmost dimension, performing hierarchical aggregation from that dimension.

  For example, hierarchical aggregation along the month dimension:

  SELECT
    month,
    day,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID
  FROM bigdata_t5
  GROUP BY month,day
  WITH ROLLUP
  ORDER BY GROUPING__ID;

  -- Swap month and day to aggregate hierarchically along the day dimension:

  SELECT
    day,
    month,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID
  FROM bigdata_t5
  GROUP BY day,month
  WITH ROLLUP
  ORDER BY GROUPING__ID;

  (Here, aggregating by day and month gives the same result as aggregating by day alone, because the two have a parent-child relationship; with other dimension combinations the results would differ.)
