Welcome to my GitHub

https://github.com/zq2599/blog_demos

Contents: all my original articles, organized by category, with companion source code, covering Java, Docker, Kubernetes, DevOps, and more;

Overview

This is the third article in the "Flink sink in action" series. Its main purpose is to try out Flink's official Cassandra connector. The whole exercise is shown in the figure below: we first read strings from Kafka, run a word count on them, and then both print the results and write them to Cassandra:

Links to the whole series

  1. Flink sink in action, part 1: a first look
  2. Flink sink in action, part 2: Kafka
  3. Flink sink in action, part 3: Cassandra3
  4. Flink sink in action, part 4: a custom sink

Software versions

The software versions used in this exercise are as follows:

  1. Cassandra: 3.11.6
  2. Kafka: 2.4.0 (Scala: 2.12)
  3. JDK: 1.8.0_191
  4. Flink: 1.9.2
  5. Maven: 3.6.0
  6. OS running Flink: CentOS Linux release 7.7.1908
  7. OS running Cassandra: CentOS Linux release 7.7.1908
  8. IDEA: 2018.3.5 (Ultimate Edition)

About Cassandra

The Cassandra used here is a three-node cluster; for setup instructions, see "Quickly deploying a Cassandra3 cluster with Ansible".

Preparing the Cassandra keyspace and table

First, create the keyspace and the table:

  1. Log in to Cassandra with `cqlsh`:

```
cqlsh 192.168.133.168
```

  2. Create the keyspace (replication factor 3):

```
CREATE KEYSPACE IF NOT EXISTS example
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'};
```

  3. Create the table (an optional Java sanity check follows this list):

```
CREATE TABLE IF NOT EXISTS example.wordcount (
  word text,
  count bigint,
  PRIMARY KEY(word)
);
```
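To double-check the keyspace and table, you can run `DESCRIBE example.wordcount` in cqlsh. If you prefer to verify from Java, below is a minimal sketch using the DataStax 3.x driver (the same `cassandra-driver-core` artifact added later for the POJO sink); the contact point matches the Cassandra node used in this article, and the class name is just an example:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Minimal sanity check: connect to the node and read whatever is in example.wordcount.
// Assumes the DataStax cassandra-driver-core 3.x dependency is on the classpath.
public class CassandraSmokeTest {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("192.168.133.168")
                .build();
             Session session = cluster.connect()) {
            ResultSet rs = session.execute("SELECT word, count FROM example.wordcount");
            for (Row row : rs) {
                System.out.println(row.getString("word") + " : " + row.getLong("count"));
            }
        }
    }
}
```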

Preparing the Kafka topic

  1. Start the Kafka service;
  2. Create a topic named test001, for example:

```
./kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic test001
```

  3. Start a console producer session, for example:

```
./kafka-console-producer.sh --broker-list kafka:9092 --topic test001
```

  4. In this session, any string you type followed by Enter is sent to the broker as a message (if you prefer to send test messages from code, see the sketch after this list);
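If you would rather produce test messages from Java than from the console producer, the following is a minimal sketch; it assumes the `kafka-clients` dependency is on the classpath and that the broker address below matches your environment (the class name is just an example):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

// Sends one test sentence to the test001 topic; the Flink job will split it into words.
public class Test001Producer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test001", "aaa bbb ccc aaa aaa aaa"));
            producer.flush();
        }
    }
}
```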

Source code download

If you would rather not write the code yourself, the source for the whole series can be downloaded from GitHub (https://github.com/zq2599/blog_demos); the addresses and links are listed in the table below:

| Name | Link | Note |
| --- | --- | --- |
| Project home page | https://github.com/zq2599/blog_demos | The project's home page on GitHub |
| Git repository (https) | https://github.com/zq2599/blog_demos.git | Repository address of the project source, https protocol |
| Git repository (ssh) | git@github.com:zq2599/blog_demos.git | Repository address of the project source, ssh protocol |

This Git project contains several folders; the application for this article is in the `flinksinkdemo` folder, as shown in the red box below:

Two ways to write to Cassandra

Flink's official connector supports two ways of writing to Cassandra:

  1. Tuple writes: the fields of a Tuple object are matched to the placeholders of a specified CQL statement;
  2. POJO writes: through DataStax, POJO objects are mapped to the table and columns configured by annotations;

We will try each of these two approaches in turn.

Development (Tuple writes)

  1. We continue with the `flinksinkdemo` project created in "Flink sink in action, part 2: Kafka";
  2. Add the Cassandra connector dependency to pom.xml:
```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-cassandra_2.11</artifactId>
  <version>1.10.0</version>
</dependency>
```
  3. Also add the `flink-streaming-scala` dependency, otherwise the code calling `CassandraSink.addSink` will not compile:
```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
```
  4. Add CassandraTuple2Sink.java. This is the job class: it reads string messages from Kafka, turns them into a Tuple2 data stream, and writes the results to Cassandra. The key point of the write is matching the Tuple fields to the parameters of the specified CQL statement:
```java
package com.bolingcavalry.addsink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

import java.util.Properties;

public class CassandraTuple2Sink {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // set parallelism
        env.setParallelism(1);

        // properties for the Kafka connection
        Properties properties = new Properties();
        // broker address
        properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
        // zookeeper address
        properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
        // consumer group id
        properties.setProperty("group.id", "flink-connector");

        // instantiate the consumer
        FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
                "test001",
                new SimpleStringSchema(),
                properties
        );

        // start consuming from the latest offset, i.e. ignore historical messages
        flinkKafkaConsumer.setStartFromLatest();

        // get the DataStream via addSource
        DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);

        DataStream<Tuple2<String, Long>> result = dataStream
                .flatMap(new FlatMapFunction<String, Tuple2<String, Long>>() {
                             @Override
                             public void flatMap(String value, Collector<Tuple2<String, Long>> out) {
                                 String[] words = value.toLowerCase().split("\\s");
                                 for (String word : words) {
                                     // word is the primary key of the Cassandra table, so it must not be empty
                                     if (!word.isEmpty()) {
                                         out.collect(new Tuple2<String, Long>(word, 1L));
                                     }
                                 }
                             }
                         }
                )
                .keyBy(0)
                .timeWindow(Time.seconds(5))
                .sum(1);

        result.addSink(new PrintSinkFunction<>())
                .name("print Sink")
                .disableChaining();

        CassandraSink.addSink(result)
                .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
                .setHost("192.168.133.168")
                .build()
                .name("cassandra Sink")
                .disableChaining();

        env.execute("kafka-2.4 source, cassandra-3.11.6 sink, tuple2");
    }
}
```
  5. In the code above, data is read from Kafka, word-counted, and written to Cassandra. Note the chain of calls that follows addSink (it carries the database connection parameters); this is the usage recommended in the official Flink documentation (a ClusterBuilder variant is sketched after this list). Also, to make the DAG easier to inspect in the Flink web UI, disableChaining is called here to disable operator chaining; in production that line can be removed;
  6. Once the code is done, build with `mvn clean package -U -DskipTests`; the file `flinksinkdemo-1.0-SNAPSHOT.jar` appears in the target directory;
  7. Upload `flinksinkdemo-1.0-SNAPSHOT.jar` in the Flink web UI and specify the entry class, as shown in the red box below:
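As mentioned in step 5, `setHost` is the simplest way to point the sink at Cassandra. If your cluster uses a non-default port or requires authentication, the connector also accepts a ClusterBuilder. The following is a hedged sketch of that variant for the Tuple job above (9042 is Cassandra's default CQL port; the credentials are made-up placeholders):

```java
import com.datastax.driver.core.Cluster;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;

// Replaces the setHost(...) call in CassandraTuple2Sink; 'result' is the same
// DataStream<Tuple2<String, Long>> built earlier in the job.
CassandraSink.addSink(result)
        .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
        .setClusterBuilder(new ClusterBuilder() {
            @Override
            protected Cluster buildCluster(Cluster.Builder builder) {
                return builder
                        .addContactPoint("192.168.133.168")
                        .withPort(9042)                            // default CQL port
                        .withCredentials("cassandra", "cassandra") // placeholder credentials
                        .build();
            }
        })
        .build()
        .name("cassandra Sink");
```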

  8. After the job starts, the DAG looks like this:

  9. Go back to the console producer session created earlier and send the string "aaa bbb ccc aaa aaa aaa";
  10. Check the data in Cassandra: three new rows have appeared, with the expected contents:

  11. Check the TaskManager console output; it shows the printed Tuple2 records, consistent with what is in Cassandra:

  12. The record counts of all SubTasks in the DAG also match expectations:

Development (POJO writes)

Next, let's try POJO writes: instances of the data structures from the business logic are written to Cassandra directly, with no CQL to specify:

  1. POJO writes rely on the DataStax library; add the following dependency to pom.xml:
```xml
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>3.1.4</version>
  <classifier>shaded</classifier>
  <!-- Because the shaded JAR uses the original POM, you still need
       to exclude this dependency explicitly: -->
  <exclusions>
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```
  2. Note the `exclusions` node above: when depending on DataStax, the transitive Netty dependencies are excluded, following the official guidance at https://docs.datastax.com/en/...
  3. Create the entity class WordCount with the database mapping annotations (a short sketch of how the DataStax mapper uses these annotations follows the class):
```java
package com.bolingcavalry.addsink;

import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "example", name = "wordcount")
public class WordCount {

    @Column(name = "word")
    private String word = "";

    @Column(name = "count")
    private long count = 0;

    public WordCount() {
    }

    public WordCount(String word, long count) {
        this.setWord(word);
        this.setCount(count);
    }

    public String getWord() {
        return word;
    }

    public void setWord(String word) {
        this.word = word;
    }

    public long getCount() {
        return count;
    }

    public void setCount(long count) {
        this.count = count;
    }

    @Override
    public String toString() {
        return getWord() + " : " + getCount();
    }
}
```
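The @Table and @Column annotations above are read by the DataStax object mapper, which is what the connector relies on for POJO writes. To make that concrete, here is a hedged sketch of saving a WordCount by hand with the mapper (illustration only, the Flink sink does the equivalent for you; it assumes the mapping classes in com.datastax.driver.mapping are available on the classpath, as they are once the Flink Cassandra connector and the dependency above are in place):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;

// Illustration of what the connector does under the hood for POJO writes.
public class WordCountMapperDemo {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("192.168.133.168")
                .build();
             Session session = cluster.connect()) {
            // the mapper reads the @Table/@Column annotations on WordCount
            Mapper<WordCount> mapper = new MappingManager(session).mapper(WordCount.class);
            mapper.save(new WordCount("hello", 1L)); // INSERT into example.wordcount
            System.out.println(mapper.get("hello")); // read the row back by primary key
        }
    }
}
```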
  4. Then create the job class CassandraPojoSink:
```java
package com.bolingcavalry.addsink;

import com.datastax.driver.mapping.Mapper;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

import java.util.Properties;

public class CassandraPojoSink {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // set parallelism
        env.setParallelism(1);

        // properties for the Kafka connection
        Properties properties = new Properties();
        // broker address
        properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
        // zookeeper address
        properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
        // consumer group id
        properties.setProperty("group.id", "flink-connector");

        // instantiate the consumer
        FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
                "test001",
                new SimpleStringSchema(),
                properties
        );

        // start consuming from the latest offset, i.e. ignore historical messages
        flinkKafkaConsumer.setStartFromLatest();

        // get the DataStream via addSource
        DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);

        DataStream<WordCount> result = dataStream
                .flatMap(new FlatMapFunction<String, WordCount>() {
                    @Override
                    public void flatMap(String s, Collector<WordCount> collector) throws Exception {
                        String[] words = s.toLowerCase().split("\\s");
                        for (String word : words) {
                            // word is the primary key of the Cassandra table, so it must not be empty
                            if (!word.isEmpty()) {
                                collector.collect(new WordCount(word, 1L));
                            }
                        }
                    }
                })
                .keyBy("word")
                .timeWindow(Time.seconds(5))
                .reduce(new ReduceFunction<WordCount>() {
                    @Override
                    public WordCount reduce(WordCount wordCount, WordCount t1) throws Exception {
                        return new WordCount(wordCount.getWord(), wordCount.getCount() + t1.getCount());
                    }
                });

        result.addSink(new PrintSinkFunction<>())
                .name("print Sink")
                .disableChaining();

        CassandraSink.addSink(result)
                .setHost("192.168.133.168")
                .setMapperOptions(() -> new Mapper.Option[] { Mapper.Option.saveNullFields(true) })
                .build()
                .name("cassandra Sink")
                .disableChaining();

        env.execute("kafka-2.4 source, cassandra-3.11.6 sink, pojo");
    }
}
```
  5. As the code shows, this differs considerably from the earlier Tuple version: to produce a POJO data stream, besides rewriting the anonymous FlatMapFunction, you also have to write an anonymous ReduceFunction, and call setMapperOptions to set the mapping options;
  6. After building, upload the jar to Flink and specify CassandraPojoSink as the entry class:

  7. Clear the previous data by running `TRUNCATE example.wordcount;` in cqlsh;
  8. Send string messages to Kafka as before:

  9. Check the database; the results are as expected:

  10. The DAG and SubTask status are as follows:

This completes the exercise of writing Flink results to Cassandra; I hope it gives you a useful reference.

Welcome to follow my WeChat official account: 程序员欣宸

Search WeChat for 「程序员欣宸」. I'm Xinchen, and I look forward to exploring the Java world with you...
https://github.com/zq2599/blog_demos